title | content | commands | url
---|---|---|---|
Chapter 11. GNOME Shell Extensions
|
Chapter 11. GNOME Shell Extensions This chapter introduces system-wide configuration of GNOME Shell Extensions. You will learn how to view the extensions, how to enable them, how to lock a list of enabled extensions or how to set up several extensions as mandatory for the users of the system. You will be using dconf when configuring GNOME Shell Extensions, setting the following two GSettings keys: org.gnome.shell.enabled-extensions org.gnome.shell.development-tools For more information on dconf and GSettings , see Chapter 9, Configuring Desktop with GSettings and dconf . 11.1. What Are GNOME Shell Extensions? GNOME Shell extensions allow for the customization of the default GNOME Shell interface and its parts, such as window management and application launching. Each GNOME Shell extension is identified by a unique identifier, the uuid. The uuid is also used for the name of the directory where an extension is installed. You can either install the extension per-user in ~/.local/share/gnome-shell/extensions/ uuid , or machine-wide in /usr/share/gnome-shell/extensions/ uuid . The uuid identifier is globally-unique. When choosing it, remember that the uuid must possess the following properties to prevent certain attacks: Your uuid must not contain Unicode characters. Your uuid must not contain the gnome.org ending as it must not appear to be affiliated with the GNOME Project. Your uuid must contain only alphanumerical characters, the period (.), the at symbol (@), and the underscore (_). Important Before deploying third-party GNOME Shell extensions on Red Hat Enterprise Linux, make sure to read the following document to learn about the Red Hat support policy for third-party software: How does Red Hat Global Support Services handle third-party software, drivers, and/or uncertified hardware/hypervisors? To view installed extensions, you can use Looking Glass , GNOME Shell's integrated debugger and inspector tool. Procedure 11.1. View installed extensions Press Alt + F2 . Type in lg and press Enter to open Looking Glass . On the top bar of Looking Glass , click Extensions to open the list of installed extensions. Figure 11.1. Viewing Installed extensions with Looking Glass
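As a brief illustration of the system-wide configuration described above, the following sketch shows a dconf keyfile that enables two extensions for all users; the keyfile name 00-extensions and the extension uuids are placeholders, not values taken from this chapter, and the example assumes the default local system database:
# /etc/dconf/db/local.d/00-extensions
[org/gnome/shell]
enabled-extensions=['[email protected]', '[email protected]']
After saving the keyfile under /etc/dconf/db/local.d/, run dconf update to rebuild the system databases so the setting takes effect for all users.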
| null |
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/desktop_migration_and_administration_guide/gnome-shell-extensions
|
Chapter 52. Enabling AD users to administer IdM
|
Chapter 52. Enabling AD users to administer IdM 52.1. ID overrides for AD users In Red Hat Enterprise Linux (RHEL) 7, external group membership allows Active Directory (AD) users and groups to access Identity Management (IdM) resources in a POSIX environment with the help of the System Security Services Daemon (SSSD). The IdM LDAP server has its own mechanisms to grant access control. RHEL 8 introduces an update that allows adding an ID user override for an AD user as a member of an IdM group. An ID override is a record describing what the properties of a specific Active Directory user or group should look like within a specific ID view, in this case the Default Trust View . As a consequence of the update, the IdM LDAP server is able to apply access control rules for the IdM group to the AD user. AD users are now able to use the self-service features of the IdM UI, for example to upload their SSH keys or change their personal data. An AD administrator is able to fully administer IdM without having two different accounts and passwords. Note Currently, selected features in IdM may still be unavailable to AD users. For example, setting passwords for IdM users as an AD user from the IdM admins group might fail. Important Do not use ID overrides of AD users for sudo rules in IdM. ID overrides of AD users represent only POSIX attributes of AD users, not AD users themselves. Additional resources Using ID views for Active Directory users 52.2. Using ID overrides to enable AD users to administer IdM Follow this procedure to create and use an ID override for an AD user to give that user rights identical to those of an IdM user. During this procedure, work on an IdM server that is configured as a trust controller or a trust agent. Prerequisites The idm:DL1 stream is enabled on your Identity Management (IdM) server and you have switched to the RPMs delivered through this stream: The idm:DL1/adtrust profile is installed on your IdM server. The profile contains all the packages necessary for installing an IdM server that will have a trust agreement with Active Directory (AD). A working IdM environment is set up. For details, see Installing Identity Management . A working trust between your IdM environment and AD is set up. Procedure As an IdM administrator, create an ID override for an AD user in the Default Trust View . For example, to create an ID override for the user [email protected] : Add the ID override from the Default Trust View as a member of an IdM group. This must be a non-POSIX group, as it interacts with Active Directory. If the group in question is a member of an IdM role, the AD user represented by the ID override gains all permissions granted by the role when using the IdM API, including both the command line and the IdM web UI. For example, to add the ID override for the [email protected] user to the IdM admins group: Alternatively, you can add the ID override to a role, such as the User Administrator role: Additional resources Using ID views for Active Directory users 52.3. Using Ansible to enable AD users to administer IdM Follow this procedure to use an Ansible playbook to ensure that a user ID override is present in an Identity Management (IdM) group. The user ID override is the override of an Active Directory (AD) user that you created in the Default Trust View after you established a trust with AD. As a result of running the playbook, an AD user, for example an AD administrator, is able to fully administer IdM without having two different accounts and passwords.
Prerequisites You know the IdM admin password. You have installed a trust with AD . The user ID override of the AD user already exists in IdM. If it does not, create it with the ipa idoverrideuser-add 'default trust view' [email protected] command. The group to which you are adding the user ID override already exists in IdM . You are using the 4.8.7 version of IdM or later. To view the version of IdM you have installed on your server, enter ipa --version . You have configured your Ansible control node to meet the following requirements: You are using Ansible version 2.13 or later. You have installed the ansible-freeipa package. The example assumes that in the ~/ MyPlaybooks / directory, you have created an Ansible inventory file with the fully-qualified domain name (FQDN) of the IdM server. The example assumes that the secret.yml Ansible vault stores your ipaadmin_password . The target node, that is the node on which the ansible-freeipa module is executed, is part of the IdM domain as an IdM client, server or replica. Procedure Navigate to your ~/ MyPlaybooks / directory: Create an add-useridoverride-to-group.yml playbook with the following content: In the example: Secret123 is the IdM admin password. admins is the name of the IdM POSIX group to which you are adding the [email protected] ID override. Members of this group have full administrator privileges. [email protected] is the user ID override of an AD administrator. The user is stored in the AD domain with which a trust has been established. Save the file. Run the Ansible playbook. Specify the playbook file, the file storing the password protecting the secret.yml file, and the inventory file: Additional resources ID overrides for AD users /usr/share/doc/ansible-freeipa/README-group.md /usr/share/doc/ansible-freeipa/playbooks/user Using ID views in Active Directory environments 52.4. Verifying that an AD user can perform correct commands in the IdM CLI This procedure checks that an Active Directory (AD) user can log in to the Identity Management (IdM) command-line interface (CLI) and run commands appropriate for their role. Destroy the current Kerberos ticket of the IdM administrator: Note The destruction of the Kerberos ticket is required because the GSSAPI implementation in MIT Kerberos chooses credentials from the realm of the target service by preference, which in this case is the IdM realm. This means that if a credentials cache collection, namely the KCM: , KEYRING: , or DIR: type of credentials cache is in use, a previously obtained admin or any other IdM principal's credentials will be used to access the IdM API instead of the AD user's credentials. Obtain the Kerberos credentials of the AD user for whom an ID override has been created: Test that the ID override of the AD user enjoys the same privileges stemming from membership in the IdM group as any IdM user in that group. If the ID override of the AD user has been added to the admins group, the AD user can, for example, create groups in IdM: 52.5. Using Ansible to enable an AD user to administer IdM You can use the ansible-freeipa idoverrideuser and group modules to create a user ID override for an Active Directory (AD) user from a trusted AD domain and give that user rights identical to those of an IdM user. The procedure uses the example of the Default Trust View ID view to which the [email protected] ID override is added in the first playbook task. In the second playbook task, the [email protected] ID override is added to the IdM admins group as a member.
As a result, an AD administrator can administer IdM without having two different accounts and passwords. Prerequisites You have configured your Ansible control node to meet the following requirements: You are using Ansible version 2.13 or later. You have installed the ansible-freeipa package on the Ansible controller. You are using RHEL 8.10 or later. The example assumes that in the ~/ MyPlaybooks / directory, you have created an Ansible inventory file with the fully-qualified domain name (FQDN) of the IdM server. The example assumes that the secret.yml Ansible vault stores your ipaadmin_password . The AD forest is in trust with IdM. In the example, the name of the AD domain is addomain.com and the fully-qualified domain name (FQDN) of the AD administrator is [email protected] . The ipaserver host in the inventory file is configured as a trust controller or a trust agent. The target node, that is the node on which the ansible-freeipa module is executed, is part of the IdM domain as an IdM client, server or replica. Procedure On your Ansible control node, create an enable-ad-admin-to-administer-idm.yml playbook with a task to add the [email protected] user override to the Default Trust View: Use another playbook task in the same playbook to add the AD administrator user ID override to the admins group: Save the file. Run the Ansible playbook. Specify the playbook file, the file storing the password protecting the secret.yml file, and the inventory file: Verification Log in to the IdM client as the AD Administrator: Verify that you have obtained a valid ticket-granting ticket (TGT): Verify your admin privileges in IdM: Additional resources The idoverrideuser and ipagroup ansible-freeipa upstream documentation Enabling AD users to administer IdM
|
[
"yum module enable idm:DL1 yum distro-sync",
"yum module install idm:DL1/adtrust",
"kinit admin ipa idoverrideuser-add 'default trust view' [email protected]",
"ipa group-add-member admins [email protected]",
"ipa role-add-member 'User Administrator' [email protected]",
"cd ~/ MyPlaybooks /",
"--- - name: Playbook to ensure presence of users in a group hosts: ipaserver - name: Ensure the [email protected] user ID override is a member of the admins group: ipagroup: ipaadmin_password: \"{{ ipaadmin_password }}\" name: admins idoverrideuser: - [email protected]",
"ansible-playbook --vault-password-file=password_file -v -i inventory add-useridoverride-to-group.yml",
"kdestroy -A",
"kinit [email protected] Password for [email protected]:",
"ipa group-add some-new-group ---------------------------- Added group \"some-new-group\" ---------------------------- Group name: some-new-group GID: 1997000011",
"--- - name: Enable AD administrator to act as a FreeIPA admin hosts: ipaserver become: false gather_facts: false tasks: - name: Ensure idoverride for [email protected] in 'default trust view' ipaidoverrideuser: ipaadmin_password: \"{{ ipaadmin_password }}\" idview: \"Default Trust View\" anchor: [email protected]",
"- name: Add the AD administrator as a member of admins ipagroup: ipaadmin_password: \"{{ ipaadmin_password }}\" name: admins idoverrideuser: - [email protected]",
"ansible-playbook --vault-password-file=password_file -v -i inventory enable-ad-admin-to-administer-idm.yml",
"ssh [email protected]@client.idm.example.com",
"klist Ticket cache: KCM:325600500:99540 Default principal: [email protected] Valid starting Expires Service principal 02/04/2024 11:54:16 02/04/2024 21:54:16 krbtgt/[email protected] renew until 02/05/2024 11:54:16",
"ipa user-add testuser --first=test --last=user ------------------------ Added user \"tuser\" ------------------------ User login: tuser First name: test Last name: user Full name: test user [...]"
] |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/managing_idm_users_groups_hosts_and_access_control_rules/enabling-ad-users-to-administer-idm_managing-users-groups-hosts
|
Appendix C. Virtualization Restrictions
|
Appendix C. Virtualization Restrictions This appendix covers additional support and product restrictions of the virtualization packages in Red Hat Enterprise Linux 7. C.1. System Restrictions Host Systems Red Hat Enterprise Linux with KVM is supported only on the following host architectures: AMD64 and Intel 64 IBM Z IBM POWER8 IBM POWER9 This document primarily describes AMD64 and Intel 64 features and functionalities, but the other supported architectures work very similarly. For details, see Appendix B, Using KVM Virtualization on Multiple Architectures . Guest Systems On Red Hat Enterprise Linux 7, Microsoft Windows guest virtual machines are only supported under specific subscription programs such as Advanced Mission Critical (AMC). If you are unsure whether your subscription model includes support for Windows guests, contact customer support. For more information about Windows guest virtual machines on Red Hat Enterprise Linux 7, see Windows Guest Virtual Machines on Red Hat Enterprise Linux 7 Knowledgebase article .
| null |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/virtualization_deployment_and_administration_guide/appe-Virtualization_restrictions
|
probe::ioscheduler.elv_next_request
|
probe::ioscheduler.elv_next_request Name probe::ioscheduler.elv_next_request - Fires when a request is retrieved from the request queue Synopsis Values name Name of the probe point elevator_name The type of I/O elevator currently enabled
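As a quick illustration (not part of the original reference entry), the probe can be exercised with a SystemTap one-liner that prints the two values listed above; this assumes SystemTap is installed and the command is run with sufficient privileges:
stap -e 'probe ioscheduler.elv_next_request { printf("%s: elevator=%s\n", name, elevator_name) }'
Each line of output shows the probe point name together with the I/O elevator in use when a request is pulled from the queue.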
|
[
"ioscheduler.elv_next_request"
] |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/systemtap_tapset_reference/api-ioscheduler-elv-next-request
|
Chapter 5. Message Addresses and Queues
|
Chapter 5. Message Addresses and Queues AMQ 7 introduces a new, flexible addressing model that enables you to define standard messaging patterns that work for any messaging protocol. Therefore, the process for configuring queues and topic-like behavior has changed significantly. 5.1. Addressing Changes AMQ 6 implemented JMS concepts such as queues, topics, and durable subscriptions as directly-configurable destinations. Example: Default Queue and Topic Configuration in AMQ 6 <destinations> <queue physicalName="my-queue" /> <topic physicalName="my-topic" /> </destinations> AMQ Broker 7 uses addresses, routing types, and queues to achieve queue and topic-like behavior. An address represents a messaging endpoint. Queues are associated with addresses. A routing type defines how messages are distributed to the queues associated with an address. There are two routing types: Anycast distributes messages to a single queue within the matching address, and Multicast distributes messages to every queue associated with the address. By associating queues with addresses and routing types, you can implement a variety of messaging patterns, such as point-to-point (queues) and publish-subscribe (topic-like). Example: Point-to-Point Address Configuration in AMQ Broker 7 In this example, when the broker receives a message on address.foo , the message will be routed to my-queue . If multiple anycast queues are associated with the address, the messages are distributed evenly across the queues. <address name="address.foo"> <anycast> <queue name="my-queue"/> </anycast> </address> Example: Publish-Subscribe Address Configuration in AMQ Broker 7 In this example, when the broker receives a message on topic.foo , a copy of the message will be routed to both my-topic-1 and my-topic-2 . <address name="topic.foo"> <multicast> <queue name="my-topic-1"/> <queue name="my-topic-2"/> </multicast> </address> Related Information For full details about the addressing model in AMQ Broker 7, see Configuring addresses and queues in Configuring AMQ Broker . 5.2. How Addressing is Configured You use the BROKER_INSTANCE_DIR /etc/broker.xml configuration file to configure addresses and queues for your broker instance. The broker.xml configuration file contains the following default addressing configuration in the <addresses> section. There are default entries for the Dead Letter Queue ( DLQ ) and Expiry Queue ( ExpiryQueue ): <addresses> <address name="DLQ"> <anycast> <queue name="DLQ" /> </anycast> </address> <address name="ExpiryQueue"> <anycast> <queue name="ExpiryQueue" /> </anycast> </address> </addresses> You can configure addressing for your broker instance by using any of the following methods: Method Description Manually configure an address You define the routing types and queues that the broker should use when receiving a message on the address. You can configure an address in the following ways: Configuring basic point-to-point messaging in Configuring AMQ Broker Configuring point-to-point messaging for multiple queues in Configuring AMQ Broker Configuring addresses for publish-subscribe messaging in Configuring AMQ Broker Configuring an address for both point-to-point and publish-subscribe messaging in Configuring AMQ Broker Configuring subscription queues in Configuring AMQ Broker Configure the broker to create addresses automatically You specify an address prefix and routing type for which addresses you want to be created automatically. 
When the broker receives a message on an address that matches the prefix, the address and routing type will be created automatically. You can also specify that the address be deleted automatically when all of its queues have been deleted, and that its queues be deleted automatically when they have no consumers or messages. For more information, see Creating and deleting addresses and queues automatically in Configuring AMQ Broker .
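For illustration only, an automatic-creation entry in BROKER_INSTANCE_DIR /etc/broker.xml might look like the following sketch; the orders.# match pattern and the specific options shown are assumptions for the example, not defaults taken from this guide:
<address-settings>
   <address-setting match="orders.#">
      <auto-create-addresses>true</auto-create-addresses>
      <auto-create-queues>true</auto-create-queues>
      <auto-delete-addresses>true</auto-delete-addresses>
      <auto-delete-queues>true</auto-delete-queues>
      <default-address-routing-type>ANYCAST</default-address-routing-type>
   </address-setting>
</address-settings>
With such an entry, a message sent to any address beginning with orders. causes the broker to create the address and a matching queue with the anycast routing type, and to delete them again once the queues have no consumers or messages.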
|
[
"<destinations> <queue physicalName=\"my-queue\" /> <topic physicalName=\"my-topic\" /> </destinations>",
"<address name=\"address.foo\"> <anycast> <queue name=\"my-queue\"/> </anycast> </address>",
"<address name=\"topic.foo\"> <multicast> <queue name=\"my-topic-1\"/> <queue name=\"my-topic-2\"/> </multicast> </address>",
"<addresses> <address name=\"DLQ\"> <anycast> <queue name=\"DLQ\" /> </anycast> </address> <address name=\"ExpiryQueue\"> <anycast> <queue name=\"ExpiryQueue\" /> </anycast> </address> </addresses>"
] |
https://docs.redhat.com/en/documentation/red_hat_amq/2021.q2/html/migrating_to_red_hat_amq_7/message_addresses_queues
|
8.9. Accelerated RFS
|
8.9. Accelerated RFS Accelerated RFS boosts the speed of RFS by adding hardware assistance. Like RFS, packets are forwarded based on the location of the application consuming the packet. Unlike traditional RFS, however, packets are sent directly to a CPU that is local to the thread consuming the data: either the CPU that is executing the application, or a CPU local to that CPU in the cache hierarchy. Accelerated RFS is only available if the following conditions are met: Accelerated RFS must be supported by the network interface card. Accelerated RFS is supported by cards that export the ndo_rx_flow_steer() netdevice function. ntuple filtering must be enabled. Once these conditions are met, CPU to queue mapping is deduced automatically based on traditional RFS configuration. That is, CPU to queue mapping is deduced based on the IRQ affinities configured by the driver for each receive queue. Refer to Section 8.8, "Receive Flow Steering (RFS)" for details on configuring traditional RFS. Red Hat recommends using accelerated RFS wherever using RFS is appropriate and the network interface card supports hardware acceleration.
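A minimal sketch of the checks involved, assuming an interface named eth0 (substitute the device you are tuning) and the ethtool utility:
ethtool -k eth0 | grep ntuple    (shows whether ntuple filtering is supported and currently enabled)
ethtool -K eth0 ntuple on        (enables ntuple filtering; this fails if the driver does not support it)
Once ntuple filtering is on and the card exports ndo_rx_flow_steer(), the CPU-to-queue mapping is derived from the existing RFS and IRQ affinity configuration as described above.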
| null |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/performance_tuning_guide/network-acc-rfs
|
Install
|
Install Red Hat Advanced Cluster Management for Kubernetes 2.11 Installation
|
[
"apiVersion: cluster.open-cluster-management.io/v1beta1 kind: BackupSchedule metadata: name:schedule-acm namespace:open-cluster-management-backup spec: veleroSchedule:0 */1 * * * veleroTtl:120h",
"-n openshift-console get route",
"openshift-console console console-openshift-console.apps.new-coral.purple-chesterfield.com console https reencrypt/Redirect None",
"apiVersion: operator.open-cluster-management.io/v1 kind: MultiClusterHub metadata: name: multiclusterhub namespace: <namespace>",
"create namespace <namespace>",
"project <namespace>",
"apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: <default> namespace: <namespace> spec: targetNamespaces: - <namespace>",
"apply -f <path-to-file>/<operator-group>.yaml",
"apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: acm-operator-subscription spec: sourceNamespace: openshift-marketplace source: redhat-operators channel: release-2.x installPlanApproval: Automatic name: advanced-cluster-management",
"apply -f <path-to-file>/<subscription>.yaml",
"apiVersion: operator.open-cluster-management.io/v1 kind: MultiClusterHub metadata: name: multiclusterhub namespace: <namespace> spec: {}",
"apply -f <path-to-file>/<custom-resource>.yaml",
"error: unable to recognize \"./mch.yaml\": no matches for kind \"MultiClusterHub\" in version \"operator.open-cluster-management.io/v1\"",
"get mch -o=jsonpath='{.items[0].status.phase}'",
"metadata: labels: node-role.kubernetes.io/infra: \"\" spec: taints: - effect: NoSchedule key: node-role.kubernetes.io/infra",
"spec: config: nodeSelector: node-role.kubernetes.io/infra: \"\" tolerations: - key: node-role.kubernetes.io/infra effect: NoSchedule operator: Exists",
"spec: nodeSelector: node-role.kubernetes.io/infra: \"\"",
"openshift-console console console-openshift-console.apps.new-coral.purple-chesterfield.com console https reencrypt/Redirect None",
"opm index prune -f registry.redhat.io/redhat/redhat-operator-index:v4.x -p advanced-cluster-management,multicluster-engine -t myregistry.example.com:5000/mirror/my-operator-index:v4.x",
"kind: ImageSetConfiguration apiVersion: mirror.openshift.io/v1alpha2 storageConfig: registry: imageURL: myregistry.example.com:5000/mirror/oc-mirror-metadata mirror: platform: channels: - name: stable-4.x type: ocp operators: - catalog: registry.redhat.io/redhat/redhat-operator-index:v4.11 packages: - name: advanced-cluster-management - name: multicluster-engine additionalImages: [] helm: {}",
"apiVersion: operators.coreos.com/v1alpha1 kind: CatalogSource metadata: name: my-mirror-catalog-source namespace: openshift-marketplace spec: image: myregistry.example.com:5000/mirror/my-operator-index:v4.x sourceType: grpc",
"-n openshift-marketplace get packagemanifests",
"replace -f ./<path>/imageContentSourcePolicy.yaml",
"apiVersion: operator.openshift.io/v1alpha1 kind: ImageContentSourcePolicy metadata: labels: operators.openshift.org/catalog: \"true\" name: operator-0 spec: repositoryDigestMirrors: - mirrors: - myregistry.example.com:5000/rhacm2 source: registry.redhat.io/rhacm2 - mirrors: - myregistry.example.com:5000/multicluster-engine source: registry.redhat.io/multicluster-engine - mirrors: - myregistry.example.com:5000/openshift4 source: registry.redhat.io/openshift4 - mirrors: - myregistry.example.com:5000/redhat source: registry.redhat.io/redhat",
"apiVersion: operator.open-cluster-management.io/v1 kind: MultiClusterHub metadata: namespace: open-cluster-management name: hub annotations: installer.open-cluster-management.io/mce-subscription-spec: '{\"source\": \"my-mirror-catalog-source\"}' spec: {}",
"apiVersion: operator.open-cluster-management.io/v1 kind: MultiClusterHub metadata: name: multiclusterhub namespace: <namespace> 1 spec: overrides: components: - name: <name> 2 enabled: true",
"patch MultiClusterHub multiclusterhub -n <namespace> --type=json -p='[{\"op\": \"add\", \"path\": \"/spec/overrides/components/-\",\"value\":{\"name\":\"<name>\",\"enabled\":true}}]'",
"create secret generic <secret> -n <namespace> --from-file=.dockerconfigjson=<path-to-pull-secret> --type=kubernetes.io/dockerconfigjson",
"apiVersion: operator.open-cluster-management.io/v1 kind: MultiClusterHub metadata: name: multiclusterhub namespace: <namespace> spec: imagePullSecret: <secret>",
"apiVersion: operator.open-cluster-management.io/v1 kind: MultiClusterHub metadata: name: multiclusterhub namespace: <namespace> spec: availabilityConfig: \"Basic\"",
"apiVersion: operator.open-cluster-management.io/v1 kind: MultiClusterHub metadata: name: multiclusterhub namespace: <namespace> spec: nodeSelector: node-role.kubernetes.io/infra: \"\"",
"apiVersion: operator.open-cluster-management.io/v1 kind: MultiClusterHub metadata: name: multiclusterhub namespace: <namespace> spec: tolerations: - key: node-role.kubernetes.io/infra effect: NoSchedule operator: Exists",
"apiVersion: operator.open-cluster-management.io/v1 kind: MultiClusterHub metadata: name: multiclusterhub namespace: <namespace> spec: disableHubSelfManagement: true",
"apiVersion: operator.open-cluster-management.io/v1 kind: MultiClusterHub metadata: name: multiclusterhub namespace: <namespace> spec: disableUpdateClusterImageSets: true",
"apiVersion: operator.open-cluster-management.io/v1 kind: MultiClusterHub metadata: name: multiclusterhub namespace: <namespace> spec: customCAConfigmap: <configmap>",
"apiVersion: operator.open-cluster-management.io/v1 kind: MultiClusterHub metadata: name: multiclusterhub namespace: <namespace> spec: ingress: sslCiphers: - \"ECDHE-ECDSA-AES128-GCM-SHA256\" - \"ECDHE-RSA-AES128-GCM-SHA256\"",
"apiVersion: operator.open-cluster-management.io/v1 kind: MultiClusterHub metadata: name: multiclusterhub namespace: <namespace> spec: overrides: components: - name: cluster-backup enabled: true",
"patch MultiClusterHub multiclusterhub -n <namespace> --type=json -p='[{\"op\": \"add\", \"path\": \"/spec/overrides/components/-\",\"value\":{\"name\":\"cluster-backup\",\"enabled\":true}}]'",
"get mch",
"metadata: annotations: installer.open-cluster-management.io/mce-subscription-spec: '{\"source\": \"<my-mirror-catalog-source>\"}'",
"Cannot delete MultiClusterHub resource because DiscoveryConfig resource(s) exist",
"delete discoveryconfigs --all --all-namespaces",
"Cannot delete MultiClusterHub resource because AgentServiceConfig resource(s) exist",
"delete agentserviceconfig --all",
"Cannot delete MultiClusterHub resource because ManagedCluster resource(s) exist",
"Cannot delete MultiClusterHub resource because MultiClusterObservability resource(s) exist",
"delete mco observability",
"project <namespace>",
"delete multiclusterhub --all",
"get mch -o yaml",
"#!/bin/bash ACM_NAMESPACE=<namespace> delete mch --all -n USDACM_NAMESPACE delete apiservice v1.admission.cluster.open-cluster-management.io v1.admission.work.open-cluster-management.io delete clusterimageset --all delete clusterrole multiclusterengines.multicluster.openshift.io-v1-admin multiclusterengines.multicluster.openshift.io-v1-crdview multiclusterengines.multicluster.openshift.io-v1-edit multiclusterengines.multicluster.openshift.io-v1-view open-cluster-management:addons:application-manager open-cluster-management:admin-aggregate open-cluster-management:cert-policy-controller-hub open-cluster-management:cluster-manager-admin-aggregate open-cluster-management:config-policy-controller-hub open-cluster-management:edit-aggregate open-cluster-management:policy-framework-hub open-cluster-management:view-aggregate delete crd klusterletaddonconfigs.agent.open-cluster-management.io placementbindings.policy.open-cluster-management.io policies.policy.open-cluster-management.io userpreferences.console.open-cluster-management.io discoveredclusters.discovery.open-cluster-management.io discoveryconfigs.discovery.open-cluster-management.io delete mutatingwebhookconfiguration ocm-mutating-webhook managedclustermutators.admission.cluster.open-cluster-management.io multicluster-observability-operator delete validatingwebhookconfiguration channels.apps.open.cluster.management.webhook.validator application-webhook-validator multiclusterhub-operator-validating-webhook ocm-validating-webhook multicluster-observability-operator multiclusterengines.multicluster.openshift.io",
"get csv NAME DISPLAY VERSION REPLACES PHASE advanced-cluster-management.v2.x.0 Advanced Cluster Management for Kubernetes 2.x.0 Succeeded delete clusterserviceversion advanced-cluster-management.v2.x.0 get sub NAME PACKAGE SOURCE CHANNEL acm-operator-subscription advanced-cluster-management acm-custom-registry release-2.x delete sub acm-operator-subscription"
] |
https://docs.redhat.com/en/documentation/red_hat_advanced_cluster_management_for_kubernetes/2.11/html-single/install/index
|
Chapter 17. Upgrading to OpenShift Data Foundation
|
Chapter 17. Upgrading to OpenShift Data Foundation 17.1. Overview of the OpenShift Data Foundation update process This chapter helps you to upgrade between the minor releases and z-streams for all Red Hat OpenShift Data Foundation deployments (Internal, Internal-Attached and External). The upgrade process remains the same for all deployments. You can upgrade OpenShift Data Foundation and its components, either between minor releases like 4.14 and 4.15, or between z-stream updates like 4.15.0 and 4.15.1 by enabling automatic updates (if not done so during operator installation) or performing manual updates. When a new z-stream release becomes available, the upgrade process triggers automatically if the update strategy was set to Automatic. You also need to upgrade the different parts of Red Hat OpenShift Data Foundation in the following order for both internal and external mode deployments: Update OpenShift Container Platform according to the Updating clusters documentation for OpenShift Container Platform. Update Red Hat OpenShift Data Foundation. To prepare a disconnected environment for updates , see Operators guide to using Operator Lifecycle Manager on restricted networks to be able to update OpenShift Data Foundation as well as Local Storage Operator when in use. For updating between minor releases , see Updating Red Hat OpenShift Data Foundation 4.14 to 4.15 . For updating between z-stream releases , see Updating Red Hat OpenShift Data Foundation 4.15.x to 4.15.y . For updating external mode deployments , you must also perform the steps from section Updating the Red Hat OpenShift Data Foundation external secret . If you use local storage, then update the Local Storage operator . See Checking for Local Storage Operator deployments if you are unsure. Important If you have an existing setup of OpenShift Data Foundation 4.12 with disaster recovery (DR) enabled, ensure to update all your clusters in the environment at the same time and avoid updating a single cluster. This is to avoid any potential issues and maintain best compatibility. It is also important to maintain consistency across all OpenShift Data Foundation DR instances. Update considerations Review the following important considerations before you begin. The Red Hat OpenShift Container Platform version is the same as Red Hat OpenShift Data Foundation. See the Interoperability Matrix for more information about supported combinations of OpenShift Container Platform and Red Hat OpenShift Data Foundation. To know whether your cluster was deployed in internal or external mode, refer to the knowledgebase article on How to determine if ODF cluster has storage in internal or external mode . The Local Storage Operator is fully supported only when the Local Storage Operator version matches the Red Hat OpenShift Container Platform version. The flexible scaling feature is available only in new deployments of OpenShift Data Foundation. For more information, see Scaling storage guide . 17.2. Updating Red Hat OpenShift Data Foundation 4.14 to 4.15 This chapter helps you to upgrade between the minor releases for all Red Hat OpenShift Data Foundation deployments (Internal, Internal-Attached and External). The upgrade process remains the same for all deployments. The Only difference is what gets upgraded and what's not. For Internal and Internal-attached deployments, upgrading OpenShift Data Foundation upgrades all OpenShift Data Foundation services including the backend Red Hat Ceph Storage (RHCS) cluster. 
For External mode deployments, upgrading OpenShift Data Foundation only upgrades the OpenShift Data Foundation service while the backend Ceph storage cluster remains untouched and needs to be upgraded separately. We recommend upgrading RHCS along with OpenShift Data Foundation in order to get new feature support, security fixes, and other bug fixes. Since we do not have a strong dependency on RHCS upgrade, you can upgrade the OpenShift Data Foundation operator first followed by RHCS upgrade or vice-versa. See solution to know more about RHCS releases. Important Upgrading to 4.15 directly from any version older than 4.14 is unsupported. Prerequisites Ensure that the OpenShift Container Platform cluster has been updated to the latest stable release of version 4.15.X, see Updating Clusters . Ensure that the OpenShift Data Foundation cluster is healthy and data is resilient. Navigate to Storage Data Foundation Storage Systems tab and then click on the storage system name. Check for the green tick on the status card of both Overview - Block and File and Object tabs. Green tick indicates that the storage cluster , object service and data resiliency are all healthy. Ensure that all OpenShift Data Foundation Pods, including the operator pods, are in Running state in the openshift-storage namespace. To view the state of the pods, on the OpenShift Web Console, click Workloads Pods . Select openshift-storage from the Project drop-down list. Note If the Show default projects option is disabled, use the toggle button to list all the default projects. Ensure that you have sufficient time to complete the OpenShift Data Foundation update process, as the update time varies depending on the number of OSDs that run in the cluster. Procedure On the OpenShift Web Console, navigate to Operators Installed Operators . Select openshift-storage project. Click the OpenShift Data Foundation operator name. Click the Subscription tab and click the link under Update Channel . Select the stable-4.15 update channel and Save it. If the Upgrade status shows requires approval , click on requires approval . On the Install Plan Details page, click Preview Install Plan . Review the install plan and click Approve . Wait for the Status to change from Unknown to Created . Navigate to Operators Installed Operators . Select the openshift-storage project. Wait for the OpenShift Data Foundation Operator Status to change to Up to date . After the operator is successfully upgraded, a pop-up with a message, Web console update is available appears on the user interface. Click Refresh web console from this pop-up for the console changes to reflect. Note After upgrading, if your cluster has five or more nodes, racks, or rooms, and when there are five or more number of failure domains present in the deployment, you can configure Ceph monitor counts based on the number of racks or zones. An alert is displayed in the notification panel or Alert Center of the OpenShift Web Console to indicate the option to increase the number of Ceph monitor counts. You can use the Configure option in the alert to configure the Ceph monitor counts. For more information, see Resolving low Ceph monitor count alert . Verification steps Check the Version below the OpenShift Data Foundation name and check the operator status. Navigate to Operators Installed Operators and select the openshift-storage project. When the upgrade completes, the version updates to a new version number for OpenShift Data Foundation and status changes to Succeeded with a green tick. 
Verify that the OpenShift Data Foundation cluster is healthy and data is resilient. Navigate to Storage Data Foundation Storage Systems tab and then click on the storage system name. Check for the green tick on the status card of Overview- Block and File and Object tabs. Green tick indicates that the storage cluster, object service and data resiliency is healthy. If verification steps fail, contact Red Hat Support . Important After updating external mode deployments, you must also update the external secret. For instructions, see Updating the OpenShift Data Foundation external secret . Additional Resources If you face any issues while updating OpenShift Data Foundation, see the Commonly required logs for troubleshooting section in the Troubleshooting guide . 17.3. Updating Red Hat OpenShift Data Foundation 4.15.x to 4.15.y This chapter helps you to upgrade between the z-stream release for all Red Hat OpenShift Data Foundation deployments (Internal, Internal-Attached and External). The upgrade process remains the same for all deployments. The Only difference is what gets upgraded and what's not. For Internal and Internal-attached deployments, upgrading OpenShift Data Foundation upgrades all OpenShift Data Foundation services including the backend Red Hat Ceph Storage (RHCS) cluster. For External mode deployments, upgrading OpenShift Data Foundation only upgrades the OpenShift Data Foundation service while the backend Ceph storage cluster remains untouched and needs to be upgraded separately. Hence, we recommend upgrading RHCS along with OpenShift Data Foundation in order to get new feature support, security fixes, and other bug fixes. Since we do not have a strong dependency on RHCS upgrade, you can upgrade the OpenShift Data Foundation operator first followed by RHCS upgrade or vice-versa. See solution to know more about RHCS releases. When a new z-stream release becomes available, the upgrade process triggers automatically if the update strategy was set to Automatic . If the update strategy is set to Manual then use the following procedure. Prerequisites Ensure that the OpenShift Container Platform cluster has been updated to the latest stable release of version 4.15.X, see Updating Clusters . Ensure that the OpenShift Data Foundation cluster is healthy and data is resilient. Navigate to Storage Data Foundation Storage Systems tab and then click on the storage system name. Check for the green tick on the status card of Overview - Block and File and Object tabs. Green tick indicates that the storage cluster, object service and data resiliency is healthy. Ensure that all OpenShift Data Foundation Pods, including the operator pods, are in Running state in the openshift-storage namespace. To view the state of the pods, on the OpenShift Web Console, click Workloads Pods . Select openshift-storage from the Project drop-down list. Note If the Show default projects option is disabled, use the toggle button to list all the default projects. Ensure that you have sufficient time to complete the OpenShift Data Foundation update process, as the update time varies depending on the number of OSDs that run in the cluster. Procedure On the OpenShift Web Console, navigate to Operators Installed Operators . Select openshift-storage project. Note If the Show default projects option is disabled, use the toggle button to list all the default projects. Click the OpenShift Data Foundation operator name. Click the Subscription tab. If the Upgrade Status shows require approval , click on requires approval link. 
On the InstallPlan Details page, click Preview Install Plan . Review the install plan and click Approve . Wait for the Status to change from Unknown to Created . After the operator is successfully upgraded, a pop-up with a message, Web console update is available appears on the user interface. Click Refresh web console from this pop-up for the console changes to reflect. Verification steps Check the Version below the OpenShift Data Foundation name and check the operator status. Navigate to Operators Installed Operators and select the openshift-storage project. When the upgrade completes, the version updates to a new version number for OpenShift Data Foundation and status changes to Succeeded with a green tick. Verify that the OpenShift Data Foundation cluster is healthy and data is resilient. Navigate to Storage Data Foundation Storage Systems tab and then click on the storage system name. Check for the green tick on the status card of Overview - Block and File and Object tabs. Green tick indicates that the storage cluster, object service and data resiliency is healthy If verification steps fail, contact Red Hat Support . 17.4. Changing the update approval strategy To ensure that the storage system gets updated automatically when a new update is available in the same channel, we recommend keeping the update approval strategy to Automatic . Changing the update approval strategy to Manual will need manual approval for each upgrade. Procedure Navigate to Operators Installed Operators . Select openshift-storage from the Project drop-down list. Note If the Show default projects option is disabled, use the toggle button to list all the default projects. Click on OpenShift Data Foundation operator name Go to the Subscription tab. Click on the pencil icon for changing the Update approval . Select the update approval strategy and click Save . Verification steps Verify that the Update approval shows the newly selected approval strategy below it.
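For a quick command-line cross-check of the console-based verification steps above, the following commands can be used; they assume the default openshift-storage namespace and a logged-in oc client:
oc get csv -n openshift-storage     (the OpenShift Data Foundation ClusterServiceVersion should report the new version with PHASE Succeeded)
oc get pods -n openshift-storage    (all pods, including the operator pods, should be in the Running state)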
| null |
https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.15/html/deploying_and_managing_openshift_data_foundation_using_red_hat_openstack_platform/upgrading-your-cluster_osp
|
Chapter 5. Managing Red Hat High Availability Add-On With Conga
|
Chapter 5. Managing Red Hat High Availability Add-On With Conga This chapter describes various administrative tasks for managing Red Hat High Availability Add-On and consists of the following sections: Section 5.1, "Adding an Existing Cluster to the luci Interface" Section 5.2, "Removing a Cluster from the luci Interface" Section 5.3, "Managing Cluster Nodes" Section 5.4, "Starting, Stopping, Restarting, and Deleting Clusters" Section 5.5, "Managing High-Availability Services" Section 5.6, "Backing Up and Restoring the luci Configuration" 5.1. Adding an Existing Cluster to the luci Interface If you have previously created a High Availability Add-On cluster you can easily add the cluster to the luci interface so that you can manage the cluster with Conga . To add an existing cluster to the luci interface, follow these steps: Click Manage Clusters from the menu on the left side of the luci Homebase page. The Clusters screen appears. Click Add . The Add Existing Cluster screen appears. Enter the node host name for any of the nodes in the existing cluster. After you have entered the node name, the node name is reused as the ricci host name; you can override this if you are communicating with ricci on an address that is different from the address to which the cluster node name resolves. As of Red Hat Enterprise Linux 6.9, after you have entered the node name and ricci host name, the fingerprint of the certificate of the ricci host is displayed for confirmation. If it is legitimate, enter the ricci password Important It is strongly advised that you verify the certificate fingerprint of the ricci server you are going to authenticate against. Providing an unverified entity on the network with the ricci password may constitute a confidentiality breach, and communication with an unverified entity may cause an integrity breach. Since each node in the cluster contains all of the configuration information for the cluster, this should provide enough information to add the cluster to the luci interface. Click Connect . The Add Existing Cluster screen then displays the cluster name and the remaining nodes in the cluster. Enter the individual ricci passwords for each node in the cluster, or enter one password and select Use same password for all nodes . Click Add Cluster . The previously-configured cluster now displays on the Manage Clusters screen.
| null |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/cluster_administration/ch-mgmt-conga-ca
|
Disconnected environments
|
Disconnected environments OpenShift Container Platform 4.18 Managing OpenShift Container Platform clusters in a disconnected environment Red Hat OpenShift Documentation Team
| null |
https://docs.redhat.com/en/documentation/openshift_container_platform/4.18/html/disconnected_environments/index
|
8.127. ntp
|
8.127. ntp 8.127.1. RHBA-2013:1593 - ntp bug fix and enhancement update Updated ntp packages that fix several bugs and add various enhancements are now available for Red Hat Enterprise Linux 6. The Network Time Protocol (NTP) is used to synchronize a computer's time with another reference time source. The ntp packages include the ntpd daemon and utilities used to query and configure ntpd. Note The ntp packages have been upgraded to upstream version 4.2.6p5, which provides a number of bug fixes and enhancements over the version. (BZ# 654004 ) Bug Fixes BZ# 673198 The ntpdate service did not wait for the NetworkManager service to configure the network before attempting to obtain the date and time update from the Internet. Consequently, ntpdate failed to set the system clock if the network was not configured. With this update, ntpdate attempts to obtain updates from the Internet in several increasing intervals if the initial attempt fails. The system clock is now set even when NetworkManager takes longer period of time to configure the network. BZ# 749530 The ntp-keygen utility always used the DES-CBC (Data Encryption Standard-Cipher Block Chaining) encryption algorithm to encrypt private NTP keys. However, DES-CBC is not supported in FIPS mode. Therefore, ntp-keygen generated empty private keys when it was used on systems with FIPS mode enabled. To solve this problem, a new "-C" option has been added to ntp-keygen that allows for selection of an encryption algorithm for private key files. Private NTP keys are now generated as expected on systems with FIPS mode enabled. BZ# 830821 The ntpstat utility did not include the root delay in the "time correct to within" value so the real maximum errors could have been larger than values reported by ntpstat. The ntpstat utility has been fixed to include the root delay as expected and the "time correct to within" values displayed by the utility are now correct. BZ# 862983 When adding NTP servers that were provided by DHCP (using dhclient-script) to the ntp.conf file, the ntp script did not verify whether ntp.conf already contained these servers. This could result in duplicate NTP server entries in the configuration file. This update modifies the ntp script so that duplicate NTP server entries can no longer occur in the ntp.conf file. BZ# 973807 When ntpd was configured as a broadcast client, it did not update the broadcast socket upon change of the network configuration. Consequently, the broadcast client stopped working after the network service had been restarted. This update modifies ntpd to update the broadcast client socket after network interface update so the client continues working after the network service restart as expected. Enhancements BZ# 623616 , BZ# 667524 NTP now specifies four off-site NTP servers with the iburst configuration option in the default ntp.conf file, which results in faster initial time synchronization and improved reliability of the NTP service. BZ# 641800 Support for authentication using SHA1 symetric keys has been added to NTP. SHA1 keys can be generated by the ntp-keygen utility and configured in the /etc/ntp/keys file on the client and server machines. BZ# 835155 Support for signed responses has been added to NTP. This is required when using Samba 4 as an Active Directory (AD) Domain Controller (DC). BZ# 918275 A new miscellaneous ntpd option, "interface", has been added. This option allows control of which network addresses ntpd opens and whether to drop incoming packets without processing or not. 
For more information on use of the "interface" option, refer to the ntp_misc(5) man page. Users of ntp are advised to upgrade to these updated packages, which fix these bugs and add these enhancements.
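As an illustration of the SHA1 symmetric key enhancement described above, a client-side configuration might look like the following sketch; the key ID, the hexadecimal key value, and the server name are placeholders, not values from this advisory:
/etc/ntp/keys:
1 SHA1 4f90426fd9e63be3e35e88cdf05b428ac3167bf3
/etc/ntp.conf:
keys /etc/ntp/keys
trustedkey 1
server ntp.example.com key 1
The same key ID and key value must be configured in the keys file on the server for authentication to succeed.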
| null |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.5_technical_notes/ntp
|
Chapter 54. Authentication and Interoperability
|
Chapter 54. Authentication and Interoperability sudo unexpectedly denies access when performing group lookups This problem occurs on systems that meet all of these conditions: A group name is configured in a sudoers rule available through multiple Name Service Switch (NSS) sources, such as files or sss . The NSS priority is set to local group definitions. This is true when the /etc/nsswitch.conf file includes the following line: The sudo Defaults option named match_group_by_gid is set to true . This is the default value for the option. Because of the NSS source priority, when the sudo utility tries to look up the GID of the specified group, sudo receives a result that describes only the local group definition. Therefore, if the user is a member of the remote group, but not the local group, the sudoers rule does not match, and sudo denies access. To work around this problem, choose one of the following: Explicitly disable the match_group_by_gid Defaults for sudoers . Open the /etc/sudoers file, and add this line: Configure NSS to prioritize the sss NSS source over files . Open the /etc/nsswitch.conf file, and make sure it lists sss before files : This ensures that sudo permits access to users that belong to the remote group. (BZ#1293306) The KCM credential cache is not suitable for a large number of credentials in a single credential cache If the credential cache contains too many credentials, Kerberos operations, such as klist , fail due to a hardcoded limit on the buffer used to transfer data between the sssd-kcm component and the sssd-secrets component. To work around this problem, add the ccache_storage = memory option in the [kcm] section of the /etc/sssd/sssd.conf file. This instructs the kcm responder to only store the credential caches in-memory, not persistently. Note that if you do this, restarting the system or sssd-kcm clears the credential caches. (BZ# 1448094 ) The sssd-secrets component crashes when it is under load When the sssd-secrets component receives many requests, the situation triggers a bug in the Network Security Services (NSS) library that causes sssd-secrets to terminate unexpectedly. However, the systemd service restarts sssd-secrets for the request, which means that the denial of service is only temporary. (BZ# 1460689 ) SSSD does not correctly handle multiple certificate matching rules with the same priority If a given certificate matches multiple certificate matching rules with the same priority, the System Security Services Daemon (SSSD) uses only one of the rules. As a workaround, use a single certificate matching rule whose LDAP filter consists of the filters of the individual rules concatenated with the | (or) operator. For examples of certificate matching rules, see the sss-certamp(5) man page. (BZ# 1447945 ) SSSD can look up only unique certificates in ID overrides When multiple ID overrides contain the same certificate, the System Security Services Daemon (SSSD) is unable to resolve queries for the users that match the certificate. An attempt to look up these users does not return any user. Note that looking up users by using their user name or UID works as expected. (BZ# 1446101 ) The ipa-advise command does not fully configure smart card authentication The ipa-advise config-server-for-smart-card-auth and ipa-advise config-client-for-smart-card-auth commands do not fully configure the Identity Management (IdM) server and client for smart card authentication. 
As a consequence, after running the script that the ipa-advise command generated, smart card authentication fails. To work around the problem, see the manual steps for the individual use case in the Linux Domain Identity, Authentication, and Policy Guide: https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/7/html/Linux_Domain_Identity_Authentication_and_Policy_Guide/smart-cards.html (BZ# 1455946 ) The libwbclient library fails to connect to Samba shares hosted on Red Hat Enterprise Linux 7.4 The interface between Samba and the System Security Services Daemon's (SSSD) Winbind plug-in implementation changed. However, this change is missing in SSSD. As a consequence, systems that use the SSSD libwbclient library instead of the Winbind daemon fail to access shares provided by Samba running on Red Hat Enterprise Linux 7.4. There is no workaround available, and Red Hat recommends to not upgrade to Red Hat Enterprise 7.4 if you are using the libwbclient library without running the Winbind daemon. (BZ# 1462769 ) Certificate System ubsystems experience communication problems with TLS_ECDHE_RSA_* ciphers and certain HSMs When certain HSMs are used while TLS_ECDHE_RSA_* ciphers are enabled, subsystems experience communication problems. The issue occurs in the following scenarios: When a CA has been installed and a second subsystem is being installed and tries to contact the CA as a security domain, thus preventing the installation from succeeding. While performing a certificate enrollment on the CA, when archival is required, the CA encounters the same communication problem with the KRA. This scenario can only occur if the offending ciphers were temporarily disabled for the installation. To work around this problem, keep the TLS_ECDHE_RSA_* ciphers turned off if possible. Note that while the Perfect Forward Secrecy provides added security by using the TLS_ECDHE_RSA_* ciphers, each SSL session takes about three times longer to establish. Also, the default TLS_RSA_* ciphers are adequate for the Certificate System operations. (BZ#1256901)
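For reference, the sssd-kcm workaround described earlier in this section amounts to the following fragment of /etc/sssd/sssd.conf (a sketch; only the relevant section is shown):
[kcm]
ccache_storage = memory
After editing the file, restarting the sssd-kcm service (for example, systemctl restart sssd-kcm) applies the change; as noted above, the in-memory credential caches are then lost whenever the service or the system restarts.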
|
[
"sudoers: files sss",
"Defaults !match_group_by_gid",
"sudoers: sss files"
] |
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/7.4_release_notes/known_issues_authentication_and_interoperability
|
Chapter 10. LDAP Authentication Setup for Red Hat Quay
|
Chapter 10. LDAP Authentication Setup for Red Hat Quay Lightweight Directory Access Protocol (LDAP) is an open, vendor-neutral, industry standard application protocol for accessing and maintaining distributed directory information services over an Internet Protocol (IP) network. Red Hat Quay supports using LDAP as an identity provider. 10.1. Considerations when enabling LDAP Prior to enabling LDAP for your Red Hat Quay deployment, you should consider the following. Existing Red Hat Quay deployments Conflicts between usernames can arise when you enable LDAP for an existing Red Hat Quay deployment that already has users configured. For example, one user, alice , was manually created in Red Hat Quay prior to enabling LDAP. If the username alice also exists in the LDAP directory, Red Hat Quay automatically creates a new user, alice-1 , when alice logs in for the first time using LDAP. Red Hat Quay then automatically maps the LDAP credentials to the alice account. For consistency reasons, this might be erroneous for your Red Hat Quay deployment. It is recommended that you remove any potentially conflicting local account names from Red Hat Quay prior to enabling LDAP. Manual User Creation and LDAP authentication When Red Hat Quay is configured for LDAP, LDAP-authenticated users are automatically created in Red Hat Quay's database on first log in, if the configuration option FEATURE_USER_CREATION is set to true . If this option is set to false , the automatic user creation for LDAP users fails, and the user is not allowed to log in. In this scenario, the superuser needs to create the desired user account first. Conversely, if FEATURE_USER_CREATION is set to true , this also means that a user can still create an account from the Red Hat Quay login screen, even if there is an equivalent user in LDAP. 10.2. Configuring LDAP for Red Hat Quay Use the following procedure to configure LDAP for your Red Hat Quay deployment. Procedure You can use the Red Hat Quay config tool to configure LDAP. Using the Red Hat Quay config tool, locate the Authentication section. Select LDAP from the dropdown menu, and update the LDAP configuration fields as required. Optional. In the Team synchronization box, click Enable Team Synchronization Support . With team synchronization enabled, Red Hat Quay administrators who are also superusers can set teams to have their membership synchronized with a backing group in LDAP. For Resynchronization duration enter 60m . This option sets the resynchronization duration at which a team must be re-synchronized. This field must be set similar to the following examples: 30m , 1h , 1d . Optional. For Self-service team syncing setup , you can click Allow non-superusers to enable and manage team syncing to allow users who are not superusers the ability to enable and manage team syncing under the organizations that they are administrators for. Locate the LDAP URI box and provide a full LDAP URI, including the ldap:// or ldaps:// prefix, for example, ldap://117.17.8.101 . Under Base DN , provide a name which forms the base path for looking up all LDAP records, for example, o=<organization_id> , dc=<example_domain_component> , dc=com . Under User Relative DN , provide a list of Distinguished Name path(s), which form the secondary base path(s) for looking up all user LDAP records relative to the Base DN defined above. For example, uid=<name> , ou=Users , o=<organization_id> , dc=<example_domain_component> , dc=com .
This path, or these paths, is tried if the user is not found through the primary relative DN. Note User Relative DN is relative to Base DN , for example, ou=Users and not ou=Users,dc=<example_domain_component>,dc=com . Optional. Provide Secondary User Relative DNs if there are multiple Organizational Units where user objects are located. You can type in the Organizational Units and click Add to add multiple RDNs. For example, ou=Users,ou=NYC and ou=Users,ou=SFO . The User Relative DN searches with subtree scope. For example, if your organization has Organization Units NYC and SFO under the Users OU (that is, ou=SFO,ou=Users and ou=NYC,ou=Users ), Red Hat Quay can authenticate users from both the NYC and SFO Organizational Units if the User Relative DN is set to Users ( ou=Users ). Optional. Fill in the Additional User Filter Expression field for all user lookup queries if desired. Distinguished Names used in the filter must be fully specified. The Base DN is not automatically added to this field, and you must wrap the text in parentheses, for example, (memberOf=cn=developers,ou=groups,dc=<example_domain_component>,dc=com) . Fill in the Administrator DN field for the Red Hat Quay administrator account. This account must be able to log in and view the records for all user accounts. For example: uid=<name>,ou=Users,o=<organization_id>,dc=<example_domain_component>,dc=com . Fill in the Administrator DN Password field. This is the password for the administrator distinguished name. Important The password for this field is stored in plaintext inside of the config.yaml file. Setting up a dedicated account or using a password hash is highly recommended. Optional. Fill in the UID Attribute field. This is the name of the property field in the LDAP user records that stores your user's username. Most commonly, uid is entered for this field. This field can be used to log into your Red Hat Quay deployment. Optional. Fill in the Mail Attribute field. This is the name of the property field in your LDAP user records that stores your user's e-mail addresses. Most commonly, mail is entered for this field. This field can be used to log into your Red Hat Quay deployment. Note The username to log in must exist in the User Relative DN . If you are using Microsoft Active Directory to set up your LDAP deployment, you must use sAMAccountName for your UID attribute. Optional. You can add a custom SSL/TLS certificate by clicking Choose File under the Custom TLS Certificate option. Additionally, you can enable fallbacks to insecure, non-TLS connections by checking the Allow fallback to non-TLS connections box. If you upload an SSL/TLS certificate, you must provide an ldaps:// prefix, for example, LDAP_URI: ldaps://ldap_provider.example.org . Alternatively, you can update your config.yaml file directly to include all relevant information. For example: --- AUTHENTICATION_TYPE: LDAP --- LDAP_ADMIN_DN: uid=<name>,ou=Users,o=<organization_id>,dc=<example_domain_component>,dc=com LDAP_ADMIN_PASSWD: ABC123 LDAP_ALLOW_INSECURE_FALLBACK: false LDAP_BASE_DN: - o=<organization_id> - dc=<example_domain_component> - dc=com LDAP_EMAIL_ATTR: mail LDAP_UID_ATTR: uid LDAP_URI: ldap://<example_url>.com LDAP_USER_FILTER: (memberof=cn=developers,ou=Users,dc=<domain_name>,dc=com) LDAP_USER_RDN: - ou=<example_organization_unit> - o=<organization_id> - dc=<example_domain_component> - dc=com After you have added all required LDAP fields, click the Save Configuration Changes button to validate the configuration.
All validation must succeed before proceeding. Additional configuration can be performed by selecting the Continue Editing button. 10.3. Enabling the LDAP_RESTRICTED_USER_FILTER configuration field The LDAP_RESTRICTED_USER_FILTER configuration field is a subset of the LDAP_USER_FILTER configuration field. When configured, this option allows Red Hat Quay administrators to configure LDAP users as restricted users when Red Hat Quay uses LDAP as its authentication provider. Use the following procedure to enable LDAP restricted users on your Red Hat Quay deployment. Prerequisites Your Red Hat Quay deployment uses LDAP as its authentication provider. You have configured the LDAP_USER_FILTER field in your config.yaml file. Procedure In your deployment's config.yaml file, add the LDAP_RESTRICTED_USER_FILTER parameter and specify the group of restricted users, for example, members : --- AUTHENTICATION_TYPE: LDAP --- LDAP_ADMIN_DN: uid=<name>,ou=Users,o=<organization_id>,dc=<example_domain_component>,dc=com LDAP_ADMIN_PASSWD: ABC123 LDAP_ALLOW_INSECURE_FALLBACK: false LDAP_BASE_DN: - o=<organization_id> - dc=<example_domain_component> - dc=com LDAP_EMAIL_ATTR: mail LDAP_UID_ATTR: uid LDAP_URI: ldap://<example_url>.com LDAP_USER_FILTER: (memberof=cn=developers,ou=Users,o=<example_organization_unit>,dc=<example_domain_component>,dc=com) LDAP_RESTRICTED_USER_FILTER: (<filterField>=<value>) LDAP_USER_RDN: - ou=<example_organization_unit> - o=<organization_id> - dc=<example_domain_component> - dc=com Start, or restart, your Red Hat Quay deployment. After enabling the LDAP_RESTRICTED_USER_FILTER feature, the LDAP users that match the filter are restricted from reading and writing content, and from creating organizations. 10.4. Enabling the LDAP_SUPERUSER_FILTER configuration field With the LDAP_SUPERUSER_FILTER field configured, Red Hat Quay administrators can configure Lightweight Directory Access Protocol (LDAP) users as superusers if Red Hat Quay uses LDAP as its authentication provider. Use the following procedure to enable LDAP superusers on your Red Hat Quay deployment. Prerequisites Your Red Hat Quay deployment uses LDAP as its authentication provider. You have configured the LDAP_USER_FILTER field in your config.yaml file. Procedure In your deployment's config.yaml file, add the LDAP_SUPERUSER_FILTER parameter and add the group of users you want configured as superusers, for example, root : --- AUTHENTICATION_TYPE: LDAP --- LDAP_ADMIN_DN: uid=<name>,ou=Users,o=<organization_id>,dc=<example_domain_component>,dc=com LDAP_ADMIN_PASSWD: ABC123 LDAP_ALLOW_INSECURE_FALLBACK: false LDAP_BASE_DN: - o=<organization_id> - dc=<example_domain_component> - dc=com LDAP_EMAIL_ATTR: mail LDAP_UID_ATTR: uid LDAP_URI: ldap://<example_url>.com LDAP_USER_FILTER: (memberof=cn=developers,ou=Users,o=<example_organization_unit>,dc=<example_domain_component>,dc=com) LDAP_SUPERUSER_FILTER: (<filterField>=<value>) LDAP_USER_RDN: - ou=<example_organization_unit> - o=<organization_id> - dc=<example_domain_component> - dc=com Start, or restart, your Red Hat Quay deployment. After enabling the LDAP_SUPERUSER_FILTER feature, the LDAP users that match the filter have superuser privileges. The following options are available to superusers: Manage users Manage organizations Manage service keys View the change log Query the usage logs Create globally visible user messages 10.5. Common LDAP configuration issues The following errors might be returned with an invalid configuration. Invalid credentials .
If you receive this error, the Administrator DN or Administrator DN password values are incorrect. Ensure that you are providing accurate Administrator DN and password values. Verification of superuser %USERNAME% failed . This error is returned for the following reasons: The username has not been found. The user does not exist in the remote authentication system. LDAP authorization is configured improperly. Cannot find the current logged in user . When configuring LDAP for Red Hat Quay, there may be situations where the LDAP connection is established successfully using the username and password provided in the Administrator DN fields. However, if the current logged-in user cannot be found within the specified User Relative DN path using the UID Attribute or Mail Attribute fields, there are typically two potential reasons for this: The current logged in user does not exist in the User Relative DN path. The Administrator DN does not have rights to search or read the specified LDAP path. To fix this issue, ensure that the logged in user is included in the User Relative DN path, or provide the correct permissions to the Administrator DN account. 10.6. LDAP configuration fields For a full list of LDAP configuration fields, see LDAP configuration fields
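A quick way to narrow down these errors is to reproduce the lookups that Red Hat Quay performs by querying the directory directly with the ldapsearch client. The following sketch reuses the example values from this chapter (the Administrator DN, the ou=Users relative DN, and the uid attribute); substitute your own values, and note that <quay_username> is a placeholder.
ldapsearch -x -H ldap://<example_url>.com \
  -D "uid=<name>,ou=Users,o=<organization_id>,dc=<example_domain_component>,dc=com" -W \
  -b "ou=Users,o=<organization_id>,dc=<example_domain_component>,dc=com" \
  "(uid=<quay_username>)" uid mail
If the bind fails, the Administrator DN or password is incorrect. If the bind succeeds but the search returns no entries, the user is outside the User Relative DN path or the Administrator DN lacks permission to read it.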
|
[
"--- AUTHENTICATION_TYPE: LDAP --- LDAP_ADMIN_DN: uid=<name>,ou=Users,o=<organization_id>,dc=<example_domain_component>,dc=com LDAP_ADMIN_PASSWD: ABC123 LDAP_ALLOW_INSECURE_FALLBACK: false LDAP_BASE_DN: - o=<organization_id> - dc=<example_domain_component> - dc=com LDAP_EMAIL_ATTR: mail LDAP_UID_ATTR: uid LDAP_URI: ldap://<example_url>.com LDAP_USER_FILTER: (memberof=cn=developers,ou=Users,dc=<domain_name>,dc=com) LDAP_USER_RDN: - ou=<example_organization_unit> - o=<organization_id> - dc=<example_domain_component> - dc=com",
"--- AUTHENTICATION_TYPE: LDAP --- LDAP_ADMIN_DN: uid=<name>,ou=Users,o=<organization_id>,dc=<example_domain_component>,dc=com LDAP_ADMIN_PASSWD: ABC123 LDAP_ALLOW_INSECURE_FALLBACK: false LDAP_BASE_DN: - o=<organization_id> - dc=<example_domain_component> - dc=com LDAP_EMAIL_ATTR: mail LDAP_UID_ATTR: uid LDAP_URI: ldap://<example_url>.com LDAP_USER_FILTER: (memberof=cn=developers,ou=Users,o=<example_organization_unit>,dc=<example_domain_component>,dc=com) LDAP_RESTRICTED_USER_FILTER: (<filterField>=<value>) LDAP_USER_RDN: - ou=<example_organization_unit> - o=<organization_id> - dc=<example_domain_component> - dc=com",
"--- AUTHENTICATION_TYPE: LDAP --- LDAP_ADMIN_DN: uid=<name>,ou=Users,o=<organization_id>,dc=<example_domain_component>,dc=com LDAP_ADMIN_PASSWD: ABC123 LDAP_ALLOW_INSECURE_FALLBACK: false LDAP_BASE_DN: - o=<organization_id> - dc=<example_domain_component> - dc=com LDAP_EMAIL_ATTR: mail LDAP_UID_ATTR: uid LDAP_URI: ldap://<example_url>.com LDAP_USER_FILTER: (memberof=cn=developers,ou=Users,o=<example_organization_unit>,dc=<example_domain_component>,dc=com) LDAP_SUPERUSER_FILTER: (<filterField>=<value>) LDAP_USER_RDN: - ou=<example_organization_unit> - o=<organization_id> - dc=<example_domain_component> - dc=com"
] |
https://docs.redhat.com/en/documentation/red_hat_quay/3.9/html/manage_red_hat_quay/ldap-authentication-setup-for-quay-enterprise
|
Chapter 21. Project [config.openshift.io/v1]
|
Chapter 21. Project [config.openshift.io/v1] Description Project holds cluster-wide information about Project. The canonical name is cluster Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). Type object Required spec 21.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata spec object spec holds user settable values for configuration status object status holds observed values from the cluster. They may not be overridden. 21.1.1. .spec Description spec holds user settable values for configuration Type object Property Type Description projectRequestMessage string projectRequestMessage is the string presented to a user if they are unable to request a project via the projectrequest api endpoint projectRequestTemplate object projectRequestTemplate is the template to use for creating projects in response to projectrequest. This must point to a template in 'openshift-config' namespace. It is optional. If it is not specified, a default template is used. 21.1.2. .spec.projectRequestTemplate Description projectRequestTemplate is the template to use for creating projects in response to projectrequest. This must point to a template in 'openshift-config' namespace. It is optional. If it is not specified, a default template is used. Type object Property Type Description name string name is the metadata.name of the referenced project request template 21.1.3. .status Description status holds observed values from the cluster. They may not be overridden. Type object 21.2. API endpoints The following API endpoints are available: /apis/config.openshift.io/v1/projects DELETE : delete collection of Project GET : list objects of kind Project POST : create a Project /apis/config.openshift.io/v1/projects/{name} DELETE : delete a Project GET : read the specified Project PATCH : partially update the specified Project PUT : replace the specified Project /apis/config.openshift.io/v1/projects/{name}/status GET : read status of the specified Project PATCH : partially update status of the specified Project PUT : replace status of the specified Project 21.2.1. /apis/config.openshift.io/v1/projects HTTP method DELETE Description delete collection of Project Table 21.1. HTTP responses HTTP code Reponse body 200 - OK Status schema 401 - Unauthorized Empty HTTP method GET Description list objects of kind Project Table 21.2. HTTP responses HTTP code Reponse body 200 - OK ProjectList schema 401 - Unauthorized Empty HTTP method POST Description create a Project Table 21.3. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. 
Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 21.4. Body parameters Parameter Type Description body Project schema Table 21.5. HTTP responses HTTP code Reponse body 200 - OK Project schema 201 - Created Project schema 202 - Accepted Project schema 401 - Unauthorized Empty 21.2.2. /apis/config.openshift.io/v1/projects/{name} Table 21.6. Global path parameters Parameter Type Description name string name of the Project HTTP method DELETE Description delete a Project Table 21.7. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed Table 21.8. HTTP responses HTTP code Reponse body 200 - OK Status schema 202 - Accepted Status schema 401 - Unauthorized Empty HTTP method GET Description read the specified Project Table 21.9. HTTP responses HTTP code Reponse body 200 - OK Project schema 401 - Unauthorized Empty HTTP method PATCH Description partially update the specified Project Table 21.10. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 21.11. 
HTTP responses HTTP code Reponse body 200 - OK Project schema 401 - Unauthorized Empty HTTP method PUT Description replace the specified Project Table 21.12. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 21.13. Body parameters Parameter Type Description body Project schema Table 21.14. HTTP responses HTTP code Reponse body 200 - OK Project schema 201 - Created Project schema 401 - Unauthorized Empty 21.2.3. /apis/config.openshift.io/v1/projects/{name}/status Table 21.15. Global path parameters Parameter Type Description name string name of the Project HTTP method GET Description read status of the specified Project Table 21.16. HTTP responses HTTP code Reponse body 200 - OK Project schema 401 - Unauthorized Empty HTTP method PATCH Description partially update status of the specified Project Table 21.17. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 21.18. HTTP responses HTTP code Reponse body 200 - OK Project schema 401 - Unauthorized Empty HTTP method PUT Description replace status of the specified Project Table 21.19. 
Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 21.20. Body parameters Parameter Type Description body Project schema Table 21.21. HTTP responses HTTP code Reponse body 200 - OK Project schema 201 - Created Project schema 401 - Unauthorized Empty
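In practice, this cluster-scoped resource is usually modified with the oc client rather than by calling the endpoints above directly. The following sketch assumes a project request template named project-request already exists in the openshift-config namespace; both the template name and the message text are illustrative.
oc patch project.config.openshift.io cluster --type merge -p '{"spec":{"projectRequestTemplate":{"name":"project-request"},"projectRequestMessage":"To request a project, contact your cluster administrator."}}'
oc get project.config.openshift.io cluster -o yaml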
| null |
https://docs.redhat.com/en/documentation/openshift_container_platform/4.18/html/config_apis/project-config-openshift-io-v1
|
Chapter 2. Understanding API compatibility guidelines
|
Chapter 2. Understanding API compatibility guidelines Important This guidance does not cover layered OpenShift Container Platform offerings. 2.1. API compatibility guidelines Red Hat recommends that application developers adopt the following principles in order to improve compatibility with OpenShift Container Platform: Use APIs and components with support tiers that match the application's need. Build applications using the published client libraries where possible. Applications are only guaranteed to run correctly if they execute in an environment that is as new as the environment it was built to execute against. An application that was built for OpenShift Container Platform 4.7 is not guaranteed to function properly on OpenShift Container Platform 4.6. Do not design applications that rely on configuration files provided by system packages or other components. These files can change between versions unless the upstream community is explicitly committed to preserving them. Where appropriate, depend on any Red Hat provided interface abstraction over those configuration files in order to maintain forward compatibility. Direct file system modification of configuration files is discouraged, and users are strongly encouraged to integrate with an Operator provided API where available to avoid dual-writer conflicts. Do not depend on API fields prefixed with unsupported<FieldName> or annotations that are not explicitly mentioned in product documentation. Do not depend on components with shorter compatibility guarantees than your application. Do not perform direct storage operations on the etcd server. All etcd access must be performed via the api-server or through documented backup and restore procedures. Red Hat recommends that application developers follow the compatibility guidelines defined by Red Hat Enterprise Linux (RHEL). OpenShift Container Platform strongly recommends the following guidelines when building an application or hosting an application on the platform: Do not depend on a specific Linux kernel or OpenShift Container Platform version. Avoid reading from proc , sys , and debug file systems, or any other pseudo file system. Avoid using ioctls to directly interact with hardware. Avoid direct interaction with cgroups in order to not conflict with OpenShift Container Platform host-agents that provide the container execution environment. Note During the lifecycle of a release, Red Hat makes commercially reasonable efforts to maintain API and application operating environment (AOE) compatibility across all minor releases and z-stream releases. If necessary, Red Hat might make exceptions to this compatibility goal for critical impact security or other significant issues. 2.2. API compatibility exceptions The following are exceptions to compatibility in OpenShift Container Platform: RHEL CoreOS file system modifications not made with a supported Operator No assurances are made at this time that a modification made to the host operating file system is preserved across minor releases except for where that modification is made through the public interface exposed via a supported Operator, such as the Machine Config Operator or Node Tuning Operator. Modifications to cluster infrastructure in cloud or virtualized environments No assurances are made at this time that a modification to the cloud hosting environment that supports the cluster is preserved except for where that modification is made through a public interface exposed in the product or is documented as a supported configuration. 
Cluster infrastructure providers are responsible for preserving their cloud or virtualized infrastructure except for where they delegate that authority to the product through an API. Functional defaults between an upgraded cluster and a new installation No assurances are made at this time that a new installation of a product minor release will have the same functional defaults as a version of the product that was installed with a prior minor release and upgraded to the equivalent version. For example, future versions of the product may provision cloud infrastructure with different defaults than prior minor versions. In addition, different default security choices may be made in future versions of the product than those made in past versions of the product. Past versions of the product will forward upgrade, but preserve legacy choices where appropriate specifically to maintain backwards compatibility. Usage of API fields that have the prefix "unsupported" or undocumented annotations Select APIs in the product expose fields with the prefix unsupported<FieldName> . No assurances are made at this time that usage of this field is supported across releases or within a release. Product support can request a customer to specify a value in this field when debugging specific problems, but its usage is not supported outside of that interaction. Usage of annotations on objects that are not explicitly documented are not assured support across minor releases. API availability per product installation topology The OpenShift distribution will continue to evolve its supported installation topology, and not all APIs in one install topology will necessarily be included in another. For example, certain topologies may restrict read/write access to particular APIs if they are in conflict with the product installation topology or not include a particular API at all if not pertinent to that topology. APIs that exist in a given topology will be supported in accordance with the compatibility tiers defined above. 2.3. API compatibility common terminology 2.3.1. Application Programming Interface (API) An API is a public interface implemented by a software program that enables it to interact with other software. In OpenShift Container Platform, the API is served from a centralized API server and is used as the hub for all system interaction. 2.3.2. Application Operating Environment (AOE) An AOE is the integrated environment that executes the end-user application program. The AOE is a containerized environment that provides isolation from the host operating system (OS). At a minimum, AOE allows the application to run in an isolated manner from the host OS libraries and binaries, but still share the same OS kernel as all other containers on the host. The AOE is enforced at runtime and it describes the interface between an application and its operating environment. It includes intersection points between the platform, operating system and environment, with the user application including projection of downward API, DNS, resource accounting, device access, platform workload identity, isolation among containers, isolation between containers and host OS. The AOE does not include components that might vary by installation, such as Container Network Interface (CNI) plugin selection or extensions to the product such as admission hooks. Components that integrate with the cluster at a level below the container environment might be subjected to additional variation between versions. 2.3.3. 
Compatibility in a virtualized environment Virtual environments emulate bare-metal environments such that unprivileged applications that run on bare-metal environments will run, unmodified, in corresponding virtual environments. Virtual environments present simplified abstracted views of physical resources, so some differences might exist. 2.3.4. Compatibility in a cloud environment OpenShift Container Platform might choose to offer integration points with a hosting cloud environment via cloud provider specific integrations. The compatibility of these integration points is specific to the guarantee provided by the native cloud vendor and its intersection with the OpenShift Container Platform compatibility window. Where OpenShift Container Platform provides an integration with a cloud environment natively as part of the default installation, Red Hat develops against stable cloud API endpoints to provide commercially reasonable support with forward-looking compatibility that includes stable deprecation policies. Example areas of integration between the cloud provider and OpenShift Container Platform include, but are not limited to, dynamic volume provisioning, service load balancer integration, pod workload identity, dynamic management of compute, and infrastructure provisioned as part of initial installation. 2.3.5. Major, minor, and z-stream releases A Red Hat major release represents a significant step in the development of a product. Minor releases appear more frequently within the scope of a major release and represent deprecation boundaries that might impact future application compatibility. A z-stream release is an update to a minor release which provides a stream of continuous fixes to an associated minor release. API and AOE compatibility is never broken in a z-stream release except when this policy is explicitly overridden in order to respond to an unforeseen security impact. For example, in the release 4.3.2: 4 is the major release version 3 is the minor release version 2 is the z-stream release version 2.3.6. Extended user support (EUS) A minor release in an OpenShift Container Platform major release that has an extended support window for critical bug fixes. Users are able to migrate between EUS releases by incrementally adopting minor versions between EUS releases. It is important to note that the deprecation policy is defined across minor releases and not EUS releases. As a result, an EUS user might have to respond to a deprecation when migrating to a future EUS while sequentially upgrading through each minor release. 2.3.7. Developer Preview An optional product capability that is not officially supported by Red Hat, but is intended to provide a mechanism to explore early phase technology. By default, Developer Preview functionality is opt-in, and subject to removal at any time. Enabling a Developer Preview feature might render a cluster unsupportable depending on the scope of the feature. If you are a Red Hat customer or partner and have feedback about these developer preview versions, file an issue by using the OpenShift Bugs tracker . Do not use the formal Red Hat support service ticket process. You can read more about support handling in the following knowledge article . 2.3.8. Technology Preview An optional product capability that provides early access to upcoming product innovations to test functionality and provide feedback during the development process.
The feature is not fully supported, might not be functionally complete, and is not intended for production use. Usage of a Technology Preview function requires explicit opt-in. Learn more about the Technology Preview Features Support Scope .
| null |
https://docs.redhat.com/en/documentation/openshift_container_platform/4.12/html/api_overview/compatibility-guidelines
|
12.2. Partition-based Storage Pools
|
12.2. Partition-based Storage Pools This section covers using a pre-formatted block device, a partition, as a storage pool. For the following examples, a host physical machine has a 500GB hard drive ( /dev/sdc ) partitioned into one 500GB, ext4 formatted partition ( /dev/sdc1 ). We set up a storage pool for it using the procedure below. 12.2.1. Creating a Partition-based Storage Pool Using virt-manager This procedure creates a new storage pool using a partition of a storage device. Procedure 12.1. Creating a partition-based storage pool with virt-manager Open the storage pool settings In the virt-manager graphical interface, select the host physical machine from the main window. Open the Edit menu and select Connection Details Figure 12.1. Connection Details Click on the Storage tab of the Connection Details window. Figure 12.2. Storage tab Create the new storage pool Add a new pool (part 1) Press the + button (the add pool button). The Add a New Storage Pool wizard appears. Choose a Name for the storage pool. This example uses the name guest_images_fs . Change the Type to fs: Pre-Formatted Block Device . Figure 12.3. Storage pool name and type Press the Forward button to continue. Add a new pool (part 2) Change the Target Path , Format , and Source Path fields. Figure 12.4. Storage pool path and format Target Path Enter the location to mount the source device for the storage pool in the Target Path field. If the location does not already exist, virt-manager will create the directory. Format Select a format from the Format list. The device is formatted with the selected format. This example uses the ext4 file system, the default Red Hat Enterprise Linux file system. Source Path Enter the device in the Source Path field. This example uses the /dev/sdc1 device. Verify the details and press the Finish button to create the storage pool. Verify the new storage pool The new storage pool appears in the storage list on the left after a few seconds. Verify the size is reported as expected, 458.20 GB Free in this example. Verify the State field reports the new storage pool as Active . Select the storage pool. In the Autostart field, click the On Boot check box. This will make sure the storage device starts whenever the libvirtd service starts. Figure 12.5. Storage list confirmation The storage pool is now created, close the Connection Details window.
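The same pool can also be created from the command line with the virsh utility, which may be useful on hosts without a graphical session. The following sketch mirrors the example above and mounts /dev/sdc1 at /guest_images ; the target directory name is an assumption, not a requirement.
virsh pool-define-as guest_images_fs fs --source-dev /dev/sdc1 --target /guest_images
virsh pool-build guest_images_fs
virsh pool-start guest_images_fs
virsh pool-autostart guest_images_fs
virsh pool-info guest_images_fs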
| null |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/virtualization_administration_guide/sect-virtualization-storage_pools-creating-file_systems
|
Providing feedback on Red Hat documentation
|
Providing feedback on Red Hat documentation We appreciate your feedback on our documentation. Let us know how we can improve it. Submitting feedback through Jira (account required) Log in to the Jira website. Click Create in the top navigation bar. Enter a descriptive title in the Summary field. Enter your suggestion for improvement in the Description field. Include links to the relevant parts of the documentation. Click Create at the bottom of the dialogue.
| null |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/securing_networks/proc_providing-feedback-on-red-hat-documentation_securing-networks
|
18.4. Configuration Examples
|
18.4. Configuration Examples 18.4.1. Setting up CVS This example describes a simple CVS setup and an SELinux configuration which allows remote access. Two hosts are used in this example; a CVS server with a host name of cvs-srv with an IP address of 192.168.1.1 and a client with a host name of cvs-client and an IP address of 192.168.1.100 . Both hosts are on the same subnet (192.168.1.0/24). This is an example only and assumes that the cvs and xinetd packages are installed, that the SELinux targeted policy is used, and that SELinux is running in enforcing mode. This example will show that even with full DAC permissions, SELinux can still enforce policy rules based on file labels and only allow access to certain areas that have been specifically labeled for access by CVS. Note Steps 1-9 are supposed to be performed on the CVS server, cvs-srv . This example requires the cvs and xinetd packages. Confirm that the packages are installed: If they are not installed, use the yum utility as root to install them: Enter the following command as root to create a group named CVS : This can also be done by using the system-config-users utility. Create a user with a user name of cvsuser and make this user a member of the CVS group. This can be done using system-config-users . Edit the /etc/services file and make sure that the CVS server has uncommented entries looking similar to the following: Create the CVS repository in the root area of the file system. When using SELinux, it is best to have the repository in the root file system so that recursive labels can be given to it without affecting any other subdirectories. For example, as root, create a /cvs/ directory to house the repository: Give full permissions to the /cvs/ directory to all users: Warning This is an example only and these permissions should not be used in a production system. Edit the /etc/xinetd.d/cvs file and make sure that the CVS section is uncommented and configured to use the /cvs/ directory. The file should look similar to: Start the xinetd daemon: Add a rule which allows inbound connections through TCP on port 2401 by using the system-config-firewall utility. On the client side, enter the following command as the cvsuser user: At this point, CVS has been configured but SELinux will still deny logins and file access. To demonstrate this, set the CVSROOT variable on cvs-client and try to log in remotely. The following step is supposed to be performed on cvs-client : SELinux has blocked access. In order to get SELinux to allow this access, the following step is supposed to be performed on cvs-srv : Change the context of the /cvs/ directory as root in order to recursively label any existing and new data in the /cvs/ directory, giving it the cvs_data_t type: The client, cvs-client , should now be able to log in and access all CVS resources in this repository:
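Before and after relabeling, it can also help to inspect the directory's SELinux context and any recent AVC denials on cvs-srv. This is a minimal verification sketch using standard SELinux tools; the exact output will differ on your system.
ls -dZ /cvs
matchpathcon /cvs
ausearch -m AVC -ts recent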
|
[
"[cvs-srv]USD rpm -q cvs xinetd package cvs is not installed package xinetd is not installed",
"yum install cvs xinetd",
"groupadd CVS",
"cvspserver 2401/tcp # CVS client/server operations cvspserver 2401/udp # CVS client/server operations",
"mkdir /cvs",
"chmod -R 777 /cvs",
"service cvspserver { disable = no port = 2401 socket_type = stream protocol = tcp wait = no user = root passenv = PATH server = /usr/bin/cvs env = HOME=/cvs server_args = -f --allow-root=/cvs pserver # bind = 127.0.0.1",
"systemctl start xinetd.service",
"[cvsuser@cvs-client]USD cvs -d /cvs init",
"[cvsuser@cvs-client]USD export CVSROOT=:pserver:[email protected]:/cvs [cvsuser@cvs-client]USD [cvsuser@cvs-client]USD cvs login Logging in to :pserver:[email protected]:2401/cvs CVS password: ******** cvs [login aborted]: unrecognized auth response from 192.168.100.1: cvs pserver: cannot open /cvs/CVSROOT/config: Permission denied",
"semanage fcontext -a -t cvs_data_t '/cvs(/.*)?' restorecon -R -v /cvs",
"[cvsuser@cvs-client]USD export CVSROOT=:pserver:[email protected]:/cvs [cvsuser@cvs-client]USD [cvsuser@cvs-client]USD cvs login Logging in to :pserver:[email protected]:2401/cvs CVS password: ******** [cvsuser@cvs-client]USD"
] |
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/selinux_users_and_administrators_guide/sect-managing_confined_services-concurrent_versioning_system-configuration_examples
|
33.2. Single-User Mode
|
33.2. Single-User Mode Single-user mode provides a Linux environment for a single user that allows you to recover your system from problems that cannot be resolved in a networked multi-user environment. You do not need an external boot device to be able to boot into single-user mode , and you can switch into it directly while the system is running. To switch into single-user mode on the running system, issue the following command from the command line: In single-user mode , the system boots with your local file systems mounted, many important services running, and a usable maintenance shell that allows you to perform many of the usual system commands. Therefore, single-user mode is mostly useful for resolving problems when the system boots but does not function properly or you cannot log into it. Warning The single-user mode automatically tries to mount your local file systems. Booting to single-user mode could result in loss of data if any of your local file systems cannot be successfully mounted. To boot into single-user mode, follow this procedure: Procedure 33.2. Booting into Single-User Mode At the GRUB boot screen, press any key to enter the GRUB interactive menu. Select Red Hat Enterprise Linux with the version of the kernel that you want to boot and press the a key to append the line. Type single as a separate word at the end of the line and press Enter to exit GRUB edit mode. Alternatively, you can type 1 instead of single.
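As an illustration only, after pressing the a key the GRUB append prompt might look similar to the following hypothetical line once single has been added at the end; the kernel arguments and root device will differ on your system.
grub append> ro root=/dev/mapper/VolGroup-lv_root rhgb quiet single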
|
[
"~]# init 1"
] |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/deployment_guide/sec-single-user_mode
|
Chapter 20. JbodStorage schema reference
|
Chapter 20. JbodStorage schema reference Used in: KafkaClusterSpec , KafkaNodePoolSpec The type property is a discriminator that distinguishes use of the JbodStorage type from EphemeralStorage , PersistentClaimStorage . It must have the value jbod for the type JbodStorage . Property Property type Description type string Must be jbod . volumes EphemeralStorage , PersistentClaimStorage array List of volumes as Storage objects representing the JBOD disks array.
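For illustration, a JBOD configuration in a Kafka custom resource might look like the following sketch, which declares two persistent-claim volumes; the volume IDs, sizes, and deleteClaim settings are examples only.
storage:
  type: jbod
  volumes:
  - id: 0
    type: persistent-claim
    size: 100Gi
    deleteClaim: false
  - id: 1
    type: persistent-claim
    size: 100Gi
    deleteClaim: false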
| null |
https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.7/html/streams_for_apache_kafka_api_reference/type-JbodStorage-reference
|
5.5.4. Adding a GULM Client-only Member
|
5.5.4. Adding a GULM Client-only Member The procedure for adding a member to a running GULM cluster depends on the type of GULM node: either a node that functions only as a GULM client (a cluster member capable of running applications, but not eligible to function as a GULM lock server) or a node that functions as a GULM lock server. This procedure describes how to add a member that functions only as a GULM client. To add a member that functions as a GULM lock server, refer to Section 5.5.6, "Adding or Deleting a GULM Lock Server Member" . To add a member that functions only as a GULM client to an existing cluster that is currently in operation, follow these steps: At one of the running members, start system-config-cluster (refer to Section 5.2, "Starting the Cluster Configuration Tool " ). At the Cluster Configuration Tool tab, add the node and configure fencing for it as in Section 5.5.1, "Adding a Member to a New Cluster" . Click Send to Cluster to propagate the updated configuration to other running nodes in the cluster. Use the scp command to send the updated /etc/cluster/cluster.conf file from one of the existing cluster nodes to the new node. Start cluster services on the new node by running the following commands in this order: service ccsd start service lock_gulmd start service clvmd start , if CLVM has been used to create clustered volumes service gfs start , if you are using Red Hat GFS service rgmanager start , if the cluster is running high-availability services ( rgmanager ) At system-config-cluster , in the Cluster Configuration Tool tab, verify that the configuration is correct. At the Cluster Status Tool tab verify that the nodes and services are running as expected. Note Make sure to configure other parameters that may be affected by changes in this section. Refer to Section 5.1, "Configuration Tasks" .
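For reference, the service start-up sequence described above can be entered on the new node as follows; the clvmd, gfs, and rgmanager lines apply only if you use CLVM, Red Hat GFS, or high-availability services, respectively.
service ccsd start
service lock_gulmd start
service clvmd start
service gfs start
service rgmanager start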
| null |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/cluster_administration/s2-add-member-gulm-client-ca
|
11.2. Enabling Machine-wide Extensions
|
11.2. Enabling Machine-wide Extensions To make extensions available to all users on the system, install them in the /usr/share/gnome-shell/extensions directory. You need to set the org.gnome.shell.enabled-extensions key in order to set the default enabled extensions. However, there is currently no way to enable additional extensions for users who have already logged in. This does not apply for existing users who have installed and enabled their own GNOME extensions. Procedure 11.2. Enabling machine-wide extensions Create a local database file for machine-wide settings in /etc/dconf/db/local.d/00-extensions : The enabled-extensions key specifies the enabled extensions using the extensions' uuid ( [email protected] and [email protected] ). Update the system databases: Users must log out and back in again before the system-wide settings take effect.
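After the databases are updated and a user logs back in, the effective setting can be checked from that user's session; this is a minimal verification sketch.
gsettings get org.gnome.shell enabled-extensions
ls /usr/share/gnome-shell/extensions/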
|
[
"List all extensions that you want to have enabled for all users enabled-extensions=[' [email protected] ', ' [email protected] ']",
"dconf update"
] |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/desktop_migration_and_administration_guide/extensions-enable
|
2.8.9.2.4. IPTables Match Options
|
2.8.9.2.4. IPTables Match Options Different network protocols provide specialized matching options which can be configured to match a particular packet using that protocol. However, the protocol must first be specified in the iptables command. For example, -p <protocol-name> enables options for the specified protocol. Note that you can also use the protocol ID, instead of the protocol name. Refer to the following examples, each of which have the same effect: Service definitions are provided in the /etc/services file. For readability, it is recommended that you use the service names rather than the port numbers. Warning Secure the /etc/services file to prevent unauthorized editing. If this file is editable, attackers can use it to enable ports on your machine you have otherwise closed. To secure this file, run the following commands as root: This prevents the file from being renamed, deleted or having links made to it. 2.8.9.2.4.1. TCP Protocol These match options are available for the TCP protocol ( -p tcp ): --dport - Sets the destination port for the packet. To configure this option, use a network service name (such as www or smtp); a port number; or a range of port numbers. To specify a range of port numbers, separate the two numbers with a colon ( : ). For example: -p tcp --dport 3000:3200 . The largest acceptable valid range is 0:65535 . Use an exclamation point character ( ! ) after the --dport option to match all packets that do not use that network service or port. To browse the names and aliases of network services and the port numbers they use, view the /etc/services file. The --destination-port match option is synonymous with --dport . --sport - Sets the source port of the packet using the same options as --dport . The --source-port match option is synonymous with --sport . --syn - Applies to all TCP packets designed to initiate communication, commonly called SYN packets . Any packets that carry a data payload are not touched. Use an exclamation point character ( ! ) before the --syn option to match all non-SYN packets. --tcp-flags <tested flag list> <set flag list> - Allows TCP packets that have specific bits (flags) set, to match a rule. The --tcp-flags match option accepts two parameters. The first parameter is the mask; a comma-separated list of flags to be examined in the packet. The second parameter is a comma-separated list of flags that must be set for the rule to match. The possible flags are: ACK FIN PSH RST SYN URG ALL NONE For example, an iptables rule that contains the following specification only matches TCP packets that have the SYN flag set and the ACK and FIN flags not set: --tcp-flags ACK,FIN,SYN SYN Use the exclamation point character ( ! ) after the --tcp-flags to reverse the effect of the match option. --tcp-option - Attempts to match with TCP-specific options that can be set within a particular packet. This match option can also be reversed by using the exclamation point character ( ! ) after the option. 2.8.9.2.4.2. UDP Protocol These match options are available for the UDP protocol ( -p udp ): --dport - Specifies the destination port of the UDP packet, using the service name, port number, or range of port numbers. The --destination-port match option is synonymous with --dport . --sport - Specifies the source port of the UDP packet, using the service name, port number, or range of port numbers. The --source-port match option is synonymous with --sport . 
For the --dport and --sport options, to specify a range of port numbers, separate the two numbers with a colon (:). For example: -p tcp --dport 3000:3200 . The largest acceptable valid range is 0:65535 . 2.8.9.2.4.3. ICMP Protocol The following match option is available for the Internet Control Message Protocol (ICMP) ( -p icmp ): --icmp-type - Sets the name or number of the ICMP type to match with the rule. A list of valid ICMP names can be retrieved by typing the iptables -p icmp -h command. 2.8.9.2.4.4. Additional Match Option Modules Additional match options are available through modules loaded by the iptables command. To use a match option module, load the module by name using the -m <module-name> , where <module-name> is the name of the module. Many modules are available by default. You can also create modules to provide additional functionality. The following is a partial list of the most commonly used modules: limit module - Places limits on how many packets are matched to a particular rule. When used in conjunction with the LOG target, the limit module can prevent a flood of matching packets from filling up the system log with repetitive messages or using up system resources. Refer to Section 2.8.9.2.5, "Target Options" for more information about the LOG target. The limit module enables the following options: --limit - Sets the maximum number of matches for a particular time period, specified as a <value>/<period> pair. For example, using --limit 5/hour allows five rule matches per hour. Periods can be specified in seconds, minutes, hours, or days. If a number and time modifier are not used, the default value of 3/hour is assumed. --limit-burst - Sets a limit on the number of packets able to match a rule at one time. This option is specified as an integer and should be used in conjunction with the --limit option. If no value is specified, the default value of five (5) is assumed. state module - Enables state matching. The state module enables the following options: --state - match a packet with the following connection states: ESTABLISHED - The matching packet is associated with other packets in an established connection. You need to accept this state if you want to maintain a connection between a client and a server. INVALID - The matching packet cannot be tied to a known connection. NEW - The matching packet is either creating a new connection or is part of a two-way connection not previously seen. You need to accept this state if you want to allow new connections to a service. RELATED - The matching packet is starting a new connection related in some way to an existing connection. An example of this is FTP, which uses one connection for control traffic (port 21), and a separate connection for data transfer (port 20). These connection states can be used in combination with one another by separating them with commas, such as -m state --state INVALID,NEW . mac module - Enables hardware MAC address matching. The mac module enables the following option: --mac-source - Matches a MAC address of the network interface card that sent the packet. To exclude a MAC address from a rule, place an exclamation point character ( ! ) after the --mac-source match option. Refer to the iptables man page for more match options available through modules.
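The following illustrative rules combine several of the match options described above; the port numbers, rate limits, and MAC address are arbitrary examples rather than recommended values.
iptables -A INPUT -p tcp --dport 22 -m state --state NEW,ESTABLISHED -j ACCEPT
iptables -A INPUT -p tcp --dport 80 --tcp-flags ACK,FIN,SYN SYN -m limit --limit 5/minute --limit-burst 10 -j LOG
iptables -A INPUT -m mac --mac-source 00:16:3E:12:34:56 -j DROP
iptables -A INPUT -p udp --sport 1024:65535 --dport 53 -j ACCEPT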
|
[
"~]# iptables -A INPUT -p icmp --icmp-type any -j ACCEPT ~]# iptables -A INPUT -p 5813 --icmp-type any -j ACCEPT",
"~]# chown root.root /etc/services ~]# chmod 0644 /etc/services ~]# chattr +i /etc/services"
] |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/security_guide/sect-security_guide-command_options_for_iptables-iptables_match_options
|
Chapter 61. Next steps
|
Chapter 61. Next steps Getting started with decision services Designing a decision service using guided decision tables
| null |
https://docs.redhat.com/en/documentation/red_hat_process_automation_manager/7.13/html/installing_and_configuring_red_hat_process_automation_manager/next_steps_3
|
Chapter 9. Uninstalling an IdM server
|
Chapter 9. Uninstalling an IdM server Follow this procedure to uninstall an Identity Management (IdM) server named server123.idm.example.com (server123). In the procedure, you first ensure that other servers are running critical services and that the topology will continue to be redundant before performing the uninstallation. Prerequisites You have root access to server123. You have an IdM administrator's credentials. Procedure If your IdM environment uses integrated DNS, ensure that server123 is not the only enabled DNS server: If server123 is the only remaining DNS server in the topology, add the DNS server role to another IdM server. For more information, see the ipa-dns-install(1) man page on your system. If your IdM environment uses an integrated certificate authority (CA): Ensure that server123 is not the only enabled CA server: If server123 is the only remaining CA server in the topology, add the CA server role to another IdM server. For more information, see the ipa-ca-install(1) man page on your system. If you have enabled vaults in your IdM environment, ensure that server123.idm.example.com is not the only enabled Key Recovery Authority (KRA) server: If server123 is the only remaining KRA server in the topology, add the KRA server role to another IdM server. For more information, see man ipa-kra-install(1) . Ensure that server123.idm.example.com is not the CA renewal server: If server123 is the CA renewal server, see Changing and resetting IdM CA renewal server for more information about how to move the CA renewal server role to another server. Ensure that server123.idm.example.com is not the current certificate revocation list (CRL) publisher: If the output shows that CRL generation is enabled on server123, see Generating CRL on an IdM CA server for more information about how to move the CRL publisher role to another server. Connect to another IdM server in the topology: On the server, obtain the IdM administrator's credentials: View the DNA ID ranges assigned to the servers in the topology: The output shows that a DNA ID range is assigned to both server123 and server456. If server123 is the only IdM server in the topology with a DNA ID range assigned, create a test IdM user on server456 to ensure that the server has a DNA ID range assigned: Delete server123.idm.example.com from the topology: Important If deleting server123 would lead to a disconnected topology, the script warns you about it. For information about how to create a replication agreement between the remaining replicas so that the deletion can proceed, see Setting up replication between two servers using the CLI . Note Running the ipa server-del command removes all replication data and agreements related to server123 for both the domain and ca suffixes. This is in contrast to Domain Level 0 IdM topologies, where you initially had to remove these data by using the ipa-replica-manage del server123 command. Domain Level 0 IdM topologies are those running on RHEL 7.2 and earlier. Use the ipa domainlevel-get command to view the current domain level. Return to server123.idm.example.com and uninstall the existing IdM installation: Ensure that all name server (NS) DNS records pointing to server123.idm.example.com are deleted from your DNS zones. This applies regardless of whether you use integrated DNS managed by IdM or external DNS. For more information about how to delete DNS records from IdM, see Deleting DNS records in the IdM CLI . 
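After the uninstallation, you can optionally verify from another server that server123 no longer appears in the topology and that no stale name server records remain. This sketch assumes the idm.example.com zone is managed by IdM; adjust the zone name if you use external DNS.
ipa server-find server123.idm.example.com
ipa dnsrecord-find idm.example.com --ns-rec=server123.idm.example.com.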
Additional resources Displaying and raising the domain level in RHEL 7 documentation Planning the replica topology Explanation of IdM CA renewal server Generating CRL on an IdM CA server
|
[
"ipa server-role-find --role 'DNS server' ---------------------- 2 server roles matched ---------------------- Server name: server456.idm.example.com Role name: DNS server Role status: enabled [...] ---------------------------- Number of entries returned 2 ----------------------------",
"ipa server-role-find --role 'CA server' ---------------------- 2 server roles matched ---------------------- Server name: server123.idm.example.com Role name: CA server Role status: enabled Server name: r8server.idm.example.com Role name: CA server Role status: enabled ---------------------------- Number of entries returned 2 ----------------------------",
"ipa server-role-find --role 'KRA server' ---------------------- 2 server roles matched ---------------------- Server name: server123.idm.example.com Role name: KRA server Role status: enabled Server name: r8server.idm.example.com Role name: KRA server Role status: enabled ---------------------------- Number of entries returned 2 ----------------------------",
"ipa config-show | grep 'CA renewal' IPA CA renewal master: r8server.idm.example.com",
"ipa-crlgen-manage status CRL generation: disabled",
"ssh idm_user@server456",
"[idm_user@server456 ~]USD kinit admin",
"[idm_user@server456 ~]USD ipa-replica-manage dnarange-show server123.idm.example.com: 1001-1500 server456.idm.example.com: 1501-2000 [...]",
"[idm_user@server456 ~]USD ipa user-add test_idm_user",
"[idm_user@server456 ~]USD ipa server-del server123.idm.example.com",
"ipa-server-install --uninstall Are you sure you want to continue with the uninstall procedure? [no]: true"
] |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/9/html/installing_identity_management/uninstalling-an-ipa-server_installing-identity-management
|
Making open source more inclusive
|
Making open source more inclusive Red Hat is committed to replacing problematic language in our code, documentation, and web properties. We are beginning with these four terms: master, slave, blacklist, and whitelist. Because of the enormity of this endeavor, these changes will be implemented gradually over several upcoming releases. For more details, see our CTO Chris Wright's message .
| null |
https://docs.redhat.com/en/documentation/red_hat_openstack_platform/17.0/html/custom_block_storage_back_end_deployment_guide/making-open-source-more-inclusive
|
Chapter 6. Updating hosted control planes
|
Chapter 6. Updating hosted control planes Updates for hosted control planes involve updating the hosted cluster and the node pools. For a cluster to remain fully operational during an update process, you must meet the requirements of the Kubernetes version skew policy while completing the control plane and node updates. 6.1. Requirements to upgrade hosted control planes The multicluster engine for Kubernetes Operator can manage one or more OpenShift Container Platform clusters. After you create a hosted cluster on OpenShift Container Platform, you must import your hosted cluster in the multicluster engine Operator as a managed cluster. Then, you can use the OpenShift Container Platform cluster as a management cluster. Consider the following requirements before you start updating hosted control planes: You must use the bare metal platform for an OpenShift Container Platform cluster when using OpenShift Virtualization as a provider. You must use bare metal or OpenShift Virtualization as the cloud platform for the hosted cluster. You can find the platform type of your hosted cluster in the spec.Platform.type specification of the HostedCluster custom resource (CR). You must upgrade the OpenShift Container Platform cluster, multicluster engine Operator, hosted cluster, and node pools by completing the following tasks: Upgrade an OpenShift Container Platform cluster to the latest version. For more information, see "Updating a cluster using the web console" or "Updating a cluster using the CLI". Upgrade the multicluster engine Operator to the latest version. For more information, see "Updating installed Operators". Upgrade the hosted cluster and node pools from the OpenShift Container Platform version to the latest version. For more information, see "Updating a control plane in a hosted cluster" and "Updating node pools in a hosted cluster". Additional resources Updating a cluster using the web console Updating a cluster using the CLI Updating installed Operators 6.2. Setting channels in a hosted cluster You can see available updates in the HostedCluster.Status field of the HostedCluster custom resource (CR). The available updates are not fetched from the Cluster Version Operator (CVO) of a hosted cluster. The list of the available updates can be different from the available updates from the following fields of the HostedCluster custom resource (CR): status.version.availableUpdates status.version.conditionalUpdates The initial HostedCluster CR does not have any information in the status.version.availableUpdates and status.version.conditionalUpdates fields. After you set the spec.channel field to the stable OpenShift Container Platform release version, the HyperShift Operator reconciles the HostedCluster CR and updates the status.version field with the available and conditional updates. See the following example of the HostedCluster CR that contains the channel configuration: spec: autoscaling: {} channel: stable-4.y 1 clusterID: d6d42268-7dff-4d37-92cf-691bd2d42f41 configuration: {} controllerAvailabilityPolicy: SingleReplica dns: baseDomain: dev11.red-chesterfield.com privateZoneID: Z0180092I0DQRKL55LN0 publicZoneID: Z00206462VG6ZP0H2QLWK 1 Replace <4.y> with the OpenShift Container Platform release version you specified in spec.release . For example, if you set the spec.release to ocp-release:4.16.4-multi , you must set spec.channel to stable-4.16 . 
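If you prefer to set the channel from the command line instead of editing the CR directly, a merge patch works as well. The following is a hedged example; the hosted cluster name, namespace, and channel value are placeholders that you must replace with your own values.

# Set spec.channel on an existing HostedCluster resource (placeholder values).
oc patch hostedcluster <hosted_cluster_name> -n <hosted_cluster_namespace> \
  --type=merge -p '{"spec":{"channel":"stable-4.16"}}'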
After you configure the channel in the HostedCluster CR, to view the output of the status.version.availableUpdates and status.version.conditionalUpdates fields, run the following command: USD oc get -n <hosted_cluster_namespace> hostedcluster <hosted_cluster_name> -o yaml Example output version: availableUpdates: - channels: - candidate-4.16 - candidate-4.17 - eus-4.16 - fast-4.16 - stable-4.16 image: quay.io/openshift-release-dev/ocp-release@sha256:b7517d13514c6308ae16c5fd8108133754eb922cd37403ed27c846c129e67a9a url: https://access.redhat.com/errata/RHBA-2024:6401 version: 4.16.11 - channels: - candidate-4.16 - candidate-4.17 - eus-4.16 - fast-4.16 - stable-4.16 image: quay.io/openshift-release-dev/ocp-release@sha256:d08e7c8374142c239a07d7b27d1170eae2b0d9f00ccf074c3f13228a1761c162 url: https://access.redhat.com/errata/RHSA-2024:6004 version: 4.16.10 - channels: - candidate-4.16 - candidate-4.17 - eus-4.16 - fast-4.16 - stable-4.16 image: quay.io/openshift-release-dev/ocp-release@sha256:6a80ac72a60635a313ae511f0959cc267a21a89c7654f1c15ee16657aafa41a0 url: https://access.redhat.com/errata/RHBA-2024:5757 version: 4.16.9 - channels: - candidate-4.16 - candidate-4.17 - eus-4.16 - fast-4.16 - stable-4.16 image: quay.io/openshift-release-dev/ocp-release@sha256:ea624ae7d91d3f15094e9e15037244679678bdc89e5a29834b2ddb7e1d9b57e6 url: https://access.redhat.com/errata/RHSA-2024:5422 version: 4.16.8 - channels: - candidate-4.16 - candidate-4.17 - eus-4.16 - fast-4.16 - stable-4.16 image: quay.io/openshift-release-dev/ocp-release@sha256:e4102eb226130117a0775a83769fe8edb029f0a17b6cbca98a682e3f1225d6b7 url: https://access.redhat.com/errata/RHSA-2024:4965 version: 4.16.6 - channels: - candidate-4.16 - candidate-4.17 - eus-4.16 - fast-4.16 - stable-4.16 image: quay.io/openshift-release-dev/ocp-release@sha256:f828eda3eaac179e9463ec7b1ed6baeba2cd5bd3f1dd56655796c86260db819b url: https://access.redhat.com/errata/RHBA-2024:4855 version: 4.16.5 conditionalUpdates: - conditions: - lastTransitionTime: "2024-09-23T22:33:38Z" message: |- Could not evaluate exposure to update risk SRIOVFailedToConfigureVF (creating PromQL round-tripper: unable to load specified CA cert /etc/tls/service-ca/service-ca.crt: open /etc/tls/service-ca/service-ca.crt: no such file or directory) SRIOVFailedToConfigureVF description: OCP Versions 4.14.34, 4.15.25, 4.16.7 and ALL subsequent versions include kernel datastructure changes which are not compatible with older versions of the SR-IOV operator. Please update SR-IOV operator to versions dated 20240826 or newer before updating OCP. SRIOVFailedToConfigureVF URL: https://issues.redhat.com/browse/NHE-1171 reason: EvaluationFailed status: Unknown type: Recommended release: channels: - candidate-4.16 - candidate-4.17 - eus-4.16 - fast-4.16 - stable-4.16 image: quay.io/openshift-release-dev/ocp-release@sha256:fb321a3f50596b43704dbbed2e51fdefd7a7fd488ee99655d03784d0cd02283f url: https://access.redhat.com/errata/RHSA-2024:5107 version: 4.16.7 risks: - matchingRules: - promql: promql: | group(csv_succeeded{_id="d6d42268-7dff-4d37-92cf-691bd2d42f41", name=~"sriov-network-operator[.].*"}) or 0 * group(csv_count{_id="d6d42268-7dff-4d37-92cf-691bd2d42f41"}) type: PromQL message: OCP Versions 4.14.34, 4.15.25, 4.16.7 and ALL subsequent versions include kernel datastructure changes which are not compatible with older versions of the SR-IOV operator. Please update SR-IOV operator to versions dated 20240826 or newer before updating OCP. 
name: SRIOVFailedToConfigureVF url: https://issues.redhat.com/browse/NHE-1171 6.3. Updating the OpenShift Container Platform version in a hosted cluster Hosted control planes enables the decoupling of updates between the control plane and the data plane. As a cluster service provider or cluster administrator, you can manage the control plane and the data separately. You can update a control plane by modifying the HostedCluster custom resource (CR) and a node by modifying its NodePool CR. Both the HostedCluster and NodePool CRs specify an OpenShift Container Platform release image in a .release field. To keep your hosted cluster fully operational during an update process, the control plane and the node updates must follow the Kubernetes version skew policy . 6.3.1. The multicluster engine Operator hub management cluster The multicluster engine for Kubernetes Operator requires a specific OpenShift Container Platform version for the management cluster to remain in a supported state. You can install the multicluster engine Operator from OperatorHub in the OpenShift Container Platform web console. See the following support matrices for the multicluster engine Operator versions: multicluster engine Operator 2.7 multicluster engine Operator 2.6 multicluster engine Operator 2.5 multicluster engine Operator 2.4 The multicluster engine Operator supports the following OpenShift Container Platform versions: The latest unreleased version The latest released version Two versions before the latest released version You can also get the multicluster engine Operator version as a part of Red Hat Advanced Cluster Management (RHACM). 6.3.2. Supported OpenShift Container Platform versions in a hosted cluster When deploying a hosted cluster, the OpenShift Container Platform version of the management cluster does not affect the OpenShift Container Platform version of a hosted cluster. The HyperShift Operator creates the supported-versions ConfigMap in the hypershift namespace. The supported-versions ConfigMap describes the range of supported OpenShift Container Platform versions that you can deploy. See the following example of the supported-versions ConfigMap: apiVersion: v1 data: server-version: 2f6cfe21a0861dea3130f3bed0d3ae5553b8c28b supported-versions: '{"versions":["4.17","4.16","4.15","4.14"]}' kind: ConfigMap metadata: creationTimestamp: "2024-06-20T07:12:31Z" labels: hypershift.openshift.io/supported-versions: "true" name: supported-versions namespace: hypershift resourceVersion: "927029" uid: f6336f91-33d3-472d-b747-94abae725f70 Important To create a hosted cluster, you must use the OpenShift Container Platform version from the support version range. However, the multicluster engine Operator can manage only between n+1 and n-2 OpenShift Container Platform versions, where n defines the current minor version. You can check the multicluster engine Operator support matrix to ensure the hosted clusters managed by the multicluster engine Operator are within the supported OpenShift Container Platform range. To deploy a higher version of a hosted cluster on OpenShift Container Platform, you must update the multicluster engine Operator to a new minor version release to deploy a new version of the Hypershift Operator. Upgrading the multicluster engine Operator to a new patch, or z-stream, release does not update the HyperShift Operator to the version. 
See the following example output of the hcp version command that shows the supported OpenShift Container Platform versions for OpenShift Container Platform 4.16 in the management cluster: Client Version: openshift/hypershift: fe67b47fb60e483fe60e4755a02b3be393256343. Latest supported OCP: 4.17.0 Server Version: 05864f61f24a8517731664f8091cedcfc5f9b60d Server Supports OCP Versions: 4.17, 4.16, 4.15, 4.14 6.4. Updates for the hosted cluster The spec.release.image value dictates the version of the control plane. The HostedCluster object transmits the intended spec.release.image value to the HostedControlPlane.spec.releaseImage value and runs the appropriate Control Plane Operator version. The hosted control plane manages the rollout of the new version of the control plane components along with any OpenShift Container Platform components through the new version of the Cluster Version Operator (CVO). Important In hosted control planes, the NodeHealthCheck resource cannot detect the status of the CVO. A cluster administrator must manually pause the remediation triggered by NodeHealthCheck , before performing critical operations, such as updating the cluster, to prevent new remediation actions from interfering with cluster updates. To pause the remediation, enter the array of strings, for example, pause-test-cluster , as a value of the pauseRequests field in the NodeHealthCheck resource. For more information, see About the Node Health Check Operator . After the cluster update is complete, you can edit or delete the remediation. Navigate to the Compute NodeHealthCheck page, click your node health check, and then click Actions , which shows a drop-down list. 6.5. Updates for node pools With node pools, you can configure the software that is running in the nodes by exposing the spec.release and spec.config values. You can start a rolling node pool update in the following ways: Changing the spec.release or spec.config values. Changing any platform-specific field, such as the AWS instance type. The result is a set of new instances with the new type. Changing the cluster configuration, if the change propagates to the node. Node pools support replace updates and in-place updates. The nodepool.spec.release value dictates the version of any particular node pool. A NodePool object completes a replace or an in-place rolling update according to the .spec.management.upgradeType value. After you create a node pool, you cannot change the update type. If you want to change the update type, you must create a node pool and delete the other one. 6.5.1. Replace updates for node pools A replace update creates instances in the new version while it removes old instances from the version. This update type is effective in cloud environments where this level of immutability is cost effective. Replace updates do not preserve any manual changes because the node is entirely re-provisioned. 6.5.2. In place updates for node pools An in-place update directly updates the operating systems of the instances. This type is suitable for environments where the infrastructure constraints are higher, such as bare metal. In-place updates can preserve manual changes, but will report errors if you make manual changes to any file system or operating system configuration that the cluster directly manages, such as kubelet certificates. 6.6. Updating node pools in a hosted cluster You can update your version of OpenShift Container Platform by updating the node pools in your hosted cluster. 
The node pool version must not surpass the hosted control plane version. The .spec.release field in the NodePool custom resource (CR) shows the version of a node pool. Procedure Change the spec.release.image value in the node pool by entering the following command: USD oc patch nodepool <node_pool_name> -n <hosted_cluster_namespace> --type=merge -p '{"spec":{"nodeDrainTimeout":"60s","release":{"image":"<openshift_release_image>"}}}' 1 2 1 Replace <node_pool_name> and <hosted_cluster_namespace> with your node pool name and hosted cluster namespace, respectively. 2 The <openshift_release_image> variable specifies the new OpenShift Container Platform release image that you want to upgrade to, for example, quay.io/openshift-release-dev/ocp-release:4.y.z-x86_64 . Replace <4.y.z> with the supported OpenShift Container Platform version. Verification To verify that the new version was rolled out, check the .status.conditions value in the node pool by running the following command: USD oc get -n <hosted_cluster_namespace> nodepool <node_pool_name> -o yaml Example output status: conditions: - lastTransitionTime: "2024-05-20T15:00:40Z" message: 'Using release image: quay.io/openshift-release-dev/ocp-release:4.y.z-x86_64' 1 reason: AsExpected status: "True" type: ValidReleaseImage 1 Replace <4.y.z> with the supported OpenShift Container Platform version. 6.7. Updating a control plane in a hosted cluster On hosted control planes, you can upgrade your version of OpenShift Container Platform by updating the hosted cluster. The .spec.release in the HostedCluster custom resource (CR) shows the version of the control plane. The HostedCluster updates the .spec.release field to the HostedControlPlane.spec.release and runs the appropriate Control Plane Operator version. The HostedControlPlane resource orchestrates the rollout of the new version of the control plane components along with the OpenShift Container Platform component in the data plane through the new version of the Cluster Version Operator (CVO). The HostedControlPlane includes the following artifacts: CVO Cluster Network Operator (CNO) Cluster Ingress Operator Manifests for the Kube API server, scheduler, and manager Machine approver Autoscaler Infrastructure resources to enable ingress for control plane endpoints such as the Kube API server, ignition, and konnectivity You can set the .spec.release field in the HostedCluster CR to update the control plane by using the information from the status.version.availableUpdates and status.version.conditionalUpdates fields. Procedure Add the hypershift.openshift.io/force-upgrade-to=<openshift_release_image> annotation to the hosted cluster by entering the following command: USD oc annotate hostedcluster -n <hosted_cluster_namespace> <hosted_cluster_name> "hypershift.openshift.io/force-upgrade-to=<openshift_release_image>" --overwrite 1 2 1 Replace <hosted_cluster_name> and <hosted_cluster_namespace> with your hosted cluster name and hosted cluster namespace, respectively. 2 The <openshift_release_image> variable specifies the new OpenShift Container Platform release image that you want to upgrade to, for example, quay.io/openshift-release-dev/ocp-release:4.y.z-x86_64 . Replace <4.y.z> with the supported OpenShift Container Platform version. 
Change the spec.release.image value in the hosted cluster by entering the following command: USD oc patch hostedcluster <hosted_cluster_name> -n <hosted_cluster_namespace> --type=merge -p '{"spec":{"release":{"image":"<openshift_release_image>"}}}' Verification To verify that the new version was rolled out, check the .status.conditions and .status.version values in the hosted cluster by running the following command: USD oc get -n <hosted_cluster_namespace> hostedcluster <hosted_cluster_name> -o yaml Example output status: conditions: - lastTransitionTime: "2024-05-20T15:01:01Z" message: Payload loaded version="4.y.z" image="quay.io/openshift-release-dev/ocp-release:4.y.z-x86_64" 1 status: "True" type: ClusterVersionReleaseAccepted #... version: availableUpdates: null desired: image: quay.io/openshift-release-dev/ocp-release:4.y.z-x86_64 2 version: 4.y.z 1 2 Replace <4.y.z> with the supported OpenShift Container Platform version. 6.8. Updating a hosted cluster by using the multicluster engine Operator console You can update your hosted cluster by using the multicluster engine Operator console. Important Before updating a hosted cluster, you must refer to the available and conditional updates of a hosted cluster. Choosing a wrong release version might break the hosted cluster. Procedure Select All clusters . Navigate to Infrastructure Clusters to view managed hosted clusters. Click the Upgrade available link to update the control plane and node pools. 6.9. Limitations of managing imported hosted clusters Hosted clusters are automatically imported into the local multicluster engine for Kubernetes Operator, unlike a standalone OpenShift Container Platform or third party clusters. Hosted clusters run some of their agents in the hosted mode so that the agents do not use the resources of your cluster. If you choose to automatically import hosted clusters, you can update node pools and the control plane in hosted clusters by using the HostedCluster resource on the management cluster. To update node pools and a control plane, see "Updating node pools in a hosted cluster" and "Updating a control plane in a hosted cluster". You can import hosted clusters into a location other than the local multicluster engine Operator by using the Red Hat Advanced Cluster Management (RHACM). For more information, see "Discovering multicluster engine for Kubernetes Operator hosted clusters in Red Hat Advanced Cluster Management". In this topology, you must update your hosted clusters by using the command-line interface or the console of the local multicluster engine for Kubernetes Operator where the cluster is hosted. You cannot update the hosted clusters through the RHACM hub cluster. Additional resources Updating node pools in a hosted cluster Updating a control plane in a hosted cluster Discovering multicluster engine for Kubernetes Operator hosted clusters in Red Hat Advanced Cluster Management
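Before you patch the .spec.release field as described in "Updating node pools in a hosted cluster" and "Updating a control plane in a hosted cluster", it can be convenient to list the versions that the hosted cluster itself reports as available. The following sketch assumes the jq utility is installed and uses the same placeholder names as the procedures above.

# List the update versions reported in the hosted cluster status.
oc get hostedcluster <hosted_cluster_name> -n <hosted_cluster_namespace> -o json \
  | jq -r '.status.version.availableUpdates[]?.version'

# Conditional updates carry documented risks; review them before forcing an update.
oc get hostedcluster <hosted_cluster_name> -n <hosted_cluster_namespace> -o json \
  | jq -r '.status.version.conditionalUpdates[]?.release.version'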
|
[
"spec: autoscaling: {} channel: stable-4.y 1 clusterID: d6d42268-7dff-4d37-92cf-691bd2d42f41 configuration: {} controllerAvailabilityPolicy: SingleReplica dns: baseDomain: dev11.red-chesterfield.com privateZoneID: Z0180092I0DQRKL55LN0 publicZoneID: Z00206462VG6ZP0H2QLWK",
"oc get -n <hosted_cluster_namespace> hostedcluster <hosted_cluster_name> -o yaml",
"version: availableUpdates: - channels: - candidate-4.16 - candidate-4.17 - eus-4.16 - fast-4.16 - stable-4.16 image: quay.io/openshift-release-dev/ocp-release@sha256:b7517d13514c6308ae16c5fd8108133754eb922cd37403ed27c846c129e67a9a url: https://access.redhat.com/errata/RHBA-2024:6401 version: 4.16.11 - channels: - candidate-4.16 - candidate-4.17 - eus-4.16 - fast-4.16 - stable-4.16 image: quay.io/openshift-release-dev/ocp-release@sha256:d08e7c8374142c239a07d7b27d1170eae2b0d9f00ccf074c3f13228a1761c162 url: https://access.redhat.com/errata/RHSA-2024:6004 version: 4.16.10 - channels: - candidate-4.16 - candidate-4.17 - eus-4.16 - fast-4.16 - stable-4.16 image: quay.io/openshift-release-dev/ocp-release@sha256:6a80ac72a60635a313ae511f0959cc267a21a89c7654f1c15ee16657aafa41a0 url: https://access.redhat.com/errata/RHBA-2024:5757 version: 4.16.9 - channels: - candidate-4.16 - candidate-4.17 - eus-4.16 - fast-4.16 - stable-4.16 image: quay.io/openshift-release-dev/ocp-release@sha256:ea624ae7d91d3f15094e9e15037244679678bdc89e5a29834b2ddb7e1d9b57e6 url: https://access.redhat.com/errata/RHSA-2024:5422 version: 4.16.8 - channels: - candidate-4.16 - candidate-4.17 - eus-4.16 - fast-4.16 - stable-4.16 image: quay.io/openshift-release-dev/ocp-release@sha256:e4102eb226130117a0775a83769fe8edb029f0a17b6cbca98a682e3f1225d6b7 url: https://access.redhat.com/errata/RHSA-2024:4965 version: 4.16.6 - channels: - candidate-4.16 - candidate-4.17 - eus-4.16 - fast-4.16 - stable-4.16 image: quay.io/openshift-release-dev/ocp-release@sha256:f828eda3eaac179e9463ec7b1ed6baeba2cd5bd3f1dd56655796c86260db819b url: https://access.redhat.com/errata/RHBA-2024:4855 version: 4.16.5 conditionalUpdates: - conditions: - lastTransitionTime: \"2024-09-23T22:33:38Z\" message: |- Could not evaluate exposure to update risk SRIOVFailedToConfigureVF (creating PromQL round-tripper: unable to load specified CA cert /etc/tls/service-ca/service-ca.crt: open /etc/tls/service-ca/service-ca.crt: no such file or directory) SRIOVFailedToConfigureVF description: OCP Versions 4.14.34, 4.15.25, 4.16.7 and ALL subsequent versions include kernel datastructure changes which are not compatible with older versions of the SR-IOV operator. Please update SR-IOV operator to versions dated 20240826 or newer before updating OCP. SRIOVFailedToConfigureVF URL: https://issues.redhat.com/browse/NHE-1171 reason: EvaluationFailed status: Unknown type: Recommended release: channels: - candidate-4.16 - candidate-4.17 - eus-4.16 - fast-4.16 - stable-4.16 image: quay.io/openshift-release-dev/ocp-release@sha256:fb321a3f50596b43704dbbed2e51fdefd7a7fd488ee99655d03784d0cd02283f url: https://access.redhat.com/errata/RHSA-2024:5107 version: 4.16.7 risks: - matchingRules: - promql: promql: | group(csv_succeeded{_id=\"d6d42268-7dff-4d37-92cf-691bd2d42f41\", name=~\"sriov-network-operator[.].*\"}) or 0 * group(csv_count{_id=\"d6d42268-7dff-4d37-92cf-691bd2d42f41\"}) type: PromQL message: OCP Versions 4.14.34, 4.15.25, 4.16.7 and ALL subsequent versions include kernel datastructure changes which are not compatible with older versions of the SR-IOV operator. Please update SR-IOV operator to versions dated 20240826 or newer before updating OCP. name: SRIOVFailedToConfigureVF url: https://issues.redhat.com/browse/NHE-1171",
"apiVersion: v1 data: server-version: 2f6cfe21a0861dea3130f3bed0d3ae5553b8c28b supported-versions: '{\"versions\":[\"4.17\",\"4.16\",\"4.15\",\"4.14\"]}' kind: ConfigMap metadata: creationTimestamp: \"2024-06-20T07:12:31Z\" labels: hypershift.openshift.io/supported-versions: \"true\" name: supported-versions namespace: hypershift resourceVersion: \"927029\" uid: f6336f91-33d3-472d-b747-94abae725f70",
"Client Version: openshift/hypershift: fe67b47fb60e483fe60e4755a02b3be393256343. Latest supported OCP: 4.17.0 Server Version: 05864f61f24a8517731664f8091cedcfc5f9b60d Server Supports OCP Versions: 4.17, 4.16, 4.15, 4.14",
"oc patch nodepool <node_pool_name> -n <hosted_cluster_namespace> --type=merge -p '{\"spec\":{\"nodeDrainTimeout\":\"60s\",\"release\":{\"image\":\"<openshift_release_image>\"}}}' 1 2",
"oc get -n <hosted_cluster_namespace> nodepool <node_pool_name> -o yaml",
"status: conditions: - lastTransitionTime: \"2024-05-20T15:00:40Z\" message: 'Using release image: quay.io/openshift-release-dev/ocp-release:4.y.z-x86_64' 1 reason: AsExpected status: \"True\" type: ValidReleaseImage",
"oc annotate hostedcluster -n <hosted_cluster_namespace> <hosted_cluster_name> \"hypershift.openshift.io/force-upgrade-to=<openshift_release_image>\" --overwrite 1 2",
"oc patch hostedcluster <hosted_cluster_name> -n <hosted_cluster_namespace> --type=merge -p '{\"spec\":{\"release\":{\"image\":\"<openshift_release_image>\"}}}'",
"oc get -n <hosted_cluster_namespace> hostedcluster <hosted_cluster_name> -o yaml",
"status: conditions: - lastTransitionTime: \"2024-05-20T15:01:01Z\" message: Payload loaded version=\"4.y.z\" image=\"quay.io/openshift-release-dev/ocp-release:4.y.z-x86_64\" 1 status: \"True\" type: ClusterVersionReleaseAccepted # version: availableUpdates: null desired: image: quay.io/openshift-release-dev/ocp-release:4.y.z-x86_64 2 version: 4.y.z"
] |
https://docs.redhat.com/en/documentation/openshift_container_platform/4.16/html/hosted_control_planes/updating-hosted-control-planes
|
6.2. Annotating Objects for Marshalling Using @SerializeWith
|
6.2. Annotating Objects for Marshalling Using @SerializeWith Objects can be marshalled by providing an Externalizer implementation for the type that needs to be marshalled or unmarshalled, then annotating the marshalled type class with @SerializeWith to indicate the Externalizer class to use. Example 6.1. Using the @SerializeWith Annotation In the provided example, the object has been defined as marshallable due to the @SerializeWith annotation. JBoss Marshalling therefore marshalls the object using the Externalizer class passed in the annotation. This method of defining externalizers is user friendly; however, it has the following disadvantages: The payload sizes generated using this method are not the most efficient. This is due to some constraints in the model, such as support for different versions of the same class, or the need to marshall the Externalizer class. This model requires the marshalled class to be annotated with @SerializeWith , but an Externalizer may need to be provided for a class whose source code is not available or that cannot otherwise be modified. Annotations used in this model may be limiting for framework developers or service providers that attempt to abstract lower-level details, such as the marshalling layer, away from the user. Advanced Externalizers are available for users affected by these disadvantages. Note To make Externalizer implementations easier to code and more typesafe, define type <T> as the type of object that is being marshalled or unmarshalled.
|
[
"import org.infinispan.commons.marshall.Externalizer; import org.infinispan.commons.marshall.SerializeWith; @SerializeWith(Person.PersonExternalizer.class) public class Person { final String name; final int age; public Person(String name, int age) { this.name = name; this.age = age; } public static class PersonExternalizer implements Externalizer<Person> { @Override public void writeObject(ObjectOutput output, Person person) throws IOException { output.writeObject(person.name); output.writeInt(person.age); } @Override public Person readObject(ObjectInput input) throws IOException, ClassNotFoundException { return new Person((String) input.readObject(), input.readInt()); } } }"
] |
https://docs.redhat.com/en/documentation/red_hat_data_grid/6.6/html/developer_guide/annotating_objects_for_marshalling_using_serializewith
|
Chapter 7. Using the Kafka Bridge with 3scale
|
Chapter 7. Using the Kafka Bridge with 3scale You can deploy and integrate Red Hat 3scale API Management with the AMQ Streams Kafka Bridge. 7.1. Using the Kafka Bridge with 3scale With a plain deployment of the Kafka Bridge, there is no provision for authentication or authorization, and no support for a TLS encrypted connection to external clients. 3scale can secure the Kafka Bridge with TLS, and provide authentication and authorization. Integration with 3scale also means that additional features like metrics, rate limiting and billing are available. With 3scale, you can use different types of authentication for requests from external clients wishing to access AMQ Streams. 3scale supports the following types of authentication: Standard API Keys Single randomized strings or hashes acting as an identifier and a secret token. Application Identifier and Key pairs Immutable identifier and mutable secret key strings. OpenID Connect Protocol for delegated authentication. Using an existing 3scale deployment? If you already have 3scale deployed to OpenShift and you wish to use it with the Kafka Bridge, ensure that you have the correct setup. Setup is described in Section 7.2, "Deploying 3scale for the Kafka Bridge" . 7.1.1. Kafka Bridge service discovery 3scale is integrated using service discovery, which requires that 3scale is deployed to the same OpenShift cluster as AMQ Streams and the Kafka Bridge. Your AMQ Streams Cluster Operator deployment must have the following environment variables set: STRIMZI_CUSTOM_KAFKA_BRIDGE_SERVICE_LABELS STRIMZI_CUSTOM_KAFKA_BRIDGE_SERVICE_ANNOTATIONS When the Kafka Bridge is deployed, the service that exposes the REST interface of the Kafka Bridge uses the annotations and labels for discovery by 3scale. A discovery.3scale.net=true label is used by 3scale to find a service. Annotations provide information about the service. You can check your configuration in the OpenShift console by navigating to Services for the Kafka Bridge instance. Under Annotations you will see the endpoint to the OpenAPI specification for the Kafka Bridge. 7.1.2. 3scale APIcast gateway policies 3scale is used in conjunction with 3scale APIcast, an API gateway deployed with 3scale that provides a single point of entry for the Kafka Bridge. APIcast policies provide a mechanism to customize how the gateway operates. 3scale provides a set of standard policies for gateway configuration. You can also create your own policies. For more information on APIcast policies, see Administering the API Gateway in the 3scale documentation. APIcast policies for the Kafka Bridge A sample policy configuration for 3scale integration with the Kafka Bridge is provided with the policies_config.json file, which defines: Anonymous access Header modification Routing URL rewriting Gateway policies are enabled or disabled through this file. You can use this sample as a starting point for defining your own policies. Anonymous access The anonymous access policy exposes a service without authentication, providing default credentials (for anonymous access) when a HTTP client does not provide them. The policy is not mandatory and can be disabled or removed if authentication is always needed. Header modification The header modification policy allows existing HTTP headers to be modified, or new headers added to requests or responses passing through the gateway. For 3scale integration, the policy adds headers to every request passing through the gateway from a HTTP client to the Kafka Bridge. 
When the Kafka Bridge receives a request for creating a new consumer, it returns a JSON payload containing a base_uri field with the URI that the consumer must use for all the subsequent requests. For example: { "instance_id": "consumer-1", "base_uri":"http://my-bridge:8080/consumers/my-group/instances/consumer1" } When using APIcast, clients send all subsequent requests to the gateway and not to the Kafka Bridge directly. So the URI requires the gateway hostname, not the address of the Kafka Bridge behind the gateway. Using header modification policies, headers are added to requests from the HTTP client so that the Kafka Bridge uses the gateway hostname. For example, by applying a Forwarded: host=my-gateway:80;proto=http header, the Kafka Bridge delivers the following to the consumer. { "instance_id": "consumer-1", "base_uri":"http://my-gateway:80/consumers/my-group/instances/consumer1" } An X-Forwarded-Path header carries the original path contained in a request from the client to the gateway. This header is strictly related to the routing policy applied when a gateway supports more than one Kafka Bridge instance. Routing A routing policy is applied when there is more than one Kafka Bridge instance. Requests must be sent to the same Kafka Bridge instance where the consumer was initially created, so a request must specify a route for the gateway to forward a request to the appropriate Kafka Bridge instance. A routing policy names each bridge instance, and routing is performed using the name. You specify the name in the KafkaBridge custom resource when you deploy the Kafka Bridge. For example, each request (using X-Forwarded-Path ) from a consumer to: http://my-gateway:80/my-bridge-1/consumers/my-group/instances/consumer1 is forwarded to: http://my-bridge-1-bridge-service:8080/consumers/my-group/instances/consumer1 URL rewriting policy removes the bridge name, as it is not used when forwarding the request from the gateway to the Kafka Bridge. URL rewriting The URL rewiring policy ensures that a request to a specific Kafka Bridge instance from a client does not contain the bridge name when forwarding the request from the gateway to the Kafka Bridge. The bridge name is not used in the endpoints exposed by the bridge. 7.1.3. TLS validation You can set up APIcast for TLS validation, which requires a self-managed deployment of APIcast using a template. The apicast service is exposed as a route. You can also apply a TLS policy to the Kafka Bridge API. For more information on TLS configuration, see Administering the API Gateway in the 3scale documentation. 7.1.4. 3scale documentation The procedure to deploy 3scale for use with the Kafka Bridge assumes some understanding of 3scale. For more information, refer to the 3scale product documentation: Product Documentation for Red Hat 3scale API Management 7.2. Deploying 3scale for the Kafka Bridge In order to use 3scale with the Kafka Bridge, you first deploy it and then configure it to discover the Kafka Bridge API. You will also use 3scale APIcast and 3scale toolbox. APIcast is provided by 3scale as an NGINX-based API gateway for HTTP clients to connect to the Kafka Bridge API service. 3scale toolbox is a configuration tool that is used to import the OpenAPI specification for the Kafka Bridge service to 3scale. In this scenario, you run AMQ Streams, Kafka, the Kafka Bridge and 3scale/APIcast in the same OpenShift cluster. 
Note If you already have 3scale deployed in the same cluster as the Kafka Bridge, you can skip the deployment steps and use your current deployment. Prerequisites AMQ Streams and Kafka is running The Kafka Bridge is deployed For the 3scale deployment: Check the Red Hat 3scale API Management supported configurations . Installation requires a user with cluster-admin role, such as system:admin . You need access to the JSON files describing the: Kafka Bridge OpenAPI specification ( openapiv2.json ) Header modification and routing policies for the Kafka Bridge ( policies_config.json ) Find the JSON files on GitHub . Procedure Deploy 3scale API Management to the OpenShift cluster. Create a new project or use an existing project. oc new-project my-project \ --description=" description " --display-name=" display_name " Deploy 3scale. Use the information provided in the Installing 3scale guide to deploy 3scale on OpenShift using a template or operator. Whichever approach you use, make sure that you set the WILDCARD_DOMAIN parameter to the domain of your OpenShift cluster. Make a note of the URLS and credentials presented for accessing the 3scale Admin Portal. Grant authorization for 3scale to discover the Kafka Bridge service: oc adm policy add-cluster-role-to-user view system:serviceaccount: my-project :amp Verify that 3scale was successfully deployed to the Openshift cluster from the OpenShift console or CLI. For example: oc get deployment 3scale-operator Set up 3scale toolbox. Use the information provided in the Operating 3scale guide to install 3scale toolbox. Set environment variables to be able to interact with 3scale: export REMOTE_NAME=strimzi-kafka-bridge 1 export SYSTEM_NAME=strimzi_http_bridge_for_apache_kafka 2 export TENANT=strimzi-kafka-bridge-admin 3 export PORTAL_ENDPOINT=USDTENANT.3scale.net 4 export TOKEN= 3scale access token 5 1 REMOTE_NAME is the name assigned to the remote address of the 3scale Admin Portal. 2 SYSTEM_NAME is the name of the 3scale service/API created by importing the OpenAPI specification through the 3scale toolbox. 3 TENANT is the tenant name of the 3scale Admin Portal (that is, https://USDTENANT.3scale.net ). 4 PORTAL_ENDPOINT is the endpoint running the 3scale Admin Portal. 5 TOKEN is the access token provided by the 3scale Admin Portal for interaction through the 3scale toolbox or HTTP requests. Configure the remote web address of the 3scale toolbox: 3scale remote add USDREMOTE_NAME https://USDTOKEN@USDPORTAL_ENDPOINT/ Now the endpoint address of the 3scale Admin portal does not need to be specified every time you run the toolbox. Check that your Cluster Operator deployment has the labels and annotations properties required for the Kafka Bridge service to be discovered by 3scale. #... env: - name: STRIMZI_CUSTOM_KAFKA_BRIDGE_SERVICE_LABELS value: | discovery.3scale.net=true - name: STRIMZI_CUSTOM_KAFKA_BRIDGE_SERVICE_ANNOTATIONS value: | discovery.3scale.net/scheme=http discovery.3scale.net/port=8080 discovery.3scale.net/path=/ discovery.3scale.net/description-path=/openapi #... If not, add the properties through the OpenShift console or try redeploying the Cluster Operator and the Kafka Bridge . Discover the Kafka Bridge API service through 3scale. Log in to the 3scale Admin portal using the credentials provided when 3scale was deployed. From the 3scale Admin Portal, navigate to New API Import from OpenShift where you will see the Kafka Bridge service. Click Create Service . You may need to refresh the page to see the Kafka Bridge service. 
Now you need to import the configuration for the service. You do this from an editor, but keep the portal open to check the imports are successful. Edit the Host field in the OpenAPI specification (JSON file) to use the base URL of the Kafka Bridge service: For example: "host": "my-bridge-bridge-service.my-project.svc.cluster.local:8080" Check the host URL includes the correct: Kafka Bridge name ( my-bridge ) Project name ( my-project ) Port for the Kafka Bridge ( 8080 ) Import the updated OpenAPI specification using the 3scale toolbox: 3scale import openapi -k -d USDREMOTE_NAME openapiv2.json -t myproject-my-bridge-bridge-service Import the header modification and routing policies for the service (JSON file). Locate the ID for the service you created in 3scale. Here we use the `jq` utility : export SERVICE_ID=USD(curl -k -s -X GET "https://USDPORTAL_ENDPOINT/admin/api/services.json?access_token=USDTOKEN" | jq ".services[] | select(.service.system_name | contains(\"USDSYSTEM_NAME\")) | .service.id") You need the ID when importing the policies. Import the policies: curl -k -X PUT "https://USDPORTAL_ENDPOINT/admin/api/services/USDSERVICE_ID/proxy/policies.json" --data "access_token=USDTOKEN" --data-urlencode policies_config@policies_config.json From the 3scale Admin Portal, navigate to Integration Configuration to check that the endpoints and policies for the Kafka Bridge service have loaded. Navigate to Applications Create Application Plan to create an application plan. Navigate to Audience Developer Applications Create Application to create an application. The application is required in order to obtain a user key for authentication. (Production environment step) To make the API available to the production gateway, promote the configuration: 3scale proxy-config promote USDREMOTE_NAME USDSERVICE_ID Use an API testing tool to verify you can access the Kafka Bridge through the APIcast gateway using a call to create a consumer, and the user key created for the application. For example: https//my-project-my-bridge-bridge-service-3scale-apicast-staging.example.com:443/consumers/my-group?user_key=3dfc188650101010ecd7fdc56098ce95 If a payload is returned from the Kafka Bridge, the consumer was created successfully. { "instance_id": "consumer1", "base uri": "https//my-project-my-bridge-bridge-service-3scale-apicast-staging.example.com:443/consumers/my-group/instances/consumer1" } The base URI is the address that the client will use in subsequent requests.
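The same verification can be done from a terminal with curl. The following sketch uses the example gateway address, consumer group, and user key from this procedure; the Content-Type header and JSON body follow the standard Kafka Bridge consumer-creation format, and the -k option skips TLS verification for the staging gateway only.

# Create a consumer through the APIcast gateway instead of calling the bridge directly.
GATEWAY="https://my-project-my-bridge-bridge-service-3scale-apicast-staging.example.com:443"
USER_KEY="3dfc188650101010ecd7fdc56098ce95"

curl -k -X POST "${GATEWAY}/consumers/my-group?user_key=${USER_KEY}" \
  -H 'Content-Type: application/vnd.kafka.v2+json' \
  -d '{"name": "consumer1", "format": "json"}'

# A successful response contains a base_uri that points back at the gateway;
# use that URI, with the same user key, for all subsequent consumer requests.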
|
[
"{ \"instance_id\": \"consumer-1\", \"base_uri\":\"http://my-bridge:8080/consumers/my-group/instances/consumer1\" }",
"{ \"instance_id\": \"consumer-1\", \"base_uri\":\"http://my-gateway:80/consumers/my-group/instances/consumer1\" }",
"new-project my-project --description=\" description \" --display-name=\" display_name \"",
"adm policy add-cluster-role-to-user view system:serviceaccount: my-project :amp",
"get deployment 3scale-operator",
"export REMOTE_NAME=strimzi-kafka-bridge 1 export SYSTEM_NAME=strimzi_http_bridge_for_apache_kafka 2 export TENANT=strimzi-kafka-bridge-admin 3 export PORTAL_ENDPOINT=USDTENANT.3scale.net 4 export TOKEN= 3scale access token 5",
"3scale remote add USDREMOTE_NAME https://USDTOKEN@USDPORTAL_ENDPOINT/",
"# env: - name: STRIMZI_CUSTOM_KAFKA_BRIDGE_SERVICE_LABELS value: | discovery.3scale.net=true - name: STRIMZI_CUSTOM_KAFKA_BRIDGE_SERVICE_ANNOTATIONS value: | discovery.3scale.net/scheme=http discovery.3scale.net/port=8080 discovery.3scale.net/path=/ discovery.3scale.net/description-path=/openapi #",
"\"host\": \"my-bridge-bridge-service.my-project.svc.cluster.local:8080\"",
"3scale import openapi -k -d USDREMOTE_NAME openapiv2.json -t myproject-my-bridge-bridge-service",
"export SERVICE_ID=USD(curl -k -s -X GET \"https://USDPORTAL_ENDPOINT/admin/api/services.json?access_token=USDTOKEN\" | jq \".services[] | select(.service.system_name | contains(\\\"USDSYSTEM_NAME\\\")) | .service.id\")",
"curl -k -X PUT \"https://USDPORTAL_ENDPOINT/admin/api/services/USDSERVICE_ID/proxy/policies.json\" --data \"access_token=USDTOKEN\" --data-urlencode policies_config@policies_config.json",
"3scale proxy-config promote USDREMOTE_NAME USDSERVICE_ID",
"https//my-project-my-bridge-bridge-service-3scale-apicast-staging.example.com:443/consumers/my-group?user_key=3dfc188650101010ecd7fdc56098ce95",
"{ \"instance_id\": \"consumer1\", \"base uri\": \"https//my-project-my-bridge-bridge-service-3scale-apicast-staging.example.com:443/consumers/my-group/instances/consumer1\" }"
] |
https://docs.redhat.com/en/documentation/red_hat_amq/2021.q2/html/using_amq_streams_on_openshift/kafka-bridge-3-scale-str
|
11.2. Adding Swap Space
|
11.2. Adding Swap Space Sometimes it is necessary to add more swap space after installation. For example, you may upgrade the amount of RAM in your system from 128 MB to 256 MB, but there is only 256 MB of swap space. It might be advantageous to increase the amount of swap space to 512 MB if you perform memory-intense operations or run applications that require a large amount of memory. You have three options: create a new swap partition, create a new swap file, or extend swap on an existing LVM2 logical volume. It is recommended that you extend an existing logical volume. 11.2.1. Extending Swap on an LVM2 Logical Volume To extend an LVM2 swap logical volume (assuming /dev/VolGroup00/LogVol01 is the volume you want to extend): Disable swapping for the associated logical volume: Resize the LVM2 logical volume by 256 MB: Format the new swap space: Enable the extended logical volume: Test that the logical volume has been extended properly:
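The procedure can also be expressed as a short script to run as root. This is a sketch only, using the example logical volume and the 256 MB increment from this section; adjust both values for your system.

#!/usr/bin/env bash
# Extend an existing LVM2 swap logical volume (example values).
set -euo pipefail

LV=/dev/VolGroup00/LogVol01

swapoff -v "${LV}"             # disable swapping on the volume
lvm lvresize "${LV}" -L +256M  # grow the logical volume by 256 MB
mkswap "${LV}"                 # format the new swap space
swapon -va                     # enable the extended logical volume
cat /proc/swaps                # confirm that the volume has been extended
free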
|
[
"swapoff -v /dev/VolGroup00/LogVol01",
"lvm lvresize /dev/VolGroup00/LogVol01 -L +256M",
"mkswap /dev/VolGroup00/LogVol01",
"swapon -va",
"cat /proc/swaps # free"
] |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/system_administration_guide/swap_space-adding_swap_space
|
Data Grid downloads
|
Data Grid downloads Access the Data Grid Software Downloads on the Red Hat customer portal. Note You must have a Red Hat account to access and download Data Grid software.
| null |
https://docs.redhat.com/en/documentation/red_hat_data_grid/8.4/html/data_grid_operator_guide/rhdg-downloads_datagrid
|
Chapter 5. Using Firewalls
|
Chapter 5. Using Firewalls 5.1. Getting Started with firewalld A firewall is a way to protect machines from any unwanted traffic from outside. It enables users to control incoming network traffic on host machines by defining a set of firewall rules . These rules are used to sort the incoming traffic and either block it or allow through. firewalld is a firewall service daemon that provides a dynamic customizable host-based firewall with a D-Bus interface. Being dynamic, it enables creating, changing, and deleting the rules without the necessity to restart the firewall daemon each time the rules are changed. firewalld uses the concepts of zones and services , that simplify the traffic management. Zones are predefined sets of rules. Network interfaces and sources can be assigned to a zone. The traffic allowed depends on the network your computer is connected to and the security level this network is assigned. Firewall services are predefined rules that cover all necessary settings to allow incoming traffic for a specific service and they apply within a zone. Services use one or more ports or addresses for network communication. Firewalls filter communication based on ports. To allow network traffic for a service, its ports must be open . firewalld blocks all traffic on ports that are not explicitly set as open. Some zones, such as trusted , allow all traffic by default. Figure 5.1. The Firewall Stack 5.1.1. Zones firewalld can be used to separate networks into different zones according to the level of trust that the user has decided to place on the interfaces and traffic within that network. A connection can only be part of one zone, but a zone can be used for many network connections. NetworkManager notifies firewalld of the zone of an interface. You can assign zones to interfaces with NetworkManager , with the firewall-config tool, or the firewall-cmd command-line tool. The latter two only edit the appropriate NetworkManager configuration files. If you change the zone of the interface using firewall-cmd or firewall-config , the request is forwarded to NetworkManager and is not handled by firewalld . The predefined zones are stored in the /usr/lib/firewalld/zones/ directory and can be instantly applied to any available network interface. These files are copied to the /etc/firewalld/zones/ directory only after they are modified. The following table describes the default settings of the predefined zones: block Any incoming network connections are rejected with an icmp-host-prohibited message for IPv4 and icmp6-adm-prohibited for IPv6 . Only network connections initiated from within the system are possible. dmz For computers in your demilitarized zone that are publicly-accessible with limited access to your internal network. Only selected incoming connections are accepted. drop Any incoming network packets are dropped without any notification. Only outgoing network connections are possible. external For use on external networks with masquerading enabled, especially for routers. You do not trust the other computers on the network to not harm your computer. Only selected incoming connections are accepted. home For use at home when you mostly trust the other computers on the network. Only selected incoming connections are accepted. internal For use on internal networks when you mostly trust the other computers on the network. Only selected incoming connections are accepted. public For use in public areas where you do not trust other computers on the network. 
Only selected incoming connections are accepted. trusted All network connections are accepted. work For use at work where you mostly trust the other computers on the network. Only selected incoming connections are accepted. One of these zones is set as the default zone. When interface connections are added to NetworkManager , they are assigned to the default zone. On installation, the default zone in firewalld is set to be the public zone. The default zone can be changed. Note The network zone names have been chosen to be self-explanatory and to allow users to quickly make a reasonable decision. To avoid any security problems, review the default zone configuration and disable any unnecessary services according to your needs and risk assessments. 5.1.2. Predefined Services A service can be a list of local ports, protocols, source ports, and destinations, as well as a list of firewall helper modules automatically loaded if a service is enabled. Using services saves users time because they can achieve several tasks, such as opening ports, defining protocols, enabling packet forwarding and more, in a single step, rather than setting up everything one after another. Service configuration options and generic file information are described in the firewalld.service(5) man page. The services are specified by means of individual XML configuration files, which are named in the following format: service-name .xml . Protocol names are preferred over service or application names in firewalld . 5.1.3. Runtime and Permanent Settings Any changes committed in runtime mode only apply while firewalld is running. When firewalld is restarted, the settings revert to their permanent values. To make the changes persistent across reboots, apply them again using the --permanent option. Alternatively, to make changes persistent while firewalld is running, use the --runtime-to-permanent firewall-cmd option. If you set the rules while firewalld is running using only the --permanent option, they do not become effective before firewalld is restarted. However, restarting firewalld closes all open ports and stops the networking traffic. 5.1.4. Modifying Settings in Runtime and Permanent Configuration using CLI Using the CLI, you do not modify the firewall settings in both modes at the same time. You only modify either runtime or permanent mode. To modify the firewall settings in the permanent mode, use the --permanent option with the firewall-cmd command. Without this option, the command modifies runtime mode. To change settings in both modes, you can use two methods: Change runtime settings and then make them permanent as follows: Set permanent settings and reload the settings into runtime mode: The first method allows you to test the settings before you apply them to the permanent mode. Note It is possible, especially on remote systems, that an incorrect setting results in a user locking themselves out of a machine. To prevent such situations, use the --timeout option. After a specified amount of time, any change reverts to its state. Using this options excludes the --permanent option. For example, to add the SSH service for 15 minutes:
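The command for that, together with a typical runtime-then-permanent sequence, is sketched below. Run these as root; ssh is one of the predefined firewalld services, so substitute your own service name if needed.

# Allow the ssh service for 15 minutes only; the rule reverts automatically.
firewall-cmd --add-service=ssh --timeout 15m

# Once satisfied, apply the change in runtime mode and then persist it.
firewall-cmd --add-service=ssh
firewall-cmd --runtime-to-permanent

# Verify that the permanent configuration now contains the service.
firewall-cmd --permanent --list-services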
|
[
"~]# firewall-cmd --permanent <other options>",
"~]# firewall-cmd <other options> ~]# firewall-cmd --runtime-to-permanent",
"~]# firewall-cmd --permanent <other options> ~]# firewall-cmd --reload",
"~]# firewall-cmd --add-service=ssh --timeout 15m"
] |
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/security_guide/sec-Using_Firewalls
|
Chapter 11. Triggering Scripts for Cluster Events
|
Chapter 11. Triggering Scripts for Cluster Events A Pacemaker cluster is an event-driven system, where an event might be a resource or node failure, a configuration change, or a resource starting or stopping. You can configure Pacemaker cluster alerts to take some external action when a cluster event occurs. You can configure cluster alerts in one of two ways: As of Red Hat Enterprise Linux 6.9, you can configure Pacemaker alerts by means of alert agents, which are external programs that the cluster calls in the same manner as the cluster calls resource agents to handle resource configuration and operation. This is the preferred, simpler method of configuring cluster alerts. Pacemaker alert agents are described in Section 11.1, "Pacemaker Alert Agents (Red Hat Enterprise Linux 6.9 and later)" . The ocf:pacemaker:ClusterMon resource can monitor the cluster status and trigger alerts on each cluster event. This resource runs the crm_mon command in the background at regular intervals. For information on the ClusterMon resource see Section 11.2, "Event Notification with Monitoring Resources" . 11.1. Pacemaker Alert Agents (Red Hat Enterprise Linux 6.9 and later) You can create Pacemaker alert agents to take some external action when a cluster event occurs. The cluster passes information about the event to the agent by means of environment variables. Agents can do anything desired with this information, such as send an email message or log to a file or update a monitoring system. Pacemaker provides several sample alert agents, which are installed in /usr/share/pacemaker/alerts by default. These sample scripts may be copied and used as is, or they may be used as templates to be edited to suit your purposes. Refer to the source code of the sample agents for the full set of attributes they support. See Section 11.1.1, "Using the Sample Alert Agents" for an example of a basic procedure for configuring an alert that uses a sample alert agent. General information on configuring and administering alert agents is provided in Section 11.1.2, "Alert Creation" , Section 11.1.3, "Displaying, Modifying, and Removing Alerts" , Section 11.1.4, "Alert Recipients" , Section 11.1.5, "Alert Meta Options" , and Section 11.1.6, "Alert Configuration Command Examples" . You can write your own alert agents for a Pacemaker alert to call. For information on writing alert agents, see Section 11.1.7, "Writing an Alert Agent" . 11.1.1. Using the Sample Alert Agents When you use one of sample alert agents, you should review the script to ensure that it suits your needs. These sample agents are provided as a starting point for custom scripts for specific cluster environments. To use one of the sample alert agents, you must install the agent on each node in the cluster. For example, the following command installs the alert_snmp.sh.sample script as alert_snmp.sh . After you have installed the script, you can create an alert that uses the script. The following example configures an alert that uses the installed alert_snmp.sh alert agent to send cluster events as SNMP traps. By default, the script will send all events except successful monitor calls to the SNMP server. This example configures the timestamp format as a meta option. For information about meta options, see Section 11.1.5, "Alert Meta Options" . After configuring the alert, this example configures a recipient for the alert and displays the alert configuration. 
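Put together, the SNMP example just described amounts to a short command sequence. The following is a sketch only: the installed agent path, the alert ID, the timestamp format, and the SNMP server address are illustrative values, and the recipient is added with the positional form; the exact recipient syntax can differ between pcs versions.

# Install the sample agent (repeat on each cluster node).
install --mode=0755 /usr/share/pacemaker/alerts/alert_snmp.sh.sample /var/lib/pacemaker/alert_snmp.sh

# Create the alert and pass a timestamp format as a meta option.
pcs alert create id=snmp_alert path=/var/lib/pacemaker/alert_snmp.sh \
    meta timestamp-format="%Y-%m-%d,%H:%M:%S.%01N"

# Add the SNMP server as the recipient and display the resulting configuration.
pcs alert recipient add snmp_alert 192.168.1.2
pcs alert config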
The following example installs the alert_smtp.sh agent and then configures an alert that uses the installed alert agent to send cluster events as email messages. After configuring the alert, this example configures a recipient and displays the alert configuration. For more information on the format of the pcs alert create and pcs alert recipient add commands, see Section 11.1.2, "Alert Creation" and Section 11.1.4, "Alert Recipients" . 11.1.2. Alert Creation The following command creates a cluster alert. The options that you configure are agent-specific configuration values that are passed to the alert agent script at the path you specify as additional environment variables. If you do not specify a value for id , one will be generated. For information on alert meta options, see Section 11.1.5, "Alert Meta Options" . Multiple alert agents may be configured; the cluster will call all of them for each event. Alert agents will be called only on cluster nodes. They will be called for events involving Pacemaker Remote nodes, but they will never be called on those nodes. The following example creates a simple alert that will call my-script.sh for each event. For an example that shows how to create a cluster alert that uses one of the sample alert agents, see Section 11.1.1, "Using the Sample Alert Agents" . 11.1.3. Displaying, Modifying, and Removing Alerts The following command shows all configured alerts along with the values of the configured options. The following command updates an existing alert with the specified alert-id value. The following command removes an alert with the specified alert-id value. 11.1.4. Alert Recipients Usually alerts are directed towards a recipient. Thus each alert may be additionally configured with one or more recipients. The cluster will call the agent separately for each recipient. The recipient may be anything the alert agent can recognize: an IP address, an email address, a file name, or whatever the particular agent supports. The following command adds a new recipient to the specified alert. The following command updates an existing alert recipient. The following command removes the specified alert recipient. The following example command adds the alert recipient my-alert-recipient with a recipient ID of my-recipient-id to the alert my-alert . This will configure the cluster to call the alert script that has been configured for my-alert for each event, passing the recipient some-address as an environment variable. 11.1.5. Alert Meta Options As with resource agents, meta options can be configured for alert agents to affect how Pacemaker calls them. Table 11.1, "Alert Meta Options" describes the alert meta options. Meta options can be configured per alert agent as well as per recipient. Table 11.1. Alert Meta Options Meta-Attribute Default Description timestamp-format %H:%M:%S.%06N Format the cluster will use when sending the event's timestamp to the agent. This is a string as used with the date (1) command. timeout 30s If the alert agent does not complete within this amount of time, it will be terminated. The following example configures an alert that calls the script my-script.sh and then adds two recipients to the alert. The first recipient has an ID of my-alert-recipient1 and the second recipient has an ID of my-alert-recipient2 . The script will get called twice for each event, with each call using a 15-second timeout.
One call will be passed to the recipient [email protected] with a timestamp in the format %D %H:%M, while the other call will be passed to the recipient [email protected] with a timestamp in the format %c. 11.1.6. Alert Configuration Command Examples The following sequential examples show some basic alert configuration commands to show the format to use to create alerts, add recipients, and display the configured alerts. The following commands create a simple alert, add two recipients to the alert, and display the configured values. Since no alert ID value is specified, the system creates an alert ID value of alert . The first recipient creation command specifies a recipient of rec_value . Since this command does not specify a recipient ID, the value of alert-recipient is used as the recipient ID. The second recipient creation command specifies a recipient of rec_value2 . This command specifies a recipient ID of my-recipient for the recipient. The following commands add a second alert and a recipient for that alert. The alert ID for the second alert is my-alert and the recipient value is my-other-recipient . Since no recipient ID is specified, the system provides a recipient ID of my-alert-recipient . The following commands modify the alert values for the alert my-alert and for the recipient my-alert-recipient . The following command removes the recipient my-recipient from alert . The following command removes my-alert from the configuration. 11.1.7. Writing an Alert Agent There are three types of Pacemaker alerts: node alerts, fencing alerts, and resource alerts. The environment variables that are passed to the alert agents can differ, depending on the type of alert. Table 11.2, "Environment Variables Passed to Alert Agents" describes the environment variables that are passed to alert agents and specifies when the environment variable is associated with a specific alert type. Table 11.2. Environment Variables Passed to Alert Agents Environment Variable Description CRM_alert_kind The type of alert (node, fencing, or resource) CRM_alert_version The version of Pacemaker sending the alert CRM_alert_recipient The configured recipient CRM_alert_node_sequence A sequence number increased whenever an alert is being issued on the local node, which can be used to reference the order in which alerts have been issued by Pacemaker. An alert for an event that happened later in time reliably has a higher sequence number than alerts for earlier events. Be aware that this number has no cluster-wide meaning. CRM_alert_timestamp A timestamp created prior to executing the agent, in the format specified by the timestamp-format meta option. This allows the agent to have a reliable, high-precision time of when the event occurred, regardless of when the agent itself was invoked (which could potentially be delayed due to system load or other circumstances). CRM_alert_node Name of affected node CRM_alert_desc Detail about event. For node alerts, this is the node's current state (member or lost). For fencing alerts, this is a summary of the requested fencing operation, including origin, target, and fencing operation error code, if any. For resource alerts, this is a readable string equivalent of CRM_alert_status .
CRM_alert_nodeid ID of node whose status changed (provided with node alerts only) CRM_alert_task The requested fencing or resource operation (provided with fencing and resource alerts only) CRM_alert_rc The numerical return code of the fencing or resource operation (provided with fencing and resource alerts only) CRM_alert_rsc The name of the affected resource (resource alerts only) CRM_alert_interval The interval of the resource operation (resource alerts only) CRM_alert_target_rc The expected numerical return code of the operation (resource alerts only) CRM_alert_status A numerical code used by Pacemaker to represent the operation result (resource alerts only) When writing an alert agent, you must take the following concerns into account. Alert agents may be called with no recipient (if none is configured), so the agent must be able to handle this situation, even if it only exits in that case. Users may modify the configuration in stages, and add a recipient later. If more than one recipient is configured for an alert, the alert agent will be called once per recipient. If an agent is not able to run concurrently, it should be configured with only a single recipient. The agent is free, however, to interpret the recipient as a list. When a cluster event occurs, all alerts are fired off at the same time as separate processes. Depending on how many alerts and recipients are configured and on what is done within the alert agents, a significant load burst may occur. The agent could be written to take this into consideration, for example by queuing resource-intensive actions into some other instance, instead of directly executing them. Alert agents are run as the hacluster user, which has a minimal set of permissions. If an agent requires additional privileges, it is recommended to configure sudo to allow the agent to run the necessary commands as another user with the appropriate privileges. Take care to validate and sanitize user-configured parameters, such as CRM_alert_timestamp (whose content is specified by the user-configured timestamp-format ), CRM_alert_recipient , and all alert options. This is necessary to protect against configuration errors. In addition, if some user can modify the CIB without having hacluster -level access to the cluster nodes, this is a potential security concern as well, and you should avoid the possibility of code injection. If a cluster contains resources for which the onfail parameter is set to fence , there will be multiple fence notifications on failure, one for each resource for which this parameter is set plus one additional notification. Both the STONITH daemon and the crmd daemon will send notifications. Pacemaker performs only one actual fence operation in this case, however, no matter how many notifications are sent. Note The alerts interface is designed to be backward compatible with the external scripts interface used by the ocf:pacemaker:ClusterMon resource. To preserve this compatibility, the environment variables passed to alert agents are available prepended with CRM_notify_ as well as CRM_alert_ . One break in compatibility is that the ClusterMon resource ran external scripts as the root user, while alert agents are run as the hacluster user. For information on configuring scripts that are triggered by the ClusterMon , see Section 11.2, "Event Notification with Monitoring Resources" .
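The following is a minimal, hypothetical sketch of a custom alert agent, not one of the shipped samples; the fallback log file path is an assumption for illustration. It only uses the CRM_alert_* environment variables described in Table 11.2 and appends one line per event to the file named by the configured recipient, exiting cleanly even when no recipient is configured.
#!/bin/sh
# Illustrative example alert agent (a sketch, not a shipped sample).
# Pacemaker passes event details in CRM_alert_* environment variables.
# The recipient is treated as a log file path; the fallback path is an assumed example.
logfile="${CRM_alert_recipient:-/var/log/pcmk_alerts.log}"
echo "${CRM_alert_timestamp} kind=${CRM_alert_kind} node=${CRM_alert_node} desc=${CRM_alert_desc}" >> "${logfile}"
exit 0
After copying such a script to each cluster node and making it executable, you could reference it with the pcs alert create path= command described in Section 11.1.2, "Alert Creation" .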
|
[
"install --mode=0755 /usr/share/pacemaker/alerts/alert_snmp.sh.sample /var/lib/pacemaker/alert_snmp.sh",
"pcs alert create id=snmp_alert path=/var/lib/pacemaker/alert_snmp.sh meta timestamp-format=\"%Y-%m-%d,%H:%M:%S.%01N\" . pcs alert recipient add snmp_alert 192.168.1.2 pcs alert Alerts: Alert: snmp_alert (path=/var/lib/pacemaker/alert_snmp.sh) Meta options: timestamp-format=%Y-%m-%d,%H:%M:%S.%01N. Recipients: Recipient: snmp_alert-recipient (value=192.168.1.2)",
"install --mode=0755 /usr/share/pacemaker/alerts/alert_smtp.sh.sample /var/lib/pacemaker/alert_smtp.sh pcs alert create id=smtp_alert path=/var/lib/pacemaker/alert_smtp.sh options [email protected] pcs alert recipient add smtp_alert [email protected] pcs alert Alerts: Alert: smtp_alert (path=/var/lib/pacemaker/alert_smtp.sh) Options: [email protected] Recipients: Recipient: smtp_alert-recipient ([email protected])",
"pcs alert create path= path [id= alert-id ] [description= description ] [options [ option = value ]...] [meta [ meta-option = value ]...]",
"pcs alert create id=my_alert path=/path/to/myscript.sh",
"pcs alert [config|show]",
"pcs alert update alert-id [path= path ] [description= description ] [options [ option = value ]...] [meta [ meta-option = value ]...]",
"pcs alert remove alert-id",
"pcs alert recipient add alert-id recipient-value [id= recipient-id ] [description= description ] [options [ option = value ]...] [meta [ meta-option = value ]...]",
"pcs alert recipient update recipient-id [value= recipient-value ] [description= description ] [options [ option = value ]...] [meta [ meta-option = value ]...]",
"pcs alert recipient remove recipient-id",
"pcs alert recipient add my-alert my-alert-recipient id=my-recipient-id options value=some-address",
"pcs alert create id=my-alert path=/path/to/my-script.sh meta timeout=15s pcs alert recipient add my-alert [email protected] id=my-alert-recipient1 meta timestamp-format=%D %H:%M pcs alert recipient add my-alert [email protected] id=my-alert-recipient2 meta timestamp-format=%c",
"pcs alert create path=/my/path pcs alert recipient add alert rec_value pcs alert recipient add alert rec_value2 id=my-recipient pcs alert config Alerts: Alert: alert (path=/my/path) Recipients: Recipient: alert-recipient (value=rec_value) Recipient: my-recipient (value=rec_value2)",
"pcs alert create id=my-alert path=/path/to/script description=alert_description options option1=value1 opt=val meta meta-option1=2 m=val pcs alert recipient add my-alert my-other-recipient pcs alert Alerts: Alert: alert (path=/my/path) Recipients: Recipient: alert-recipient (value=rec_value) Recipient: my-recipient (value=rec_value2) Alert: my-alert (path=/path/to/script) Description: alert_description Options: opt=val option1=value1 Meta options: m=val meta-option1=2 Recipients: Recipient: my-alert-recipient (value=my-other-recipient)",
"pcs alert update my-alert options option1=newvalue1 meta m=newval pcs alert recipient update my-alert-recipient options option1=new meta metaopt1=newopt pcs alert Alerts: Alert: alert (path=/my/path) Recipients: Recipient: alert-recipient (value=rec_value) Recipient: my-recipient (value=rec_value2) Alert: my-alert (path=/path/to/script) Description: alert_description Options: opt=val option1=newvalue1 Meta options: m=newval meta-option1=2 Recipients: Recipient: my-alert-recipient (value=my-other-recipient) Options: option1=new Meta options: metaopt1=newopt",
"pcs alert recipient remove my-recipient pcs alert Alerts: Alert: alert (path=/my/path) Recipients: Recipient: alert-recipient (value=rec_value) Alert: my-alert (path=/path/to/script) Description: alert_description Options: opt=val option1=newvalue1 Meta options: m=newval meta-option1=2 Recipients: Recipient: my-alert-recipient (value=my-other-recipient) Options: option1=new Meta options: metaopt1=newopt",
"pcs alert remove my-alert pcs alert Alerts: Alert: alert (path=/my/path) Recipients: Recipient: alert-recipient (value=rec_value)"
] |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/configuring_the_red_hat_high_availability_add-on_with_pacemaker/ch-alertscripts-haar
|
4. Storage and Filesystems
|
4. Storage and Filesystems The ext4 Filesystem The ext4 file system is a scalable extension of the ext3 file system, which was the default file system of Red Hat Enterprise Linux 5. Ext4 is now the default file system of Red Hat Enterprise Linux 6. Because of delayed allocation and other performance optimizations, ext4's behavior of writing files to disk is different from ext3. In ext4, a program's writes to the file system are not guaranteed to be on-disk unless the program issues an fsync() call afterwards. Further information on the allocation features of ext4 is available in the Storage Administration Guide. CIFS servers that require plaintext passwords Some Common Internet File System (CIFS) servers require plaintext passwords for authentication. Support for plaintext password authentication can be enabled using the command: Warning This operation can expose passwords by removing password encryption. Event Tracing in GFS2 GFS2's event tracing is provided via the generic tracing infrastructure. The events are designed to be useful for debugging purposes. Note, however, that it is not guaranteed that the GFS2 events will remain the same throughout the lifetime of Red Hat Enterprise Linux 6. Further details on GFS2's glocks and event tracing can be found in the following 2009 Linux Symposium paper: http://kernel.org/doc/ols/2009/ols2009-pages-311-318.pdf mpi-selector The mpi-selector package has been deprecated in Red Hat Enterprise Linux 6. environment-modules is now used to select which Message Passing Interface (MPI) implementation is to be used. Note The man page for the module command contains detailed documentation for the environment-modules package. To return a list of what modules are available, use: To load or unload a module, use the following commands: To emulate the behavior of mpi-selector, the module load commands must be placed in the shell init script (e.g. ~/.bashrc ) to load the modules at every login. 4.1. Technology Previews fsfreeze Red Hat Enterprise Linux 6 includes fsfreeze as a Technology Preview. fsfreeze is a new command that halts access to a filesystem on disk. fsfreeze is designed to be used with hardware RAID devices, assisting in the creation of volume snapshots. Further details on fsfreeze are in the fsfreeze(8) man page. DIF/DIX support DIF/DIX is a new addition to the SCSI Standard and a Technology Preview in Red Hat Enterprise Linux 6. DIF/DIX increases the size of the commonly used 512-byte disk block from 512 to 520 bytes, adding the Data Integrity Field (DIF). The DIF stores a checksum value for the data block that is calculated by the Host Bus Adapter (HBA) when a write occurs. The storage device then confirms the checksum on receive, and stores both the data and the checksum. Conversely, when a read occurs, the checksum can be checked by the storage device, and by the receiving HBA. The DIF/DIX hardware checksum feature must only be used with applications that exclusively issue O_DIRECT I/O. These applications may use the raw block device, or the XFS file system in O_DIRECT mode. (XFS is the only filesystem that does not fall back to buffered IO when doing certain allocation operations.) Only applications designed for use with O_DIRECT I/O and DIF/DIX hardware should enable this feature. Red Hat Enterprise Linux 6 includes the Emulex LPFC driver version 8.3.5.17, introducing support for DIF/DIX.
For more information, refer to the Storage Administration Guide. Filesystem in Userspace Filesystem in Userspace (FUSE) allows for custom filesystems to be developed and run in user-space. LVM Snapshots of Mirrors The LVM snapshot feature provides the ability to create backup images of a logical volume at a particular instant without causing a service interruption. When a change is made to the original device (the origin) after a snapshot is taken, the snapshot feature makes a copy of the changed data area as it was prior to the change so that it can reconstruct the state of the device. Red Hat Enterprise Linux 6 introduces the ability to take a snapshot of a mirrored logical volume. A known issue exists with this Technology Preview. I/O might hang if a device failure in the mirror is encountered. Note that this issue is related to a failure of the mirror log device, and that no workaround is currently known. btrfs Btrfs is under development as a file system capable of addressing and managing more files, larger files, and larger volumes than the ext2, ext3, and ext4 file systems. Btrfs is designed to make the file system tolerant of errors, and to facilitate the detection and repair of errors when they occur. It uses checksums to ensure the validity of data and metadata, and maintains snapshots of the file system that can be used for backup or repair. The btrfs Technology Preview is only available on the x86_64 architecture. Warning Red Hat Enterprise Linux 6 Beta includes Btrfs as a technology preview to allow you to experiment with this file system. You should not choose Btrfs for partitions that will contain valuable data or that are essential for the operation of important systems. LVM Application Programming Interface (API) Red Hat Enterprise Linux 6 Beta features the new LVM application programming interface (API) as a Technology Preview. This API is used to query and control certain aspects of LVM. FS-Cache FS-Cache is a new feature in Red Hat Enterprise Linux 6 Beta that enables networked file systems (e.g. NFS) to have a persistent cache of data on the client machine. eCryptfs File System eCryptfs is a stacked, cryptographic file system. It is transparent to the underlying file system and provides per-file granularity. eCryptfs is provided as a Technology Preview in Red Hat Enterprise Linux 6.
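As an illustrative sketch of the fsfreeze command mentioned among the Technology Previews above (the mount point /mnt/data is an assumed example), a mounted file system could be frozen before a hardware volume snapshot is taken and thawed afterwards:
fsfreeze -f /mnt/data    # suspend access and flush pending writes to disk
fsfreeze -u /mnt/data    # resume access once the snapshot has been taken
See the fsfreeze(8) man page for the authoritative option descriptions.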
|
[
"echo 0x37 > /proc/fs/cifs/SecurityFlags",
"module avail",
"module load <module-name> module unload <module-name>"
] |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.0_technical_notes/storage
|
Updating OpenShift Data Foundation
|
Updating OpenShift Data Foundation Red Hat OpenShift Data Foundation 4.15 Instructions for cluster and storage administrators regarding upgrading Red Hat Storage Documentation Team Abstract This document explains how to update versions of Red Hat OpenShift Data Foundation.
| null |
https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.15/html/updating_openshift_data_foundation/index
|
Chapter 63. recordset
|
Chapter 63. recordset This chapter describes the commands under the recordset command. 63.1. recordset create Create new recordset Usage: Table 63.1. Positional arguments Value Summary zone_id Zone id name Recordset name Table 63.2. Command arguments Value Summary -h, --help Show this help message and exit --record RECORD Recordset record, repeat if necessary --type TYPE Recordset type --ttl TTL Time to live (seconds) --description DESCRIPTION Description --all-projects Show results from all projects. default: false --sudo-project-id SUDO_PROJECT_ID Project id to impersonate for this command. default: None Table 63.3. Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns Table 63.4. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 63.5. Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 63.6. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 63.2. recordset delete Delete recordset Usage: Table 63.7. Positional arguments Value Summary zone_id Zone id id Recordset id Table 63.8. Command arguments Value Summary -h, --help Show this help message and exit --all-projects Show results from all projects. default: false --sudo-project-id SUDO_PROJECT_ID Project id to impersonate for this command. default: None --edit-managed Edit resources marked as managed. default: false Table 63.9. Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns Table 63.10. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 63.11. Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 63.12. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 63.3. recordset list List recordsets Usage: Table 63.13. Positional arguments Value Summary zone_id Zone id. to list all recordsets specify all Table 63.14. Command arguments Value Summary -h, --help Show this help message and exit --name NAME Recordset name --type TYPE Recordset type --data DATA Recordset record data --ttl TTL Time to live (seconds) --description DESCRIPTION Description --status STATUS Recordset status --action ACTION Recordset action --all-projects Show results from all projects. default: false --sudo-project-id SUDO_PROJECT_ID Project id to impersonate for this command. default: None Table 63.15. 
Output formatter options Value Summary -f {csv,json,table,value,yaml}, --format {csv,json,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns --sort-column SORT_COLUMN Specify the column(s) to sort the data (columns specified first have a priority, non-existing columns are ignored), can be repeated --sort-ascending Sort the column(s) in ascending order --sort-descending Sort the column(s) in descending order Table 63.16. CSV formatter options Value Summary --quote {all,minimal,none,nonnumeric} When to include quotes, defaults to nonnumeric Table 63.17. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 63.18. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 63.4. recordset set Set recordset properties Usage: Table 63.19. Positional arguments Value Summary zone_id Zone id id Recordset id Table 63.20. Command arguments Value Summary -h, --help Show this help message and exit --record RECORD Recordset record, repeat if necessary --description DESCRIPTION Description --no-description --ttl TTL Ttl --no-ttl --all-projects Show results from all projects. default: false --sudo-project-id SUDO_PROJECT_ID Project id to impersonate for this command. default: None --edit-managed Edit resources marked as managed. default: false Table 63.21. Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns Table 63.22. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 63.23. Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 63.24. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 63.5. recordset show Show recordset details Usage: Table 63.25. Positional arguments Value Summary zone_id Zone id id Recordset id Table 63.26. Command arguments Value Summary -h, --help Show this help message and exit --all-projects Show results from all projects. default: false --sudo-project-id SUDO_PROJECT_ID Project id to impersonate for this command. default: None Table 63.27. Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns Table 63.28. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 63.29. Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 63.30.
Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show.
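For illustration, the following hypothetical invocation combines the positional arguments and options from the usage shown above to create an A recordset with a one-hour TTL; the zone ID, record name, and address are assumed example values:
openstack recordset create --type A --record 203.0.113.10 --ttl 3600 --description "Example web host" example_zone_id www.example.com.
The same positional zone_id and the recordset ID returned by this command can then be passed to recordset show , recordset set , or recordset delete .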
|
[
"openstack recordset create [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] --record RECORD --type TYPE [--ttl TTL] [--description DESCRIPTION] [--all-projects] [--sudo-project-id SUDO_PROJECT_ID] zone_id name",
"openstack recordset delete [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] [--all-projects] [--sudo-project-id SUDO_PROJECT_ID] [--edit-managed] zone_id id",
"openstack recordset list [-h] [-f {csv,json,table,value,yaml}] [-c COLUMN] [--quote {all,minimal,none,nonnumeric}] [--noindent] [--max-width <integer>] [--fit-width] [--print-empty] [--sort-column SORT_COLUMN] [--sort-ascending | --sort-descending] [--name NAME] [--type TYPE] [--data DATA] [--ttl TTL] [--description DESCRIPTION] [--status STATUS] [--action ACTION] [--all-projects] [--sudo-project-id SUDO_PROJECT_ID] zone_id",
"openstack recordset set [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] [--record RECORD] [--description DESCRIPTION | --no-description] [--ttl TTL | --no-ttl] [--all-projects] [--sudo-project-id SUDO_PROJECT_ID] [--edit-managed] zone_id id",
"openstack recordset show [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] [--all-projects] [--sudo-project-id SUDO_PROJECT_ID] zone_id id"
] |
https://docs.redhat.com/en/documentation/red_hat_openstack_platform/17.1/html/command_line_interface_reference/recordset
|
18.12. Checking Access Rights on Entries (Get Effective Rights)
|
18.12. Checking Access Rights on Entries (Get Effective Rights) Finding the access rights that a user has on attributes within a specific entry offers a convenient way for administrators to find and control the access rights. Get effective rights is a way to extend directory searches to display what access rights - such as read, search, write and self-write, add, and delete - a user has to a specified entry. In Directory Server, regular users can check their rights over entries which they can view and can check other people's access to their personal entries. The Directory Manager can check rights that one user has over another user. There are two common situations where checking the effective rights on an entry is useful: An administrator can use the get effective rights command in order to better organize access control instructions for the directory. It is frequently necessary to restrict what one group of users can view or edit versus another group. For instance, members of the QA Managers group may have the right to search and read attributes like manager and salary but only HR Group members have the rights to modify or delete them. Checking the effective rights for a user or group is one way to verify that the appropriate access controls are in place. A user can run the get effective rights command to see what attributes he can view or modify on his personal entry. For instance, a user should have access to attributes such as homePostalAddress and cn but may only have read access to manager and salary attributes. There are three entities involved in a getEffectiveRights search. The first is the requester , which is the authenticated entry when the getEffectiveRights search operation is issued. The second is the subject whose rights will be evaluated; it is defined as the authorization DN in the GER control. The third is the target , which is defined by the search base, search filter, and attribute list of the request. 18.12.1. Rights Shown with a Get Effective Rights Search Any get effective rights search, when run from the command line, shows the rights that the requester has to a target entry. There are two kinds of access rights that can be allowed to any entry. The first are upper-level rights, rights on the entry itself , which means the kinds of operations that User A can perform on User B's entry as a whole. The second level of access rights is more granular, showing what rights User A has for a given attribute. In this case, User A may have different kinds of access permissions for different attributes in the same entry. Whatever access controls are allowed for a user are the effective rights over that entry. For example: Table 18.2, "Entry Rights" and Table 18.3, "Attribute Rights" show the access rights to entries and attributes, respectively, that are returned by a get effective rights search. Table 18.2. Entry Rights Permission Description a Add an entry. d Delete this entry. n Rename the DN. v View the entry. Table 18.3. Attribute Rights Permission Description r Read. s Search. w Write ( mod-add ). o Obliterate ( mod-del ). Analogous to delete. c Compare. W Self-write. O Self-delete. 18.12.2. The Format of a Get Effective Rights Search Get effective rights (sometimes called GER) is an extended directory search; the GER parameters are defined with the -E option to pass an LDAP control with the ldapsearch command. (If an ldapsearch is run without the -E option, then, naturally, the entry is returned as normal, without any get effective rights information.)
-b is the base DN of the subtree or entry used to search for the GER subject. If the search base is a specific entry DN or if only one entry is returned, then the results show the rights the requester has over that specific entry. If multiple entries beneath the search base match the filter, then the search returns every matching entry, with the rights for the requester over each entry. 1.3.6.1.4.1.42.2.27.9.5.2 is the OID for the get effective rights control. The exclamation point ( ! ) specifies whether the search operation should return an error if the server does not support this control ( ! ) or whether it should be ignored, letting the search return as normal (nothing). The GER_subject is the person whose rights are being checked. If the GER_subject is left blank ( dn: ), then the rights of an anonymous user are returned. An optional attributeList limits the get effective rights results to the specified attribute or object class. As with a regular ldapsearch , this can give specific attributes, like mail . If no attributes are listed, then every present attribute for the entry is returned. Using an asterisk ( * ) returns the rights for every possible attribute for the entry, both existing attributes and non-existent attributes. Using a plus sign ( + ) returns operational attributes for the entry. Examples for checking rights for specific attributes are given in Section 18.12.3.2, "Examples of Get Effective Rights Searches for Non-Existent Attributes" and Section 18.12.3.3, "Examples of Get Effective Rights Searches for Specific Attributes or Object Classes" . The crux of a get effective rights search is the ability to check what rights the GER subject ( -E ) has to the targets of the search ( -b ). The get effective rights search is a regular ldapsearch , in that it simply looks for entries that match the search parameters and returns their information. The get effective rights option adds extra information to those search results, showing what rights a specific user has over those results. That GER subject user can be the requester himself ( -D is the same as -E ) or someone else. If the requester is a regular user (not the Directory Manager), then the requester can only see the effective rights that a GER subject has on the requester's own entry. That is, if John Smith runs a request to see what effective rights Babs Jensen has, then he can only get the effective rights that Babs Jensen has on his own entry. All of the other entries return an insufficient access error for the effective rights. There are three general scenarios for a regular user when running a get effective rights search: User A checks the rights that he has over other directory entries. User A checks the rights that he has to his personal entry. User A checks the rights that User B has to User A's entry. The get effective rights search has a number of flexible ways to check rights on attributes. 18.12.3. Examples of GER Searches There are a number of different ways to run GER searches, depending on the exact type of information that needs to be returned and the types of entries and attributes being searched. 18.12.3.1. General Examples on Checking Access Rights One common scenario for effective rights searches is for a regular user to determine what changes he can make to his personal entry. For example, Ted Morris wants to check the rights he has to his entry. Both the -D and -E options give his entry as the requester. Since he is checking his personal entry, the -b option also contains his DN.
Example 18.36. Checking Personal Rights (User A to User A) Ted Morris may, for example, be a manager or work in a department where he has to edit other user's entries, such as IT or human resources. In this case, he may want to check what rights he has to another user's entry, as in Example 18.37, "Personally Checking the Rights of One User over Another (User A to User B)" , where Ted ( -D ) checks his rights ( -E ) to Dave Miller's entry ( -b ): Example 18.37. Personally Checking the Rights of One User over Another (User A to User B) For all attributes, Ted Morris has read, search, compare, modify, and delete permissions to Dave Miller's entry. These results are different than the ones returned in checking Ted Morris's access to his own entry, since he personally had only read, search, and compare rights to most of these attributes. The Directory Manager has the ability to check the rights that one user has over another user's entry. In Example 18.38, "The Directory Manager's Checking the Rights of One User over Another (User A to User B)" , the Directory Manager is checking the rights that a manager, Jane Smith ( -E ), has over her subordinate, Ted Morris ( -b ): Example 18.38. The Directory Manager's Checking the Rights of One User over Another (User A to User B) Only an administrator can retrieve the effective rights that a different user has on an entry. If Ted Morris tried to determine Dave Miller's rights to Dave Miller's entry, then he would receive an insufficient access error: However, a regular user can run a get effective rights search to see what rights another user has to his personal entry. In Example 18.39, "Checking the Rights Someone Else Has to a Personal Entry" , Ted Morris checks what rights Dave Miller has on Ted Morris's entry. Example 18.39. Checking the Rights Someone Else Has to a Personal Entry In this case, Dave Miller has the right to view the DN of the entry and to read, search, and compare the ou , givenName , l , and other attributes, and no rights to the userPassword attribute. 18.12.3.2. Examples of Get Effective Rights Searches for Non-Existent Attributes By default, information is not given for attributes in an entry that do not have a value; for example, if the userPassword value is removed, then a future effective rights search on the entry above would not return any effective rights for userPassword , even though self-write and self-delete rights could be allowed. Using an asterisk ( * ) with the get effective rights search returns every attribute available for the entry, including attributes not set on the entry. Example 18.40. Returning Effective Rights for Non-Existent Attributes All of the attributes available for the entry, such as secretary , are listed, even though that attribute is non-existent. 18.12.3.3. Examples of Get Effective Rights Searches for Specific Attributes or Object Classes Taking the attribute-related GER searches further, it is possible to search for the rights to a specific attribute and set of attributes and to list all of the attributes available for one of the object classes set on the entry. One of the options listed in the formatting example in Section 18.12.2, "The Format of a Get Effective Rights Search" is attributeList . To return the effective rights for only specific attributes, list the attributes, separated by spaces, at the end of the search command. Example 18.41. 
Get Effective Rights Results for Specific Attributes It is possible to specify a non-existent attribute in the attributeList , as with the initials attribute in Example 18.41, "Get Effective Rights Results for Specific Attributes" , to see the rights which are available, similar to using an asterisk to list all attributes. The Directory Manager can also list the rights for all of the attributes available to a specific object class. This option has the format attribute@objectClass . This returns two entries; the first for the specified GER subject and the second for a template entry for the object class. Example 18.42. Get Effective Rights Results for an Attribute within an Object Class Note Using the search format attribute@objectClass is only available if the requester ( -D ) is the Directory Manager. Using an asterisk ( * ) instead of a specific attribute returns all of the attributes (present and non-existent) for the specified GER subject and the full list of attributes for the object class template. Example 18.43. Get Effective Rights Results for All Attributes for an Object Class 18.12.3.4. Examples of Get Effective Rights Searches for Non-Existent Entries An administrator may want to check what rights a specific user ( jsmith ) would have to a non-existent user, based on the existing access control rules. For checking non-existent entries, the server generates a template entry within that subtree. For example, to check for the template entry cn=joe new user,cn=accounts,ou=people,dc=example,dc=com , the server creates cn=template,cn=accounts,ou=people,dc=example,dc=com . For checking a non-existent entry, the get effective rights search can use a specified object class to generate a template entry with all of the potential attributes of the (non-existent) entry. For cn=joe new user,cn=accounts,ou=people,dc=example,dc=com with a person object class ( @person ), the server generates cn=template_person_objectclass,cn=accounts,ou=people,dc=example,dc=com . When the server creates the template entry, it uses the first MUST attribute in the object class definition to create the RDN attribute (or it uses MAY if there is no MUST attribute). However, this may result in an erroneous RDN value which, in turn, violates or circumvents established ACIs for the given subtree. In that case, it is possible to specify the RDN value to use by passing it with the object class. This has the form @objectclass:rdn_attribute . For example, to check the rights of scarter for a non-existent Posix entry with uidNumber as its RDN: 18.12.3.5. Examples of Get Effective Rights Searches for Operational Attributes Operational attributes are not returned in regular ldapsearch es, including get effective rights searches. To return the information for the operational attributes, use the plus sign ( + ). This returns only the operational attributes that can be used in the entry. Example 18.44. Get Effective Rights Results for Operational Attributes 18.12.3.6. Examples of Get Effective Rights Results and Access Control Rules Get effective rights are returned according to whatever ACLs are in effect for the get effective rights subject entry. For example, this ACL is set and, for the purposes of this example, it is the only ACL set: Because the ACL does not include the dc=example,dc=com subtree, the get effective rights search shows that the user does not have any rights to the dc=example,dc=com entry: Example 18.45. 
Get Effective Rights Results with No ACL Set (Directory Manager) If a regular user, rather than Directory Manager, tried to run the same command, the result would simply be blank. Example 18.46. Get Effective Rights Results with No ACL Set (Regular User) 18.12.4. Get Effective Rights Return Codes If the criticality is not set for a get effective rights search and an error occurs, the regular entry information is returned, but, in place of rights for entryLevelRights and attributeLevelRights , an error code is returned. This code can give information on the configuration of the entry that was queried. Table 18.4, "Returned Result Codes" summarizes the error codes and the potential configuration information they can relay. Table 18.4. Returned Result Codes Code Description 0 Successfully completed. 1 Operation error. 12 The critical extension is unavailable. If the criticality expression is set to true and effective rights do not exist on the entry being queried, then this error is returned. 16 No such attribute. If an attribute is specifically queried for access rights but that attribute does not exist in the schema, this error is returned. 17 Undefined attribute type. 21 Invalid attribute syntax. 50 Insufficient rights. 52 Unavailable. 53 Unwilling to perform. 80 Other.
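As a hedged illustration of the criticality behavior described above, the same kind of search can be issued without the exclamation point in front of the control OID; if an error occurs, the entry data is still returned and an error code from Table 18.4, "Returned Result Codes" appears in place of the entryLevelRights and attributeLevelRights values. The bind DN and search base reuse the example values from the earlier sections:
ldapsearch -x -D "uid=tmorris,ou=people,dc=example,dc=com" -W -b "uid=tmorris,ou=people,dc=example,dc=com" -E '1.3.6.1.4.1.42.2.27.9.5.2=:dn:uid=dmiller,ou=people,dc=example,dc=com' "(objectClass=*)"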
|
[
"entryLevelRights: vadn attributeLevelRights: givenName:rscWO, sn:rscW, objectClass:rsc, uid:rsc, cn:rscW",
"ldapsearch -x -D bind_dn -W -p server_port -h server_hostname -E [!]1.3.6.1.4.1.42.2.27.9.5.2=: GER_subject ( searchFilter ) attributeList",
"ldapsearch -x -p 389 -h server.example.com -D \"uid=tmorris,ou=people,dc=example,dc=com\" -W -b \"uid=tmorris,ou=people,dc=example,dc=com\" -E '!1.3.6.1.4.1.42.2.27.9.5.2=:dn:uid=tmorris,ou=people,dc=example,dc=com' \"(objectClass=*)\" dn: uid=tmorris,ou=People,dc=example,dc=com givenName: Ted sn: Morris ou: IT ou: People l: Santa Clara manager: uid=jsmith,ou=People,dc=example,dc=com roomNumber: 4117 mail: [email protected] facsimileTelephoneNumber: +1 408 555 5409 objectClass: top objectClass: person objectClass: organizationalPerson objectClass: inetOrgPerson uid: tmorris cn: Ted Morris userPassword: {SSHA}bz0uCmHZM5b357zwrCUCJs1IOHtMD6yqPyhxBA== entryLevelRights: v attributeLevelRights: givenName:rsc, sn:rsc, ou:rsc, l:rsc, manager:rsc, roomNumber:rscwo, mail:rscwo, facsimileTelephoneNumber:rscwo, objectClass:rsc, uid:rsc, cn:rsc, userPassword:wo",
"ldapsearch -p 389 -h server.example.com -D \"uid=tmorris,ou=people,dc=example,dc=com\" -W -b \"uid=dmiller,ou=people,dc=example,dc=com\" -E '!1.3.6.1.4.1.42.2.27.9.5.2=:dn:uid=tmorris,ou=people,dc=example,dc=com' \"(objectClass=*)\" dn: uid=dmiller,ou=People,dc=example,dc=com entryLevelRights: vad attributeLevelRights: givenName:rscwo, sn:rscwo, ou:rscwo, l:rscwo, manager:rsc, roomNumber:rscwo, mail:rscwo, facsimileTelephoneNumber:rscwo, objectClass:rscwo, uid:rscwo, cn:rscwo, userPassword:rswo",
"ldapsearch -p 389 -h server.example.com -D \"cn=Directory Manager\" -W -b \"uid=tmorris,ou=people,dc=example,dc=com\" -E '!1.3.6.1.4.1.42.2.27.9.5.2=:dn:uid=jsmith,ou=people,dc=example,dc=com' \"(objectClass=*)\" dn: uid=tmorris,ou=People,dc=example,dc=com entryLevelRights: vadn attributeLevelRights: givenName:rscwo, sn:rscwo, ou:rscwo, l:rscwo, manager:rscwo, roomNumber:rscwo, mail:rscwo, facsimileTelephoneNumber:rscwo, objectClass:rscwo, uid:rscwo, cn:rscwo, userPassword:rscwo",
"ldapsearch -p 389 -h server.example.com -D \"uid=dmiller,ou=people,dc=example,dc=com\" -W -b \"uid=tmorris,ou=people,dc=example,dc=com\" -E '!1.3.6.1.4.1.42.2.27.9.5.2=:dn:uid=tmorris,ou=people,dc=example,dc=com' \"(objectClass=*)\" ldap_search: Insufficient access ldap_search: additional info: get-effective-rights: requester has no g permission on the entry",
"ldapsearch -p 389 -h server.example.com -D \"uid=tmorris,ou=people,dc=example,dc=com\" -W -b \"uid=tmorris,ou=people,dc=example,dc=com\" -E '!1.3.6.1.4.1.42.2.27.9.5.2=:dn:uid=dmiller,ou=people,dc=example,dc=com' \"(objectClass=*)\" dn: uid=tmorris,ou=people,dc=example,dc=com entryLevelRights: v attributeLevelRights: givenName:rsc, sn:rsc, ou:rsc, l:rsc,manager:rsc, roomNumber:rsc, mail:rsc, facsimileTelephoneNumber:rsc, objectClass:rsc, uid:rsc, cn:rsc, userPassword:none",
"ldapsearch -D \"cn=Directory Manager\" -W -b \"uid=scarter,ou=people,dc=example,dc=com\" -E '!1.3.6.1.4.1.42.2.27.9.5.2=:dn:uid=scarter,ou=people,dc=example,dc=com' \"(objectclass=*)\" \"*\" dn: uid=scarter,ou=People,dc=example,dc=com givenName: Sam telephoneNumber: +1 408 555 4798 sn: Carter ou: Accounting ou: People l: Sunnyvale manager: uid=dmiller,ou=People,dc=example,dc=com roomNumber: 4612 mail: [email protected] facsimileTelephoneNumber: +1 408 555 9700 objectClass: top objectClass: person objectClass: organizationalPerson objectClass: inetOrgPerson uid: scarter cn: Sam Carter userPassword: {SSHA}Xd9Jt8g1UsHC8enNDrEmxj3iJPKQLItlDYdD9A== entryLevelRights: vadn attributeLevelRights: objectClass:rscwo, aci:rscwo, sn:rscwo, cn:rscwo, description:rscwo, seeAlso:rscwo, telephoneNumber:rscwo, userPassword:rscwo, destinationIndicator:rscwo, facsimileTelephoneNumber:rscwo, internationaliSDNNumber:rscwo, l:rscwo, ou:rscwo, physicalDeliveryOfficeName:rscwo, postOfficeBox:rscwo, postalAddress:rscwo, postalCode:rscwo, preferredDeliveryMethod:rscwo, registeredAddress:rscwo, st:rscwo, street:rscwo, teletexTerminalIdentifier:rscwo, telexNumber:rscwo, title:rscwo, x121Address:rscwo, audio:rscwo, businessCategory:rscwo, carLicense:rscwo, departmentNumber:rscwo, displayName:rscwo, employeeType:rscwo, employeeNumber:rscwo, givenName:rscwo, homePhone:rscwo, homePostalAddress:rscwo, initials:rscwo, jpegPhoto:rscwo, labeledUri:rscwo, manager:rscwo, mobile:rscwo, pager:rscwo, photo:rscwo, preferredLanguage:rscwo, mail:rscwo, o:rscwo, roomNumber:rscwo, secretary:rscwo, uid:rscwo,x500UniqueIdentifier:rscwo, userCertificate:rscwo, userSMIMECertificate:rscwo, userPKCS12:rscwo",
"ldapsearch -D \"cn=Directory Manager\" -W -b \"uid=scarter,ou=people,dc=example,dc=com\" -E '!1.3.6.1.4.1.42.2.27.9.5.2=:dn:uid=scarter,ou=people,dc=example,dc=com' \"(objectclass=*)\" cn mail initials dn: uid=scarter,ou=People,dc=example,dc=com cn: Sam Carter mail: [email protected] entryLevelRights: vadn attributeLevelRights: cn:rscwo, mail:rscwo, initials:rscwo",
"ldapsearch -D \"cn=Directory Manager\" -W -b \"uid=scarter,ou=people,dc=example,dc=com\" -E '!1.3.6.1.4.1.42.2.27.9.5.2=:dn:uid=scarter,ou=people,dc=example,dc=com' \"(objectclass=*)\" uidNumber@posixAccount dn: cn=template_posixaccount_objectclass,uid=scarter,ou=people,dc=example,dc=com uidnumber: (template_attribute) entryLevelRights: v attributeLevelRights: uidNumber:rsc",
"ldapsearch -D \"cn=Directory Manager\" -W -b \"uid=scarter,ou=people,dc=example,dc=com\" -E '!1.3.6.1.4.1.42.2.27.9.5.2=:dn:uid=scarter,ou=people,dc=example,dc=com' \"(objectclass=*)\" *@posixaccount dn: cn=template_posixaccount_objectclass,uid=scarter,ou=people,dc=example,dc=com objectClass: posixaccount objectClass: top homeDirectory: (template_attribute) gidNumber: (template_attribute) uidNumber: (template_attribute) uid: (template_attribute) cn: (template_attribute) entryLevelRights: v attributeLevelRights: cn:rsc, uid:rsc, uidNumber:rsc, gidNumber:rsc, homeDirectory:rsc, objectClass:rsc, userPassword:none, loginShell:rsc, gecos:rsc, description:rsc, aci:rsc",
"ldapsearch -D \"cn=Directory Manager\" -W -b \"ou=people,dc=example,dc=com\" -E '!1.3.6.1.4.1.42.2.27.9.5.2=:dn:uid=scarter,ou=people,dc=example,dc=com' \"(objectclass=*)\" @posixaccount:uidnumber dn: uidNumber=template_posixaccount_objectclass,ou=people,dc=example,dc=com entryLevelRights: v attributeLevelRights: description:rsc, gecos:rsc, loginShell:rsc, userPassword :rsc, objectClass:rsc, homeDirectory:rsc, gidNumber:rsc, uidNumber:rsc, uid: rsc, cn:rsc",
"ldapsearch -D \"cn=Directory Manager\" -W -x -b \"uid=scarter,ou=people,dc=example,dc=com\" -E '!1.3.6.1.4.1.42.2.27.9.5.2=:dn:uid=scarter,ou=people,dc=example,dc=com' \"(objectclass=*)\" \"+\" dn: uid=scarter,ou=People,dc=example,dc=com entryLevelRights: vadn attributeLevelRights: nsICQStatusText:rscwo, passwordGraceUserTime:rscwo, pwdGraceUserTime:rscwo, nsYIMStatusText:rscwo, modifyTimestamp:rscwo, passwordExpWarned:rscwo, pwdExpirationWarned:rscwo, entrydn:rscwo, aci:rscwo, nsSizeLimit:rscwo, nsAccountLock:rscwo, passwordExpirationTime:rscwo, entryid:rscwo, nsSchemaCSN:rscwo, nsRole:rscwo, retryCountResetTime:rscwo, ldapSchemas:rscwo, nsAIMStatusText:rscwo, copiedFrom:rscwo, nsICQStatusGraphic:rscwo, nsUniqueId:rscwo, creatorsName:rscwo, passwordRetryCount:rscwo, dncomp:rscwo, nsTimeLimit:rscwo, passwordHistory:rscwo, pwdHistory:rscwo, nscpEntryDN:rscwo, subschemaSubentry:rscwo, nsYIMStatusGraphic:rscwo, hasSubordinates:rscwo, pwdpolicysubentry:rscwo, nsAIMStatusGraphic:rscwo, nsRoleDN:rscwo, createTimestamp:rscwo, accountUnlockTime:rscwo, copyingFrom:rscwo, nsLookThroughLimit:rscwo, nsds5ReplConflict:rscwo, modifiersName:rscwo, parentid:rscwo, passwordAllowChangeTime:rscwo, nsBackendSuffix:rscwo, nsIdleTimeout:rscwo, ldapSyntaxes:rscwo, numSubordinates:rscwo",
"dn: dc=example,dc=com objectClass: top objectClass: domain dc: example aci: (target=ldap:///ou=Accounting,dc=example,dc=com)(targetattr=\"*\")(version 3.0; acl \"test acl\"; allow (read,search,compare) (userdn = \"ldap:///anyone\") ;) dn: ou=Accounting,dc=example,dc=com objectClass: top objectClass: organizationalUnit ou: Accounting",
"ldapsearch -D \"cn=Directory Manager\" -W -b \"dc=example,dc=com\" -E '!1.3.6.1.4.1.42.2.27.9.5.2=:dn:uid=scarter,ou=people,dc=example,dc=com' \"(objectclass=*)\" \"*@person\" dn: cn=template_person_objectclass,uid=scarter,ou=people,dc=example,dc=com objectClass: person objectClass: top cn: (template_attribute) sn: (template_attribute) description: (template_attribute) seeAlso: (template_attribute) telephoneNumber: (template_attribute) userPassword: (template_attribute) entryLevelRights: none attributeLevelRights: sn:none, cn:none, objectClass:none, description:none, seeAlso:none, telephoneNumber:none, userPassword:none, aci:none",
"ldapsearch -D \"uid=scarter,ou=people,dc=example,dc=com\" -W -b \"dc=example,dc=com\" -E '!1.3.6.1.4.1.42.2.27.9.5.2=:dn:uid=scarter,ou=people,dc=example,dc=com' \"(objectclass=*)\" \"*@person\""
] |
https://docs.redhat.com/en/documentation/red_hat_directory_server/11/html/administration_guide/viewing_the_acis_for_an_entry-get_effective_rights_control
|
Deploying OpenShift Data Foundation using IBM Z infrastructure
|
Deploying OpenShift Data Foundation using IBM Z infrastructure Red Hat OpenShift Data Foundation 4.9 Instructions on deploying Red Hat OpenShift Data Foundation to use local storage on IBM Z infrastructure Red Hat Storage Documentation Team Abstract Read this document for instructions about how to install Red Hat OpenShift Data Foundation to use local storage on IBM Z infrastructure. Note While this document refers only to IBM Z, all information in it also applies to LinuxONE.
| null |
https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.9/html/deploying_openshift_data_foundation_using_ibm_z_infrastructure/index
|
Chapter 15. Enabling Red Hat build of Keycloak Health checks
|
Chapter 15. Enabling Red Hat build of Keycloak Health checks Red Hat build of Keycloak has built-in support for health checks. This chapter describes how to enable and use the Keycloak health checks. 15.1. Red Hat build of Keycloak Health checks Red Hat build of Keycloak exposes three health endpoints: /health /health/live /health/ready The result is returned in JSON format and looks as follows: { "status": "UP", "checks": [] } 15.2. Enabling the health checks It is possible to enable the health checks using the build-time option health-enabled : bin/kc.[sh|bat] build --health-enabled=true By default, no check is returned from the health endpoints. 15.3. Using the health checks It is recommended that the health endpoints be monitored by external HTTP requests. Due to security measures that remove curl and other packages from the Red Hat build of Keycloak container image, local command-based monitoring will not function easily. If you are not using Red Hat build of Keycloak in a container, use any HTTP client you want to access the health check endpoints. 15.3.1. curl You may use a simple HTTP HEAD request to determine the live or ready state of Red Hat build of Keycloak. curl is a good HTTP client for this purpose. If Red Hat build of Keycloak is deployed in a container, you must run this command from outside it due to the previously mentioned security measures. For example: curl --head -fsS http://localhost:8080/health/ready If the command returns with status 0, then Red Hat build of Keycloak is live or ready , depending on which endpoint you called. Otherwise, there is a problem. 15.3.2. Kubernetes Define an HTTP Probe so that Kubernetes may externally monitor the health endpoints. Do not use a liveness command. 15.3.3. HEALTHCHECK The Dockerfile image HEALTHCHECK instruction defines a command that will be periodically executed inside the container as it runs. The Red Hat build of Keycloak container does not have any CLI HTTP clients installed. Consider installing curl as an additional RPM, as detailed by the Running Red Hat build of Keycloak in a container chapter. Note that your container may be less secure because of this. 15.4. Available Checks The table below shows the available checks. Check Description Requires Metrics Database Returns the status of the database connection pool. Yes For some checks, you will also need to enable metrics as indicated by the Requires Metrics column. To enable metrics, use the metrics-enabled option as follows: bin/kc.[sh|bat] build --health-enabled=true --metrics-enabled=true 15.5. Relevant options Value health-enabled 🛠 If the server should expose health check endpoints. If enabled, health checks are available at the /health , /health/ready and /health/live endpoints. CLI: --health-enabled Env: KC_HEALTH_ENABLED true , false (default)
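As a minimal sketch of the external monitoring recommended above (the host name, the port 8080 from the example, and the 5-second retry interval are assumed values), a small shell loop can poll the readiness endpoint until the server reports ready:
until curl --head -fsS http://localhost:8080/health/ready; do
    echo "Red Hat build of Keycloak is not ready yet, retrying in 5 seconds..."
    sleep 5
done
echo "Red Hat build of Keycloak is ready."
The same pattern works against /health/live when checking liveness instead of readiness.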
|
[
"{ \"status\": \"UP\", \"checks\": [] }",
"bin/kc.[sh|bat] build --health-enabled=true",
"curl --head -fsS http://localhost:8080/health/ready",
"bin/kc.[sh|bat] build --health-enabled=true --metrics-enabled=true"
] |
https://docs.redhat.com/en/documentation/red_hat_build_of_keycloak/22.0/html/server_guide/health-
|
Chapter 3. Configuration of HawtIO
|
Chapter 3. Configuration of HawtIO The behaviour of HawtIO and its plugins can be configured through System properties. 3.1. Configuration properties The following table lists the configuration properties for the HawtIO core system and various plugins. System Default Description hawtio.disableProxy false Setting this property to true disables ProxyServlet (/hawtio/proxy/*). This makes the Connect plugin unavailable, which means HawtIO can no longer connect to remote JVMs; you might still want to disable it for security reasons if the Connect plugin is not used. hawtio.localAddressProbing true Whether local address probing for the proxy allowlist is enabled upon startup. Set this property to false to disable it. hawtio.proxyAllowlist localhost, 127.0.0.1 Comma-separated allowlist of target hosts that the Connect plugin can connect to via ProxyServlet. All hosts not listed in this allowlist are denied connections for security reasons. This option can be set to * to allow all hosts. Prefixing an element of the list with "r:" allows you to define a regex (example: localhost,r:myserver[0-9]+.mydomain.com) hawtio.redirect.scheme The scheme used to redirect the URL to the login page when authentication is required. hawtio.sessionTimeout The maximum time interval, in seconds, that the servlet container keeps this session open between client accesses. If this option is not configured, then HawtIO uses the default session timeout of the servlet container. 3.1.1. Quarkus For Quarkus, all those properties are configurable in application.properties or application.yaml with the quarkus.hawtio prefix. For example: quarkus.hawtio.disableProxy = true 3.1.2. Spring Boot For Spring Boot, all those properties are configurable in application.properties or application.yaml as is. For example: hawtio.disableProxy = true 3.2. Configuring Jolokia through system properties The Jolokia agent is deployed automatically with io.hawt.web.JolokiaConfiguredAgentServlet , which extends the native Jolokia org.jolokia.http.AgentServlet class and is defined in hawtio-war/WEB-INF/web.xml . If you want to customize the Jolokia Servlet with the configuration parameters that are defined in the Jolokia documentation , you can pass them as System properties prefixed with jolokia . For example: jolokia.policyLocation = file:///opt/hawtio/my-jolokia-access.xml 3.2.1. RBAC Restrictor For some runtimes that support HawtIO RBAC (role-based access control), HawtIO provides a custom Jolokia Restrictor implementation that adds a layer of protection over JMX operations based on the ACL (access control list) policy. Warning You cannot use HawtIO RBAC with Quarkus and Spring Boot yet. Enabling the RBAC Restrictor on those runtimes only imposes additional load without any gains. To activate the HawtIO RBAC Restrictor, configure the Jolokia parameter restrictorClass via System property to use io.hawt.system.RBACRestrictor as follows: jolokia.restrictorClass = io.hawt.system.RBACRestrictor
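As a sketch of how several of these options can be combined in the Spring Boot case described in Section 3.1.2, an application.properties excerpt might look as follows; the host pattern and timeout value are illustrative:
# Allow the Connect plugin to proxy only to localhost and hosts matching the regex
hawtio.proxyAllowlist = localhost,127.0.0.1,r:myserver[0-9]+.mydomain.com
# Expire idle HawtIO sessions after 30 minutes
hawtio.sessionTimeout = 1800
# Uncomment to disable the proxy entirely if the Connect plugin is not needed
#hawtio.disableProxy = true
Jolokia options such as jolokia.policyLocation are plain System properties rather than Spring Boot properties, so they are typically passed on the java command line with -D.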
|
[
"quarkus.hawtio.disableProxy = true",
"hawtio.disableProxy = true",
"jolokia.policyLocation = file:///opt/hawtio/my-jolokia-access.xml",
"jolokia.restrictorClass = io.hawt.system.RBACRestrictor"
] |
https://docs.redhat.com/en/documentation/red_hat_build_of_apache_camel/4.4/html/hawtio_diagnostic_console_guide/configuration-of-hawtio
|
7.3. Booting from the Network Using PXE
|
7.3. Booting from the Network Using PXE To boot with PXE, you need a properly configured server, and a network interface in your computer that supports PXE. For information on how to configure a PXE server, refer to Chapter 30, Setting Up an Installation Server . Configure the computer to boot from the network interface. This option is in the BIOS, and may be labeled Network Boot or Boot Services . Once you properly configure PXE booting, the computer can boot the Red Hat Enterprise Linux installation system without any other media. To boot a computer from a PXE server: Ensure that the network cable is attached. The link indicator light on the network socket should be lit, even if the computer is not switched on. Switch on the computer. A menu screen appears. Press the number key that corresponds to the desired option. If your PC does not boot from the netboot server, ensure that the BIOS is configured to boot first from the correct network interface. Some BIOS systems specify the network interface as a possible boot device, but do not support the PXE standard. Refer to your hardware documentation for more information. Note Some servers with multiple network interfaces might not assign eth0 to the first network interface as the firmware interface knows it, which can cause the installer to try to use a different network interface from the one that was used by PXE. To change this behavior, use the following in pxelinux.cfg/* config files: These configuration options above cause the installer to use the same network interface the firmware interface and PXE use. You can also use the following option: This option causes the installer to use the first network device it finds that is linked to a network switch.
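As an illustration, a pxelinux.cfg/default entry that applies these options might look like the following sketch; the kernel, initrd, kickstart location, and timeout are placeholders for your own environment:
# pxelinux.cfg/default (sketch)
DEFAULT linux
PROMPT 1
TIMEOUT 600
LABEL linux
  KERNEL vmlinuz
  IPAPPEND 2
  APPEND initrd=initrd.img ks=http://192.168.1.1/ks.cfg ksdevice=bootif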
|
[
"IPAPPEND 2 APPEND ksdevice=bootif",
"ksdevice=link"
] |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/installation_guide/sn-booting-from-pxe-x86
|
Chapter 6. Bug fixes
|
Chapter 6. Bug fixes This section describes the notable bug fixes introduced in Red Hat OpenShift Data Foundation 4.15. 6.1. Disaster recovery Fencing takes more time than expected Previously, fencing operations took more time than expected. This was due to reconcile of Ramen hub controller a couple of times and requeue with delay as extra checks were added to ensure that the fencing operation was complete on the managed cluster. With this fix, the hub controller is registered for the updates in fencing state. As a result, the updates of the fencing status change is received immediately and it takes less time to finish fencing operation. ( BZ#2249462 ) 6.2. Multicloud Object Gateway Multicloud Object Gateway failing to use the new internal certificate after rotation Previously, Multicloud Object Gateway (MCG) client was not able to connect to S3 using the new certificate unless the MCG endpoint pods were restarted. Even though the MCG endpoint pods were loading the certificate for the S3 service at the start of the pod, the changes in the certificate were not watched, which means that rotating a certificate was not affecting the endpoint till the pods were restarted. With this fix, a watch to check for the changes in certificate of the endpoint pods are added. As a result, the pods load the new certificate without the need for a restart. ( BZ#2237903 ) Regenerating S3 credentials for OBC in all namespaces Previously, the Multicloud Object Gateway command for obc regenerate did not have the flag app-namespace . This flag is available for the other object bucket claim (OBC) operations such as creation and deletion of OBC. With this fix, the app-namespace flag is added to the obc generate command. As a result, OBC regenerates S3 credentials in all namespaces. ( BZ#2242414 ) Signature validation failure Previously, in Multicloud Object Gateway, there was failure to verify signatures when operations fail as AWS's C++ software development kit (SDK) does not encode the "=" sign in signature calculations when it appears as a part of the key name. With this fix, MCG's decoding of the path in the HTTP request is fixed to successfully verify the signature. ( BZ#2265288 ) 6.3. Ceph Metadata server run out of memory and reports over-sized cache Previously, metadata server (MDS) would run out of memory as the standby-replay MDS daemons would not trim their caches. With this fix, the MDS trims its cache when in standby-replay. As a result MDS would not run out of memory. ( BZ#2141422 ) Ceph is inaccessible after crash or shutdown tests are run Previously, in a stretch cluster, when a monitor is revived and is in the probing stage for other monitors to receive the latest information such as MonitorMap or OSDMap , it is unable to enter stretch_mode . This prevents it from correctly setting the elector's disallowed_leaders list, which leads to the Monitors getting stuck in election and Ceph eventually becomes unresponsive. With this fix, the marked-down monitors are unconditionally added to the disallowed_leaders list. This fixes the problem of newly revived monitors having different disallowed_leaders set and getting stuck in an election. ( BZ#2241937 ) 6.4. Ceph container storage interface (CSI) Snapshot persistent volume claim in pending state Previously, creation of readonlymany (ROX) CephFS persistent volume claim (PVC) from snapshot source failed when a pool parameter was present in the storage class due to a bug. With this fix, the check for the pool parameter is removed as it is not required. 
As a result, creation of ROX CephFS PVC from a snapshot source will be successful. ( BZ#2248117 ) 6.5. OpenShift Data Foundation console Incorrect tooltip message for the raw capacity card Previously, the tooltip for the raw capacity card in the block pool page showed an incorrect message. With this fix, the tooltip content for the raw capacity card has been changed to display an appropriate message, "Raw capacity shows the total physical capacity from all the storage pools in the StorageSystem". ( BZ#2237895 ) System raw capacity card not showing external mode StorageSystem Previously, the System raw capacity card did not display Ceph external StorageSystem as the Multicloud Object Gateway (MCG) standalone and Ceph external StorageSystems were filtered out from the card. With this fix, only the StorageSystems that do not report the total capacity as per the information reported by the odf_system_raw_capacity_total_bytes metric is filtered out. As a result, any StorageSystem that reports the total raw capacity is displayed on the System raw capacity card and only the StorageSystems that do not report the total capacity is not displayed in the card. ( BZ#2257441 ) 6.6. Rook Provisioning object bucket claim with the same bucket name Previously, for the green field use case, creation of two object bucket claims (OBCs) with the same bucket name was successful from the user interface. Even though two OBCs were created, the second one pointed to invalid credentials. With this fix, creation of the second OBC with the same bucket name is blocked and it is no longer possible to create two OBCs with the same bucket name for green field use cases. ( BZ#2228785 ) Change of the parameter name for the Python script used in external mode deployment Previously, while deploying OpenShift Data Foundation using Ceph storage in external mode, the Python script used to extract Ceph cluster details had a parameter name, --cluster-name , which could be misunderstood to be the name of the Ceph cluster. However, it represented the name of the OpenShift cluster that the Ceph administrator provided. With this fix, the --cluster-name flag is changed to --k8s-cluster-name` . The legacy flag --cluster-name is also supported to cater to the upgraded clusters used in automation. ( BZ#2244609 ) Incorrect pod placement configurations while detecting Multus Network Attachment Definition CIDRS Previously, some OpenShift Data Foundation clusters failed where the network "canary" pods were scheduled on nodes without Multus cluster networks, as OpenShift Data Foundation did not process pod placement configurations correctly while detecting Multus Network Attachment Definition CIDRS. With this fix, OpenShift Data Foundation was fixed to process pod placement for Multus network "canary" pods. As a result, network "canary" scheduling errors are no longer experienced. ( BZ#2249678 ) Deployment strategy to avoid rook-ceph-exporter pod restart Previously, the rook-ceph-exporter pod restarted multiple times on a freshly installed HCI cluster that resulted in crashing of the exporter pod and the Ceph health showing the WARN status. This was because restarting the exporter using RollingRelease caused a race condition resulting in crash of the exporter. With this fix, the deployment strategy is changed to Recreate . As a result, exporter pods no longer crash and there is no more health WARN status of Ceph. 
( BZ#2250995 ) rook-ceph-rgw-ocs-storagecluster-cephobjectstore-a pod stuck in CrashLoopBackOff state Previously, the rook-ceph-rgw-ocs-storagecluster-cephobjectstore-a pod was stuck in CrashLoopBackOff state as the RADOS Gateway (RGW) multisite zonegroup was not getting created and fetched, and the error handling was reporting wrong text. With this release, the error handling bug in multisite configuration is fixed and fetching the zonegroup is improved by fetching it for a particular rgw-realm that was created earlier. As a result, the multisite configuration and rook-ceph-rgw-ocs-storagecluster-cephobjectstore-a pod gets created successfully. ( BZ#2253185 ) 6.7. Ceph monitoring TargetDown alert reported for ocs-metrics-exporter Previously, metrics endpoint of the ocs-metrics-exporter used to be unresponsive as persistent volume resync by ocs-metrics-exporter was blocked indefinitely. With this fix, the blocking operations from persistent volume resync in ocs-metrics-exporter is removed and the metrics endpoint is responsive. Also, the TargetDown alert for ocs-metrics-exporter no longer appears. ( BZ#2168042 ) Label references of object bucket claim alerts Previously, label for the object bucket claim alerts was not displayed correctly as the format for the label-template was wrong. Also, a blank object bucket claim name was displayed and the description text was incomplete. With this fix, the format is corrected. As a result, the description text is correct and complete with appropriate object bucket claim name. ( BZ#2188032 ) Discrepancy in storage metrics Previously, the capacity of a pool was reported incorrectly as a wrong metrics query was used in the Raw Capacity card in the Block Pool dashboard. With this fix, the metrics query in the user interface is updated. As a result, the metrics of the total capacity of a block pool is reported correctly. ( BZ#2252035 ) Add managedBy label to rook-ceph-exporter metrics and alerts Previously, the metrics generated by rook-ceph-exporter did not have the managedBy label. So, it was not possible for the OpenShift console user interface to identify from which StorageSystem the metrics are generated. With this fix, the managedBy label, which has the name of the StorageSystem as a value, is added through the OCS operator to the storage cluster's Monitoring spec. This spec is read by the Rook operator and it relabels the ceph-exporter's ServiceMonitor endpoint labels. As a result, all the metrics generated from this exporter will have the new label managedBy . ( BZ#2255491 ) 6.8. Must gather Must gather logs not collected after upgrade Previously, the must-gather tool failed to collect logs after the upgrade as Collection started <time> was seen twice. With this fix, the must-gather tool is updated to run the pre-install script only once. As a result, the tool is able to collect the logs successfully after upgrade. ( BZ#2255240 )
| null |
https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.15/html/4.15_release_notes/bug_fixes
|
Chapter 1. Getting started with SELinux
|
Chapter 1. Getting started with SELinux Security Enhanced Linux (SELinux) provides an additional layer of system security. SELinux fundamentally answers the question: May <subject> do <action> to <object>? , for example: May a web server access files in users' home directories? 1.1. Introduction to SELinux The standard access policy based on the user, group, and other permissions, known as Discretionary Access Control (DAC), does not enable system administrators to create comprehensive and fine-grained security policies, such as restricting specific applications to only viewing log files, while allowing other applications to append new data to the log files. Security Enhanced Linux (SELinux) implements Mandatory Access Control (MAC). Every process and system resource has a special security label called an SELinux context . A SELinux context, sometimes referred to as an SELinux label , is an identifier which abstracts away the system-level details and focuses on the security properties of the entity. Not only does this provide a consistent way of referencing objects in the SELinux policy, but it also removes any ambiguity that can be found in other identification methods. For example, a file can have multiple valid path names on a system that makes use of bind mounts. The SELinux policy uses these contexts in a series of rules which define how processes can interact with each other and the various system resources. By default, the policy does not allow any interaction unless a rule explicitly grants access. Note Remember that SELinux policy rules are checked after DAC rules. SELinux policy rules are not used if DAC rules deny access first, which means that no SELinux denial is logged if the traditional DAC rules prevent the access. SELinux contexts have several fields: user, role, type, and security level. The SELinux type information is perhaps the most important when it comes to the SELinux policy, as the most common policy rule which defines the allowed interactions between processes and system resources uses SELinux types and not the full SELinux context. SELinux types end with _t . For example, the type name for the web server is httpd_t . The type context for files and directories normally found in /var/www/html/ is httpd_sys_content_t . The type contexts for files and directories normally found in /tmp and /var/tmp/ is tmp_t . The type context for web server ports is http_port_t . There is a policy rule that permits Apache (the web server process running as httpd_t ) to access files and directories with a context normally found in /var/www/html/ and other web server directories ( httpd_sys_content_t ). There is no allow rule in the policy for files normally found in /tmp and /var/tmp/ , so access is not permitted. With SELinux, even if Apache is compromised, and a malicious script gains access, it is still not able to access the /tmp directory. Figure 1.1. An example how can SELinux help to run Apache and MariaDB in a secure way. As the scheme shows, SELinux allows the Apache process running as httpd_t to access the /var/www/html/ directory and it denies the same process to access the /data/mysql/ directory because there is no allow rule for the httpd_t and mysqld_db_t type contexts. On the other hand, the MariaDB process running as mysqld_t is able to access the /data/mysql/ directory and SELinux also correctly denies the process with the mysqld_t type to access the /var/www/html/ directory labeled as httpd_sys_content_t . 
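To see these labels on a running system, you can inspect the context of a file and of the running web server process, for example (the output is illustrative; the user and role fields can differ on your system):
$ ls -Z /var/www/html/index.html
unconfined_u:object_r:httpd_sys_content_t:s0 /var/www/html/index.html
$ ps -eZ | grep httpd
system_u:system_r:httpd_t:s0    1678 ?        00:00:03 httpd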
Additional resources selinux(8) man page and man pages listed by the apropos selinux command. Man pages listed by the man -k _selinux command when the selinux-policy-doc package is installed. The SELinux Coloring Book helps you to better understand SELinux basic concepts. SELinux Wiki FAQ 1.2. Benefits of running SELinux SELinux provides the following benefits: All processes and files are labeled. SELinux policy rules define how processes interact with files, as well as how processes interact with each other. Access is only allowed if an SELinux policy rule exists that specifically allows it. SELinux provides fine-grained access control. Stepping beyond traditional UNIX permissions that are controlled at user discretion and based on Linux user and group IDs, SELinux access decisions are based on all available information, such as an SELinux user, role, type, and, optionally, a security level. SELinux policy is administratively-defined and enforced system-wide. SELinux can mitigate privilege escalation attacks. Processes run in domains, and are therefore separated from each other. SELinux policy rules define how processes access files and other processes. If a process is compromised, the attacker only has access to the normal functions of that process, and to files the process has been configured to have access to. For example, if the Apache HTTP Server is compromised, an attacker cannot use that process to read files in user home directories, unless a specific SELinux policy rule was added or configured to allow such access. SELinux can enforce data confidentiality and integrity, and can protect processes from untrusted inputs. SELinux is designed to enhance existing security solutions, not replace antivirus software, secure passwords, firewalls, or other security systems. Even when running SELinux, it is important to continue to follow good security practices, such as keeping software up-to-date, using hard-to-guess passwords, and firewalls. 1.3. SELinux examples The following examples demonstrate how SELinux increases security: The default action is deny. If an SELinux policy rule does not exist to allow access, such as for a process opening a file, access is denied. SELinux can confine Linux users. A number of confined SELinux users exist in the SELinux policy. Linux users can be mapped to confined SELinux users to take advantage of the security rules and mechanisms applied to them. For example, mapping a Linux user to the SELinux user_u user, results in a Linux user that is not able to run unless configured otherwise set user ID (setuid) applications, such as sudo and su . Increased process and data separation. The concept of SELinux domains allows defining which processes can access certain files and directories. For example, when running SELinux, unless otherwise configured, an attacker cannot compromise a Samba server, and then use that Samba server as an attack vector to read and write to files used by other processes, such as MariaDB databases. SELinux helps mitigate the damage made by configuration mistakes. Domain Name System (DNS) servers often replicate information between each other in a zone transfer. Attackers can use zone transfers to update DNS servers with false information. 
When running the Berkeley Internet Name Domain (BIND) as a DNS server in RHEL, even if an administrator forgets to limit which servers can perform a zone transfer, the default SELinux policy prevent updates for zone files [1] that use zone transfers, by the BIND named daemon itself, and by other processes. Without SELinux, an attacker can misuse a vulnerability to path traversal on an Apache web server and access files and directories stored on the file system by using special elements such as ../ . If an attacker attempts an attack on a server running with SELinux in enforcing mode, SELinux denies access to files that the httpd process must not access. SELinux cannot block this type of attack completely but it effectively mitigates it. SELinux in enforcing mode successfully prevents exploitation of kernel NULL pointer dereference operators on non-SMAP platforms (CVE-2019-9213). Attackers use a vulnerability in the mmap function, which does not check mapping of a null page, for placing arbitrary code on this page. The deny_ptrace SELinux boolean and SELinux in enforcing mode protect systems from the PTRACE_TRACEME vulnerability (CVE-2019-13272). Such configuration prevents scenarios when an attacker can get root privileges. The nfs_export_all_rw and nfs_export_all_ro SELinux booleans provide an easy-to-use tool to prevent misconfigurations of Network File System (NFS) such as accidental sharing /home directories. Additional resources SELinux as a security pillar of an operating system - Real-world benefits and examples Knowledgebase article SELinux hardening with Ansible Knowledgebase article selinux-playbooks Github repository with Ansible playbooks for SELinux hardening 1.4. SELinux architecture and packages SELinux is a Linux Security Module (LSM) that is built into the Linux kernel. The SELinux subsystem in the kernel is driven by a security policy which is controlled by the administrator and loaded at boot. All security-relevant, kernel-level access operations on the system are intercepted by SELinux and examined in the context of the loaded security policy. If the loaded policy allows the operation, it continues. Otherwise, the operation is blocked and the process receives an error. SELinux decisions, such as allowing or disallowing access, are cached. This cache is known as the Access Vector Cache (AVC). When using these cached decisions, SELinux policy rules need to be checked less, which increases performance. Remember that SELinux policy rules have no effect if DAC rules deny access first. Raw audit messages are logged to the /var/log/audit/audit.log and they start with the type=AVC string. In RHEL 8, system services are controlled by the systemd daemon; systemd starts and stops all services, and users and processes communicate with systemd using the systemctl utility. The systemd daemon can consult the SELinux policy and check the label of the calling process and the label of the unit file that the caller tries to manage, and then ask SELinux whether or not the caller is allowed the access. This approach strengthens access control to critical system capabilities, which include starting and stopping system services. The systemd daemon also works as an SELinux Access Manager. It retrieves the label of the process running systemctl or the process that sent a D-Bus message to systemd . The daemon then looks up the label of the unit file that the process wanted to configure. 
Finally, systemd can retrieve information from the kernel if the SELinux policy allows the specific access between the process label and the unit file label. This means a compromised application that needs to interact with systemd for a specific service can now be confined by SELinux. Policy writers can also use these fine-grained controls to confine administrators. If a process is sending a D-Bus message to another process and if the SELinux policy does not allow the D-Bus communication of these two processes, then the system prints a USER_AVC denial message, and the D-Bus communication times out. Note that the D-Bus communication between two processes works bidirectionally. Important To avoid incorrect SELinux labeling and subsequent problems, ensure that you start services using a systemctl start command. RHEL 8 provides the following packages for working with SELinux: policies: selinux-policy-targeted , selinux-policy-mls tools: policycoreutils , policycoreutils-gui , libselinux-utils , policycoreutils-python-utils , setools-console , checkpolicy 1.5. SELinux states and modes SELinux can run in one of three modes: enforcing, permissive, or disabled. Enforcing mode is the default, and recommended, mode of operation; in enforcing mode SELinux operates normally, enforcing the loaded security policy on the entire system. In permissive mode, the system acts as if SELinux is enforcing the loaded security policy, including labeling objects and emitting access denial entries in the logs, but it does not actually deny any operations. While not recommended for production systems, permissive mode can be helpful for SELinux policy development and debugging. Disabled mode is strongly discouraged; not only does the system avoid enforcing the SELinux policy, it also avoids labeling any persistent objects such as files, making it difficult to enable SELinux in the future. Use the setenforce utility to change between enforcing and permissive mode. Changes made with setenforce do not persist across reboots. To change to enforcing mode, enter the setenforce 1 command as the Linux root user. To change to permissive mode, enter the setenforce 0 command. Use the getenforce utility to view the current SELinux mode: In Red Hat Enterprise Linux, you can set individual domains to permissive mode while the system runs in enforcing mode. For example, to make the httpd_t domain permissive: Note that permissive domains are a powerful tool that can compromise security of your system. Red Hat recommends to use permissive domains with caution, for example, when debugging a specific scenario. [1] Text files that include DNS information, such as hostname to IP address mappings.
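Because changes made with setenforce do not persist across reboots, the mode is usually also set in the /etc/selinux/config file, for example:
# /etc/selinux/config (excerpt)
SELINUX=enforcing
SELINUXTYPE=targeted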
|
[
"getenforce Enforcing",
"setenforce 0 getenforce Permissive",
"setenforce 1 getenforce Enforcing",
"semanage permissive -a httpd_t"
] |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/using_selinux/getting-started-with-selinux_using-selinux
|
Chapter 12. Ansible-based overcloud registration
|
Chapter 12. Ansible-based overcloud registration Director uses Ansible-based methods to register overcloud nodes to the Red Hat Customer Portal or to a Red Hat Satellite Server. If you used the rhel-registration method from previous Red Hat OpenStack Platform versions, you must disable it and switch to the Ansible-based method. For more information, see Section 12.6, "Switching to the rhsm composable service" and Section 12.7, "rhel-registration to rhsm mappings" . In addition to the director-based registration method, you can also manually register after deployment. For more information, see Section 12.9, "Running Ansible-based registration manually" . 12.1. Red Hat Subscription Manager (RHSM) composable service You can use the rhsm composable service to register overcloud nodes through Ansible. Each role in the default roles_data file contains an OS::TripleO::Services::Rhsm resource, which is disabled by default. To enable the service, register the resource to the rhsm composable service file: The rhsm composable service accepts a RhsmVars parameter, which you can use to define multiple sub-parameters relevant to your registration: You can also use the RhsmVars parameter in combination with role-specific parameters, for example, ControllerParameters , to provide flexibility when enabling specific repositories for different node types. 12.2. RhsmVars sub-parameters Use the following sub-parameters as part of the RhsmVars parameter when you configure the rhsm composable service. For more information about the Ansible parameters that are available, see the role documentation . rhsm Description rhsm_method Choose the registration method. Either portal , satellite , or disable . rhsm_org_id The organization that you want to use for registration. To locate this ID, run sudo subscription-manager orgs from the undercloud node. Enter your Red Hat credentials at the prompt, and use the resulting Key value. For more information on your organization ID, see Understanding the Red Hat Subscription Management Organization ID . rhsm_pool_ids The subscription pool ID that you want to use. Use this parameter if you do not want to auto-attach subscriptions. To locate this ID, run sudo subscription-manager list --available --all --matches="*Red Hat OpenStack*" from the undercloud node, and use the resulting Pool ID value. rhsm_activation_key The activation key that you want to use for registration. rhsm_autosubscribe Use this parameter to attach compatible subscriptions to this system automatically. Set the value to true to enable this feature. rhsm_baseurl The base URL for obtaining content. The default URL is the Red Hat Content Delivery Network. If you use a Satellite server, change this value to the base URL of your Satellite server content repositories. rhsm_server_hostname The hostname of the subscription management service for registration. The default is the Red Hat Subscription Management hostname. If you use a Satellite server, change this value to your Satellite server hostname. rhsm_repos A list of repositories that you want to enable. rhsm_username The username for registration. If possible, use activation keys for registration. rhsm_password The password for registration. If possible, use activation keys for registration. rhsm_release The Red Hat Enterprise Linux release for pinning the repositories. This is set to 9.0 for Red Hat OpenStack Platform. rhsm_rhsm_proxy_hostname The hostname for the HTTP proxy. For example: proxy.example.com . rhsm_rhsm_proxy_port The port for HTTP proxy communication.
For example: 8080 . rhsm_rhsm_proxy_user The username to access the HTTP proxy. rhsm_rhsm_proxy_password The password to access the HTTP proxy. Important You can use rhsm_activation_key and rhsm_repos together only if rhsm_method is set to portal . If rhsm_method is set to satellite , you can only use either rhsm_activation_key or rhsm_repos . 12.3. Registering the overcloud with the rhsm composable service Create an environment file that enables and configures the rhsm composable service. Director uses this environment file to register and subscribe your nodes. Procedure Create an environment file named templates/rhsm.yml to store the configuration. Include your configuration in the environment file. For example: The resource_registry section associates the rhsm composable service with the OS::TripleO::Services::Rhsm resource, which is available on each role. The RhsmVars variable passes parameters to Ansible for configuring your Red Hat registration. Save the environment file. 12.4. Applying the rhsm composable service to different roles You can apply the rhsm composable service on a per-role basis. For example, you can apply different sets of configurations to Controller nodes, Compute nodes, and Ceph Storage nodes. Procedure Create an environment file named templates/rhsm.yml to store the configuration. Include your configuration in the environment file. For example: The resource_registry associates the rhsm composable service with the OS::TripleO::Services::Rhsm resource, which is available on each role. The ControllerParameters , ComputeParameters , and CephStorageParameters parameters each use a separate RhsmVars parameter to pass subscription details to their respective roles. Note Set the RhsmVars parameter within the CephStorageParameters parameter to use a Red Hat Ceph Storage subscription and repositories specific to Ceph Storage. Ensure the rhsm_repos parameter contains the standard Red Hat Enterprise Linux repositories instead of the Extended Update Support (EUS) repositories that Controller and Compute nodes require. Save the environment file. 12.5. Registering the overcloud to Red Hat Satellite Server Create an environment file that enables and configures the rhsm composable service to register nodes to Red Hat Satellite instead of the Red Hat Customer Portal. Procedure Create an environment file named templates/rhsm.yml to store the configuration. Include your configuration in the environment file. For example: The resource_registry associates the rhsm composable service with the OS::TripleO::Services::Rhsm resource, which is available on each role. The RhsmVars variable passes parameters to Ansible for configuring your Red Hat registration. Save the environment file. 12.6. Switching to the rhsm composable service The rhel-registration method runs a bash script to handle the overcloud registration. The scripts and environment files for this method are located in the core heat template collection at /usr/share/openstack-tripleo-heat-templates/extraconfig/pre_deploy/rhel-registration/ . Complete the following steps to switch from the rhel-registration method to the rhsm composable service. Procedure Exclude the rhel-registration environment files from future deployments operations. 
In most cases, exclude the following files: rhel-registration/environment-rhel-registration.yaml rhel-registration/rhel-registration-resource-registry.yaml If you use a custom roles_data file, ensure that each role in your roles_data file contains the OS::TripleO::Services::Rhsm composable service. For example: Add the environment file for rhsm composable service parameters to future deployment operations. This method replaces the rhel-registration parameters with the rhsm service parameters and changes the heat resource that enables the service from: To: You can also include the /usr/share/openstack-tripleo-heat-templates/environments/rhsm.yaml environment file with your deployment to enable the service. 12.7. rhel-registration to rhsm mappings To help transition your details from the rhel-registration method to the rhsm method, use the following table to map your parameters and values. rhel-registration rhsm / RhsmVars rhel_reg_method rhsm_method rhel_reg_org rhsm_org_id rhel_reg_pool_id rhsm_pool_ids rhel_reg_activation_key rhsm_activation_key rhel_reg_auto_attach rhsm_autosubscribe rhel_reg_sat_url rhsm_satellite_url rhel_reg_repos rhsm_repos rhel_reg_user rhsm_username rhel_reg_password rhsm_password rhel_reg_release rhsm_release rhel_reg_http_proxy_host rhsm_rhsm_proxy_hostname rhel_reg_http_proxy_port rhsm_rhsm_proxy_port rhel_reg_http_proxy_username rhsm_rhsm_proxy_user rhel_reg_http_proxy_password rhsm_rhsm_proxy_password 12.8. Deploying the overcloud with the rhsm composable service Deploy the overcloud with the rhsm composable service so that Ansible controls the registration process for your overcloud nodes. Procedure Include rhsm.yml environment file with the openstack overcloud deploy command: This enables the Ansible configuration of the overcloud and the Ansible-based registration. Wait until the overcloud deployment completes. Check the subscription details on your overcloud nodes. For example, log in to a Controller node and run the following commands: 12.9. Running Ansible-based registration manually You can perform manual Ansible-based registration on a deployed overcloud with the dynamic inventory script on the director node. Use this script to define node roles as host groups and then run a playbook against them with ansible-playbook . Use the following example playbook to register Controller nodes manually. Procedure Create a playbook that uses the redhat_subscription modules to register your nodes. For example, the following playbook applies to Controller nodes: This play contains three tasks: Register the node. Disable any auto-enabled repositories. Enable only the repositories relevant to the Controller node. The repositories are listed with the repos variable. After you deploy the overcloud, you can run the following command so that Ansible executes the playbook ( ansible-osp-registration.yml ) against your overcloud: This command performs the following actions: Runs the dynamic inventory script to get a list of host and their groups. Applies the playbook tasks to the nodes in the group defined in the hosts parameter of the playbook, which in this case is the Controller group.
|
[
"resource_registry: OS::TripleO::Services::Rhsm: /usr/share/openstack-tripleo-heat-templates/deployment/rhsm/rhsm-baremetal-ansible.yaml",
"parameter_defaults: RhsmVars: rhsm_repos: - rhel-9-for-x86_64-baseos-eus-rpms - rhel-9-for-x86_64-appstream-eus-rpms - rhel-9-for-x86_64-highavailability-eus-rpms ... rhsm_username: \"myusername\" rhsm_password: \"p@55w0rd!\" rhsm_org_id: \"1234567\" rhsm_release: 9.0",
"resource_registry: OS::TripleO::Services::Rhsm: /usr/share/openstack-tripleo-heat-templates/deployment/rhsm/rhsm-baremetal-ansible.yaml parameter_defaults: RhsmVars: rhsm_repos: - rhel-9-for-x86_64-baseos-eus-rpms - rhel-9-for-x86_64-appstream-eus-rpms - rhel-9-for-x86_64-highavailability-eus-rpms ... rhsm_username: \"myusername\" rhsm_password: \"p@55w0rd!\" rhsm_org_id: \"1234567\" rhsm_pool_ids: \"1a85f9223e3d5e43013e3d6e8ff506fd\" rhsm_method: \"portal\" rhsm_release: 9.0",
"resource_registry: OS::TripleO::Services::Rhsm: /usr/share/openstack-tripleo-heat-templates/deployment/rhsm/rhsm-baremetal-ansible.yaml parameter_defaults: ControllerParameters: RhsmVars: rhsm_repos: - rhel-9-for-x86_64-baseos-eus-rpms - rhel-9-for-x86_64-appstream-eus-rpms - rhel-9-for-x86_64-highavailability-eus-rpms - openstack-17-for-rhel-9-x86_64-rpms - fast-datapath-for-rhel-9-x86_64-rpms - rhceph-5-tools-for-rhel-9-x86_64-rpms rhsm_username: \"myusername\" rhsm_password: \"p@55w0rd!\" rhsm_org_id: \"1234567\" rhsm_pool_ids: \"55d251f1490556f3e75aa37e89e10ce5\" rhsm_method: \"portal\" rhsm_release: 9.0 ComputeParameters: RhsmVars: rhsm_repos: - rhel-9-for-x86_64-baseos-eus-rpms - rhel-9-for-x86_64-appstream-eus-rpms - rhel-9-for-x86_64-highavailability-eus-rpms - openstack-17-for-rhel-9-x86_64-rpms - rhceph-5-tools-for-rhel-9-x86_64-rpms - fast-datapath-for-rhel-9-x86_64-rpms rhsm_username: \"myusername\" rhsm_password: \"p@55w0rd!\" rhsm_org_id: \"1234567\" rhsm_pool_ids: \"55d251f1490556f3e75aa37e89e10ce5\" rhsm_method: \"portal\" rhsm_release: 9.0 CephStorageParameters: RhsmVars: rhsm_repos: - rhel-9-for-x86_64-baseos-rpms - rhel-9-for-x86_64-appstream-rpms - rhel-9-for-x86_64-highavailability-rpms - openstack-17-deployment-tools-for-rhel-9-x86_64-rpms rhsm_username: \"myusername\" rhsm_password: \"p@55w0rd!\" rhsm_org_id: \"1234567\" rhsm_pool_ids: \"68790a7aa2dc9dc50a9bc39fabc55e0d\" rhsm_method: \"portal\" rhsm_release: 9.0",
"resource_registry: OS::TripleO::Services::Rhsm: /usr/share/openstack-tripleo-heat-templates/deployment/rhsm/rhsm-baremetal-ansible.yaml parameter_defaults: RhsmVars: rhsm_activation_key: \"myactivationkey\" rhsm_method: \"satellite\" rhsm_org_id: \"ACME\" rhsm_server_hostname: \"satellite.example.com\" rhsm_baseurl: \"https://satellite.example.com/pulp/repos\" rhsm_release: 9.0",
"- name: Controller description: | Controller role that has all the controller services loaded and handles Database, Messaging and Network functions. CountDefault: 1 ServicesDefault: - OS::TripleO::Services::Rhsm",
"resource_registry: OS::TripleO::NodeExtraConfig: rhel-registration.yaml",
"resource_registry: OS::TripleO::Services::Rhsm: /usr/share/openstack-tripleo-heat-templates/deployment/rhsm/rhsm-baremetal-ansible.yaml",
"openstack overcloud deploy <other cli args> -e ~/templates/rhsm.yaml",
"sudo subscription-manager status sudo subscription-manager list --consumed",
"--- - name: Register Controller nodes hosts: Controller become: yes vars: repos: - rhel-9-for-x86_64-baseos-eus-rpms - rhel-9-for-x86_64-appstream-eus-rpms - rhel-9-for-x86_64-highavailability-eus-rpms - openstack-17-for-rhel-9-x86_64-rpms - fast-datapath-for-rhel-9-x86_64-rpms tasks: - name: Register system redhat_subscription: username: myusername password: p@55w0rd! org_id: 1234567 release: 9.0 pool_ids: 1a85f9223e3d5e43013e3d6e8ff506fd - name: Disable all repos command: \"subscription-manager repos --disable *\" - name: Enable Controller node repos command: \"subscription-manager repos --enable {{ item }}\" with_items: \"{{ repos }}\"",
"ansible-playbook -i /usr/bin/tripleo-ansible-inventory ansible-osp-registration.yml"
] |
https://docs.redhat.com/en/documentation/red_hat_openstack_platform/17.0/html/director_installation_and_usage/assembly_ansible-based-overcloud-registration
|
2.3. keepalived Scheduling Overview
|
2.3. keepalived Scheduling Overview Using Keepalived provides a great deal of flexibility in distributing traffic across real servers, in part due to the variety of scheduling algorithms supported. Load balancing is superior to less flexible methods, such as Round-Robin DNS where the hierarchical nature of DNS and the caching by client machines can lead to load imbalances. Additionally, the low-level filtering employed by the LVS router has advantages over application-level request forwarding because balancing loads at the network packet level causes minimal computational overhead and allows for greater scalability. Using assigned weights gives arbitrary priorities to individual machines. Using this form of scheduling, it is possible to create a group of real servers using a variety of hardware and software combinations and the active router can evenly load each real server. The scheduling mechanism for Keepalived is provided by a collection of kernel patches called IP Virtual Server or IPVS modules. These modules enable layer 4 ( L4 ) transport layer switching, which is designed to work well with multiple servers on a single IP address. To track and route packets to the real servers efficiently, IPVS builds an IPVS table in the kernel. This table is used by the active LVS router to redirect requests from a virtual server address to and returning from real servers in the pool. 2.3.1. Keepalived Scheduling Algorithms The structure that the IPVS table takes depends on the scheduling algorithm that the administrator chooses for any given virtual server. To allow for maximum flexibility in the types of services you can cluster and how these services are scheduled, Keepalived supports the following scheduling algorithms listed below. Round-Robin Scheduling Distributes each request sequentially around the pool of real servers. Using this algorithm, all the real servers are treated as equals without regard to capacity or load. This scheduling model resembles round-robin DNS but is more granular due to the fact that it is network-connection based and not host-based. Load Balancer round-robin scheduling also does not suffer the imbalances caused by cached DNS queries. Weighted Round-Robin Scheduling Distributes each request sequentially around the pool of real servers but gives more jobs to servers with greater capacity. Capacity is indicated by a user-assigned weight factor, which is then adjusted upward or downward by dynamic load information. Weighted round-robin scheduling is a preferred choice if there are significant differences in the capacity of real servers in the pool. However, if the request load varies dramatically, the more heavily weighted server may answer more than its share of requests. Least-Connection Distributes more requests to real servers with fewer active connections. Because it keeps track of live connections to the real servers through the IPVS table, least-connection is a type of dynamic scheduling algorithm, making it a better choice if there is a high degree of variation in the request load. It is best suited for a real server pool where each member node has roughly the same capacity. If a group of servers have different capabilities, weighted least-connection scheduling is a better choice. Weighted Least-Connections Distributes more requests to servers with fewer active connections relative to their capacities. Capacity is indicated by a user-assigned weight, which is then adjusted upward or downward by dynamic load information. 
The addition of weighting makes this algorithm ideal when the real server pool contains hardware of varying capacity. Locality-Based Least-Connection Scheduling Distributes more requests to servers with fewer active connections relative to their destination IPs. This algorithm is designed for use in a proxy-cache server cluster. It routes the packets for an IP address to the server for that address unless that server is above its capacity and has a server in its half load, in which case it assigns the IP address to the least loaded real server. Locality-Based Least-Connection Scheduling with Replication Scheduling Distributes more requests to servers with fewer active connections relative to their destination IPs. This algorithm is also designed for use in a proxy-cache server cluster. It differs from Locality-Based Least-Connection Scheduling by mapping the target IP address to a subset of real server nodes. Requests are then routed to the server in this subset with the lowest number of connections. If all the nodes for the destination IP are above capacity, it replicates a new server for that destination IP address by adding the real server with the least connections from the overall pool of real servers to the subset of real servers for that destination IP. The most loaded node is then dropped from the real server subset to prevent over-replication. Destination Hash Scheduling Distributes requests to the pool of real servers by looking up the destination IP in a static hash table. This algorithm is designed for use in a proxy-cache server cluster. Source Hash Scheduling Distributes requests to the pool of real servers by looking up the source IP in a static hash table. This algorithm is designed for LVS routers with multiple firewalls. Shortest Expected Delay Distributes connection requests to the server that has the shortest delay expected based on number of connections on a given server divided by its assigned weight. Never Queue A two-pronged scheduler that first finds and sends connection requests to a server that is idling, or has no connections. If there are no idling servers, the scheduler defaults to the server that has the least delay in the same manner as Shortest Expected Delay . 2.3.2. Server Weight and Scheduling The administrator of Load Balancer can assign a weight to each node in the real server pool. This weight is an integer value which is factored into any weight-aware scheduling algorithms (such as weighted least-connections) and helps the LVS router more evenly load hardware with different capabilities. Weights work as a ratio relative to one another. For instance, if one real server has a weight of 1 and the other server has a weight of 5, then the server with a weight of 5 gets 5 connections for every 1 connection the other server gets. The default value for a real server weight is 1. Although adding weight to varying hardware configurations in a real server pool can help load-balance the cluster more efficiently, it can cause temporary imbalances when a real server is introduced to the real server pool and the virtual server is scheduled using weighted least-connections. For example, suppose there are three servers in the real server pool. Servers A and B are weighted at 1 and the third, server C, is weighted at 2. If server C goes down for any reason, servers A and B evenly distributes the abandoned load. 
However, once server C comes back online, the LVS router sees it has zero connections and floods the server with all incoming requests until it is on par with servers A and B.
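As an illustration of assigning weights, a keepalived.conf virtual server stanza for the three-server example above might look like the following sketch; the virtual and real IP addresses are placeholders:
virtual_server 192.168.0.100 80 {
    lb_algo wlc          # weighted least-connections
    lb_kind DR
    protocol TCP
    real_server 10.0.0.11 80 {   # server A
        weight 1
        TCP_CHECK {
            connect_timeout 3
        }
    }
    real_server 10.0.0.12 80 {   # server B
        weight 1
        TCP_CHECK {
            connect_timeout 3
        }
    }
    real_server 10.0.0.13 80 {   # server C, roughly double the capacity
        weight 2
        TCP_CHECK {
            connect_timeout 3
        }
    }
}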
| null |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/load_balancer_administration/s1-lvs-scheduling-VSA
|
probe::nfs.aop.write_begin
|
probe::nfs.aop.write_begin Name probe::nfs.aop.write_begin - NFS client begins to write data Synopsis nfs.aop.write_begin Values __page the address of page page_index offset within mapping, can be used as a page identifier and position identifier in the page frame size the number of bytes to write, up to the end address of this write operation ino inode number offset start address of this write operation dev device identifier Description Occurs when a write operation occurs on NFS. It prepares a page for writing and looks for a request corresponding to the page. If there is one and it belongs to another file, it flushes it out before it tries to copy anything into the page. It does the same if it finds a request from an existing dropped page.
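A minimal SystemTap script that uses this probe to trace NFS writes might look like the following sketch (the script name is arbitrary); run it as root with stap nfs_write_begin.stp and interrupt it with Ctrl+C:
#! /usr/bin/env stap
# Print one line for every NFS write_begin event
probe nfs.aop.write_begin {
    printf("dev=%d ino=%d offset=%d size=%d\n", dev, ino, offset, size)
}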
| null |
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/systemtap_tapset_reference/api-nfs-aop-write-begin
|
Migrating Camel Quarkus projects
|
Migrating Camel Quarkus projects Red Hat build of Apache Camel 4.8 Migrating Camel Quarkus projects
| null |
https://docs.redhat.com/en/documentation/red_hat_build_of_apache_camel/4.8/html/migrating_camel_quarkus_projects/index
|
1.5.2. Firewall Marks
|
1.5.2. Firewall Marks Firewall marks are an easy and efficient way to group ports used for a protocol or group of related protocols. For instance, if Load Balancer Add-On is deployed to run an e-commerce site, firewall marks can be used to bundle HTTP connections on port 80 and secure, HTTPS connections on port 443. By assigning the same firewall mark to the virtual server for each protocol, state information for the transaction can be preserved because the LVS router forwards all requests to the same real server after a connection is opened. Because of its efficiency and ease-of-use, administrators of Load Balancer Add-On should use firewall marks instead of persistence whenever possible for grouping connections. However, administrators should still add persistence to the virtual servers in conjunction with firewall marks to ensure the clients are reconnected to the same server for an adequate period of time.
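For example, to bundle HTTP and HTTPS traffic for one virtual IP address under a single firewall mark, rules such as the following can be added on the LVS routers; the address and mark value are illustrative:
/sbin/iptables -t mangle -A PREROUTING -p tcp -d 192.168.0.100/32 --dport 80 -j MARK --set-mark 80
/sbin/iptables -t mangle -A PREROUTING -p tcp -d 192.168.0.100/32 --dport 443 -j MARK --set-mark 80
The virtual server can then be configured to use firewall mark 80 instead of an IP address and port pair.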
| null |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/load_balancer_administration/s2-lve-fwmarks-vsa
|
20.4. Caching Kerberos Passwords
|
20.4. Caching Kerberos Passwords A machine may not always be on the same network as the IdM domain; for example, a machine may need to be logged into a VPN before it can access the IdM domain. If a user logs into a system when it is offline and then later attempts to connect to IdM services, then the user is blocked because there is no IdM Kerberos ticket for that user. IdM works around that limitation by using SSSD to store the Kerberos passwords in the SSSD cache. This is configured by default by the ipa-client-install script. A configuration parameter is added to the /etc/sssd/sssd.conf file which specifically instructs SSSD to store those Kerberos passwords for the IdM domain: [domain/example.com] cache_credentials = True ipa_domain = example.com id_provider = ipa auth_provider = ipa access_provider = ipa chpass_provider = ipa ipa_server = _srv_, server.example.com krb5_store_password_if_offline = true This default behavior can be disabled during the client installation by using the --no-krb5-offline-passwords option. This behavior can also be disabled by editing the /etc/sssd/sssd.conf file and removing the krb5_store_password_if_offline line or changing its value to false. [domain/example.com] ... krb5_store_password_if_offline = false The SSSD configuration options for Kerberos authentication are covered in the "Configuring Domains" section of the SSSD chapter in the Red Hat Enterprise Linux Deployment Guide .
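If you prefer to disable this caching when the client is enrolled rather than editing sssd.conf afterwards, the installer option mentioned above can be passed directly; for example (other ipa-client-install options omitted):
# ipa-client-install --no-krb5-offline-passwords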
|
[
"[domain/example.com] cache_credentials = True ipa_domain = example.com id_provider = ipa auth_provider = ipa access_provider = ipa chpass_provider = ipa ipa_server = _srv_, server.example.com krb5_store_password_if_offline = true",
"[domain/example.com] krb5_store_password_if_offline = false"
] |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/identity_management_guide/kerberos-pwd-cache
|
Chapter 8. Propagating SCAP Content through the Load Balancer
|
Chapter 8. Propagating SCAP Content through the Load Balancer If you use OpenSCAP to manage security compliance on your clients, you must configure the SCAP client to send ARF reports to the load balancer instead of Capsule. The configuration procedure depends on the method you have selected to deploy compliance policies. 8.1. Propagating SCAP Content using Ansible Deployment Using this procedure, you can promote Security Content Automation Protocol (SCAP) content through the load balancer in the scope of the Ansible deployment method. Prerequisite Ensure that you have configured Satellite for Ansible deployment of compliance policies. For more information, see Configuring Compliance Policy Deployment Methods in Administering Red Hat Satellite . Procedure In the Satellite web UI, navigate to Configure > Variables . Search for the foreman_scap_client_port variable and click its name. In the Default Behavior area, ensure that the Override checkbox is selected. In the Parameter Type list, ensure that integer is selected. In the Default Value field, enter 9090 . In the Specify Matchers area, remove all matchers that override the default value. Click Submit . Search for the foreman_scap_client_server variable and click its name. In the Default Behavior area, ensure that the Override checkbox is selected. In the Parameter Type list, ensure that string is selected. In the Default Value field, enter the FQDN of your load balancer, such as loadbalancer.example.com . In the Specify Matchers area, remove all matchers that override the default value. Click Submit . Continue with deploying a compliance policy using Ansible. For more information, see: Deploying a Policy in a Host Group Using Ansible in Administering Red Hat Satellite Deploying a Policy on a Host Using Ansible in Administering Red Hat Satellite Verification On the client, verify that the /etc/foreman_scap_client/config.yaml file contains the following lines: 8.2. Propagating SCAP Content using Puppet Deployment Using this procedure, you can promote Security Content Automation Protocol (SCAP) content through the load balancer in the scope of the Puppet deployment method. Prerequisite Ensure that you have configured Satellite for Puppet deployment of compliance policies. For more information, see Configuring Compliance Policy Deployment Methods in Administering Red Hat Satellite . Procedure In the Satellite web UI, navigate to Configure > Classes and click foreman_scap_client . Click the Smart Class Parameter tab. In the pane to the left of the Smart Class Parameter window, click port . In the Default Behavior area, select the Override checkbox. From the Key Type list, select integer . In the Default Value field, enter 9090 . In the pane to the left of the Smart Class Parameter window, click server . In the Default Behavior area, select the Override checkbox. From the Key Type list, select string . In the Default Value field, enter the FQDN of your load balancer, such as loadbalancer.example.com . In the lower left of the Smart Class Parameter window, click Submit . Continue with deploying a compliance policy using Puppet. For more information, see: Deploying a Policy in a Host Group Using Puppet in Administering Red Hat Satellite Deploying a Policy on a Host Using Puppet in Administering Red Hat Satellite Verification On the client, verify that the /etc/foreman_scap_client/config.yaml file contains the following lines:
|
[
"Foreman proxy to which reports should be uploaded :server: ' loadbalancer.example.com ' :port: 9090",
"Foreman proxy to which reports should be uploaded :server: ' loadbalancer.example.com ' :port: 9090"
] |
https://docs.redhat.com/en/documentation/red_hat_satellite/6.11/html/configuring_capsules_with_a_load_balancer/propagating-scap-content-through-the-load-balancer_load-balancing
|
Chapter 11. References
|
Chapter 11. References The following references are pointers to additional information that is relevant to SELinux and Red Hat Enterprise Linux but beyond the scope of this guide. Note that due to the rapid development of SELinux, some of this material may only apply to specific releases of Red Hat Enterprise Linux. Books SELinux by Example Mayer, MacMillan, and Caplan Prentice Hall, 2007 Tutorials and Help Tutorials and talks from Russell Coker http://www.coker.com.au/selinux/talks/ibmtu-2004/ Generic Writing SELinux policy HOWTO http://www.lurking-grue.org/writingselinuxpolicyHOWTO.html Red Hat Knowledgebase https://access.redhat.com/knowledgebase General Information NSA SELinux main website http://www.nsa.gov/selinux/ SELinux NSA's Open Source Security Enhanced Linux http://www.oreilly.com/catalog/selinux/ Technology Integrating Flexible Support for Security Policies into the Linux Operating System (a history of Flask implementation in Linux) http://www.nsa.gov/research/_files/selinux/papers/selsymp2005.pdf A Security Policy Configuration for the Security-Enhanced Linux http://www.nsa.gov/research/_files/selinux/papers/policy/policy.shtml Community SELinux community page http://selinuxproject.org/ IRC irc.freenode.net, #selinux, #fedora-selinux, #security History Quick history of Flask http://www.cs.utah.edu/flux/fluke/html/flask.html Full background on Fluke http://www.cs.utah.edu/flux/fluke/html/index.html
| null |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/security_guide/chap-security_guide-references
|
Chapter 12. Configuring IdM for external provisioning of users
|
Chapter 12. Configuring IdM for external provisioning of users As a system administrator, you can configure Identity Management (IdM) to support the provisioning of users by an external solution for managing identities. Rather than use the ipa utility, the administrator of the external provisioning system can access the IdM LDAP using the ldapmodify utility. The administrator can add individual stage users from the CLI using ldapmodify or using an LDIF file . The assumption is that you, as an IdM administrator, fully trust your external provisioning system to only add validated users. However, at the same time you do not want to assign the administrators of the external provisioning system the IdM role of User Administrator to enable them to add new active users directly. You can configure a script to automatically move the staged users created by the external provisioning system to active users. This chapter contains these sections: Preparing Identity Management (IdM) to use an external provisioning system to add stage users to IdM. Creating a script to move the users added by the external provisioning system from stage to active users. Using an external provisioning system to add an IdM stage user. You can do that in two ways: Add an IdM stage user using an LDIF file Add an IdM stage user directly from the CLI using ldapmodify 12.1. Preparing IdM accounts for automatic activation of stage user accounts This procedure shows how to configure two IdM user accounts to be used by an external provisioning system. By adding the accounts to a group with an appropriate password policy, you enable the external provisioning system to manage user provisioning in IdM. In the following, the user account to be used by the external system to add stage users is named provisionator . The user account to be used to automatically activate the stage users is named activator . Prerequisites The host on which you perform the procedure is enrolled into IdM. Procedure Log in as IdM administrator: Create a user named provisionator with the privileges to add stage users. Add the provisionator user account: Grant the provisionator user the required privileges. Create a custom role, System Provisioning , to manage adding stage users: Add the Stage User Provisioning privilege to the role. This privilege provides the ability to add stage users: Add the provisionator user to the role: Verify that the provisionator exists in IdM: Create a user, activator , with the privileges to manage user accounts. Add the activator user account: Grant the activator user the required privileges by adding the user to the default User Administrator role: Create a user group for application accounts: Update the password policy for the group. The following policy prevents password expiration and lockout for the account but compensates for the potential risks by requiring complex passwords: Optional: Verify that the password policy exists in IdM: Add the provisioning and activation accounts to the group for application accounts: Change the passwords for the user accounts: Changing the passwords is necessary because new IdM users' passwords expire immediately. Additional resources: See Managing user accounts using the command line . See Delegating Permissions over Users . See Defining IdM Password Policies . 12.2. Configuring automatic activation of IdM stage user accounts This procedure shows how to create a script for activating stage users. The system runs the script automatically at specified time intervals.
This ensures that new user accounts are automatically activated and available for use shortly after they are created. Important The procedure assumes that the owner of the external provisioning system has already validated the users and that they do not require additional validation on the IdM side before the script adds them to IdM. It is sufficient to enable the activation process on only one of your IdM servers. Prerequisites The provisionator and activator accounts exist in IdM. For details, see Preparing IdM accounts for automatic activation of stage user accounts . You have root privileges on the IdM server on which you are running the procedure. You are logged in as IdM administrator. You trust your external provisioning system. Procedure Generate a keytab file for the activation account: If you want to enable the activation process on more than one IdM server, generate the keytab file on one server only. Then copy the keytab file to the other servers. Create a script, /usr/local/sbin/ipa-activate-all , with the following contents to activate all users: Edit the permissions and ownership of the ipa-activate-all script to make it executable: Create a systemd unit file, /etc/systemd/system/ipa-activate-all.service , with the following contents: Create a systemd timer, /etc/systemd/system/ipa-activate-all.timer , with the following contents: Reload the new configuration: Enable ipa-activate-all.timer : Start ipa-activate-all.timer : Optional: Verify that the ipa-activate-all.timer daemon is running: 12.3. Adding an IdM stage user defined in an LDIF file Follow this procedure to access IdM LDAP and use an LDIF file to add stage users. While the example below shows adding a single user, multiple users can be added in one file in bulk mode. Prerequisites The IdM administrator has created the provisionator account and a password for it. For details, see Preparing IdM accounts for automatic activation of stage user accounts . You as the external administrator know the password of the provisionator account. You can SSH to the IdM server from your LDAP server. You are able to supply the minimal set of attributes that an IdM stage user must have to allow the correct processing of the user life cycle, namely: The distinguished name (dn) The common name (cn) The last name (sn) The uid Procedure On the external server, create an LDIF file that contains information about the new user: Transfer the LDIF file from the external server to the IdM server: Use the SSH protocol to connect to the IdM server as provisionator : On the IdM server, obtain the Kerberos ticket-granting ticket (TGT) for the provisionator account: Enter the ldapadd command with the -f option and the name of the LDIF file. Specify the name of the IdM server and the port number: 12.4. Adding an IdM stage user directly from the CLI using ldapmodify Follow this procedure to access Identity Management (IdM) LDAP and use the ldapmodify utility to add a stage user. Prerequisites The IdM administrator has created the provisionator account and a password for it. For details, see Preparing IdM accounts for automatic activation of stage user accounts . You as the external administrator know the password of the provisionator account. You can SSH to the IdM server from your LDAP server.
You are able to supply the minimal set of attributes that an IdM stage user must have to allow the correct processing of the user life cycle, namely: The distinguished name (dn) The common name (cn) The last name (sn) The uid Procedure Use the SSH protocol to connect to the IdM server using your IdM identity and credentials: Obtain the TGT of the provisionator account, an IdM user with a role to add new stage users: Enter the ldapmodify command and specify Generic Security Services API (GSSAPI) as the Simple Authentication and Security Layer (SASL) mechanism to use for authentication. Specify the name of the IdM server and the port: Enter the dn of the user you are adding: Enter add as the type of change you are performing: Specify the LDAP object class categories required to allow the correct processing of the user life cycle: You can specify additional object classes. Enter the uid of the user: Enter the cn of the user: Enter the last name of the user: Press Enter again to confirm that this is the end of the entry: Exit the connection using Ctrl + C . Verification Verify the contents of the stage entry to make sure your provisioning system added all required POSIX attributes and the stage entry is ready to be activated. To display the new stage user's LDAP attributes, enter the ipa stageuser-show --all --raw command: Note that the user is explicitly disabled by the nsaccountlock attribute. 12.5. Additional resources See Using ldapmodify to manage IdM users externally .
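Section 12.3 notes that multiple stage users can be added in bulk from a single LDIF file. The following is a minimal sketch of such a file, following the same record structure as the single-user example; the user names and attribute values are hypothetical, and your provisioning system may add further POSIX attributes (such as uidNumber, gidNumber, or homeDirectory) if it manages them:

dn: uid=stageuser1,cn=staged users,cn=accounts,cn=provisioning,dc=idm,dc=example,dc=com
changetype: add
objectClass: top
objectClass: inetorgperson
uid: stageuser1
sn: One
givenName: Stage
cn: Stage One

dn: uid=stageuser2,cn=staged users,cn=accounts,cn=provisioning,dc=idm,dc=example,dc=com
changetype: add
objectClass: top
objectClass: inetorgperson
uid: stageuser2
sn: Two
givenName: Stage
cn: Stage Two

You would pass such a file to ldapadd with the -f option in the same way as the single-user example, for example ldapadd -h server.idm.example.com -p 389 -f add-stage-users.ldif, after obtaining a Kerberos ticket as provisionator.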
|
[
"kinit admin",
"ipa user-add provisionator --first=provisioning --last=account --password",
"ipa role-add --desc \"Responsible for provisioning stage users\" \"System Provisioning\"",
"ipa role-add-privilege \"System Provisioning\" --privileges=\"Stage User Provisioning\"",
"ipa role-add-member --users=provisionator \"System Provisioning\"",
"ipa user-find provisionator --all --raw -------------- 1 user matched -------------- dn: uid=provisionator,cn=users,cn=accounts,dc=idm,dc=example,dc=com uid: provisionator [...]",
"ipa user-add activator --first=activation --last=account --password",
"ipa role-add-member --users=activator \"User Administrator\"",
"ipa group-add application-accounts",
"ipa pwpolicy-add application-accounts --maxlife=10000 --minlife=0 --history=0 --minclasses=4 --minlength=8 --priority=1 --maxfail=0 --failinterval=1 --lockouttime=0",
"ipa pwpolicy-show application-accounts Group: application-accounts Max lifetime (days): 10000 Min lifetime (hours): 0 History size: 0 [...]",
"ipa group-add-member application-accounts --users={provisionator,activator}",
"kpasswd provisionator kpasswd activator",
"ipa-getkeytab -s server.idm.example.com -p \"activator\" -k /etc/krb5.ipa-activation.keytab",
"#!/bin/bash kinit -k -i activator ipa stageuser-find --all --raw | grep \" uid:\" | cut -d \":\" -f 2 | while read uid; do ipa stageuser-activate USD{uid}; done",
"chmod 755 /usr/local/sbin/ipa-activate-all chown root:root /usr/local/sbin/ipa-activate-all",
"[Unit] Description=Scan IdM every minute for any stage users that must be activated [Service] Environment=KRB5_CLIENT_KTNAME=/etc/krb5.ipa-activation.keytab Environment=KRB5CCNAME=FILE:/tmp/krb5cc_ipa-activate-all ExecStart=/usr/local/sbin/ipa-activate-all",
"[Unit] Description=Scan IdM every minute for any stage users that must be activated [Timer] OnBootSec=15min OnUnitActiveSec=1min [Install] WantedBy=multi-user.target",
"systemctl daemon-reload",
"systemctl enable ipa-activate-all.timer",
"systemctl start ipa-activate-all.timer",
"systemctl status ipa-activate-all.timer ● ipa-activate-all.timer - Scan IdM every minute for any stage users that must be activated Loaded: loaded (/etc/systemd/system/ipa-activate-all.timer; enabled; vendor preset: disabled) Active: active (waiting) since Wed 2020-06-10 16:34:55 CEST; 15s ago Trigger: Wed 2020-06-10 16:35:55 CEST; 44s left Jun 10 16:34:55 server.idm.example.com systemd[1]: Started Scan IdM every minute for any stage users that must be activated.",
"dn: uid=stageidmuser,cn=staged users,cn=accounts,cn=provisioning,dc=idm,dc=example,dc=com changetype: add objectClass: top objectClass: inetorgperson uid: stageidmuser sn: surname givenName: first_name cn: full_name",
"scp add-stageidmuser.ldif [email protected]:/provisionator/ Password: add-stageidmuser.ldif 100% 364 217.6KB/s 00:00",
"ssh [email protected] Password: [provisionator@server ~]USD",
"[provisionator@server ~]USD kinit provisionator",
"~]USD ldapadd -h server.idm.example.com -p 389 -f add-stageidmuser.ldif SASL/GSSAPI authentication started SASL username: [email protected] SASL SSF: 256 SASL data security layer installed. adding the entry \"uid=stageidmuser,cn=staged users,cn=accounts,cn=provisioning,dc=idm,dc=example,dc=com\"",
"ssh [email protected] Password: [provisionator@server ~]USD",
"kinit provisionator",
"ldapmodify -h server.idm.example.com -p 389 -Y GSSAPI SASL/GSSAPI authentication started SASL username: [email protected] SASL SSF: 56 SASL data security layer installed.",
"dn: uid=stageuser,cn=staged users,cn=accounts,cn=provisioning,dc=idm,dc=example,dc=com",
"changetype: add",
"objectClass: top objectClass: inetorgperson",
"uid: stageuser",
"cn: Babs Jensen",
"sn: Jensen",
"[Enter] adding new entry \"uid=stageuser,cn=staged users,cn=accounts,cn=provisioning,dc=idm,dc=example,dc=com\"",
"ipa stageuser-show stageuser --all --raw dn: uid=stageuser,cn=staged users,cn=accounts,cn=provisioning,dc=idm,dc=example,dc=com uid: stageuser sn: Jensen cn: Babs Jensen has_password: FALSE has_keytab: FALSE nsaccountlock: TRUE objectClass: top objectClass: inetorgperson objectClass: organizationalPerson objectClass: person"
] |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/managing_idm_users_groups_hosts_and_access_control_rules/configuring-IdM-for-external-provisioning-of-users_managing-users-groups-hosts
|
Making open source more inclusive
|
Making open source more inclusive Red Hat is committed to replacing problematic language in our code, documentation, and web properties. We are beginning with these four terms: master, slave, blacklist, and whitelist. Because of the enormity of this endeavor, these changes will be implemented gradually over several upcoming releases. For more details, see our CTO Chris Wright's message .
| null |
https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.17/html/deploying_openshift_data_foundation_using_microsoft_azure/making-open-source-more-inclusive
|
Chapter 3. Creating applications
|
Chapter 3. Creating applications 3.1. Using templates The following sections provide an overview of templates, as well as how to use and create them. 3.1.1. Understanding templates A template describes a set of objects that can be parameterized and processed to produce a list of objects for creation by Red Hat OpenShift Service on AWS. A template can be processed to create anything you have permission to create within a project, for example services, build configurations, and deployment configurations. A template can also define a set of labels to apply to every object defined in the template. You can create a list of objects from a template using the CLI or, if a template has been uploaded to your project or the global template library, using the web console. 3.1.2. Uploading a template If you have a JSON or YAML file that defines a template, you can upload the template to projects using the CLI. This saves the template to the project for repeated use by any user with appropriate access to that project. Instructions about writing your own templates are provided later in this topic. Procedure Upload a template using one of the following methods: Upload a template to your current project's template library, pass the JSON or YAML file with the following command: USD oc create -f <filename> Upload a template to a different project using the -n option with the name of the project: USD oc create -f <filename> -n <project> The template is now available for selection using the web console or the CLI. 3.1.3. Creating an application by using the web console You can use the web console to create an application from a template. Procedure Select Developer from the context selector at the top of the web console navigation menu. While in the desired project, click +Add Click All services in the Developer Catalog tile. Click Builder Images under Type to see the available builder images. Note Only image stream tags that have the builder tag listed in their annotations appear in this list, as demonstrated here: kind: "ImageStream" apiVersion: "image.openshift.io/v1" metadata: name: "ruby" creationTimestamp: null spec: # ... tags: - name: "2.6" annotations: description: "Build and run Ruby 2.6 applications" iconClass: "icon-ruby" tags: "builder,ruby" 1 supports: "ruby:2.6,ruby" version: "2.6" # ... 1 Including builder here ensures this image stream tag appears in the web console as a builder. Modify the settings in the new application screen to configure the objects to support your application. 3.1.4. Creating objects from templates by using the CLI You can use the CLI to process templates and use the configuration that is generated to create objects. 3.1.4.1. Adding labels Labels are used to manage and organize generated objects, such as pods. The labels specified in the template are applied to every object that is generated from the template. Procedure Add labels in the template from the command line: USD oc process -f <filename> -l name=otherLabel 3.1.4.2. Listing parameters The list of parameters that you can override are listed in the parameters section of the template. 
Procedure You can list parameters with the CLI by using the following command and specifying the file to be used: USD oc process --parameters -f <filename> Alternatively, if the template is already uploaded: USD oc process --parameters -n <project> <template_name> For example, the following shows the output when listing the parameters for one of the quick start templates in the default openshift project: USD oc process --parameters -n openshift rails-postgresql-example Example output NAME DESCRIPTION GENERATOR VALUE SOURCE_REPOSITORY_URL The URL of the repository with your application source code https://github.com/sclorg/rails-ex.git SOURCE_REPOSITORY_REF Set this to a branch name, tag or other ref of your repository if you are not using the default branch CONTEXT_DIR Set this to the relative path to your project if it is not in the root of your repository APPLICATION_DOMAIN The exposed hostname that will route to the Rails service rails-postgresql-example.openshiftapps.com GITHUB_WEBHOOK_SECRET A secret string used to configure the GitHub webhook expression [a-zA-Z0-9]{40} SECRET_KEY_BASE Your secret key for verifying the integrity of signed cookies expression [a-z0-9]{127} APPLICATION_USER The application user that is used within the sample application to authorize access on pages openshift APPLICATION_PASSWORD The application password that is used within the sample application to authorize access on pages secret DATABASE_SERVICE_NAME Database service name postgresql POSTGRESQL_USER database username expression user[A-Z0-9]{3} POSTGRESQL_PASSWORD database password expression [a-zA-Z0-9]{8} POSTGRESQL_DATABASE database name root POSTGRESQL_MAX_CONNECTIONS database max connections 10 POSTGRESQL_SHARED_BUFFERS database shared buffers 12MB The output identifies several parameters that are generated with a regular expression-like generator when the template is processed. 3.1.4.3. Generating a list of objects Using the CLI, you can process a file defining a template to return the list of objects to standard output. Procedure Process a file defining a template to return the list of objects to standard output: USD oc process -f <filename> Alternatively, if the template has already been uploaded to the current project: USD oc process <template_name> Create objects from a template by processing the template and piping the output to oc create : USD oc process -f <filename> | oc create -f - Alternatively, if the template has already been uploaded to the current project: USD oc process <template> | oc create -f - You can override any parameter values defined in the file by adding the -p option for each <name>=<value> pair you want to override. A parameter reference appears in any text field inside the template items. 
For example, in the following the POSTGRESQL_USER and POSTGRESQL_DATABASE parameters of a template are overridden to output a configuration with customized environment variables: Creating a List of objects from a template USD oc process -f my-rails-postgresql \ -p POSTGRESQL_USER=bob \ -p POSTGRESQL_DATABASE=mydatabase The JSON file can either be redirected to a file or applied directly without uploading the template by piping the processed output to the oc create command: USD oc process -f my-rails-postgresql \ -p POSTGRESQL_USER=bob \ -p POSTGRESQL_DATABASE=mydatabase \ | oc create -f - If you have large number of parameters, you can store them in a file and then pass this file to oc process : USD cat postgres.env POSTGRESQL_USER=bob POSTGRESQL_DATABASE=mydatabase USD oc process -f my-rails-postgresql --param-file=postgres.env You can also read the environment from standard input by using "-" as the argument to --param-file : USD sed s/bob/alice/ postgres.env | oc process -f my-rails-postgresql --param-file=- 3.1.5. Modifying uploaded templates You can edit a template that has already been uploaded to your project. Procedure Modify a template that has already been uploaded: USD oc edit template <template> 3.1.6. Writing templates You can define new templates to make it easy to recreate all the objects of your application. The template defines the objects it creates along with some metadata to guide the creation of those objects. The following is an example of a simple template object definition (YAML): apiVersion: template.openshift.io/v1 kind: Template metadata: name: redis-template annotations: description: "Description" iconClass: "icon-redis" tags: "database,nosql" objects: - apiVersion: v1 kind: Pod metadata: name: redis-master spec: containers: - env: - name: REDIS_PASSWORD value: USD{REDIS_PASSWORD} image: dockerfile/redis name: master ports: - containerPort: 6379 protocol: TCP parameters: - description: Password used for Redis authentication from: '[A-Z0-9]{8}' generate: expression name: REDIS_PASSWORD labels: redis: master 3.1.6.1. Writing the template description The template description informs you what the template does and helps you find it when searching in the web console. Additional metadata beyond the template name is optional, but useful to have. In addition to general descriptive information, the metadata also includes a set of tags. Useful tags include the name of the language the template is related to for example, Java, PHP, Ruby, and so on. The following is an example of template description metadata: kind: Template apiVersion: template.openshift.io/v1 metadata: name: cakephp-mysql-example 1 annotations: openshift.io/display-name: "CakePHP MySQL Example (Ephemeral)" 2 description: >- An example CakePHP application with a MySQL database. For more information about using this template, including OpenShift considerations, see https://github.com/sclorg/cakephp-ex/blob/master/README.md. WARNING: Any data stored will be lost upon pod destruction. Only use this template for testing." 3 openshift.io/long-description: >- This template defines resources needed to develop a CakePHP application, including a build configuration, application DeploymentConfig, and database DeploymentConfig. The database is stored in non-persistent storage, so this configuration should be used for experimental purposes only. 4 tags: "quickstart,php,cakephp" 5 iconClass: icon-php 6 openshift.io/provider-display-name: "Red Hat, Inc." 
7 openshift.io/documentation-url: "https://github.com/sclorg/cakephp-ex" 8 openshift.io/support-url: "https://access.redhat.com" 9 message: "Your admin credentials are USD{ADMIN_USERNAME}:USD{ADMIN_PASSWORD}" 10 1 The unique name of the template. 2 A brief, user-friendly name, which can be employed by user interfaces. 3 A description of the template. Include enough detail that users understand what is being deployed and any caveats they must know before deploying. It should also provide links to additional information, such as a README file. Newlines can be included to create paragraphs. 4 Additional template description. This may be displayed by the service catalog, for example. 5 Tags to be associated with the template for searching and grouping. Add tags that include it into one of the provided catalog categories. Refer to the id and categoryAliases in CATALOG_CATEGORIES in the console constants file. 6 An icon to be displayed with your template in the web console. Example 3.1. Available icons icon-3scale icon-aerogear icon-amq icon-angularjs icon-ansible icon-apache icon-beaker icon-camel icon-capedwarf icon-cassandra icon-catalog-icon icon-clojure icon-codeigniter icon-cordova icon-datagrid icon-datavirt icon-debian icon-decisionserver icon-django icon-dotnet icon-drupal icon-eap icon-elastic icon-erlang icon-fedora icon-freebsd icon-git icon-github icon-gitlab icon-glassfish icon-go-gopher icon-golang icon-grails icon-hadoop icon-haproxy icon-helm icon-infinispan icon-jboss icon-jenkins icon-jetty icon-joomla icon-jruby icon-js icon-knative icon-kubevirt icon-laravel icon-load-balancer icon-mariadb icon-mediawiki icon-memcached icon-mongodb icon-mssql icon-mysql-database icon-nginx icon-nodejs icon-openjdk icon-openliberty icon-openshift icon-openstack icon-other-linux icon-other-unknown icon-perl icon-phalcon icon-php icon-play icon-postgresql icon-processserver icon-python icon-quarkus icon-rabbitmq icon-rails icon-redhat icon-redis icon-rh-integration icon-rh-spring-boot icon-rh-tomcat icon-ruby icon-scala icon-serverlessfx icon-shadowman icon-spring-boot icon-spring icon-sso icon-stackoverflow icon-suse icon-symfony icon-tomcat icon-ubuntu icon-vertx icon-wildfly icon-windows icon-wordpress icon-xamarin icon-zend 7 The name of the person or organization providing the template. 8 A URL referencing further documentation for the template. 9 A URL where support can be obtained for the template. 10 An instructional message that is displayed when this template is instantiated. This field should inform the user how to use the newly created resources. Parameter substitution is performed on the message before being displayed so that generated credentials and other parameters can be included in the output. Include links to any next-steps documentation that users should follow. 3.1.6.2. Writing template labels Templates can include a set of labels. These labels are added to each object created when the template is instantiated. Defining a label in this way makes it easy for users to find and manage all the objects created from a particular template. The following is an example of template object labels: kind: "Template" apiVersion: "v1" ... labels: template: "cakephp-mysql-example" 1 app: "USD{NAME}" 2 1 A label that is applied to all objects created from this template. 2 A parameterized label that is also applied to all objects created from this template. Parameter expansion is carried out on both label keys and values. 3.1.6.3.
Writing template parameters Parameters allow a value to be supplied by you or generated when the template is instantiated. Then, that value is substituted wherever the parameter is referenced. References can be defined in any field in the objects list field. This is useful for generating random passwords or allowing you to supply a hostname or other user-specific value that is required to customize the template. Parameters can be referenced in two ways: As a string value by placing values in the form USD{PARAMETER_NAME} in any string field in the template. As a JSON or YAML value by placing values in the form USD{{PARAMETER_NAME}} in place of any field in the template. When using the USD{PARAMETER_NAME} syntax, multiple parameter references can be combined in a single field and the reference can be embedded within fixed data, such as "http://USD{PARAMETER_1}USD{PARAMETER_2}" . Both parameter values are substituted and the resulting value is a quoted string. When using the USD{{PARAMETER_NAME}} syntax only a single parameter reference is allowed and leading and trailing characters are not permitted. The resulting value is unquoted unless, after substitution is performed, the result is not a valid JSON object. If the result is not a valid JSON value, the resulting value is quoted and treated as a standard string. A single parameter can be referenced multiple times within a template and it can be referenced using both substitution syntaxes within a single template. A default value can be provided, which is used if you do not supply a different value: The following is an example of setting an explicit value as the default value: parameters: - name: USERNAME description: "The user name for Joe" value: joe Parameter values can also be generated based on rules specified in the parameter definition, for example generating a parameter value: parameters: - name: PASSWORD description: "The random user password" generate: expression from: "[a-zA-Z0-9]{12}" In the example, processing generates a random password 12 characters long consisting of all upper and lowercase alphabet letters and numbers. The syntax available is not a full regular expression syntax. However, you can use \w , \d , \a , and \A modifiers: [\w]{10} produces 10 alphabet characters, numbers, and underscores. This follows the PCRE standard and is equal to [a-zA-Z0-9_]{10} . [\d]{10} produces 10 numbers. This is equal to [0-9]{10} . [\a]{10} produces 10 alphabetical characters. This is equal to [a-zA-Z]{10} . [\A]{10} produces 10 punctuation or symbol characters. This is equal to [~!@#USD%\^&*()\-_+={}\[\]\\|<,>.?/"';:`]{10} . Note Depending on if the template is written in YAML or JSON, and the type of string that the modifier is embedded within, you might need to escape the backslash with a second backslash. 
The following examples are equivalent: Example YAML template with a modifier parameters: - name: singlequoted_example generate: expression from: '[\A]{10}' - name: doublequoted_example generate: expression from: "[\\A]{10}" Example JSON template with a modifier { "parameters": [ { "name": "json_example", "generate": "expression", "from": "[\\A]{10}" } ] } Here is an example of a full template with parameter definitions and references: kind: Template apiVersion: template.openshift.io/v1 metadata: name: my-template objects: - kind: BuildConfig apiVersion: build.openshift.io/v1 metadata: name: cakephp-mysql-example annotations: description: Defines how to build the application spec: source: type: Git git: uri: "USD{SOURCE_REPOSITORY_URL}" 1 ref: "USD{SOURCE_REPOSITORY_REF}" contextDir: "USD{CONTEXT_DIR}" - kind: DeploymentConfig apiVersion: apps.openshift.io/v1 metadata: name: frontend spec: replicas: "USD{{REPLICA_COUNT}}" 2 parameters: - name: SOURCE_REPOSITORY_URL 3 displayName: Source Repository URL 4 description: The URL of the repository with your application source code 5 value: https://github.com/sclorg/cakephp-ex.git 6 required: true 7 - name: GITHUB_WEBHOOK_SECRET description: A secret string used to configure the GitHub webhook generate: expression 8 from: "[a-zA-Z0-9]{40}" 9 - name: REPLICA_COUNT description: Number of replicas to run value: "2" required: true message: "... The GitHub webhook secret is USD{GITHUB_WEBHOOK_SECRET} ..." 10 1 This value is replaced with the value of the SOURCE_REPOSITORY_URL parameter when the template is instantiated. 2 This value is replaced with the unquoted value of the REPLICA_COUNT parameter when the template is instantiated. 3 The name of the parameter. This value is used to reference the parameter within the template. 4 The user-friendly name for the parameter. This is displayed to users. 5 A description of the parameter. Provide more detailed information for the purpose of the parameter, including any constraints on the expected value. Descriptions should use complete sentences to follow the console's text standards. Do not make this a duplicate of the display name. 6 A default value for the parameter which is used if you do not override the value when instantiating the template. Avoid using default values for things like passwords, instead use generated parameters in combination with secrets. 7 Indicates this parameter is required, meaning you cannot override it with an empty value. If the parameter does not provide a default or generated value, you must supply a value. 8 A parameter which has its value generated. 9 The input to the generator. In this case, the generator produces a 40 character alphanumeric value including upper and lowercase characters. 10 Parameters can be included in the template message. This informs you about generated values. 3.1.6.4. Writing the template object list The main portion of the template is the list of objects which is created when the template is instantiated. This can be any valid API object, such as a build configuration, deployment configuration, or service. The object is created exactly as defined here, with any parameter values substituted in prior to creation. The definition of these objects can reference parameters defined earlier. 
The following is an example of an object list: kind: "Template" apiVersion: "v1" metadata: name: my-template objects: - kind: "Service" 1 apiVersion: "v1" metadata: name: "cakephp-mysql-example" annotations: description: "Exposes and load balances the application pods" spec: ports: - name: "web" port: 8080 targetPort: 8080 selector: name: "cakephp-mysql-example" 1 The definition of a service, which is created by this template. Note If an object definition metadata includes a fixed namespace field value, the field is stripped out of the definition during template instantiation. If the namespace field contains a parameter reference, normal parameter substitution is performed and the object is created in whatever namespace the parameter substitution resolved the value to, assuming the user has permission to create objects in that namespace. 3.1.6.5. Marking a template as bindable The Template Service Broker advertises one service in its catalog for each template object of which it is aware. By default, each of these services is advertised as being bindable, meaning an end user is permitted to bind against the provisioned service. Procedure Template authors can prevent end users from binding against services provisioned from a given template. Prevent end user from binding against services provisioned from a given template by adding the annotation template.openshift.io/bindable: "false" to the template. 3.1.6.6. Exposing template object fields Template authors can indicate that fields of particular objects in a template should be exposed. The Template Service Broker recognizes exposed fields on ConfigMap , Secret , Service , and Route objects, and returns the values of the exposed fields when a user binds a service backed by the broker. To expose one or more fields of an object, add annotations prefixed by template.openshift.io/expose- or template.openshift.io/base64-expose- to the object in the template. Each annotation key, with its prefix removed, is passed through to become a key in a bind response. Each annotation value is a Kubernetes JSONPath expression, which is resolved at bind time to indicate the object field whose value should be returned in the bind response. Note Bind response key-value pairs can be used in other parts of the system as environment variables. Therefore, it is recommended that every annotation key with its prefix removed should be a valid environment variable name - beginning with a character A-Z , a-z , or _ , and being followed by zero or more characters A-Z , a-z , 0-9 , or _ . Note Unless escaped with a backslash, Kubernetes' JSONPath implementation interprets characters such as . , @ , and others as metacharacters, regardless of their position in the expression. Therefore, for example, to refer to a ConfigMap datum named my.key , the required JSONPath expression would be {.data['my\.key']} . Depending on how the JSONPath expression is then written in YAML, an additional backslash might be required, for example "{.data['my\\.key']}" . 
The following is an example of different objects' fields being exposed: kind: Template apiVersion: template.openshift.io/v1 metadata: name: my-template objects: - kind: ConfigMap apiVersion: v1 metadata: name: my-template-config annotations: template.openshift.io/expose-username: "{.data['my\\.username']}" data: my.username: foo - kind: Secret apiVersion: v1 metadata: name: my-template-config-secret annotations: template.openshift.io/base64-expose-password: "{.data['password']}" stringData: password: <password> - kind: Service apiVersion: v1 metadata: name: my-template-service annotations: template.openshift.io/expose-service_ip_port: "{.spec.clusterIP}:{.spec.ports[?(.name==\"web\")].port}" spec: ports: - name: "web" port: 8080 - kind: Route apiVersion: route.openshift.io/v1 metadata: name: my-template-route annotations: template.openshift.io/expose-uri: "http://{.spec.host}{.spec.path}" spec: path: mypath An example response to a bind operation given the above partial template follows: { "credentials": { "username": "foo", "password": "YmFy", "service_ip_port": "172.30.12.34:8080", "uri": "http://route-test.router.default.svc.cluster.local/mypath" } } Procedure Use the template.openshift.io/expose- annotation to return the field value as a string. This is convenient, although it does not handle arbitrary binary data. If you want to return binary data, use the template.openshift.io/base64-expose- annotation instead to base64 encode the data before it is returned. 3.1.6.7. Waiting for template readiness Template authors can indicate that certain objects within a template should be waited for before a template instantiation by the service catalog, Template Service Broker, or TemplateInstance API is considered complete. To use this feature, mark one or more objects of kind Build , BuildConfig , Deployment , DeploymentConfig , Job , or StatefulSet in a template with the following annotation: "template.alpha.openshift.io/wait-for-ready": "true" Template instantiation is not complete until all objects marked with the annotation report ready. Similarly, if any of the annotated objects report failed, or if the template fails to become ready within a fixed timeout of one hour, the template instantiation fails. For the purposes of instantiation, readiness and failure of each object kind are defined as follows: Kind Readiness Failure Build Object reports phase complete. Object reports phase canceled, error, or failed. BuildConfig Latest associated build object reports phase complete. Latest associated build object reports phase canceled, error, or failed. Deployment Object reports new replica set and deployment available. This honors readiness probes defined on the object. Object reports progressing condition as false. DeploymentConfig Object reports new replication controller and deployment available. This honors readiness probes defined on the object. Object reports progressing condition as false. Job Object reports completion. Object reports that one or more failures have occurred. StatefulSet Object reports all replicas ready. This honors readiness probes defined on the object. Not applicable. The following is an example template extract, which uses the wait-for-ready annotation. Further examples can be found in the Red Hat OpenShift Service on AWS quick start templates. kind: Template apiVersion: template.openshift.io/v1 metadata: name: my-template objects: - kind: BuildConfig apiVersion: build.openshift.io/v1 metadata: name: ... 
annotations: # wait-for-ready used on BuildConfig ensures that template instantiation # will fail immediately if build fails template.alpha.openshift.io/wait-for-ready: "true" spec: ... - kind: DeploymentConfig apiVersion: apps.openshift.io/v1 metadata: name: ... annotations: template.alpha.openshift.io/wait-for-ready: "true" spec: ... - kind: Service apiVersion: v1 metadata: name: ... spec: ... Additional recommendations Set memory, CPU, and storage default sizes to make sure your application is given enough resources to run smoothly. Avoid referencing the latest tag from images if that tag is used across major versions. This can cause running applications to break when new images are pushed to that tag. A good template builds and deploys cleanly without requiring modifications after the template is deployed. 3.1.6.8. Creating a template from existing objects Rather than writing an entire template from scratch, you can export existing objects from your project in YAML form, and then modify the YAML from there by adding parameters and other customizations as template form. Procedure Export objects in a project in YAML form: USD oc get -o yaml all > <yaml_filename> You can also substitute a particular resource type or multiple resources instead of all . Run oc get -h for more examples. The object types included in oc get -o yaml all are: BuildConfig Build DeploymentConfig ImageStream Pod ReplicationController Route Service Note Using the all alias is not recommended because the contents might vary across different clusters and versions. Instead, specify all required resources. 3.2. Creating applications by using the Developer perspective The Developer perspective in the web console provides you the following options from the +Add view to create applications and associated services and deploy them on Red Hat OpenShift Service on AWS: Getting started resources : Use these resources to help you get started with Developer Console. You can choose to hide the header using the Options menu . Creating applications using samples : Use existing code samples to get started with creating applications on the Red Hat OpenShift Service on AWS. Build with guided documentation : Follow the guided documentation to build applications and familiarize yourself with key concepts and terminologies. Explore new developer features : Explore the new features and resources within the Developer perspective. Developer catalog : Explore the Developer Catalog to select the required applications, services, or source to image builders, and then add it to your project. All Services : Browse the catalog to discover services across Red Hat OpenShift Service on AWS. Database : Select the required database service and add it to your application. Operator Backed : Select and deploy the required Operator-managed service. Helm chart : Select the required Helm chart to simplify deployment of applications and services. Devfile : Select a devfile from the Devfile registry to declaratively define a development environment. Event Source : Select an event source to register interest in a class of events from a particular system. Note The Managed services option is also available if the RHOAS Operator is installed. Git repository : Import an existing codebase, Devfile, or Dockerfile from your Git repository using the From Git , From Devfile , or From Dockerfile options respectively, to build and deploy an application on Red Hat OpenShift Service on AWS. 
Container images : Use existing images from an image stream or registry to deploy it on to the Red Hat OpenShift Service on AWS. Pipelines : Use Tekton pipeline to create CI/CD pipelines for your software delivery process on the Red Hat OpenShift Service on AWS. Serverless : Explore the Serverless options to create, build, and deploy stateless and serverless applications on the Red Hat OpenShift Service on AWS. Channel : Create a Knative channel to create an event forwarding and persistence layer with in-memory and reliable implementations. Samples : Explore the available sample applications to create, build, and deploy an application quickly. Quick Starts : Explore the quick start options to create, import, and run applications with step-by-step instructions and tasks. From Local Machine : Explore the From Local Machine tile to import or upload files on your local machine for building and deploying applications easily. Import YAML : Upload a YAML file to create and define resources for building and deploying applications. Upload JAR file : Upload a JAR file to build and deploy Java applications. Share my Project : Use this option to add or remove users to a project and provide accessibility options to them. Helm Chart repositories : Use this option to add Helm Chart repositories in a namespace. Re-ordering of resources : Use these resources to re-order pinned resources added to your navigation pane. The drag-and-drop icon is displayed on the left side of the pinned resource when you hover over it in the navigation pane. The dragged resource can be dropped only in the section where it resides. Note that the Pipelines option is displayed only when the OpenShift Pipelines Operator is installed. 3.2.1. Prerequisites To create applications using the Developer perspective ensure that: You have logged in to the web console. 3.2.2. Creating sample applications You can use the sample applications in the +Add flow of the Developer perspective to create, build, and deploy applications quickly. Prerequisites You have logged in to the Red Hat OpenShift Service on AWS web console and are in the Developer perspective. Procedure In the +Add view, click the Samples tile to see the Samples page. On the Samples page, select one of the available sample applications to see the Create Sample Application form. In the Create Sample Application Form : In the Name field, the deployment name is displayed by default. You can modify this name as required. In the Builder Image Version , a builder image is selected by default. You can modify this image version by using the Builder Image Version drop-down list. A sample Git repository URL is added by default. Click Create to create the sample application. The build status of the sample application is displayed on the Topology view. After the sample application is created, you can see the deployment added to the application. 3.2.3. Creating applications by using Quick Starts The Quick Starts page shows you how to create, import, and run applications on Red Hat OpenShift Service on AWS, with step-by-step instructions and tasks. Prerequisites You have logged in to the Red Hat OpenShift Service on AWS web console and are in the Developer perspective. Procedure In the +Add view, click the Getting Started resources Build with guided documentation View all quick starts link to view the Quick Starts page. In the Quick Starts page, click the tile for the quick start that you want to use. Click Start to begin the quick start. Perform the steps that are displayed. 3.2.4. 
Importing a codebase from Git to create an application You can use the Developer perspective to create, build, and deploy an application on Red Hat OpenShift Service on AWS using an existing codebase in GitHub. The following procedure walks you through the From Git option in the Developer perspective to create an application. Procedure In the +Add view, click From Git in the Git Repository tile to see the Import from git form. In the Git section, enter the Git repository URL for the codebase you want to use to create an application. For example, enter the URL of this sample Node.js application https://github.com/sclorg/nodejs-ex . The URL is then validated. Optional: You can click Show Advanced Git Options to add details such as: Git Reference to point to code in a specific branch, tag, or commit to be used to build the application. Context Dir to specify the subdirectory for the application source code you want to use to build the application. Source Secret to create a Secret Name with credentials for pulling your source code from a private repository. Optional: You can import a Devfile , a Dockerfile , Builder Image , or a Serverless Function through your Git repository to further customize your deployment. If your Git repository contains a Devfile , a Dockerfile , a Builder Image , or a func.yaml , it is automatically detected and populated on the respective path fields. If a Devfile , a Dockerfile , or a Builder Image are detected in the same repository, the Devfile is selected by default. If func.yaml is detected in the Git repository, the Import Strategy changes to Serverless Function . Alternatively, you can create a serverless function by clicking Create Serverless function in the +Add view using the Git repository URL. To edit the file import type and select a different strategy, click Edit import strategy option. If multiple Devfiles , Dockerfiles , or Builder Images are detected, to import a specific instance, specify the respective paths relative to the context directory. After the Git URL is validated, the recommended builder image is selected and marked with a star. If the builder image is not auto-detected, select a builder image. For the https://github.com/sclorg/nodejs-ex Git URL, by default the Node.js builder image is selected. Optional: Use the Builder Image Version drop-down to specify a version. Optional: Use the Edit import strategy to select a different strategy. Optional: For the Node.js builder image, use the Run command field to override the command to run the application. In the General section: In the Application field, enter a unique name for the application grouping, for example, myapp . Ensure that the application name is unique in a namespace. The Name field to identify the resources created for this application is automatically populated based on the Git repository URL if there are no existing applications. If there are existing applications, you can choose to deploy the component within an existing application, create a new application, or keep the component unassigned. Note The resource name must be unique in a namespace. Modify the resource name if you get an error. In the Resources section, select: Deployment , to create an application in plain Kubernetes style. Deployment Config , to create a Red Hat OpenShift Service on AWS style application. In the Pipelines section, select Add Pipeline , and then click Show Pipeline Visualization to see the pipeline for the application.
A default pipeline is selected, but you can choose the pipeline you want from the list of available pipelines for the application. Note The Add pipeline checkbox is checked and Configure PAC is selected by default if the following criteria are fulfilled: Pipeline operator is installed pipelines-as-code is enabled .tekton directory is detected in the Git repository Add a webhook to your repository. If Configure PAC is checked and the GitHub App is set up, you can see the Use GitHub App and Setup a webhook options. If GitHub App is not set up, you can only see the Setup a webhook option: Go to Settings Webhooks and click Add webhook . Set the Payload URL to the Pipelines as Code controller public URL. Select the content type as application/json . Add a webhook secret and note it in an alternate location. With openssl installed on your local machine, generate a random secret. Click Let me select individual events and select these events: Commit comments , Issue comments , Pull request , and Pushes . Click Add webhook . Optional: In the Advanced Options section, the Target port and the Create a route to the application are selected by default so that you can access your application using a publicly available URL. If your application does not expose its data on the default public port, 80, clear the check box, and set the target port number you want to expose. Optional: You can use the following advanced options to further customize your application: Routing By clicking the Routing link, you can perform the following actions: Customize the hostname for the route. Specify the path the router watches. Select the target port for the traffic from the drop-down list. Secure your route by selecting the Secure Route check box. Select the required TLS termination type and set a policy for insecure traffic from the respective drop-down lists. Note For serverless applications, the Knative service manages all the routing options above. However, you can customize the target port for traffic, if required. If the target port is not specified, the default port of 8080 is used. Health Checks Click the Health Checks link to add Readiness, Liveness, and Startup probes to your application. All the probes have prepopulated default data; you can add the probes with the default data or customize it as required. To customize the health probes: Click Add Readiness Probe , if required, modify the parameters to check if the container is ready to handle requests, and select the check mark to add the probe. Click Add Liveness Probe , if required, modify the parameters to check if a container is still running, and select the check mark to add the probe. Click Add Startup Probe , if required, modify the parameters to check if the application within the container has started, and select the check mark to add the probe. For each of the probes, you can specify the request type - HTTP GET , Container Command , or TCP Socket , from the drop-down list. The form changes as per the selected request type. You can then modify the default values for the other parameters, such as the success and failure thresholds for the probe, number of seconds before performing the first probe after the container starts, frequency of the probe, and the timeout value. Build Configuration and Deployment Click the Build Configuration and Deployment links to see the respective configuration options. Some options are selected by default; you can customize them further by adding the necessary triggers and environment variables.
For serverless applications, the Deployment option is not displayed as the Knative configuration resource maintains the desired state for your deployment instead of a DeploymentConfig resource. Scaling Click the Scaling link to define the number of pods or instances of the application you want to deploy initially. If you are creating a serverless deployment, you can also configure the following settings: Min Pods determines the lower limit for the number of pods that must be running at any given time for a Knative service. This is also known as the minScale setting. Max Pods determines the upper limit for the number of pods that can be running at any given time for a Knative service. This is also known as the maxScale setting. Concurrency target determines the number of concurrent requests desired for each instance of the application at a given time. Concurrency limit determines the limit for the number of concurrent requests allowed for each instance of the application at a given time. Concurrency utilization determines the percentage of the concurrent requests limit that must be met before Knative scales up additional pods to handle additional traffic. Autoscale window defines the time window over which metrics are averaged to provide input for scaling decisions when the autoscaler is not in panic mode. A service is scaled-to-zero if no requests are received during this window. The default duration for the autoscale window is 60s . This is also known as the stable window. Resource Limit Click the Resource Limit link to set the amount of CPU and Memory resources a container is guaranteed or allowed to use when running. Labels Click the Labels link to add custom labels to your application. Click Create to create the application and a success notification is displayed. You can see the build status of the application in the Topology view. 3.2.5. Creating applications by deploying container image You can use an external image registry or an image stream tag from an internal registry to deploy an application on your cluster. Prerequisites You have logged in to the Red Hat OpenShift Service on AWS web console and are in the Developer perspective. Procedure In the +Add view, click Container images to view the Deploy Images page. In the Image section: Select Image name from external registry to deploy an image from a public or a private registry, or select Image stream tag from internal registry to deploy an image from an internal registry. Select an icon for your image in the Runtime icon tab. In the General section: In the Application name field, enter a unique name for the application grouping. In the Name field, enter a unique name to identify the resources created for this component. In the Resource type section, select the resource type to generate: Select Deployment to enable declarative updates for Pod and ReplicaSet objects. Select DeploymentConfig to define the template for a Pod object, and manage deploying new images and configuration sources. Click Create . You can view the build status of the application in the Topology view. 3.2.6. Deploying a Java application by uploading a JAR file You can use the web console Developer perspective to upload a JAR file by using the following options: Navigate to the +Add view of the Developer perspective, and click Upload JAR file in the From Local Machine tile. Browse and select your JAR file, or drag a JAR file to deploy your application. 
Navigate to the Topology view and use the Upload JAR file option, or drag a JAR file to deploy your application. Use the in-context menu in the Topology view, and then use the Upload JAR file option to upload your JAR file to deploy your application. Prerequisites The Cluster Samples Operator must be installed by a user with the dedicated-admin role. You have access to the Red Hat OpenShift Service on AWS web console and are in the Developer perspective. Procedure In the Topology view, right-click anywhere to view the Add to Project menu. Hover over the Add to Project menu to see the menu options, and then select the Upload JAR file option to see the Upload JAR file form. Alternatively, you can drag the JAR file into the Topology view. In the JAR file field, browse for the required JAR file on your local machine and upload it. Alternatively, you can drag the JAR file on to the field. A toast alert is displayed at the top right if an incompatible file type is dragged into the Topology view. A field error is displayed if an incompatible file type is dropped on the field in the upload form. The runtime icon and builder image are selected by default. If a builder image is not auto-detected, select a builder image. If required, you can change the version using the Builder Image Version drop-down list. Optional: In the Application Name field, enter a unique name for your application to use for resource labelling. In the Name field, enter a unique component name for the associated resources. Optional: Use the Resource type drop-down list to change the resource type. In the Advanced options menu, click Create a Route to the Application to configure a public URL for your deployed application. Click Create to deploy the application. A toast notification is shown to notify you that the JAR file is being uploaded. The toast notification also includes a link to view the build logs. Note If you attempt to close the browser tab while the build is running, a web alert is displayed. After the JAR file is uploaded and the application is deployed, you can view the application in the Topology view. 3.2.7. Using the Devfile registry to access devfiles You can use the devfiles in the +Add flow of the Developer perspective to create an application. The +Add flow provides a complete integration with the devfile community registry . A devfile is a portable YAML file that describes your development environment without needing to configure it from scratch. Using the Devfile registry , you can use a preconfigured devfile to create an application. Procedure Navigate to Developer Perspective +Add Developer Catalog All Services . A list of all the available services in the Developer Catalog is displayed. Under Type , click Devfiles to browse for devfiles that support a particular language or framework. Alternatively, you can use the keyword filter to search for a particular devfile using their name, tag, or description. Click the devfile you want to use to create an application. The devfile tile displays the details of the devfile, including the name, description, provider, and the documentation of the devfile. Click Create to create an application and view the application in the Topology view. 3.2.8. Using the Developer Catalog to add services or components to your application You use the Developer Catalog to deploy applications and services based on Operator backed services such as Databases, Builder Images, and Helm Charts. 
The Developer Catalog contains a collection of application components, services, event sources, or source-to-image builders that you can add to your project. Cluster administrators can customize the content made available in the catalog. Procedure In the Developer perspective, navigate to the +Add view and from the Developer Catalog tile, click All Services to view all the available services in the Developer Catalog . Under All Services , select the kind of service or the component you need to add to your project. For this example, select Databases to list all the database services and then click MariaDB to see the details for the service. Click Instantiate Template to see an automatically populated template with details for the MariaDB service, and then click Create to create and view the MariaDB service in the Topology view. Figure 3.1. MariaDB in Topology 3.2.9. Additional resources For more information about Knative routing settings for OpenShift Serverless, see Routing . For more information about domain mapping settings for OpenShift Serverless, see Configuring a custom domain for a Knative service . For more information about Knative autoscaling settings for OpenShift Serverless, see Autoscaling . For more information about adding a new user to a project, see Working with projects . For more information about creating a Helm Chart repository, see Creating Helm Chart repositories . 3.3. Creating applications from installed Operators Operators are a method of packaging, deploying, and managing a Kubernetes application. You can create applications on Red Hat OpenShift Service on AWS using Operators that have been installed by a cluster administrator. This guide walks developers through an example of creating applications from an installed Operator using the Red Hat OpenShift Service on AWS web console. 3.3.1. Creating an etcd cluster using an Operator This procedure walks through creating a new etcd cluster using the etcd Operator, managed by Operator Lifecycle Manager (OLM). Prerequisites Access to an Red Hat OpenShift Service on AWS cluster. The etcd Operator already installed cluster-wide by an administrator. Procedure Create a new project in the Red Hat OpenShift Service on AWS web console for this procedure. This example uses a project called my-etcd . Navigate to the Operators Installed Operators page. The Operators that have been installed to the cluster by the dedicated-admin and are available for use are shown here as a list of cluster service versions (CSVs). CSVs are used to launch and manage the software provided by the Operator. Tip You can get this list from the CLI using: USD oc get csv On the Installed Operators page, click the etcd Operator to view more details and available actions. As shown under Provided APIs , this Operator makes available three new resource types, including one for an etcd Cluster (the EtcdCluster resource). These objects work similar to the built-in native Kubernetes ones, such as Deployment or ReplicaSet , but contain logic specific to managing etcd. Create a new etcd cluster: In the etcd Cluster API box, click Create instance . The page allows you to make any modifications to the minimal starting template of an EtcdCluster object, such as the size of the cluster. For now, click Create to finalize. This triggers the Operator to start up the pods, services, and other components of the new etcd cluster. 
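If you prefer to verify from the command line as well, you can list the custom resource and the workloads that the Operator creates with commands similar to the following. The project name my-etcd and the cluster name example are the values used in this walkthrough, and the etcdclusters resource name comes from the CRD installed by the etcd Operator, so the exact names can differ in your environment: USD oc get etcdclusters -n my-etcd USD oc get pods,services -n my-etcd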
Click the example etcd cluster, then click the Resources tab to see that your project now contains a number of resources created and configured automatically by the Operator. Verify that a Kubernetes service has been created that allows you to access the database from other pods in your project. All users with the edit role in a given project can create, manage, and delete application instances (an etcd cluster, in this example) managed by Operators that have already been created in the project, in a self-service manner, just like a cloud service. If you want to enable additional users with this ability, project administrators can add the role using the following command: USD oc policy add-role-to-user edit <user> -n <target_project> You now have an etcd cluster that will react to failures and rebalance data as pods become unhealthy or are migrated between nodes in the cluster. Most importantly, dedicated-admins or developers with proper access can now easily use the database with their applications. 3.4. Creating applications by using the CLI You can create an Red Hat OpenShift Service on AWS application from components that include source or binary code, images, and templates by using the Red Hat OpenShift Service on AWS CLI. The set of objects created by new-app depends on the artifacts passed as input: source repositories, images, or templates. 3.4.1. Creating an application from source code With the new-app command you can create applications from source code in a local or remote Git repository. The new-app command creates a build configuration, which itself creates a new application image from your source code. The new-app command typically also creates a Deployment object to deploy the new image, and a service to provide load-balanced access to the deployment running your image. Red Hat OpenShift Service on AWS automatically detects whether the pipeline, source, or docker build strategy should be used, and in the case of source build, detects an appropriate language builder image. 3.4.1.1. Local To create an application from a Git repository in a local directory: USD oc new-app /<path to source code> Note If you use a local Git repository, the repository must have a remote named origin that points to a URL that is accessible by the Red Hat OpenShift Service on AWS cluster. If there is no recognized remote, running the new-app command will create a binary build. 3.4.1.2. Remote To create an application from a remote Git repository: USD oc new-app https://github.com/sclorg/cakephp-ex To create an application from a private remote Git repository: USD oc new-app https://github.com/youruser/yourprivaterepo --source-secret=yoursecret Note If you use a private remote Git repository, you can use the --source-secret flag to specify an existing source clone secret that will get injected into your build config to access the repository. You can use a subdirectory of your source code repository by specifying a --context-dir flag. To create an application from a remote Git repository and a context subdirectory: USD oc new-app https://github.com/sclorg/s2i-ruby-container.git \ --context-dir=2.0/test/puma-test-app Also, when specifying a remote URL, you can specify a Git branch to use by appending #<branch_name> to the end of the URL: USD oc new-app https://github.com/openshift/ruby-hello-world.git#beta4 3.4.1.3. 
Build strategy detection Red Hat OpenShift Service on AWS automatically determines which build strategy to use by detecting certain files: If a Jenkins file exists in the root or specified context directory of the source repository when creating a new application, Red Hat OpenShift Service on AWS generates a pipeline build strategy. Note The pipeline build strategy is deprecated; consider using Red Hat OpenShift Pipelines instead. If a Dockerfile exists in the root or specified context directory of the source repository when creating a new application, Red Hat OpenShift Service on AWS generates a docker build strategy. If neither a Jenkins file nor a Dockerfile is detected, Red Hat OpenShift Service on AWS generates a source build strategy. Override the automatically detected build strategy by setting the --strategy flag to docker , pipeline , or source . USD oc new-app /home/user/code/myapp --strategy=docker Note The oc command requires that files containing build sources are available in a remote Git repository. For all source builds, you must use git remote -v . 3.4.1.4. Language detection If you use the source build strategy, new-app attempts to determine the language builder to use by the presence of certain files in the root or specified context directory of the repository: Table 3.1. Languages detected by new-app Language Files jee pom.xml nodejs app.json , package.json perl cpanfile , index.pl php composer.json , index.php python requirements.txt , setup.py ruby Gemfile , Rakefile , config.ru scala build.sbt golang Godeps , main.go After a language is detected, new-app searches the Red Hat OpenShift Service on AWS server for image stream tags that have a supports annotation matching the detected language, or an image stream that matches the name of the detected language. If a match is not found, new-app searches the Docker Hub registry for an image that matches the detected language based on name. You can override the image the builder uses for a particular source repository by specifying the image, either an image stream or container specification, and the repository with a ~ as a separator. Note that if this is done, build strategy detection and language detection are not carried out. For example, to use the myproject/my-ruby imagestream with the source in a remote repository: USD oc new-app myproject/my-ruby~https://github.com/openshift/ruby-hello-world.git To use the openshift/ruby-20-centos7:latest container image stream with the source in a local repository: USD oc new-app openshift/ruby-20-centos7:latest~/home/user/code/my-ruby-app Note Language detection requires the Git client to be locally installed so that your repository can be cloned and inspected. If Git is not available, you can avoid the language detection step by specifying the builder image to use with your repository with the <image>~<repository> syntax. The -i <image> <repository> invocation requires that new-app attempt to clone repository to determine what type of artifact it is, so this will fail if Git is not available. The -i <image> --code <repository> invocation requires new-app clone repository to determine whether image should be used as a builder for the source code, or deployed separately, as in the case of a database image. 3.4.2. Creating an application from an image You can deploy an application from an existing image. Images can come from image streams in the Red Hat OpenShift Service on AWS server, images in a specific registry, or images in the local Docker server. 
The new-app command attempts to determine the type of image specified in the arguments passed to it. However, you can explicitly tell new-app whether the image is a container image using the --docker-image argument or an image stream using the -i|--image-stream argument. Note If you specify an image from your local Docker repository, you must ensure that the same image is available to the Red Hat OpenShift Service on AWS cluster nodes. 3.4.2.1. Docker Hub MySQL image Create an application from the Docker Hub MySQL image, for example: USD oc new-app mysql 3.4.2.2. Image in a private registry Create an application using an image in a private registry, specify the full container image specification: USD oc new-app myregistry:5000/example/myimage 3.4.2.3. Existing image stream and optional image stream tag Create an application from an existing image stream and optional image stream tag: USD oc new-app my-stream:v1 3.4.3. Creating an application from a template You can create an application from a previously stored template or from a template file, by specifying the name of the template as an argument. For example, you can store a sample application template and use it to create an application. Upload an application template to your current project's template library. The following example uploads an application template from a file called examples/sample-app/application-template-stibuild.json : USD oc create -f examples/sample-app/application-template-stibuild.json Then create a new application by referencing the application template. In this example, the template name is ruby-helloworld-sample : USD oc new-app ruby-helloworld-sample To create a new application by referencing a template file in your local file system, without first storing it in Red Hat OpenShift Service on AWS, use the -f|--file argument. For example: USD oc new-app -f examples/sample-app/application-template-stibuild.json 3.4.3.1. Template parameters When creating an application based on a template, use the -p|--param argument to set parameter values that are defined by the template: USD oc new-app ruby-helloworld-sample \ -p ADMIN_USERNAME=admin -p ADMIN_PASSWORD=mypassword You can store your parameters in a file, then use that file with --param-file when instantiating a template. If you want to read the parameters from standard input, use --param-file=- . The following is an example file called helloworld.params : ADMIN_USERNAME=admin ADMIN_PASSWORD=mypassword Reference the parameters in the file when instantiating a template: USD oc new-app ruby-helloworld-sample --param-file=helloworld.params 3.4.4. Modifying application creation The new-app command generates Red Hat OpenShift Service on AWS objects that build, deploy, and run the application that is created. Normally, these objects are created in the current project and assigned names that are derived from the input source repositories or the input images. However, with new-app you can modify this behavior. Table 3.2. new-app output objects Object Description BuildConfig A BuildConfig object is created for each source repository that is specified in the command line. The BuildConfig object specifies the strategy to use, the source location, and the build output location. ImageStreams For the BuildConfig object, two image streams are usually created. One represents the input image. With source builds, this is the builder image. With Docker builds, this is the FROM image. The second one represents the output image. 
If a container image was specified as input to new-app , then an image stream is created for that image as well. DeploymentConfig A DeploymentConfig object is created either to deploy the output of a build, or a specified image. The new-app command creates emptyDir volumes for all Docker volumes that are specified in containers included in the resulting DeploymentConfig object . Service The new-app command attempts to detect exposed ports in input images. It uses the lowest numeric exposed port to generate a service that exposes that port. To expose a different port, after new-app has completed, simply use the oc expose command to generate additional services. Other Other objects can be generated when instantiating templates, according to the template. 3.4.4.1. Specifying environment variables When generating applications from a template, source, or an image, you can use the -e|--env argument to pass environment variables to the application container at run time: USD oc new-app openshift/postgresql-92-centos7 \ -e POSTGRESQL_USER=user \ -e POSTGRESQL_DATABASE=db \ -e POSTGRESQL_PASSWORD=password The variables can also be read from file using the --env-file argument. The following is an example file called postgresql.env : POSTGRESQL_USER=user POSTGRESQL_DATABASE=db POSTGRESQL_PASSWORD=password Read the variables from the file: USD oc new-app openshift/postgresql-92-centos7 --env-file=postgresql.env Additionally, environment variables can be given on standard input by using --env-file=- : USD cat postgresql.env | oc new-app openshift/postgresql-92-centos7 --env-file=- Note Any BuildConfig objects created as part of new-app processing are not updated with environment variables passed with the -e|--env or --env-file argument. 3.4.4.2. Specifying build environment variables When generating applications from a template, source, or an image, you can use the --build-env argument to pass environment variables to the build container at run time: USD oc new-app openshift/ruby-23-centos7 \ --build-env HTTP_PROXY=http://myproxy.net:1337/ \ --build-env GEM_HOME=~/.gem The variables can also be read from a file using the --build-env-file argument. The following is an example file called ruby.env : HTTP_PROXY=http://myproxy.net:1337/ GEM_HOME=~/.gem Read the variables from the file: USD oc new-app openshift/ruby-23-centos7 --build-env-file=ruby.env Additionally, environment variables can be given on standard input by using --build-env-file=- : USD cat ruby.env | oc new-app openshift/ruby-23-centos7 --build-env-file=- 3.4.4.3. Specifying labels When generating applications from source, images, or templates, you can use the -l|--label argument to add labels to the created objects. Labels make it easy to collectively select, configure, and delete objects associated with the application. USD oc new-app https://github.com/openshift/ruby-hello-world -l name=hello-world 3.4.4.4. Viewing the output without creation To see a dry-run of running the new-app command, you can use the -o|--output argument with a yaml or json value. You can then use the output to preview the objects that are created or redirect it to a file that you can edit. After you are satisfied, you can use oc create to create the Red Hat OpenShift Service on AWS objects. To output new-app artifacts to a file, run the following: USD oc new-app https://github.com/openshift/ruby-hello-world \ -o yaml > myapp.yaml Edit the file: USD vi myapp.yaml Create a new application by referencing the file: USD oc create -f myapp.yaml 3.4.4.5. 
Creating objects with different names Objects created by new-app are normally named after the source repository, or the image used to generate them. You can set the name of the objects produced by adding a --name flag to the command: USD oc new-app https://github.com/openshift/ruby-hello-world --name=myapp 3.4.4.6. Creating objects in a different project Normally, new-app creates objects in the current project. However, you can create objects in a different project by using the -n|--namespace argument: USD oc new-app https://github.com/openshift/ruby-hello-world -n myproject 3.4.4.7. Creating multiple objects The new-app command allows you to create multiple applications by specifying multiple parameters to new-app . Labels specified in the command line apply to all objects created by the single command. Environment variables apply to all components created from source or images. To create an application from a source repository and a Docker Hub image: USD oc new-app https://github.com/openshift/ruby-hello-world mysql Note If a source code repository and a builder image are specified as separate arguments, new-app uses the builder image as the builder for the source code repository. If this is not the intent, specify the required builder image for the source using the ~ separator. 3.4.4.8. Grouping images and source in a single pod The new-app command allows deploying multiple images together in a single pod. To specify which images to group together, use the + separator. The --group command line argument can also be used to specify the images that should be grouped together. To group the image built from a source repository with other images, specify its builder image in the group: USD oc new-app ruby+mysql To deploy an image built from source and an external image together: USD oc new-app \ ruby~https://github.com/openshift/ruby-hello-world \ mysql \ --group=ruby+mysql 3.4.4.9. Searching for images, templates, and other inputs To search for images, templates, and other inputs for the oc new-app command, add the --search and --list flags. For example, to find all of the images or templates that include PHP: USD oc new-app --search php 3.4.4.10. Setting the import mode To set the import mode when using oc new-app , add the --import-mode flag. This flag can be appended with Legacy or PreserveOriginal , which provides users the option to create image streams using a single sub-manifest, or all manifests, respectively. USD oc new-app --image=registry.redhat.io/ubi8/httpd-24:latest --import-mode=Legacy --name=test USD oc new-app --image=registry.redhat.io/ubi8/httpd-24:latest --import-mode=PreserveOriginal --name=test 3.5. Creating applications using Ruby on Rails Ruby on Rails is a web framework written in Ruby. This guide covers using Rails 4 on Red Hat OpenShift Service on AWS. Warning Go through the whole tutorial to have an overview of all the steps necessary to run your application on Red Hat OpenShift Service on AWS. If you experience a problem, try reading through the entire tutorial and then going back to your issue. It can also be useful to review your steps to ensure that all the steps were run correctly. 3.5.1. Prerequisites Basic Ruby and Rails knowledge. Locally installed version of Ruby 2.0.0+, Rubygems, Bundler. Basic Git knowledge. Running instance of Red Hat OpenShift Service on AWS 4. Make sure that an instance of Red Hat OpenShift Service on AWS is running and is available.
Also make sure that your oc CLI client is installed and the command is accessible from your command shell, so you can use it to log in using your email address and password. 3.5.2. Setting up the database Rails applications are almost always used with a database. For local development, use the PostgreSQL database. Procedure Install the database: USD sudo yum install -y postgresql postgresql-server postgresql-devel Initialize the database: USD sudo postgresql-setup initdb This command creates the /var/lib/pgsql/data directory, in which the data is stored. Start the database: USD sudo systemctl start postgresql.service When the database is running, create your rails user: USD sudo -u postgres createuser -s rails Note that the user created has no password. 3.5.3. Writing your application If you are starting your Rails application from scratch, you must install the Rails gem first. Then you can proceed with writing your application. Procedure Install the Rails gem: USD gem install rails Example output Successfully installed rails-4.3.0 1 gem installed After you install the Rails gem, create a new application with PostgreSQL as your database: USD rails new rails-app --database=postgresql Change into your new application directory: USD cd rails-app If you already have an application, make sure the pg (postgresql) gem is present in your Gemfile . If not, edit your Gemfile by adding the gem: gem 'pg' Generate a new Gemfile.lock with all your dependencies: USD bundle install In addition to using the postgresql database with the pg gem, you also must ensure that the config/database.yml is using the postgresql adapter. Make sure you update the default section in the config/database.yml file so that it looks like this: default: &default adapter: postgresql encoding: unicode pool: 5 host: localhost username: rails password: <password> Create your application's development and test databases: USD rake db:create This creates a development and a test database in your PostgreSQL server. 3.5.3.1. Creating a welcome page Since Rails 4 no longer serves a static public/index.html page in production, you must create a new root page. To have a custom welcome page, you must complete the following steps: Create a controller with an index action. Create a view page for the welcome controller index action. Create a route that serves the application's root page with the created controller and view. Rails offers a generator that completes all necessary steps for you. Procedure Run the Rails generator: USD rails generate controller welcome index All the necessary files are created. Edit line 2 in the config/routes.rb file as follows: Run the rails server to verify the page is available: USD rails server You should see your page by visiting http://localhost:3000 in your browser. If you do not see the page, check the logs that are output to your server to debug. 3.5.3.2. Configuring application for Red Hat OpenShift Service on AWS To have your application communicate with the PostgreSQL database service running in Red Hat OpenShift Service on AWS, you must edit the default section in your config/database.yml to use environment variables, which you must define later, upon the database service creation. Procedure Edit the default section in your config/database.yml with pre-defined variables as follows: Sample config/database YAML file <% user = ENV.key?("POSTGRESQL_ADMIN_PASSWORD") ? "root" : ENV["POSTGRESQL_USER"] %> <% password = ENV.key?("POSTGRESQL_ADMIN_PASSWORD") ?
ENV["POSTGRESQL_ADMIN_PASSWORD"] : ENV["POSTGRESQL_PASSWORD"] %> <% db_service = ENV.fetch("DATABASE_SERVICE_NAME","").upcase %> default: &default adapter: postgresql encoding: unicode # For details on connection pooling, see rails configuration guide # http://guides.rubyonrails.org/configuring.html#database-pooling pool: <%= ENV["POSTGRESQL_MAX_CONNECTIONS"] || 5 %> username: <%= user %> password: <%= password %> host: <%= ENV["#{db_service}_SERVICE_HOST"] %> port: <%= ENV["#{db_service}_SERVICE_PORT"] %> database: <%= ENV["POSTGRESQL_DATABASE"] %> 3.5.3.3. Storing your application in Git Building an application in Red Hat OpenShift Service on AWS usually requires that the source code be stored in a git repository, so you must install git if you do not already have it. Prerequisites Install git. Procedure Make sure you are in your Rails application directory by running the ls -1 command. The output of the command should look like: USD ls -1 Example output app bin config config.ru db Gemfile Gemfile.lock lib log public Rakefile README.rdoc test tmp vendor Run the following commands in your Rails app directory to initialize and commit your code to git: USD git init USD git add . USD git commit -m "initial commit" After your application is committed, you must push it to a remote repository. This requires a GitHub account, in which you create a new repository. Set the remote that points to your git repository: USD git remote add origin git@github.com:<namespace/repository-name>.git Push your application to your remote git repository. USD git push 3.5.4. Deploying your application to Red Hat OpenShift Service on AWS You can deploy your application to Red Hat OpenShift Service on AWS. After creating the rails-app project, you are automatically switched to the new project namespace. Deploying your application in Red Hat OpenShift Service on AWS involves three steps: Creating a database service from Red Hat OpenShift Service on AWS's PostgreSQL image. Creating a frontend service from Red Hat OpenShift Service on AWS's Ruby 2.0 builder image and your Ruby on Rails source code, which are wired with the database service. Creating a route for your application. 3.5.4.1. Creating the database service Procedure Your Rails application expects a running database service. For this service, use the PostgreSQL database image. To create the database service, use the oc new-app command. To this command, you must pass some necessary environment variables that are used inside the database container. These environment variables are required to set the username, password, and name of the database. You can change the values of these environment variables to anything you would like. The variables are as follows: POSTGRESQL_DATABASE POSTGRESQL_USER POSTGRESQL_PASSWORD Setting these variables ensures: A database exists with the specified name. A user exists with the specified name. The user can access the specified database with the specified password. Procedure Create the database service: USD oc new-app postgresql -e POSTGRESQL_DATABASE=db_name -e POSTGRESQL_USER=username -e POSTGRESQL_PASSWORD=password To also set the password for the database administrator, append the following to the command: -e POSTGRESQL_ADMIN_PASSWORD=admin_pw Watch the progress: USD oc get pods --watch 3.5.4.2. Creating the frontend service To bring your application to Red Hat OpenShift Service on AWS, you must specify a repository in which your application lives.
Procedure Create the frontend service and specify database related environment variables that were setup when creating the database service: USD oc new-app path/to/source/code --name=rails-app -e POSTGRESQL_USER=username -e POSTGRESQL_PASSWORD=password -e POSTGRESQL_DATABASE=db_name -e DATABASE_SERVICE_NAME=postgresql With this command, Red Hat OpenShift Service on AWS fetches the source code, sets up the builder, builds your application image, and deploys the newly created image together with the specified environment variables. The application is named rails-app . Verify the environment variables have been added by viewing the JSON document of the rails-app deployment config: USD oc get dc rails-app -o json You should see the following section: Example output env": [ { "name": "POSTGRESQL_USER", "value": "username" }, { "name": "POSTGRESQL_PASSWORD", "value": "password" }, { "name": "POSTGRESQL_DATABASE", "value": "db_name" }, { "name": "DATABASE_SERVICE_NAME", "value": "postgresql" } ], Check the build process: USD oc logs -f build/rails-app-1 After the build is complete, look at the running pods in Red Hat OpenShift Service on AWS: USD oc get pods You should see a line starting with myapp-<number>-<hash> , and that is your application running in Red Hat OpenShift Service on AWS. Before your application is functional, you must initialize the database by running the database migration script. There are two ways you can do this: Manually from the running frontend container: Exec into frontend container with rsh command: USD oc rsh <frontend_pod_id> Run the migration from inside the container: USD RAILS_ENV=production bundle exec rake db:migrate If you are running your Rails application in a development or test environment you do not have to specify the RAILS_ENV environment variable. By adding pre-deployment lifecycle hooks in your template. 3.5.4.3. Creating a route for your application You can expose a service to create a route for your application. Warning Ensure the hostname you specify resolves into the IP address of the router.
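For example, you can create the route by exposing the frontend service. The hostname below is only a placeholder; use a hostname that you control and that resolves to the router: USD oc expose service rails-app --hostname=www.example.com After the hostname resolves correctly, you can reach your application at that address in a browser.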
|
[
"oc create -f <filename>",
"oc create -f <filename> -n <project>",
"kind: \"ImageStream\" apiVersion: \"image.openshift.io/v1\" metadata: name: \"ruby\" creationTimestamp: null spec: tags: - name: \"2.6\" annotations: description: \"Build and run Ruby 2.6 applications\" iconClass: \"icon-ruby\" tags: \"builder,ruby\" 1 supports: \"ruby:2.6,ruby\" version: \"2.6\"",
"oc process -f <filename> -l name=otherLabel",
"oc process --parameters -f <filename>",
"oc process --parameters -n <project> <template_name>",
"oc process --parameters -n openshift rails-postgresql-example",
"NAME DESCRIPTION GENERATOR VALUE SOURCE_REPOSITORY_URL The URL of the repository with your application source code https://github.com/sclorg/rails-ex.git SOURCE_REPOSITORY_REF Set this to a branch name, tag or other ref of your repository if you are not using the default branch CONTEXT_DIR Set this to the relative path to your project if it is not in the root of your repository APPLICATION_DOMAIN The exposed hostname that will route to the Rails service rails-postgresql-example.openshiftapps.com GITHUB_WEBHOOK_SECRET A secret string used to configure the GitHub webhook expression [a-zA-Z0-9]{40} SECRET_KEY_BASE Your secret key for verifying the integrity of signed cookies expression [a-z0-9]{127} APPLICATION_USER The application user that is used within the sample application to authorize access on pages openshift APPLICATION_PASSWORD The application password that is used within the sample application to authorize access on pages secret DATABASE_SERVICE_NAME Database service name postgresql POSTGRESQL_USER database username expression user[A-Z0-9]{3} POSTGRESQL_PASSWORD database password expression [a-zA-Z0-9]{8} POSTGRESQL_DATABASE database name root POSTGRESQL_MAX_CONNECTIONS database max connections 10 POSTGRESQL_SHARED_BUFFERS database shared buffers 12MB",
"oc process -f <filename>",
"oc process <template_name>",
"oc process -f <filename> | oc create -f -",
"oc process <template> | oc create -f -",
"oc process -f my-rails-postgresql -p POSTGRESQL_USER=bob -p POSTGRESQL_DATABASE=mydatabase",
"oc process -f my-rails-postgresql -p POSTGRESQL_USER=bob -p POSTGRESQL_DATABASE=mydatabase | oc create -f -",
"cat postgres.env POSTGRESQL_USER=bob POSTGRESQL_DATABASE=mydatabase",
"oc process -f my-rails-postgresql --param-file=postgres.env",
"sed s/bob/alice/ postgres.env | oc process -f my-rails-postgresql --param-file=-",
"oc edit template <template>",
"apiVersion: template.openshift.io/v1 kind: Template metadata: name: redis-template annotations: description: \"Description\" iconClass: \"icon-redis\" tags: \"database,nosql\" objects: - apiVersion: v1 kind: Pod metadata: name: redis-master spec: containers: - env: - name: REDIS_PASSWORD value: USD{REDIS_PASSWORD} image: dockerfile/redis name: master ports: - containerPort: 6379 protocol: TCP parameters: - description: Password used for Redis authentication from: '[A-Z0-9]{8}' generate: expression name: REDIS_PASSWORD labels: redis: master",
"kind: Template apiVersion: template.openshift.io/v1 metadata: name: cakephp-mysql-example 1 annotations: openshift.io/display-name: \"CakePHP MySQL Example (Ephemeral)\" 2 description: >- An example CakePHP application with a MySQL database. For more information about using this template, including OpenShift considerations, see https://github.com/sclorg/cakephp-ex/blob/master/README.md. WARNING: Any data stored will be lost upon pod destruction. Only use this template for testing.\" 3 openshift.io/long-description: >- This template defines resources needed to develop a CakePHP application, including a build configuration, application DeploymentConfig, and database DeploymentConfig. The database is stored in non-persistent storage, so this configuration should be used for experimental purposes only. 4 tags: \"quickstart,php,cakephp\" 5 iconClass: icon-php 6 openshift.io/provider-display-name: \"Red Hat, Inc.\" 7 openshift.io/documentation-url: \"https://github.com/sclorg/cakephp-ex\" 8 openshift.io/support-url: \"https://access.redhat.com\" 9 message: \"Your admin credentials are USD{ADMIN_USERNAME}:USD{ADMIN_PASSWORD}\" 10",
"kind: \"Template\" apiVersion: \"v1\" labels: template: \"cakephp-mysql-example\" 1 app: \"USD{NAME}\" 2",
"parameters: - name: USERNAME description: \"The user name for Joe\" value: joe",
"parameters: - name: PASSWORD description: \"The random user password\" generate: expression from: \"[a-zA-Z0-9]{12}\"",
"parameters: - name: singlequoted_example generate: expression from: '[\\A]{10}' - name: doublequoted_example generate: expression from: \"[\\\\A]{10}\"",
"{ \"parameters\": [ { \"name\": \"json_example\", \"generate\": \"expression\", \"from\": \"[\\\\A]{10}\" } ] }",
"kind: Template apiVersion: template.openshift.io/v1 metadata: name: my-template objects: - kind: BuildConfig apiVersion: build.openshift.io/v1 metadata: name: cakephp-mysql-example annotations: description: Defines how to build the application spec: source: type: Git git: uri: \"USD{SOURCE_REPOSITORY_URL}\" 1 ref: \"USD{SOURCE_REPOSITORY_REF}\" contextDir: \"USD{CONTEXT_DIR}\" - kind: DeploymentConfig apiVersion: apps.openshift.io/v1 metadata: name: frontend spec: replicas: \"USD{{REPLICA_COUNT}}\" 2 parameters: - name: SOURCE_REPOSITORY_URL 3 displayName: Source Repository URL 4 description: The URL of the repository with your application source code 5 value: https://github.com/sclorg/cakephp-ex.git 6 required: true 7 - name: GITHUB_WEBHOOK_SECRET description: A secret string used to configure the GitHub webhook generate: expression 8 from: \"[a-zA-Z0-9]{40}\" 9 - name: REPLICA_COUNT description: Number of replicas to run value: \"2\" required: true message: \"... The GitHub webhook secret is USD{GITHUB_WEBHOOK_SECRET} ...\" 10",
"kind: \"Template\" apiVersion: \"v1\" metadata: name: my-template objects: - kind: \"Service\" 1 apiVersion: \"v1\" metadata: name: \"cakephp-mysql-example\" annotations: description: \"Exposes and load balances the application pods\" spec: ports: - name: \"web\" port: 8080 targetPort: 8080 selector: name: \"cakephp-mysql-example\"",
"kind: Template apiVersion: template.openshift.io/v1 metadata: name: my-template objects: - kind: ConfigMap apiVersion: v1 metadata: name: my-template-config annotations: template.openshift.io/expose-username: \"{.data['my\\\\.username']}\" data: my.username: foo - kind: Secret apiVersion: v1 metadata: name: my-template-config-secret annotations: template.openshift.io/base64-expose-password: \"{.data['password']}\" stringData: password: <password> - kind: Service apiVersion: v1 metadata: name: my-template-service annotations: template.openshift.io/expose-service_ip_port: \"{.spec.clusterIP}:{.spec.ports[?(.name==\\\"web\\\")].port}\" spec: ports: - name: \"web\" port: 8080 - kind: Route apiVersion: route.openshift.io/v1 metadata: name: my-template-route annotations: template.openshift.io/expose-uri: \"http://{.spec.host}{.spec.path}\" spec: path: mypath",
"{ \"credentials\": { \"username\": \"foo\", \"password\": \"YmFy\", \"service_ip_port\": \"172.30.12.34:8080\", \"uri\": \"http://route-test.router.default.svc.cluster.local/mypath\" } }",
"\"template.alpha.openshift.io/wait-for-ready\": \"true\"",
"kind: Template apiVersion: template.openshift.io/v1 metadata: name: my-template objects: - kind: BuildConfig apiVersion: build.openshift.io/v1 metadata: name: annotations: # wait-for-ready used on BuildConfig ensures that template instantiation # will fail immediately if build fails template.alpha.openshift.io/wait-for-ready: \"true\" spec: - kind: DeploymentConfig apiVersion: apps.openshift.io/v1 metadata: name: annotations: template.alpha.openshift.io/wait-for-ready: \"true\" spec: - kind: Service apiVersion: v1 metadata: name: spec:",
"oc get -o yaml all > <yaml_filename>",
"oc get csv",
"oc policy add-role-to-user edit <user> -n <target_project>",
"oc new-app /<path to source code>",
"oc new-app https://github.com/sclorg/cakephp-ex",
"oc new-app https://github.com/youruser/yourprivaterepo --source-secret=yoursecret",
"oc new-app https://github.com/sclorg/s2i-ruby-container.git --context-dir=2.0/test/puma-test-app",
"oc new-app https://github.com/openshift/ruby-hello-world.git#beta4",
"oc new-app /home/user/code/myapp --strategy=docker",
"oc new-app myproject/my-ruby~https://github.com/openshift/ruby-hello-world.git",
"oc new-app openshift/ruby-20-centos7:latest~/home/user/code/my-ruby-app",
"oc new-app mysql",
"oc new-app myregistry:5000/example/myimage",
"oc new-app my-stream:v1",
"oc create -f examples/sample-app/application-template-stibuild.json",
"oc new-app ruby-helloworld-sample",
"oc new-app -f examples/sample-app/application-template-stibuild.json",
"oc new-app ruby-helloworld-sample -p ADMIN_USERNAME=admin -p ADMIN_PASSWORD=mypassword",
"ADMIN_USERNAME=admin ADMIN_PASSWORD=mypassword",
"oc new-app ruby-helloworld-sample --param-file=helloworld.params",
"oc new-app openshift/postgresql-92-centos7 -e POSTGRESQL_USER=user -e POSTGRESQL_DATABASE=db -e POSTGRESQL_PASSWORD=password",
"POSTGRESQL_USER=user POSTGRESQL_DATABASE=db POSTGRESQL_PASSWORD=password",
"oc new-app openshift/postgresql-92-centos7 --env-file=postgresql.env",
"cat postgresql.env | oc new-app openshift/postgresql-92-centos7 --env-file=-",
"oc new-app openshift/ruby-23-centos7 --build-env HTTP_PROXY=http://myproxy.net:1337/ --build-env GEM_HOME=~/.gem",
"HTTP_PROXY=http://myproxy.net:1337/ GEM_HOME=~/.gem",
"oc new-app openshift/ruby-23-centos7 --build-env-file=ruby.env",
"cat ruby.env | oc new-app openshift/ruby-23-centos7 --build-env-file=-",
"oc new-app https://github.com/openshift/ruby-hello-world -l name=hello-world",
"oc new-app https://github.com/openshift/ruby-hello-world -o yaml > myapp.yaml",
"vi myapp.yaml",
"oc create -f myapp.yaml",
"oc new-app https://github.com/openshift/ruby-hello-world --name=myapp",
"oc new-app https://github.com/openshift/ruby-hello-world -n myproject",
"oc new-app https://github.com/openshift/ruby-hello-world mysql",
"oc new-app ruby+mysql",
"oc new-app ruby~https://github.com/openshift/ruby-hello-world mysql --group=ruby+mysql",
"oc new-app --search php",
"oc new-app --image=registry.redhat.io/ubi8/httpd-24:latest --import-mode=Legacy --name=test",
"oc new-app --image=registry.redhat.io/ubi8/httpd-24:latest --import-mode=PreserveOriginal --name=test",
"sudo yum install -y postgresql postgresql-server postgresql-devel",
"sudo postgresql-setup initdb",
"sudo systemctl start postgresql.service",
"sudo -u postgres createuser -s rails",
"gem install rails",
"Successfully installed rails-4.3.0 1 gem installed",
"rails new rails-app --database=postgresql",
"cd rails-app",
"gem 'pg'",
"bundle install",
"default: &default adapter: postgresql encoding: unicode pool: 5 host: localhost username: rails password: <password>",
"rake db:create",
"rails generate controller welcome index",
"root 'welcome#index'",
"rails server",
"<% user = ENV.key?(\"POSTGRESQL_ADMIN_PASSWORD\") ? \"root\" : ENV[\"POSTGRESQL_USER\"] %> <% password = ENV.key?(\"POSTGRESQL_ADMIN_PASSWORD\") ? ENV[\"POSTGRESQL_ADMIN_PASSWORD\"] : ENV[\"POSTGRESQL_PASSWORD\"] %> <% db_service = ENV.fetch(\"DATABASE_SERVICE_NAME\",\"\").upcase %> default: &default adapter: postgresql encoding: unicode # For details on connection pooling, see rails configuration guide # http://guides.rubyonrails.org/configuring.html#database-pooling pool: <%= ENV[\"POSTGRESQL_MAX_CONNECTIONS\"] || 5 %> username: <%= user %> password: <%= password %> host: <%= ENV[\"#{db_service}_SERVICE_HOST\"] %> port: <%= ENV[\"#{db_service}_SERVICE_PORT\"] %> database: <%= ENV[\"POSTGRESQL_DATABASE\"] %>",
"ls -1",
"app bin config config.ru db Gemfile Gemfile.lock lib log public Rakefile README.rdoc test tmp vendor",
"git init",
"git add .",
"git commit -m \"initial commit\"",
"git remote add origin [email protected]:<namespace/repository-name>.git",
"git push",
"oc new-app postgresql -e POSTGRESQL_DATABASE=db_name -e POSTGRESQL_USER=username -e POSTGRESQL_PASSWORD=password",
"-e POSTGRESQL_ADMIN_PASSWORD=admin_pw",
"oc get pods --watch",
"oc new-app path/to/source/code --name=rails-app -e POSTGRESQL_USER=username -e POSTGRESQL_PASSWORD=password -e POSTGRESQL_DATABASE=db_name -e DATABASE_SERVICE_NAME=postgresql",
"oc get dc rails-app -o json",
"env\": [ { \"name\": \"POSTGRESQL_USER\", \"value\": \"username\" }, { \"name\": \"POSTGRESQL_PASSWORD\", \"value\": \"password\" }, { \"name\": \"POSTGRESQL_DATABASE\", \"value\": \"db_name\" }, { \"name\": \"DATABASE_SERVICE_NAME\", \"value\": \"postgresql\" } ],",
"oc logs -f build/rails-app-1",
"oc get pods",
"oc rsh <frontend_pod_id>",
"RAILS_ENV=production bundle exec rake db:migrate"
] |
https://docs.redhat.com/en/documentation/red_hat_openshift_service_on_aws/4/html/building_applications/creating-applications
|
Chapter 2. Understanding authentication
|
Chapter 2. Understanding authentication For users to interact with Red Hat OpenShift Service on AWS, they must first authenticate to the cluster. The authentication layer identifies the user associated with requests to the Red Hat OpenShift Service on AWS API. The authorization layer then uses information about the requesting user to determine if the request is allowed. 2.1. Users A user in Red Hat OpenShift Service on AWS is an entity that can make requests to the Red Hat OpenShift Service on AWS API. An Red Hat OpenShift Service on AWS User object represents an actor which can be granted permissions in the system by adding roles to them or to their groups. Typically, this represents the account of a developer or administrator that is interacting with Red Hat OpenShift Service on AWS. Several types of users can exist: User type Description Regular users This is the way most interactive Red Hat OpenShift Service on AWS users are represented. Regular users are created automatically in the system upon first login or can be created via the API. Regular users are represented with the User object. Examples: joe alice System users Many of these are created automatically when the infrastructure is defined, mainly for the purpose of enabling the infrastructure to interact with the API securely. They include a cluster administrator (with access to everything), a per-node user, users for use by routers and registries, and various others. Finally, there is an anonymous system user that is used by default for unauthenticated requests. Examples: system:admin system:openshift-registry system:node:node1.example.com Service accounts These are special system users associated with projects; some are created automatically when the project is first created, while project administrators can create more for the purpose of defining access to the contents of each project. Service accounts are represented with the ServiceAccount object. Examples: system:serviceaccount:default:deployer system:serviceaccount:foo:builder Each user must authenticate in some way to access Red Hat OpenShift Service on AWS. API requests with no authentication or invalid authentication are authenticated as requests by the anonymous system user. After authentication, policy determines what the user is authorized to do. 2.2. Groups A user can be assigned to one or more groups , each of which represent a certain set of users. Groups are useful when managing authorization policies to grant permissions to multiple users at once, for example allowing access to objects within a project, versus granting them to users individually. In addition to explicitly defined groups, there are also system groups, or virtual groups , that are automatically provisioned by the cluster. The following default virtual groups are most important: Virtual group Description system:authenticated Automatically associated with all authenticated users. system:authenticated:oauth Automatically associated with all users authenticated with an OAuth access token. system:unauthenticated Automatically associated with all unauthenticated users. 2.3. API authentication Requests to the Red Hat OpenShift Service on AWS API are authenticated using the following methods: OAuth access tokens Obtained from the Red Hat OpenShift Service on AWS OAuth server using the <namespace_route> /oauth/authorize and <namespace_route> /oauth/token endpoints. Sent as an Authorization: Bearer... header. 
Sent as a websocket subprotocol header in the form base64url.bearer.authorization.k8s.io.<base64url-encoded-token> for websocket requests. X.509 client certificates Requires an HTTPS connection to the API server. Verified by the API server against a trusted certificate authority bundle. The API server creates and distributes certificates to controllers to authenticate themselves. Any request with an invalid access token or an invalid certificate is rejected by the authentication layer with a 401 error. If no access token or certificate is presented, the authentication layer assigns the system:anonymous virtual user and the system:unauthenticated virtual group to the request. This allows the authorization layer to determine which requests, if any, an anonymous user is allowed to make. 2.3.1. Red Hat OpenShift Service on AWS OAuth server The Red Hat OpenShift Service on AWS master includes a built-in OAuth server. Users obtain OAuth access tokens to authenticate themselves to the API. When a person requests a new OAuth token, the OAuth server uses the configured identity provider to determine the identity of the person making the request. It then determines what user that identity maps to, creates an access token for that user, and returns the token for use. 2.3.1.1. OAuth token requests Every request for an OAuth token must specify the OAuth client that will receive and use the token. The following OAuth clients are automatically created when starting the Red Hat OpenShift Service on AWS API: OAuth client Usage openshift-browser-client Requests tokens at <namespace_route>/oauth/token/request with a user-agent that can handle interactive logins. [1] openshift-challenging-client Requests tokens with a user-agent that can handle WWW-Authenticate challenges. <namespace_route> refers to the namespace route. This is found by running the following command: USD oc get route oauth-openshift -n openshift-authentication -o json | jq .spec.host All requests for OAuth tokens involve a request to <namespace_route>/oauth/authorize . Most authentication integrations place an authenticating proxy in front of this endpoint, or configure Red Hat OpenShift Service on AWS to validate credentials against a backing identity provider. Requests to <namespace_route>/oauth/authorize can come from user-agents that cannot display interactive login pages, such as the CLI. Therefore, Red Hat OpenShift Service on AWS supports authenticating using a WWW-Authenticate challenge in addition to interactive login flows. If an authenticating proxy is placed in front of the <namespace_route>/oauth/authorize endpoint, it sends unauthenticated, non-browser user-agents WWW-Authenticate challenges rather than displaying an interactive login page or redirecting to an interactive login flow. Note To prevent cross-site request forgery (CSRF) attacks against browser clients, Basic authentication challenges are only sent if an X-CSRF-Token header is present on the request. Clients that expect to receive Basic WWW-Authenticate challenges must set this header to a non-empty value. If the authenticating proxy cannot support WWW-Authenticate challenges, or if Red Hat OpenShift Service on AWS is configured to use an identity provider that does not support WWW-Authenticate challenges, you must use a browser to manually obtain a token from <namespace_route>/oauth/token/request .
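For example, after a token is obtained, a client can present it on any API request in an Authorization header. The following sketch uses curl against the current-user endpoint; the API server URL and the token are placeholders for values from your own cluster: USD curl -H "Authorization: Bearer <token>" https://<api_server>:6443/apis/user.openshift.io/v1/users/~ A valid token returns the User object for the authenticated identity, while an invalid or missing token results in the 401 or anonymous behavior described above.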
|
[
"oc get route oauth-openshift -n openshift-authentication -o json | jq .spec.host"
] |
https://docs.redhat.com/en/documentation/red_hat_openshift_service_on_aws/4/html/authentication_and_authorization/understanding-authentication
|
A.11. numastat
|
A.11. numastat The numastat tool is provided by the numactl package, and displays memory statistics (such as allocation hits and misses) for processes and the operating system on a per-NUMA-node basis. The default tracking categories for the numastat command are outlined as follows: numa_hit The number of pages that were successfully allocated to this node. numa_miss The number of pages that were allocated on this node because of low memory on the intended node. Each numa_miss event has a corresponding numa_foreign event on another node. numa_foreign The number of pages initially intended for this node that were allocated to another node instead. Each numa_foreign event has a corresponding numa_miss event on another node. interleave_hit The number of interleave policy pages successfully allocated to this node. local_node The number of pages successfully allocated on this node, by a process on this node. other_node The number of pages allocated on this node, by a process on another node. Supplying any of the following options changes the displayed units to megabytes of memory (rounded to two decimal places), and changes other specific numastat behaviors as described below. -c Horizontally condenses the displayed table of information. This is useful on systems with a large number of NUMA nodes, but column width and inter-column spacing are somewhat unpredictable. When this option is used, the amount of memory is rounded to the nearest megabyte. -m Displays system-wide memory usage information on a per-node basis, similar to the information found in /proc/meminfo . -n Displays the same information as the original numastat command ( numa_hit , numa_miss , numa_foreign , interleave_hit , local_node , and other_node ), with an updated format, using megabytes as the unit of measurement. -p pattern Displays per-node memory information for the specified pattern. If the value for pattern is comprised of digits, numastat assumes that it is a numerical process identifier. Otherwise, numastat searches process command lines for the specified pattern. Command line arguments entered after the value of the -p option are assumed to be additional patterns for which to filter. Additional patterns expand, rather than narrow, the filter. -s Sorts the displayed data in descending order so that the biggest memory consumers (according to the total column) are listed first. Optionally, you can specify a node, and the table will be sorted according to the node column. When using this option, the node value must follow the -s option immediately, as shown here: Do not include white space between the option and its value. -v Displays more verbose information. Namely, process information for multiple processes will display detailed information for each process. -V Displays numastat version information. -z Omits table rows and columns with only zero values from the displayed information. Note that some near-zero values that are rounded to zero for display purposes will not be omitted from the displayed output.
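For example, to display condensed per-node memory information, in megabytes, for every process whose command line contains qemu-kvm, you can combine the options described above; the qemu-kvm pattern is only an illustration, so substitute a process name or PID from your own system: numastat -c -p qemu-kvm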
|
[
"numastat -s2"
] |
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/performance_tuning_guide/sect-red_hat_enterprise_linux-performance_tuning_guide-tool_reference-numastat
|
Preface
|
Preface Red Hat Enterprise Linux minor releases are an aggregation of individual enhancement, security, and bug fix errata. The Red Hat Enterprise Linux 7.1 Release Notes document the major changes, features, and enhancements introduced in the Red Hat Enterprise Linux 7 operating system and its accompanying applications for this minor release. In addition, the Red Hat Enterprise Linux 7.1 Release Notes document the known issues in Red Hat Enterprise Linux 7.1. For information regarding the Red Hat Enterprise Linux life cycle, refer to https://access.redhat.com/support/policy/updates/errata/ .
| null |
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/7.1_release_notes/pref-red_hat_enterprise_linux-7.1_release_notes-preface
|
About
|
About OpenShift Container Platform 4.14 Introduction to OpenShift Container Platform Red Hat OpenShift Documentation Team
| null |
https://docs.redhat.com/en/documentation/openshift_container_platform/4.14/html/about/index
|
Chapter 1. Introduction
|
Chapter 1. Introduction Directory Server is based on an open-systems server protocol called the Lightweight Directory Access Protocol (LDAP). The Directory Server is a robust, scalable server designed to manage large scale directories to support an enterprise-wide directory of users and resources, extranets, and e-commerce applications over the Internet. The Directory Server runs as the ns-slapd process or service on the machine. The server manages the directory databases and responds to client requests. Most Directory Server administrative tasks can be performed through the Directory Server Console, the graphical user interface provided with the Directory Server. For information on the use of the Directory Server Console, see the Red Hat Directory Server Administration Guide . This reference deals with the other methods of managing the Directory Server by altering the server configuration attributes using the command line and using command-line utilities and scripts. 1.1. Directory Server Configuration The format and method for storing configuration information for Directory Server and a listing for all server attributes are found in two chapters, Chapter 3, Core Server Configuration Reference and Chapter 4, Plug-in Implemented Server Functionality Reference . 1.2. Directory Server Instance File Reference Section 2.1, "Directory Server Instance-independent Files and Directories" has an overview of the files and configuration information stored in each instance of Directory Server. This is a useful reference that helps administrators understand the changes or absence of changes in the course of directory activity. From a security standpoint, this also helps users detect errors and intrusion by highlighting normal changes and abnormal behavior. 1.3. Using Directory Server Command-Line Utilities Directory Server comes with a set of configurable command-line utilities that can search and modify entries in the directory and administer the server. Chapter 9, Command-Line Utilities describes these command-line utilities and contains information on where the utilities are stored and how to access them.
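As a brief illustration (a hedged sketch; the instance name is a placeholder, not a value from this reference), you can confirm from the command line that the server process is running:
# Check for the ns-slapd process
ps -ef | grep ns-slapd
# Check the systemd service for a specific Directory Server instance
systemctl status dirsrv@instance_name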
| null |
https://docs.redhat.com/en/documentation/red_hat_directory_server/11/html/configuration_command_and_file_reference/introduction
|
15.3. The Virtual Hardware Details Window
|
15.3. The Virtual Hardware Details Window The virtual hardware details window displays information about the virtual hardware configured for the guest. Virtual hardware resources can be added, removed and modified in this window. To access the virtual hardware details window, click on the icon in the toolbar. Figure 15.3. The virtual hardware details icon Clicking the icon displays the virtual hardware details window. Figure 15.4. The virtual hardware details window 15.3.1. Attaching USB Devices to a Guest Virtual Machine Note In order to attach the USB device to the guest virtual machine, you must first attach it to the host physical machine and confirm that the device is working. If the guest is running, you need to shut it down before proceeding. Procedure 15.1. Attaching USB Devices using Virt-Manager Open the guest virtual machine's Virtual Machine Details screen. Click Add Hardware . Figure 15.5. Add Hardware Button In the Add New Virtual Hardware popup, select USB Host Device , select the device you want to attach from the list, and click Finish . Figure 15.6. Add USB Device To use the USB device in the guest virtual machine, start the guest virtual machine.
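As an alternative to the Virt-Manager steps above, a hedged command-line sketch using virsh is shown below; the guest name and the USB vendor and product IDs are placeholders that you would replace with values taken from lsusb output:
# usb-device.xml -- describes the host USB device to pass to the guest
<hostdev mode='subsystem' type='usb' managed='yes'>
  <source>
    <vendor id='0x1234'/>
    <product id='0xabcd'/>
  </source>
</hostdev>
# Attach the device so that it is present the next time the guest starts
virsh attach-device guest1 usb-device.xml --config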
| null |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/virtualization_administration_guide/sect-Virtualization-Managing_guests_with_the_Virtual_Machine_Manager_virt_manager-The_Virtual_Machine_Manager_details_window_
|
Understanding OpenShift GitOps
|
Understanding OpenShift GitOps Red Hat OpenShift GitOps 1.15 Introduction to OpenShift GitOps Red Hat OpenShift Documentation Team
| null |
https://docs.redhat.com/en/documentation/red_hat_openshift_gitops/1.15/html/understanding_openshift_gitops/index
|
6.10. Red Hat Virtualization 4.4 Batch Update 4 (ovirt-4.4.5)
|
6.10. Red Hat Virtualization 4.4 Batch Update 4 (ovirt-4.4.5) 6.10.1. Bug Fix These bugs were fixed in this release of Red Hat Virtualization: BZ# 1145658 This release allows the proper removal of a storage domain containing memory dumps, either by moving the memory dumps to another storage domain or deleting the memory dumps from the snapshot. BZ# 1815589 Previously, following a successful migration on the Self-hosted Engine, the HA agent on the source host immediately moved to the state EngineDown, and shortly thereafter tried to start the engine locally if the destination host didn't update the shared storage quickly enough, marking the Manager virtual machine as being up. As a result, starting the virtual machine failed due to a shared lock held by the destination host. This also resulted in generating false alarms and notifications. In this release, the HA agent first moves to the state EngineMaybeAway, providing the destination host more time to update the shared storage with the updated state. As a result, no notifications or false alarms are generated. Note: in scenarios where the virtual machine needs to be started on the source host, this fix slightly increases the time it takes the Manager virtual machine on the source host to start. BZ# 1860492 Previously, if the Seal option was used when creating a template for Linux virtual machines, the original host name was not removed from the template. In this release, the host name is set to localhost or the new virtual machine host name. BZ# 1895217 Previously, after a host that virtual machines were pinned to was removed, the Manager failed to start. As a result, the setup of the self-hosted engine failed. In this release, when a host is removed, virtual machines no longer remain pinned to that host and the Manager can start successfully. BZ# 1905108 Previously, plugging several virtual disks to a running virtual machine over a short time interval could cause a failure to plug some of the disks, and issued an error message: "Domain already contains a disk with that address". In this release, this is avoided by making sure that a disk that is being plugged to a running virtual machine is not assigned with an address that has already been assigned to another disk that was previously plugged to the virtual machine. BZ# 1916032 Previously, if a host in the Self-hosted Engine had an ID number higher than 64, other hosts did not recognize that host, and the host did not appear in 'hosted-engine --vm-status'. In this release, the Self-hosted Engine allows host ID numbers of up to 2000. BZ# 1916519 Previously, the used memory of the host didn't take the SReclaimable memory into consideration while it did for free memory. As a result, there were discrepancies in the host statistics. In this release, the SReclaimable memory is a part of the used memory calculation. BZ# 1921119 Previously, a cluster page indicated an out-of-sync cluster when in fact all networks were in sync. This was due to a logical error in the code when a host QoS was assigned to two networks on the same host. In this release, the cluster page does not show out-of-sync for this setup. BZ# 1931786 Previously, the Red Hat Virtualization Manager missed the SkuToAVLevel configuration for 4.5 clusters. In this release, the SkuToAVLevel is available for these clusters and allows Windows updates to update Red Hat related drivers for the guest host. 
BZ# 1940672 Previously, when Red Hat Virtualization Manager 4.4.3+ upgraded a host in a cluster that is set with Skylake/Cascadelake CPU type and compatibility level 4.4 (or lower), the host could become non-operational. In this release, the Red Hat Virtualization Manager blocks the upgrade of a host when the cluster is set with a secured Skylake/Cascadelake CPU type 1 (Secure Intel Skylake Client Family, Secure Intel Skylake Server Family, or Secure Intel Cascadelake Server Family) where the upgrade is likely to make the host non-operational. If the cluster is set with an insecure Skylake/Cascadelake CPU type 2 (Intel Skylake Client Family, Intel Skylake Server Family, or Intel Cascadelake Server Family) the user is notified with a recommendation to change the cluster to a secure Skylake/Cascadelake CPU type, but is allowed to proceed with the host upgrade. In order to make the upgraded host operational, the user must enable TSX at the operating system level. 6.10.2. Enhancements This release of Red Hat Virtualization features the following enhancements: BZ# 1080725 Setting static IPv6 addresses on hosts is now supported. BZ# 1155275 With this update, you can synchronize a LUN's disk size on all hosts that are connected to the LUN's disk, and update its size on all running virtual machines to which it is attached. To refresh a LUN's disk size: 1. In the Administration portal, go to Compute>Virtual Machines and select a virtual machine. 2. In the Disks tab, click Refresh LUN. For connected virtual machines that are not running, update the disk on the virtual machines once they are running. BZ# 1431792 This feature allows adding emulated TPM (Trusted Platform Module) devices to Virtual Machines. TPM devices are useful in cryptographic operations (generating cryptographic keys, random numbers, hashes, etc.) or for storing data that can be used to verify software configurations securely. QEMU and libvirt implement support for emulated TPM 2.0 devices, which is what Red Hat Virtualization uses to add TPM devices to Virtual Machines. Once an emulated TPM device is added to the Virtual Machine, it can be used as a normal TPM 2.0 device in the guest OS. BZ# 1688186 Previously, the CPU and NUMA pinning were done manually or automatically only by using the REST API when adding a new virtual machine. With this update, you can update the CPU and NUMA pinning using the Administration portal and when updating a virtual machine. BZ# 1755156 In this release, it is now possible to enter a path to the OVA archive for local appliance installation using the cockpit-ovirt UI. BZ# 1836661 Previously the logical names for disks without a mounted filesystem were not displayed in the Red Hat Virtualization Manager. In this release, logical names for such disks are properly reported provided the version of QEMU Guest Agent in the virtual machine is 5.2 or higher. BZ# 1837221 Previously, the Manager was able to connect to hypervisors only using RSA public keys for SSH connection. With this update, the Manager can also use EcDSA and EdDSA public keys for SSH. Previously, RHV used only the fingerprint of an SSH public key to verify the host. Now that RHV can use EcDSA and EdDSA public keys for SSH, the whole public SSH key must be stored in the RHV database. As a result, using the fingerprint of an SSH public key is deprecated. When adding a new host to the Manager, the Manager will always use the strongest public key that the host offers, unless an administrator provides another specific public key to use. 
For existing hosts, the Manager stores the entire RSA public key in its database on the SSH connection. For example, if an administrator moves the host to maintenance mode and executes an enroll certificate or reinstalls the host, to use a different public key for the host, the administrator can provide a custom public key using the REST API or by fetching the strongest public key in the Edit host dialog in the Administration Portal. BZ# 1884233 The authz name is now used as the user domain on the RHVM (Red Hat Virtualization Manager) home page. It replaces the profile name. Additionally, several log statements related to authorization/authentication flow have been made consistent by presenting both the user authz name and the profile name where applicable. In this release, <username>@<authz name> is displayed on the home page once the user is successfully logged in to the RHVM. In addition, the log statements now contain both the authz name and the profile name as well as the username. BZ# 1899583 With this update, live updating of vNIC filter parameters is possible. When adding/deleting/editing the filter parameters of a virtual machine's vNIC in the Manager, the changes are applied immediately on the device on the virtual machine. BZ# 1910302 Previously, the storage pool manager (SPM) failed to switch to another host if the SPM had uncleared tasks. With this enhancement, a new UI menu has been added to enable cleaning up finished tasks. BZ# 1922200 Previously, records in the event_notification_hist table were erased only during regular cleanup of the audit_log table. By default, audit_log table records that are older than 30 days are removed. With this update, records in the event_notification_hist table are kept for 7 days. You can override this limit by creating a custom configuration file /etc/ovirt-engine/notifier/notifier.conf.d/history.conf with the following content: DAYS_TO_KEEP_HISTORY=<number_of_days> Where <number_of_days> is the number of days to keep records in the event_notification_hist table. After adding this file the first time or after changing this value, you need to restart the ovirt-engine-notifier service: BZ# 1927851 The timezone AUS Eastern Standard Time has been added to cover daylight saving time in Canberra, Melbourne and Sydney. 6.10.3. Technology Preview The items listed in this section are provided as Technology Previews. For further information on the scope of Technology Preview status, and the associated support implications, refer to Technology Preview Features Support Scope . BZ# 1919805 With this update, support for the Bochs display video card emulator has been added for UEFI guest machines. This feature is now the default for a guest UEFI server that uses cluster-level 4.6 or above, where BOCHS is the default value of Video Type. 6.10.4. Release Notes This section outlines important details about the release, including recommended practices and notable changes to Red Hat Virtualization. You must take this information into account to ensure the best possible outcomes for your deployment. BZ# 1917409 Red Hat Virtualization (RHV) 4.4.5+ includes Ansible within its own channels. Therefore, the ansible-2.9-for-rhel-8-x86_64-rpms channel does not need to be enabled on either the RHV Manager or RHEL-H hosts. Customers upgrading from RHV releases 4.4.0 through 4.4.4 or 4.3.z should remove that channel from their RHV Manager and RHEL-H hosts. BZ# 1921104 Ansible-2.9.17 is required for proper setup and functioning of Red Hat Virtualization Manager 4.4.5. 
BZ# 1921108 ovirt-hosted-engine-setup now requires Ansible-2.9.17. 6.10.5. Known Issues These known issues exist in Red Hat Virtualization at this time: BZ# 1923169 Limiting package subscriptions to the Ansible 2.9 channel is not required for Red Hat Virtualization 4.4.5 installation. Workaround: Remove the Ansible 2.9 channel subscription on Red Hat Virtualization Manager and Red Hat Virtualization hosts when upgrading from Red Hat Virtualization version 4.4.4 or lower.
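As a usage sketch for the notifier history retention change described in BZ# 1922200 above (the 14-day value is only an example, not a recommended setting):
# Keep event_notification_hist records for 14 days instead of the default 7
echo "DAYS_TO_KEEP_HISTORY=14" > /etc/ovirt-engine/notifier/notifier.conf.d/history.conf
systemctl restart ovirt-engine-notifier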
|
[
"systemctl restart ovirt-engine-notifier"
] |
https://docs.redhat.com/en/documentation/red_hat_virtualization/4.4/html/release_notes/red_hat_virtualization_4_4_batch_update_4_ovirt_4_4_5
|
Chapter 8. Configuring OVS TC-flower hardware offload
|
Chapter 8. Configuring OVS TC-flower hardware offload In your Red Hat OpenStack Platform (RHOSP) network functions virtualization (NFV) deployment, you can achieve higher performance with Open vSwitch (OVS) TC-flower hardware offload. Hardware offloading diverts networking tasks from the CPU to a dedicated processor on a network interface controller (NIC). These specialized hardware resources provide additional computing power that frees the CPU to perform more valuable computational tasks. Configuring RHOSP for OVS hardware offload is similar to configuring RHOSP for SR-IOV. Important This section includes examples that you must modify for your topology and functional requirements. For more information, see Hardware requirements for NFV . Prerequisites A RHOSP undercloud. You must install and configure the undercloud before you can deploy the overcloud. For more information, see Installing and managing Red Hat OpenStack Platform with director . Note RHOSP director modifies OVS hardware offload configuration files through the key-value pairs that you specify in director templates and custom environment files. You must not modify the OVS hardware offload configuration files directly. Access to the undercloud host and credentials for the stack user. Ensure that the NICs, their applications, the VF guest, and OVS reside on the same NUMA Compute node. Doing so helps to prevent performance degradation from cross-NUMA operations. Access to sudo on the hosts that contain NICs. Ensure that you keep the NIC firmware updated. Yum or dnf updates might not complete the firmware update. For more information, see your vendor documentation. Enable security groups and port security on switchdev ports for the connection tracking (conntrack) module to offload OpenFlow flows to hardware. Procedure Use RHOSP director to install and configure RHOSP in an OVS hardware offload environment. The high-level steps are: Create a network configuration file, network_data.yaml , to configure the physical network for your overcloud, by following the instructions in Configuring overcloud networking in Installing and managing Red Hat OpenStack Platform with director . Generate roles and image files . Configure PCI passthrough devices for OVS hardware offload . Add role-specific parameters and other configuration overrides . Create a bare metal nodes definition file . Create a NIC configuration template for OVS hardware offload . Provision overcloud networks and VIPs. For more information, see: Configuring and provisioning overcloud network definitions in the Installing and managing Red Hat OpenStack Platform with director guide. Configuring and provisioning network VIPs for the overcloud in the Installing and managing Red Hat OpenStack Platform with director guide. Provision overcloud bare metal nodes. For more information, see Provisioning bare metal nodes for the overcloud in the Installing and managing Red Hat OpenStack Platform with director guide. Deploy an OVS hardware offload overcloud . Additional resources Section 8.7, "Creating host aggregates in an SR-IOV or an OVS TC-flower hardware offload environment" Section 8.8, "Creating an instance in an SR-IOV or an OVS TC-flower hardware offload environment" Section 8.9, "Troubleshooting OVS TC-flower hardware offload" Section 8.10, "Debugging TC-flower hardware offload flow" 8.1. Generating roles and image files for OVS TC-flower hardware offload Red Hat OpenStack Platform (RHOSP) director uses roles to assign services to nodes. 
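You can optionally list the built-in roles that director provides before creating a custom one; a minimal, hedged check is shown below (the output varies by release):
source ~/stackrc
openstack overcloud roles list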
When configuring RHOSP in an OVS TC-flower hardware offload environment, you create a new role that is based on the default role, Compute , that is provided with your RHOSP installation. The undercloud installation requires an environment file to determine where to obtain container images and how to store them. Prerequisites Access to the undercloud host and credentials for the stack user. Procedure Log in to the undercloud as the stack user. Source the stackrc file: Generate an overcloud role for OVS hardware offload that is based on the Compute role: Example In this example, a role is created, ComputeOvsHwOffload, based on the Compute role. The roles file that the command generates is named, roles_data_compute_ovshwol.yaml : Note If your RHOSP environment includes a mix of OVS-DPDK, SR-IOV, and OVS TC-flower hardware offload technologies, you generate just one roles data file, such as roles_data.yaml to include all the roles: (Optional) change the HostnameFormatDefault: '%stackname%-compute-%index%' name for the ComputeOvsHwOffload role. To generate an images file, you run the openstack tripleo container image prepare command. The following inputs are needed: The roles data file that you generated in an earlier step, for example, roles_data_compute_ovshwol.yaml . The SR-IOV environment file appropriate for your Networking service mechanism driver: ML2/OVN environments /usr/share/openstack-tripleo-heat-templates/environments/services/neutron-ovn-sriov.yaml ML2/OVS environments /usr/share/openstack-tripleo-heat-templates/environments/services/neutron-sriov.yaml Example In this example, the overcloud_images.yaml file is being generated for an ML2/OVN environment: Note the path and file name of the roles data file and the images file that you have created. You use these files later when you deploy your overcloud. steps Proceed to Section 8.2, "Configuring PCI passthrough devices for OVS TC-flower hardware offload" . Additional resources For more information, see Composable services and custom roles in Installing and managing Red Hat OpenStack Platform with director . Preparing container images in Installing and managing Red Hat OpenStack Platform with director . 8.2. Configuring PCI passthrough devices for OVS TC-flower hardware offload When deploying Red Hat OpenStack Platform for an OVS TC-flower hardware offload environment, you must configure the PCI passthrough devices for the compute nodes in a custom environment file. Prerequisites Access to the one or more physical servers that contain the PCI cards. Access to the undercloud host and credentials for the stack user. Procedure Use one of the following commands on the physical server that contains the PCI cards: If your overcloud is deployed: Sample output If your overcloud has not been deployed: Note the vendor and product IDs for PCI passthrough devices on the ComputeOvsHwOffload nodes. You will need these IDs in a later step. Log in to the undercloud as the stack user. Source the stackrc file: Create a custom environment YAML file, for example, ovshwol-overrides.yaml . Configure the PCI passthrough devices for the compute nodes by adding the following content to the file: Note If you are using Mellanox smart NICs, add DerivePciWhitelistEnabled: true under the ComputeOvsHwOffloadParameters parameter. When using OVS hardware offload, the Compute service (nova) scheduler operates similarly to SR-IOV passthrough for instance spawning. Replace <vendor_id> with the vendor ID of the PCI device. 
Replace <product_id> with the product ID of the PCI device. Replace <NIC_address> with the address of the PCI device. Replace <physical_network> with the name of the physical network the PCI device is located on. For VLAN, set the physical_network parameter to the name of the network you create in neutron after deployment. This value should also be in NeutronBridgeMappings . For VXLAN, set the physical_network parameter to null . Note Do not use the devname parameter when you configure PCI passthrough because the device name of a NIC can change. To create a Networking service (neutron) port on a PF, specify the vendor_id , the product_id , and the PCI device address in NovaPCIPassthrough , and create the port with the --vnic-type direct-physical option. To create a Networking service port on a virtual function (VF), specify the vendor_id and product_id in NovaPCIPassthrough , and create the port with the --vnic-type direct option. The values of the vendor_id and product_id parameters might be different between physical function (PF) and VF contexts. In the custom environment file, ensure that PciPassthroughFilter and NUMATopologyFilter are in the list of filters for the NovaSchedulerEnabledFilters parameter. The Compute service (nova) uses this parameter to filter a node: Note Optional: For details on how to troubleshoot and configure OVS Hardware Offload issues in RHOSP 17.1 with Mellanox ConnectX5 NICs, see Troubleshooting Hardware Offload . Note the path and file name of the custom environment file that you have created. You use this file later when you deploy your overcloud. steps Proceed to Section 8.3, "Adding role-specific parameters and configuration overrides for OVS TC-flower hardware offload" . Additional resources Guidelines for configuring NovaPCIPassthrough in Configuring the Compute service for instance creation 8.3. Adding role-specific parameters and configuration overrides for OVS TC-flower hardware offload You can add role-specific parameters for the ComputeOvsHwOffload nodes and override default configuration values in a custom environment YAML file that Red Hat OpenStack Platform (RHOSP) director uses when deploying your OVS TC-flower hardware offload environment. Prerequisites Access to the undercloud host and credentials for the stack user. Procedure Log in to the undercloud as the stack user. Source the stackrc file: Open the custom environment YAML file that you created in Section 8.2, "Configuring PCI passthrough devices for OVS TC-flower hardware offload" , or create a new one. Add role-specific parameters for the ComputeOvsHwOffload nodes to the custom environment file. Example Add the OvsHwOffload parameter under role-specific parameters with a value of true . Review the configuration defaults that RHOSP director uses to configure OVS hardware offload. These defaults are provided in the file, and they vary based on your mechanism driver: ML2/OVN /usr/share/openstack-tripleo-heat-templates/environment/services/neutron-ovn-sriov.yaml ML2/OVS /usr/share/openstack-tripleo-heat-templates/environment/services/neutron-sriov.yaml If you need to override any of the configuration defaults, add your overrides to the custom environment file. This custom environment file, for example, is where you can add Nova PCI whitelist values or set the network type. Example In this example, the Networking service (neutron) network type is set to VLAN and ranges are added for the tenants: If you created a new custom environment file, note its path and file name. 
You use this file later when you deploy your overcloud. steps Proceed to Section 8.4, "Creating a bare metal nodes definition file for OVS TC-flower hardware offload" Additional resources Supported custom roles in the Customizing your Red Hat OpenStack Platform deployment guide 8.4. Creating a bare metal nodes definition file for OVS TC-flower hardware offload Use Red Hat OpenStack Platform (RHOSP) director and a definition file to provision your bare metal nodes for your OVS TC-flower hardware offload environment. In the bare metal nodes definition file, define the quantity and attributes of the bare metal nodes that you want to deploy and assign overcloud roles to these nodes. Also define the network layout of the nodes. Prerequisites Access to the undercloud host and credentials for the stack user. Procedure Log in to the undercloud as the stack user. Source the stackrc file: Create a bare metal nodes definition file, such as overcloud-baremetal-deploy.yaml , as instructed in Provisioning bare metal nodes for the overcloud in the Installing and managing Red Hat OpenStack Platform with director guide. In the bare metal nodes definition file, add a declaration to the Ansible playbook, cli-overcloud-node-kernelargs.yaml . The playbook contains kernel arguments to use when you provision bare metal nodes. If you want to set any extra Ansible variables when running the playbook, use the extra_vars property to set them. Note The variables that you add to extra_vars should be the same role-specific parameters for the ComputeOvsHwOffload nodes that you added to the custom environment file earlier in Section 8.3, "Adding role-specific parameters and configuration overrides for OVS TC-flower hardware offload" . Example Note the path and file name of the bare metal nodes definition file that you have created. You use this file later when you configure your NICs and as the input file for the overcloud node provision command when you provision your nodes. steps Proceed to Section 8.5, "Creating a NIC configuration template for OVS TC-flower hardware offload" . Additional resources Composable services and custom roles in Installing and managing Red Hat OpenStack Platform with director Tested NICs for NFV Bare-metal node provisioning attributes in the Installing and managing Red Hat OpenStack Platform with director guide 8.5. Creating a NIC configuration template for OVS TC-flower hardware offload Define your NIC configuration templates for an OVS TC-flower hardware offload environment by modifying copies of the sample Jinja2 templates that ship with Red Hat OpenStack Platform (RHOSP). Prerequisites Access to the undercloud host and credentials for the stack user. Ensure that the NICs, their applications, the VF guest, and OVS reside on the same NUMA Compute node. Doing so helps to prevent performance degradation from cross-NUMA operations. Procedure Log in to the undercloud as the stack user. Source the stackrc file: Copy a sample network configuration template. Copy a NIC configuration Jinja2 template from the examples in the /usr/share/ansible/roles/tripleo_network_config/templates/ directory. Choose the one that most closely matches your NIC requirements. Modify it as needed. In your NIC configuration template, for example, single_nic_vlans.j2 , add your PF and VF interfaces. To create VFs, configure the interfaces as standalone NICs. Example Note The numvfs parameter replaces the NeutronSriovNumVFs parameter in the network configuration templates. 
Red Hat does not support modification of the NeutronSriovNumVFs parameter or the numvfs parameter after deployment. If you modify either parameter after deployment, the modification might cause a disruption for the running instances that have an SR-IOV port on that PF. In this case, you must hard reboot these instances to make the SR-IOV PCI device available again. Add the custom network configuration template to the bare metal nodes definition file that you created in Section 8.4, "Creating a bare metal nodes definition file for OVS TC-flower hardware offload" . Example Configure one or more network interfaces intended for hardware offload in the compute-sriov.yaml configuration file: Note Do not use the NeutronSriovNumVFs parameter when configuring OVS hardware offload. The number of virtual functions is specified using the numvfs parameter in a network configuration file used by os-net-config . Red Hat does not support modifying the numvfs setting during update or redeployment. Do not configure Mellanox network interfaces as nic-config interface type ovs-vlan because this prevents tunnel endpoints such as VXLAN from passing traffic due to driver limitations. Note the path and file name of the NIC configuration template that you have created. You use this file later if you want to partition your NICs. steps Provision your overcloud networks. For more information, see Configuring and provisioning overcloud network definitions in the Installing and managing Red Hat OpenStack Platform with director guide Provision your overcloud VIPs. For more information, see Configuring and provisioning network VIPs for the overcloud in the Installing and managing Red Hat OpenStack Platform with director guide Provision your bare metal nodes. For more information, see Provisioning bare metal nodes for the overcloud in the Installing and managing Red Hat OpenStack Platform with director guide Deploy your overcloud. For more information, see Section 8.6, "Deploying an OVS TC-flower hardware offload overcloud" . 8.6. Deploying an OVS TC-flower hardware offload overcloud The last step in deploying your Red Hat OpenStack Platform (RHOSP) overcloud in an OVS TC-flower hardware offload environment is to run the openstack overcloud deploy command. Inputs to the command include all of the various overcloud templates and environment files that you constructed. Prerequisites Access to the undercloud host and credentials for the stack user. Access to sudo on hosts that contain NICs. You have performed all of the steps listed in the earlier procedures in this section and have assembled all of the various heat templates and environment files to use as inputs for the overcloud deploy command. Procedure Log in to the undercloud host as the stack user. Source the stackrc undercloud credentials file: Enter the openstack overcloud deploy command. It is important to list the inputs to the openstack overcloud deploy command in a particular order. The general rule is to specify the default heat template files first followed by your custom environment files and custom templates that contain custom configurations, such as overrides to the default properties. Add your inputs to the openstack overcloud deploy command in the following order: A custom network definition file that contains the specifications for your SR-IOV network on the overcloud, for example, network-data.yaml . For more information, see Network definition file configuration options in the Installing and managing Red Hat OpenStack Platform with director guide. 
A roles file that contains the Controller and ComputeOvsHwOffload roles that RHOSP director uses to deploy your OVS hardware offload environment. Example: roles_data_compute_ovshwol.yaml For more information, see Section 8.1, "Generating roles and image files for OVS TC-flower hardware offload" . An output file from provisioning your overcloud networks. Example: overcloud-networks-deployed.yaml For more information, see Configuring and provisioning overcloud network definitions in the Installing and managing Red Hat OpenStack Platform with director guide. An output file from provisioning your overcloud VIPs. Example: overcloud-vip-deployed.yaml For more information, see Configuring and provisioning network VIPs for the overcloud in the Installing and managing Red Hat OpenStack Platform with director guide. An output file from provisioning bare-metal nodes. Example: overcloud-baremetal-deployed.yaml For more information, see Provisioning bare metal nodes for the overcloud in the Installing and managing Red Hat OpenStack Platform with director guide. An images file that director uses to determine where to obtain container images and how to store them. Example: overcloud_images.yaml For more information, see Section 8.1, "Generating roles and image files for OVS TC-flower hardware offload" . An environment file for the Networking service (neutron) mechanism driver and router scheme that your environment uses: ML2/OVN Distributed virtual routing (DVR): neutron-ovn-dvr-ha.yaml Centralized virtual routing: neutron-ovn-ha.yaml ML2/OVS Distributed virtual routing (DVR): neutron-ovs-dvr.yaml Centralized virtual routing: neutron-ovs.yaml An environment file for SR-IOV, depending on your mechanism driver: ML2/OVN neutron-ovn-sriov.yaml ML2/OVS neutron-sriov.yaml Note If you also have an OVS-DPDK environment, and want to locate OVS-DPDK and SR-IOV instances on the same node, include the following environment files in your deployment script: ML2/OVN neutron-ovn-dpdk.yaml ML2/OVS neutron-ovs-dpdk.yaml One or more custom environment files that contain your configuration for: PCI passthrough devices for the ComputeOvsHwOffload nodes. role-specific parameters for the ComputeOvsHwOffload nodes overrides of default configuration values for the OVS hardware offload environment. Example: ovshwol-overrides.yaml For more information, see: Section 8.2, "Configuring PCI passthrough devices for OVS TC-flower hardware offload" . Section 8.3, "Adding role-specific parameters and configuration overrides for OVS TC-flower hardware offload" . Example This excerpt from a sample openstack overcloud deploy command demonstrates the proper ordering of the command's inputs for an SR-IOV, ML2/OVN environment that uses DVR: 1 Specifies the custom network configuration. Required if you use network isolation or custom composable networks. 2 Include the generated roles data if you use custom roles or want to enable a multi-architecture cloud. Run the openstack overcloud deploy command. When the overcloud creation is finished, the RHOSP director provides details to help you access your overcloud. Verification Perform the steps in Validating your overcloud deployment in the Installing and managing Red Hat OpenStack Platform with director guide. steps Ensure that the e-switch mode for the NICs is set to switchdev . The switchdev mode establishes representor ports on the NIC that are mapped to the VFs. 
Important You must enable security groups and port security on switchdev ports for the connection tracking (conntrack) module to offload OpenFlow flows to hardware. Check the NIC by running this command: Example In this example, the NIC pci/0000:03:00.0 is queried: Sample output You should see output similar to the following: To set the NIC to switchdev mode, run this command: Example In this example, the e-switch mode for the NIC pci/0000:03:00.0 is set to switchdev : To allocate a port from a switchdev -enabled NIC, do the following: Log in as a RHOSP user with the admin role, and create a neutron port with a binding-profile value of capabilities , and disable port security: Important You must enable security groups and port security on switchdev ports for the connection tracking (conntrack) module to offload OpenFlow flows to hardware. Pass this port information when you create the instance. You associate the representor port with the instance VF interface and connect the representor port to OVS bridge br-int for one-time OVS data path processing. A VF port representor functions like a software version of a physical "patch panel" front-end. For more information about new instance creation, see Section 8.8, "Creating an instance in an SR-IOV or an OVS TC-flower hardware offload environment" . Apply the following configuration on the interfaces, and the representor ports, to ensure that TC Flower pushes the flow programming at the port level: Adjust the number of channels for each network interface to improve performance. A channel includes an interrupt request (IRQ) and the set of queues that trigger the IRQ. When you set the mlx5_core driver to switchdev mode, the mlx5_core driver defaults to one combined channel, which might not deliver optimal performance. On the physical function (PF) representors, enter the following command to adjust the number of CPUs available to the host. Example In this example, the number of multi-purpose channels is set to 3 on the network interface, eno3s0f0 : Additional resources Creating your overcloud in the Installing and managing Red Hat OpenStack Platform with director guide overcloud deploy in the Command line interface reference Section 8.8, "Creating an instance in an SR-IOV or an OVS TC-flower hardware offload environment" man page for ethtool man page for devlink Configuring CPU pinning on Compute nodes in Configuring the Compute service for instance creation 8.7. Creating host aggregates in an SR-IOV or an OVS TC-flower hardware offload environment For better performance in your Red Hat OpenStack Platform (RHOSP) SR-IOV or OVS TC-flower hardware offload environment, deploy guests that have CPU pinning and huge pages. You can schedule high performance instances on a subset of hosts by matching aggregate metadata with flavor metadata. Prerequisites A RHOSP overcloud configured for an SR-IOV or an OVS hardware offload environment. Your RHOSP overcloud must be configured for the AggregateInstanceExtraSpecsFilter . For more information, see Section 8.2, "Configuring PCI passthrough devices for OVS TC-flower hardware offload" . Procedure Create an aggregate group, and add relevant hosts. Define metadata, for example, sriov=true , that matches defined flavor metadata. Create a flavor. Set additional flavor properties. Note that the defined metadata, sriov=true , matches the defined metadata on the SR-IOV aggregate. Additional resources aggregate in the Command line interface reference flavor in the Command line interface reference 8.8. 
Creating an instance in an SR-IOV or an OVS TC-flower hardware offload environment You use several commands to create an instance in a Red Hat OpenStack Platform (RHOSP) SR-IOV or an OVS TC-flower hardware offload environment. Use host aggregates to separate high performance Compute hosts. For more information, see Section 8.7, "Creating host aggregates in an SR-IOV or an OVS TC-flower hardware offload environment" . Note Pinned CPU instances can be located on the same Compute node as unpinned instances. For more information, see Configuring CPU pinning on Compute nodes in the Configuring the Compute service for instance creation guide. Prerequisites A RHOSP overcloud configured for an SR-IOV or an OVS hardware offload environment. For OVS hardware offload environments, you must have a virtual function (VF) port or a physical function (PF) port from a RHOSP administrator to be able to create an instance. OVS hardware offload requires a binding profile to create VFs or PFs. Only RHOSP users with the admin role can use a binding profile. Procedure Create a flavor. Tip You can specify the NUMA affinity policy for PCI passthrough devices and SR-IOV interfaces by adding the extra spec hw:pci_numa_affinity_policy to your flavor. For more information, see Flavor metadata in Configuring the Compute service for instance creation . Create the network and the subnet: If you are not a RHOSP user with the admin role, your RHOSP administrator can provide you with the necessary VF or PF to create an instance. Proceed to step 5. If you are a RHOSP user with the admin role, you can create VF or PF ports: VF port: PF port that is dedicated to a single instance: This PF port is a Networking service (neutron) port but is not controlled by the Networking service, and is not visible as a network adapter because it is a PCI device that is passed through to the instance. Create an instance. Additional resources flavor create in the Command line interface reference network create in the Command line interface reference subnet create in the Command line interface reference port create in the Command line interface reference server create in the Command line interface reference 8.9. Troubleshooting OVS TC-flower hardware offload When troubleshooting a Red Hat OpenStack Platform (RHOSP) environment that uses OVS TC-flower hardware offload, review the prerequisites and configurations for the network and the interfaces. Prerequisites Linux Kernel 4.13 or newer OVS 2.8 or newer RHOSP 12 or newer Iproute 4.12 or newer Mellanox NIC firmware, for example FW ConnectX-5 16.21.0338 or newer For more information about supported prerequisites, see the Red Hat Knowledgebase solution Network Adapter Fast Datapath Feature Support Matrix . Network configuration In a HW offload deployment, you can choose one of the following scenarios for your network configuration according to your requirements: You can base guest VMs on VXLAN and VLAN by using either the same set of interfaces attached to a bond, or a different set of NICs for each type. You can bond two ports of a Mellanox NIC by using Linux bond. You can host tenant VXLAN networks on VLAN interfaces on top of a Mellanox Linux bond. Ensure that individual NICs and bonds are members of an ovs-bridge. 
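On a deployed Compute node, a quick hedged check of this bridge membership might look like the following (the bridge name br-offload matches the configuration example that follows and might differ in your environment):
# List the ports attached to the offload bridge and show the overall OVS layout
sudo ovs-vsctl list-ports br-offload
sudo ovs-vsctl show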
Refer to the following network configuration example: The following bonding configurations are supported: active-backup - mode=1 active-active or balance-xor - mode=2 802.3ad (LACP) - mode=4 The following bonding configuration is not supported: xmit_hash_policy=layer3+4 Interface configuration Use the following procedure to verify the interface configuration. Procedure During deployment, use the host network configuration tool os-net-config to enable hw-tc-offload . Enable hw-tc-offload on the sriov_config service any time you reboot the Compute node. Set the hw-tc-offload parameter to on for the NICs that are attached to the bond: Example Interface mode Verify the interface mode by using the following procedure. Procedure Set the eswitch mode to switchdev for the interfaces you use for HW offload. Use the host network configuration tool os-net-config to enable eswitch during deployment. Enable eswitch on the sriov_config service any time you reboot the Compute node. Example Note The driver of the PF interface is set to "mlx5e_rep" , to show that it is a representor of the e-switch uplink port. This does not affect the functionality. OVS offload state Use the following procedure to verify the OVS offload state. Enable hardware offload in OVS in the Compute node. VF representor port name To ensure consistent naming of VF representor ports, os-net-config uses udev rules to rename the ports in the <PF-name>_<VF_id> format. Procedure After deployment, verify that the VF representor ports are named correctly. Example Sample output Network traffic flow HW offloaded network flow functions in a similar way to physical switches or routers with application-specific integrated circuit (ASIC) chips. You can access the ASIC shell of a switch or router to examine the routing table and for other debugging. The following procedure uses a Broadcom chipset from a Cumulus Linux switch as an example. Replace the values with those appropriate for your environment. Procedure To get Broadcom chip table content, use the bcmcmd command. Sample output Inspect the Traffic Control (TC) Layer. Sample output Examine the in_hw flags and the statistics in this output. The word hardware indicates that the hardware processes the network traffic. If you use tc-policy=none , you can check this output or a tcpdump to investigate when hardware or software handles the packets. You can see a corresponding log message in dmesg or in ovs-vswitch.log when the driver is unable to offload packets. For Mellanox, as an example, the log entries resemble syndrome messages in dmesg . Sample output In this example, the error code (0x6b1266) represents the following behavior: Sample output Systems Validate your system with the following procedure. Procedure Ensure SR-IOV and VT-d are enabled on the system. Enable IOMMU in Linux by adding intel_iommu=on to kernel parameters, for example, using GRUB. 8.10. Debugging TC-flower hardware offload flow You can use the following procedure if you encounter the following message in the ovs-vswitch.log file: 
https://github.com/Mellanox/linux-sysinfo-snapshot/blob/master/sysinfo-snapshot.py When you run this command, you create a zip file of the relevant log information, which is useful for support cases. Procedure You can run this system information script with the following command: You can also install Mellanox Firmware Tools (MFT), mlxconfig, mlxlink and the OpenFabrics Enterprise Distribution (OFED) drivers. Useful CLI commands Use the ethtool utility with the following options to gather diagnostic information: ethtool -l <uplink representor> : View the number of channels ethtool -I <uplink/VFs> : Check statistics ethtool -i <uplink rep> : View driver information ethtool -g <uplink rep> : Check ring sizes ethtool -k <uplink/VFs> : View enabled features Use the tcpdump utility at the representor and PF ports to similarly check traffic flow. Any changes you make to the link state of the representor port also affect the VF link state. Representor port statistics present VF statistics also. Use the following commands to get useful diagnostic information:
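For example, a minimal hedged sweep of the ethtool options listed above might look like the following (the representor name ens1f0_0 is an assumption; adjust it to your environment):
REP=ens1f0_0
ethtool -l "$REP"     # number of channels
ethtool -i "$REP"     # driver information
ethtool -g "$REP"     # ring sizes
ethtool -k "$REP"     # enabled offload features
tcpdump -nnei "$REP" -c 20     # sample traffic on the representor port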
|
[
"source ~/stackrc",
"openstack overcloud roles generate -o roles_data_compute_ovshwol.yaml Controller Compute:ComputeOvsHwOffload",
"openstack overcloud roles generate -o /home/stack/templates/ roles_data.yaml Controller ComputeOvsDpdk ComputeOvsDpdkSriov Compute:ComputeOvsHwOffload",
"sudo openstack tripleo container image prepare --roles-file ~/templates/roles_data_compute_ovshwol.yaml -e /usr/share/openstack-tripleo-heat-templates/environments/services/neutron-ovn-sriov.yaml -e ~/containers-prepare-parameter.yaml --output-env-file=/home/stack/templates/overcloud_images.yaml",
"lspci -nn -s <pci_device_address>",
"3b:00.0 Ethernet controller [0200]: Intel Corporation Ethernet Controller X710 for 10GbE SFP+ [<vendor_id>: <product_id>] (rev 02)",
"openstack baremetal introspection data save <baremetal_node_name> | jq '.inventory.interfaces[] | .name, .vendor, .product'",
"source ~/stackrc",
"parameter_defaults: NeutronOVSFirewallDriver: iptables_hybrid ComputeOvsHwOffloadParameters: IsolCpusList: 2-9,21-29,11-19,31-39 KernelArgs: \"default_hugepagesz=1GB hugepagesz=1G hugepages=128 intel_iommu=on iommu=pt\" OvsHwOffload: true TunedProfileName: \"cpu-partitioning\" NeutronBridgeMappings: - tenant:br-tenant NovaPCIPassthrough: - vendor_id: <vendor-id> product_id: <product-id> address: <address> physical_network: \"tenant\" - vendor_id: <vendor-id> product_id: <product-id> address: <address> physical_network: \"null\" NovaReservedHostMemory: 4096 NovaComputeCpuDedicatedSet: 1-9,21-29,11-19,31-39",
"parameter_defaults: NovaSchedulerEnabledFilters: - AvailabilityZoneFilter - ComputeFilter - ComputeCapabilitiesFilter - ImagePropertiesFilter - ServerGroupAntiAffinityFilter - ServerGroupAffinityFilter - PciPassthroughFilter - NUMATopologyFilter - AggregateInstanceExtraSpecsFilter",
"source ~/stackrc",
"ComputeOvsHwOffloadParameters: IsolCpusList: 9-63,73-127 KernelArgs: default_hugepagesz=1GB hugepagesz=1G hugepages=100 amd_iommu=on iommu=pt numa_balancing=disable processor.max_cstate=0 isolcpus=9-63,73-127 NovaReservedHostMemory: 4096 NovaComputeCpuSharedSet: 0-8,64-72 NovaComputeCpuDedicatedSet: 9-63,73-127 TunedProfileName: \"cpu-partitioning\"",
"ComputeOvsHwOffloadParameters: IsolCpusList: 9-63,73-127 KernelArgs: default_hugepagesz=1GB hugepagesz=1G hugepages=100 amd_iommu=on iommu=pt numa_balancing=disable processor.max_cstate=0 isolcpus=9-63,73-127 NovaReservedHostMemory: 4096 NovaComputeCpuSharedSet: 0-8,64-72 NovaComputeCpuDedicatedSet: 9-63,73-127 TunedProfileName: \"cpu-partitioning\" OvsHwOffload: true",
"parameter_defaults: NeutronNetworkType: vlan NeutronNetworkVLANRanges: - tenant:22:22 - tenant:25:25 NeutronTunnelTypes: ''",
"source ~/stackrc",
"- name: ComputeOvsHwOffload ansible_playbooks: - playbook: /usr/share/ansible/tripleo-playbooks/cli-overcloud-node-kernelargs.yaml",
"- name: ComputeOvsHwOffload ansible_playbooks: - playbook: /usr/share/ansible/tripleo-playbooks/cli-overcloud-node-kernelargs.yaml extra_vars: kernel_args: 'default_hugepagesz=1GB hugepagesz=1G hugepages=100 amd_iommu=on iommu=pt isolcpus=9-63,73-127' tuned_isolated_cores: '9-63,73-127' tuned_profile: 'cpu-partitioning' reboot_wait_timeout: 1800",
"source ~/stackrc",
"- type: sriov_pf name: enp196s0f0np0 mtu: 9000 numvfs: 16 use_dhcp: false defroute: false nm_controlled: true hotplug: true promisc: false link_mode: switchdev",
"- name: ComputeOvsHwOffload count: 2 hostname_format: compute-%index% defaults: networks: - network: internal_api subnet: internal_api_subnet - network: tenant subnet: tenant_subnet - network: storage subnet: storage_subnet network_config: template: /home/stack/templates/single_nic_vlans.j2",
"- type: ovs_bridge name: br-tenant mtu: 9000 members: - type: sriov_pf name: p7p1 numvfs: 5 mtu: 9000 primary: true promisc: true use_dhcp: false link_mode: switchdev",
"source ~/stackrc",
"openstack overcloud deploy --log-file overcloud_deployment.log --templates /usr/share/openstack-tripleo-heat-templates/ --stack overcloud [ -n /home/stack/templates/network_data.yaml \\ ] 1 [ -r /home/stack/templates/roles_data_compute_ovshwol.yaml \\ ] 2 -e /home/stack/templates/overcloud-networks-deployed.yaml -e /home/stack/templates/overcloud-vip-deployed.yaml -e /home/stack/templates/overcloud-baremetal-deployed.yaml -e /home/stack/templates/overcloud-images.yaml -e /usr/share/openstack-tripleo-heat-templates/environments/services/ neutron-ovn-dvr-ha.yaml -e /usr/share/openstack-tripleo-heat-templates/environments/services/ neutron-ovn-sriov.yaml -e /home/stack/templates/ovshwol-overrides.yaml",
"sudo devlink dev eswitch show pci/0000:03:00.0",
"pci/0000:03:00.0: mode switchdev inline-mode none encap enable",
"sudo devlink dev eswitch set pci/0000:03:00.0 mode switchdev",
"openstack port create --network private --vnic-type=direct --binding-profile '{\"capabilities\": [\"switchdev\"]}' direct_port1 --disable-port-security",
"sudo ethtool -K <device-name> hw-tc-offload on",
"sudo ethtool -L enp3s0f0 combined 3",
"openstack aggregate create sriov_group openstack aggregate add host sriov_group compute-sriov-0.localdomain openstack aggregate set --property sriov=true sriov_group",
"openstack flavor create <flavor> --ram <size_mb> --disk <size_gb> --vcpus <number>",
"openstack flavor set --property sriov=true --property hw:cpu_policy=dedicated --property hw:mem_page_size=1GB <flavor>",
"openstack flavor create <flavor_name> --ram <size_mb> --disk <size_gb> --vcpus <number>",
"openstack network create <network_name> --provider-physical-network tenant --provider-network-type vlan --provider-segment <vlan_id> openstack subnet create <name> --network <network_name> --subnet-range <ip_address_cidr> --dhcp",
"openstack port create --network <network_name> --vnic-type direct --binding-profile '{\"capabilities\": [\"switchdev\"]} <port_name>",
"openstack port create --network <network_name> --vnic-type direct-physical <port_name>",
"openstack server create --flavor <flavor> --image <image_name> --nic port-id=<id> <instance_name>",
"- type: ovs_bridge name: br-offload mtu: 9000 use_dhcp: false members: - type: linux_bond name: bond-pf bonding_options: \"mode=active-backup miimon=100\" members: - type: sriov_pf name: p5p1 numvfs: 3 primary: true promisc: true use_dhcp: false defroute: false link_mode: switchdev - type: sriov_pf name: p5p2 numvfs: 3 promisc: true use_dhcp: false defroute: false link_mode: switchdev - type: vlan vlan_id: get_param: TenantNetworkVlanID device: bond-pf addresses: - ip_netmask: get_param: TenantIpSubnet",
"ethtool -k ens1f0 | grep tc-offload hw-tc-offload: on",
"devlink dev eswitch show pci/USD(ethtool -i ens1f0 | grep bus-info | cut -d ':' -f 2,3,4 | awk '{USD1=USD1};1')",
"ovs-vsctl get Open_vSwitch . other_config:hw-offload \"true\"",
"cat /etc/udev/rules.d/80-persistent-os-net-config.rules",
"This file is autogenerated by os-net-config SUBSYSTEM==\"net\", ACTION==\"add\", ATTR{phys_switch_id}!=\"\", ATTR{phys_port_name}==\"pf*vf*\", ENV{NM_UNMANAGED}=\"1\" SUBSYSTEM==\"net\", ACTION==\"add\", DRIVERS==\"?*\", KERNELS==\"0000:65:00.0\", NAME=\"ens1f0\" SUBSYSTEM==\"net\", ACTION==\"add\", ATTR{phys_switch_id}==\"98039b7f9e48\", ATTR{phys_port_name}==\"pf0vf*\", IMPORT{program}=\"/etc/udev/rep-link-name.sh USDattr{phys_port_name}\", NAME=\"ens1f0_USDenv{NUMBER}\" SUBSYSTEM==\"net\", ACTION==\"add\", DRIVERS==\"?*\", KERNELS==\"0000:65:00.1\", NAME=\"ens1f1\" SUBSYSTEM==\"net\", ACTION==\"add\", ATTR{phys_switch_id}==\"98039b7f9e49\", ATTR{phys_port_name}==\"pf1vf*\", IMPORT{program}=\"/etc/udev/rep-link-name.sh USDattr{phys_port_name}\", NAME=\"ens1f1_USDenv{NUMBER}\"",
"cl-bcmcmd l2 show",
"mac=00:02:00:00:00:08 vlan=2000 GPORT=0x2 modid=0 port=2/xe1 mac=00:02:00:00:00:09 vlan=2000 GPORT=0x2 modid=0 port=2/xe1 Hit",
"tc -s filter show dev p5p1_1 ingress",
"... filter block 94 protocol ip pref 3 flower chain 5 filter block 94 protocol ip pref 3 flower chain 5 handle 0x2 eth_type ipv4 src_ip 172.0.0.1 ip_flags nofrag in_hw in_hw_count 1 action order 1: mirred (Egress Redirect to device eth4) stolen index 3 ref 1 bind 1 installed 364 sec used 0 sec Action statistics: Sent 253991716224 bytes 169534118 pkt (dropped 0, overlimits 0 requeues 0) Sent software 43711874200 bytes 30161170 pkt Sent hardware 210279842024 bytes 139372948 pkt backlog 0b 0p requeues 0 cookie 8beddad9a0430f0457e7e78db6e0af48 no_percpu",
"[13232.860484] mlx5_core 0000:3b:00.0: mlx5_cmd_check:756:(pid 131368): SET_FLOW_TABLE_ENTRY(0x936) op_mod(0x0) failed, status bad parameter(0x3), syndrome (0x6b1266)",
"0x6B1266 | set_flow_table_entry: pop vlan and forward to uplink is not allowed",
"2020-01-31T06:22:11.257Z|00473|dpif_netlink(handler402)|ERR|failed to offload flow: Operation not supported: p6p1_5",
"ovs-appctl vlog/set dpif_netlink:file:dbg Module name changed recently (check based on the version used ovs-appctl vlog/set netdev_tc_offloads:file:dbg [OR] ovs-appctl vlog/set netdev_offload_tc:file:dbg ovs-appctl vlog/set tc:file:dbg",
"2020-01-31T06:22:11.218Z|00471|dpif_netlink(handler402)|DBG|system@ovs-system: put[create] ufid:61bd016e-eb89-44fc-a17e-958bc8e45fda recirc_id(0),dp_hash(0/0),skb_priority(0/0),in_port(7),skb_mark(0),ct_state(0/0),ct_zone(0/0),ct_mark(0/0),ct_label(0/0),eth(src=fa:16:3e:d2:f5:f3,dst=fa:16:3e:c4:a3:eb),eth_type(0x0800),ipv4(src=10.1.1.8/0.0.0.0,dst=10.1.1.31/0.0.0.0,proto=1/0,tos=0/0x3,ttl=64/0,frag=no),icmp(type=0/0,code=0/0), actions:set(tunnel(tun_id=0x3d,src=10.10.141.107,dst=10.10.141.124,ttl=64,tp_dst=4789,flags(df|key))),6 2020-01-31T06:22:11.253Z|00472|netdev_tc_offloads(handler402)|DBG|offloading attribute pkt_mark isn't supported 2020-01-31T06:22:11.257Z|00473|dpif_netlink(handler402)|ERR|failed to offload flow: Operation not supported: p6p1_5",
"./sysinfo-snapshot.py --asap --asap_tc --ibdiagnet --openstack",
"ovs-appctl dpctl/dump-flows -m type=offloaded ovs-appctl dpctl/dump-flows -m tc filter show dev ens1_0 ingress tc -s filter show dev ens1_0 ingress tc monitor"
] |
https://docs.redhat.com/en/documentation/red_hat_openstack_platform/17.1/html/configuring_network_functions_virtualization/config-ovs-hwol_rhosp-nfv
|
Chapter 4. Basic configuration
|
Chapter 4. Basic configuration As a storage administrator, learning the basics of configuring the Ceph Object Gateway is important. You can learn about the defaults and the embedded web server called Beast. For troubleshooting issues with the Ceph Object Gateway, you can adjust the logging and debugging output generated by the Ceph Object Gateway. Also, you can provide a High-Availability proxy for storage cluster access using the Ceph Object Gateway. Prerequisites A running, and healthy Red Hat Ceph Storage cluster. Installation of the Ceph Object Gateway software package. 4.1. Add a wildcard to the DNS You can add a wildcard, such as a hostname, to the DNS record of the DNS server. Prerequisite A running Red Hat Ceph Storage cluster. Ceph Object Gateway installed. Root-level access to the admin node. Procedure To use Ceph with S3-style subdomains, add a wildcard to the DNS record of the DNS server that the ceph-radosgw daemon uses to resolve domain names: Syntax For dnsmasq , add the following address setting with a dot (.) prepended to the host name: Syntax Example For bind , add a wildcard to the DNS record: Example Restart the DNS server and ping the server with a subdomain to ensure that the ceph-radosgw daemon can process the subdomain requests: Syntax Example If the DNS server is on the local machine, you might need to modify /etc/resolv.conf by adding a nameserver entry for the local machine. Add the host name in the Ceph Object Gateway zone group: Get the zone group: Syntax Example Take a backup of the JSON file: Example View the zonegroup.json file: Example Update the zonegroup.json file with the new host name: Example Set the zone group back in the Ceph Object Gateway: Syntax Example Update the period: Example Restart the Ceph Object Gateway so that the DNS setting takes effect. Additional Resources See The Ceph configuration database section in the Red Hat Ceph Storage Configuration Guide for more details. 4.2. The Beast front-end web server The Ceph Object Gateway provides Beast, a C/C++ embedded front-end web server. Beast uses the Boost.Beast C++ library to parse HTTP, and Boost.Asio for asynchronous network I/O. Additional Resources Boost C++ Libraries 4.3. Beast configuration options The following Beast configuration options can be passed to the embedded web server in the Ceph configuration file for the RADOS Gateway. Each option has a default value. If a value is not specified, the default value is empty. Option Description Default endpoint and ssl_endpoint Sets the listening address in the form address[:port] where the address is an IPv4 address string in dotted decimal form, or an IPv6 address in hexadecimal notation surrounded by square brackets. The optional port defaults to 8080 for endpoint and 443 for ssl_endpoint . It can be specified multiple times as in endpoint=[::1] endpoint=192.168.0.100:8000 . EMPTY ssl_certificate Path to the SSL certificate file used for SSL-enabled endpoints. EMPTY ssl_private_key Optional path to the private key file used for SSL-enabled endpoints. If one is not given, the file specified by ssl_certificate is used as the private key. EMPTY tcp_nodelay Performance optimization in some environments. EMPTY Example /etc/ceph/ceph.conf file with Beast options using SSL: Note By default, the Beast front end writes an access log line recording all requests processed by the server to the RADOS Gateway log file. Additional Resources See Using the Beast front end for more information. 4.4.
Configuring SSL for Beast You can configure the Beast front-end web server to use the OpenSSL library to provide Transport Layer Security (TLS). To use Secure Socket Layer (SSL) with Beast, you need to obtain a certificate from a Certificate Authority (CA) that matches the hostname of the Ceph Object Gateway node. Beast also requires the secret key, server certificate, and any other CA in a single .pem file. Important Prevent unauthorized access to the .pem file, because it contains the secret key hash. Important Red Hat recommends obtaining a certificate from a CA with the Subject Alternative Name (SAN) field, and a wildcard for use with S3-style subdomains. Important Red Hat recommends only using SSL with the Beast front-end web server for small to medium sized test environments. For production environments, you must use HAProxy and keepalived to terminate the SSL connection at the HAProxy. If the Ceph Object Gateway acts as a client and a custom certificate is used on the server, you can inject a custom CA by importing it on the node and then mapping the etc/pki directory into the container with the extra_container_args parameter in the Ceph Object Gateway specification file. Prerequisites A running, and healthy Red Hat Ceph Storage cluster. Installation of the Ceph Object Gateway software package. Installation of the OpenSSL software package. Root-level access to the Ceph Object Gateway node. Procedure Create a new file named rgw.yml in the current directory: Example Open the rgw.yml file for editing, and customize it for the environment: Syntax Example Deploy the Ceph Object Gateway using the service specification file: Example 4.5. D3N data cache Datacenter-Data-Delivery Network (D3N) uses high-speed storage, such as NVMe , to cache datasets on the access side. Such caching allows big data jobs to use the compute and fast-storage resources available on each Rados Gateway node at the edge. The Rados Gateways act as cache servers for the back-end object store (OSDs), storing data locally for reuse. Note Each time the Rados Gateway is restarted the content of the cache directory is purged. 4.5.1. Adding D3N cache directory To enable D3N cache on RGW, you need to also include the D3N cache directory in podman unit.run . Prerequisites A running Red Hat Ceph Storage cluster. Ceph Object Gateway installed. Root-level access to the admin node. A fast NVMe drive in each RGW node to serve as the local cache storage. Procedure Create a mount point for the NVMe drive. Syntax Example Create a cache directory path. Syntax Example Provide a+rwx permission to nvme-mount-path and rgw_d3n_l1_datacache_persistent_path . Syntax Example Create/Modify a RGW specification file with extra_container_args to add rgw_d3n_l1_datacache_persistent_path into podman unit.run . Syntax Example Note If there are multiple instances of RGW in a single host, then a separate rgw_d3n_l1_datacache_persistent_path has to be created for each instance and add each path in extra_container_args . Example : For two instances of RGW in each host, create two separate cache-directory under rgw_d3n_l1_datacache_persistent_path : /mnt/nvme0n1/rgw_datacache/rgw1 and /mnt/nvme0n1/rgw_datacache/rgw2 Example for "extra_container_args" in rgw specification file: Example for rgw-spec.yml: : Redeploy the RGW service: Example 4.5.2. Configuring D3N on rados gateway You can configure the D3N data cache on an existing RGW to improve the performance of big-data jobs running in Red Hat Ceph Storage clusters. 
Prerequisites A running Red Hat Ceph Storage cluster. Ceph Object Gateway installed. Root-level access to the admin node. A fast NVMe to serve as the cache storage. Adding the required D3N-related configuration To enable D3N on an existing RGW, the following configuration needs to be set for each Rados Gateways client : Syntax rgw_d3n_l1_local_datacache_enabled=true rgw_d3n_l1_datacache_persistent_path= path to the cache directory Example rgw_d3n_l1_datacache_size= max_size_of_cache_in_bytes Example Example procedure Create a test object: Note The test object needs to be larger than 4 MB to cache. Example Perform GET of an object: Example Verify cache creation. Cache will be created with the name consisting of object key-name within a configured rgw_d3n_l1_datacache_persistent_path . Example Once the cache is created for an object, the GET operation for that object will access from cache resulting in faster access. Example In the above example, to demonstrate the cache acceleration, we are writing to RAM drive ( /dev/shm ). Additional Resources See the Ceph subsystems default logging level values section in the Red Hat Ceph Storage Troubleshooting Guide for additional details on using high availability. See the Understanding Ceph logs section in the Red Hat Ceph Storage Troubleshooting Guide for additional details on using high availability. 4.6. Adjusting logging and debugging output Once you finish the setup procedure, check your logging output to ensure it meets your needs. By default, the Ceph daemons log to journald , and you can view the logs using the journalctl command. Alternatively, you can also have the Ceph daemons log to files, which are located under the /var/log/ceph/ CEPH_CLUSTER_ID / directory. Important Verbose logging can generate over 1 GB of data per hour. This type of logging can potentially fill up the operating system's disk, causing the operating system to stop functioning. Prerequisites A running Red Hat Ceph Storage cluster. Installation of the Ceph Object Gateway software. Procedure Set the following parameter to increase the Ceph Object Gateway logging output: Syntax Example You can also modify these settings at runtime: Syntax Example Optionally, you can configure the Ceph daemons to log their output to files. Set the log_to_file , and mon_cluster_log_to_file options to true : Example Additional Resources See the Ceph debugging and logging configuration section of the Red Hat Ceph Storage Configuration Guide for more details. 4.7. Static web hosting As a storage administrator, you can configure the Ceph Object Gateway to host static websites in S3 buckets. Traditional website hosting involves configuring a web server for each website, which can use resources inefficiently when content does not change dynamically. For example, sites that do not use server-side services like PHP, servlets, databases, nodejs, and the like. This approach is substantially more economical than setting up virtual machines with web servers for each site. Prerequisites A healthy, running Red Hat Ceph Storage cluster. 4.7.1. Static web hosting assumptions Static web hosting requires at least one running Red Hat Ceph Storage cluster, and at least two Ceph Object Gateway instances for the static web sites. Red Hat assumes that each zone will have multiple gateway instances using a load balancer, such as high-availability (HA) Proxy and keepalived . 
Important Red Hat DOES NOT support using a Ceph Object Gateway instance to deploy both standard S3/Swift APIs and static web hosting simultaneously. Additional Resources See the High availability service section in the Red Hat Ceph Storage Object Gateway Guide for additional details on using high availability. 4.7.2. Static web hosting requirements Static web hosting functionality uses its own API, so configuring a gateway to use static web sites in S3 buckets requires the following: S3 static web hosting uses Ceph Object Gateway instances that are separate and distinct from instances used for standard S3/Swift API use cases. Gateway instances hosting S3 static web sites should have separate, non-overlapping domain names from the standard S3/Swift API gateway instances. Gateway instances hosting S3 static web sites should use separate public-facing IP addresses from the standard S3/Swift API gateway instances. Gateway instances hosting S3 static web sites load balance, and if necessary terminate SSL, using HAProxy/keepalived. 4.7.3. Static web hosting gateway setup To enable a Ceph Object Gateway for static web hosting, set the following options: Syntax Example The rgw_enable_static_website setting MUST be true . The rgw_enable_apis setting MUST enable the s3website API. The rgw_dns_name and rgw_dns_s3website_name settings must provide their fully qualified domains. If the site uses canonical name extensions, then set the rgw_resolve_cname option to true . Important The FQDNs of rgw_dns_name and rgw_dns_s3website_name MUST NOT overlap. 4.7.4. Static web hosting DNS configuration The following is an example of assumed DNS settings, where the first two lines specify the domains of the gateway instance using a standard S3 interface and point to the IPv4 and IPv6 addresses. The third line provides a wildcard CNAME setting for S3 buckets using canonical name extensions. The fourth and fifth lines specify the domains for the gateway instance using the S3 website interface and point to their IPv4 and IPv6 addresses. Note The IP addresses in the first two lines differ from the IP addresses in the fourth and fifth lines. If using Ceph Object Gateway in a multi-site configuration, consider using a routing solution to route traffic to the gateway closest to the client. The Amazon Web Service (AWS) requires static web host buckets to match the host name. Ceph provides a few different ways to configure the DNS, and HTTPS will work if the proxy has a matching certificate. Hostname to a Bucket on a Subdomain To use AWS-style S3 subdomains, use a wildcard in the DNS entry which can redirect requests to any bucket. A DNS entry might look like the following: Access the bucket name, where the bucket name is bucket1 , in the following manner: Hostname to Non-Matching Bucket Ceph supports mapping domain names to buckets without including the bucket name in the request, which is unique to Ceph Object Gateway. To use a domain name to access a bucket, map the domain name to the bucket name. A DNS entry might look like the following: Where the bucket name is bucket2 . Access the bucket in the following manner: Hostname to Long Bucket with CNAME AWS typically requires the bucket name to match the domain name. 
To configure the DNS for static web hosting using CNAME, the DNS entry might look like the following: Access the bucket in the following manner: Hostname to Long Bucket without CNAME If the DNS name contains other non-CNAME records, such as SOA , NS , MX or TXT , the DNS record must map the domain name directly to the IP address. For example: Access the bucket in the following manner: 4.7.5. Creating a static web hosting site To create a static website, perform the following steps: Create an S3 bucket. The bucket name MIGHT be the same as the website's domain name. For example, mysite.com may have a bucket name of mysite.com . This is required for AWS, but it is NOT required for Ceph. See the Static web hosting DNS configuration section in the Red Hat Ceph Storage Object Gateway Guide for details. Upload the static website content to the bucket. Contents may include HTML, CSS, client-side JavaScript, images, audio/video content, and other downloadable files. A website MUST have an index.html file and might have an error.html file. Verify the website's contents. At this point, only the creator of the bucket has access to the contents. Set permissions on the files so that they are publicly readable. 4.8. High availability for the Ceph Object Gateway As a storage administrator, you can assign many instances of the Ceph Object Gateway to a single zone. This allows you to scale out as the load increases, that is, the same zone group and zone; however, you do not need a federated architecture to use a highly available proxy. Since each Ceph Object Gateway daemon has its own IP address, you can use the ingress service to balance the load across many Ceph Object Gateway daemons or nodes. The ingress service manages HAProxy and keepalived daemons for the Ceph Object Gateway environment. You can also terminate HTTPS traffic at the HAProxy server, and use HTTP between the HAProxy server and the Beast front-end web server instances for the Ceph Object Gateway. Prerequisites At least two Ceph Object Gateway daemons running on different hosts. Capacity for at least two instances of the ingress service running on different hosts. 4.8.1. High availability service The ingress service provides a highly available endpoint for the Ceph Object Gateway. The ingress service can be deployed to any number of hosts as needed. Red Hat recommends having at least two supported Red Hat Enterprise Linux servers, each server configured with the ingress service. You can run a high availability (HA) service with a minimum set of configuration options. The Ceph orchestrator deploys the ingress service, which manages the haproxy and keepalived daemons, by providing load balancing with a floating virtual IP address. The active haproxy distributes all Ceph Object Gateway requests to all the available Ceph Object Gateway daemons. A virtual IP address is automatically configured on one of the ingress hosts at a time, known as the primary host. The Ceph orchestrator selects the first network interface based on existing IP addresses that are configured as part of the same subnet. In cases where the virtual IP address does not belong to the same subnet, you can define a list of subnets for the Ceph orchestrator to match with existing IP addresses. If the keepalived daemon and the active haproxy are not responding on the primary host, then the virtual IP address moves to a backup host. This backup host becomes the new primary host. 
Warning Currently, you cannot configure a virtual IP address on a network interface that does not have a configured IP address. Important To use the secure socket layer (SSL), SSL must be terminated by the ingress service and not at the Ceph Object Gateway. 4.8.2. Configuring high availability for the Ceph Object Gateway To configure high availability (HA) for the Ceph Object Gateway, you write a YAML configuration file, and the Ceph orchestrator does the installation, configuration, and management of the ingress service. The ingress service uses the haproxy and keepalived daemons to provide high availability for the Ceph Object Gateway. Prerequisites A minimum of two hosts running Red Hat Enterprise Linux 8 or higher on which to install the ingress service. A healthy, running Red Hat Ceph Storage cluster. A minimum of two Ceph Object Gateway daemons running on different hosts. Root-level access to the host running the ingress service. If using a firewall, then open port 80 for HTTP and port 443 for HTTPS traffic. Procedure Create a new ingress.yaml file: Example Open the ingress.yaml file for editing. Add the following options, and add values applicable to the environment: Syntax 1 Must be set to ingress . 2 Must match the existing Ceph Object Gateway service name. 3 Where to deploy the haproxy and keepalived containers. 4 The virtual IP address where the ingress service is available. 5 The port to access the ingress service. 6 The port to access the haproxy load balancer status. 7 Optional list of available subnets. 8 Optional SSL certificate and private key. Example Launch the Cephadm shell: Example Configure the latest haproxy and keepalived images: Syntax Red Hat Enterprise Linux 8 Red Hat Enterprise Linux 9 Install and configure the new ingress service using the Ceph orchestrator: After the Ceph orchestrator completes, verify the HA configuration. On the host running the ingress service, check that the virtual IP address appears: Example Try reaching the Ceph Object Gateway from a Ceph client: Syntax Example If this returns an index.html file with content similar to the example below, then the HA configuration for the Ceph Object Gateway is working properly. Example Additional resources See the Performing a Standard RHEL Installation Guide for more details. See the High availability service section in the Red Hat Ceph Storage Object Gateway Guide for more details.
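As a supplementary check, not part of the official procedure, the state of the ingress service can also be inspected through the Ceph orchestrator. The following is a minimal sketch; the virtual IP 192.168.1.2 and frontend port 8080 are the example values from the ingress.yaml in this section and should be replaced with your own values.
# List the deployed ingress service and its haproxy and keepalived daemons
ceph orch ls ingress
ceph orch ps --daemon-type haproxy
ceph orch ps --daemon-type keepalived
# Send a request to the virtual IP address and frontend port defined in ingress.yaml
curl http://192.168.1.2:8080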
|
[
"bucket-name.domain-name.com",
"address=/. HOSTNAME_OR_FQDN / HOST_IP_ADDRESS",
"address=/.gateway-host01/192.168.122.75",
"USDTTL 604800 @ IN SOA gateway-host01. root.gateway-host01. ( 2 ; Serial 604800 ; Refresh 86400 ; Retry 2419200 ; Expire 604800 ) ; Negative Cache TTL ; @ IN NS gateway-host01. @ IN A 192.168.122.113 * IN CNAME @",
"ping mybucket. HOSTNAME",
"ping mybucket.gateway-host01",
"radosgw-admin zonegroup get --rgw-zonegroup= ZONEGROUP_NAME > zonegroup.json",
"radosgw-admin zonegroup get --rgw-zonegroup=us > zonegroup.json",
"cp zonegroup.json zonegroup.backup.json",
"cat zonegroup.json { \"id\": \"d523b624-2fa5-4412-92d5-a739245f0451\", \"name\": \"asia\", \"api_name\": \"asia\", \"is_master\": \"true\", \"endpoints\": [], \"hostnames\": [], \"hostnames_s3website\": [], \"master_zone\": \"d2a3b90f-f4f3-4d38-ac1f-6463a2b93c32\", \"zones\": [ { \"id\": \"d2a3b90f-f4f3-4d38-ac1f-6463a2b93c32\", \"name\": \"india\", \"endpoints\": [], \"log_meta\": \"false\", \"log_data\": \"false\", \"bucket_index_max_shards\": 11, \"read_only\": \"false\", \"tier_type\": \"\", \"sync_from_all\": \"true\", \"sync_from\": [], \"redirect_zone\": \"\" } ], \"placement_targets\": [ { \"name\": \"default-placement\", \"tags\": [], \"storage_classes\": [ \"STANDARD\" ] } ], \"default_placement\": \"default-placement\", \"realm_id\": \"d7e2ad25-1630-4aee-9627-84f24e13017f\", \"sync_policy\": { \"groups\": [] } }",
"\"hostnames\": [\"host01\", \"host02\",\"host03\"],",
"radosgw-admin zonegroup set --rgw-zonegroup= ZONEGROUP_NAME --infile=zonegroup.json",
"radosgw-admin zonegroup set --rgw-zonegroup=us --infile=zonegroup.json",
"radosgw-admin period update --commit",
"[client.rgw.node1] rgw frontends = beast ssl_endpoint=192.168.0.100:443 ssl_certificate=<path to SSL certificate>",
"touch rgw.yml",
"service_type: rgw service_id: SERVICE_ID service_name: SERVICE_NAME placement: hosts: - HOST_NAME spec: ssl: true rgw_frontend_ssl_certificate: CERT_HASH",
"service_type: rgw service_id: foo service_name: rgw.foo placement: hosts: - host01 spec: ssl: true rgw_frontend_ssl_certificate: | -----BEGIN RSA PRIVATE KEY----- MIIEpAIBAAKCAQEA+Cf4l9OagD6x67HhdCy4Asqw89Zz9ZuGbH50/7ltIMQpJJU0 gu9ObNtIoC0zabJ7n1jujueYgIpOqGnhRSvsGJiEkgN81NLQ9rqAVaGpadjrNLcM bpgqJCZj0vzzmtFBCtenpb5l/EccMFcAydGtGeLP33SaWiZ4Rne56GBInk6SATI/ JSKweGD1y5GiAWipBR4C74HiAW9q6hCOuSdp/2WQxWT3T1j2sjlqxkHdtInUtwOm j5Ism276IndeQ9hR3reFR8PJnKIPx73oTBQ7p9CMR1J4ucq9Ny0J12wQYT00fmJp -----END RSA PRIVATE KEY----- -----BEGIN CERTIFICATE----- MIIEBTCCAu2gAwIBAgIUGfYFsj8HyA9Zv2l600hxzT8+gG4wDQYJKoZIhvcNAQEL BQAwgYkxCzAJBgNVBAYTAklOMQwwCgYDVQQIDANLQVIxDDAKBgNVBAcMA0JMUjEM MAoGA1UECgwDUkhUMQswCQYDVQQLDAJCVTEkMCIGA1UEAwwbY2VwaC1zc2wtcmhj czUtOGRjeHY2LW5vZGU1MR0wGwYJKoZIhvcNAQkBFg5hYmNAcmVkaGF0LmNvbTAe -----END CERTIFICATE-----",
"ceph orch apply -i rgw.yml",
"mkfs.ext4 nvme-drive-path",
"mkfs.ext4 /dev/nvme0n1 mount /dev/nvme0n1 /mnt/nvme0n1/",
"mkdir <nvme-mount-path>/cache-directory-name",
"mkdir /mnt/nvme0n1/rgw_datacache",
"chmod a+rwx nvme-mount-path ; chmod a+rwx rgw_d3n_l1_datacache_persistent_path",
"chmod a+rwx /mnt/nvme0n1 ; chmod a+rwx /mnt/nvme0n1/rgw_datacache/",
"\"extra_container_args: \"-v\" \"rgw_d3n_l1_datacache_persistent_path:rgw_d3n_l1_datacache_persistent_path\" \"",
"cat rgw-spec.yml service_type: rgw service_id: rgw.test placement: hosts: host1 host2 extra_container_args: \"-v\" \"/mnt/nvme0n1/rgw_datacache/:/mnt/nvme0n1/rgw_datacache/\"",
"\"extra_container_args: \"-v\" \"/mnt/nvme0n1/rgw_datacache/rgw1/:/mnt/nvme0n1/rgw_datacache/rgw1/\" \"-v\" \"/mnt/nvme0n1/rgw_datacache/rgw2/:/mnt/nvme0n1/rgw_datacache/rgw2/\" \"",
"cat rgw-spec.yml service_type: rgw service_id: rgw.test placement: hosts: host1 host2 count_per_host: 2 extra_container_args: \"-v\" \"/mnt/nvme0n1/rgw_datacache/rgw1/:/mnt/nvme0n1/rgw_datacache/rgw1/\" \"-v\" \"/mnt/nvme0n1/rgw_datacache/rgw2/:/mnt/nvme0n1/rgw_datacache/rgw2/\"",
"ceph orch apply -i rgw-spec.yml",
"ceph config set <client.rgw> <CONF-OPTION> <VALUE>",
"rgw_d3n_l1_datacache_persistent_path=/mnt/nvme/rgw_datacache/",
"rgw_d3n_l1_datacache_size=10737418240",
"fallocate -l 1G ./1G.dat s3cmd mb s3://bkt s3cmd put ./1G.dat s3://bkt",
"s3cmd get s3://bkt/1G.dat /dev/shm/1G_get.dat download: 's3://bkt/1G.dat' -> './1G_get.dat' [1 of 1] 1073741824 of 1073741824 100% in 13s 73.94 MB/s done",
"ls -lh /mnt/nvme/rgw_datacache rw-rr. 1 ceph ceph 1.0M Jun 2 06:18 cc7f967c-0021-43b2-9fdf-23858e868663.615391.1_shadow.ZCiCtMWeu_19wb100JIEZ-o4tv2IyA_1",
"s3cmd get s3://bkt/1G.dat /dev/shm/1G_get.dat download: 's3://bkt/1G.dat' -> './1G_get.dat' [1 of 1] 1073741824 of 1073741824 100% in 6s 155.07 MB/s done",
"ceph config set client.rgw debug_rgw VALUE",
"ceph config set client.rgw debug_rgw 20",
"ceph --admin-daemon /var/run/ceph/ceph-client.rgw. NAME .asok config set debug_rgw VALUE",
"ceph --admin-daemon /var/run/ceph/ceph-client.rgw.rgw.asok config set debug_rgw 20",
"ceph config set global log_to_file true ceph config set global mon_cluster_log_to_file true",
"ceph config set client.rgw OPTION VALUE",
"ceph config set client.rgw rgw_enable_static_website true ceph config set client.rgw rgw_enable_apis s3,s3website ceph config set client.rgw rgw_dns_name objects-zonegroup.example.com ceph config set client.rgw rgw_dns_s3website_name objects-website-zonegroup.example.com ceph config set client.rgw rgw_resolve_cname true",
"objects-zonegroup.domain.com. IN A 192.0.2.10 objects-zonegroup.domain.com. IN AAAA 2001:DB8::192:0:2:10 *.objects-zonegroup.domain.com. IN CNAME objects-zonegroup.domain.com. objects-website-zonegroup.domain.com. IN A 192.0.2.20 objects-website-zonegroup.domain.com. IN AAAA 2001:DB8::192:0:2:20",
"*.objects-website-zonegroup.domain.com. IN CNAME objects-website-zonegroup.domain.com.",
"http://bucket1.objects-website-zonegroup.domain.com",
"www.example.com. IN CNAME bucket2.objects-website-zonegroup.domain.com.",
"http://www.example.com",
"www.example.com. IN CNAME www.example.com.objects-website-zonegroup.domain.com.",
"http://www.example.com",
"www.example.com. IN A 192.0.2.20 www.example.com. IN AAAA 2001:DB8::192:0:2:20",
"http://www.example.com",
"[root@host01 ~] touch ingress.yaml",
"service_type: ingress 1 service_id: SERVICE_ID 2 placement: 3 hosts: - HOST1 - HOST2 - HOST3 spec: backend_service: SERVICE_ID virtual_ip: IP_ADDRESS / CIDR 4 frontend_port: INTEGER 5 monitor_port: INTEGER 6 virtual_interface_networks: 7 - IP_ADDRESS / CIDR ssl_cert: | 8",
"service_type: ingress service_id: rgw.foo placement: hosts: - host01.example.com - host02.example.com - host03.example.com spec: backend_service: rgw.foo virtual_ip: 192.168.1.2/24 frontend_port: 8080 monitor_port: 1967 virtual_interface_networks: - 10.10.0.0/16 ssl_cert: | -----BEGIN CERTIFICATE----- MIIEpAIBAAKCAQEA+Cf4l9OagD6x67HhdCy4Asqw89Zz9ZuGbH50/7ltIMQpJJU0 gu9ObNtIoC0zabJ7n1jujueYgIpOqGnhRSvsGJiEkgN81NLQ9rqAVaGpadjrNLcM bpgqJCZj0vzzmtFBCtenpb5l/EccMFcAydGtGeLP33SaWiZ4Rne56GBInk6SATI/ JSKweGD1y5GiAWipBR4C74HiAW9q6hCOuSdp/2WQxWT3T1j2sjlqxkHdtInUtwOm j5Ism276IndeQ9hR3reFR8PJnKIPx73oTBQ7p9CMR1J4ucq9Ny0J12wQYT00fmJp -----END CERTIFICATE----- -----BEGIN PRIVATE KEY----- MIIEBTCCAu2gAwIBAgIUGfYFsj8HyA9Zv2l600hxzT8+gG4wDQYJKoZIhvcNAQEL BQAwgYkxCzAJBgNVBAYTAklOMQwwCgYDVQQIDANLQVIxDDAKBgNVBAcMA0JMUjEM MAoGA1UECgwDUkhUMQswCQYDVQQLDAJCVTEkMCIGA1UEAwwbY2VwaC1zc2wtcmhj czUtOGRjeHY2LW5vZGU1MR0wGwYJKoZIhvcNAQkBFg5hYmNAcmVkaGF0LmNvbTAe -----END PRIVATE KEY-----",
"cephadm shell --mount ingress.yaml:/var/lib/ceph/radosgw/ingress.yaml",
"ceph config set mgr mgr/cephadm/container_image_haproxy HAPROXY_IMAGE_ID ceph config set mgr mgr/cephadm/container_image_keepalived KEEPALIVED_IMAGE_ID",
"ceph config set mgr mgr/cephadm/container_image_haproxy registry.redhat.io/rhceph/rhceph-haproxy-rhel8:latest ceph config set mgr mgr/cephadm/container_image_keepalived registry.redhat.io/rhceph/keepalived-rhel8:latest",
"ceph config set mgr mgr/cephadm/container_image_haproxy registry.redhat.io/rhceph/rhceph-haproxy-rhel9:latest ceph config set mgr mgr/cephadm/container_image_keepalived registry.redhat.io/rhceph/keepalived-rhel9:latest",
"ceph orch apply -i /var/lib/ceph/radosgw/ingress.yaml",
"ip addr show",
"wget HOST_NAME",
"wget host01.example.com",
"<?xml version=\"1.0\" encoding=\"UTF-8\"?> <ListAllMyBucketsResult xmlns=\"http://s3.amazonaws.com/doc/2006-03-01/\"> <Owner> <ID>anonymous</ID> <DisplayName></DisplayName> </Owner> <Buckets> </Buckets> </ListAllMyBucketsResult>"
] |
https://docs.redhat.com/en/documentation/red_hat_ceph_storage/6/html/object_gateway_guide/basic-configuration
|
3.9. Software Collection Log File Support
|
3.9. Software Collection Log File Support By default, programs packaged in a Software Collection create log files in the /opt/ provider /%{scl}/root/var/log/ directory. To make log files more accessible and easier to manage, you are advised to use the nfsmountable macro that redefines the _localstatedir macro. This results in log files being created underneath the /var/opt/ provider /%{scl}/log/ directory, outside of the /opt/ provider /%{scl} file system hierarchy. For example, a service mydaemon normally stores its log file in /var/log/mydaemon/mydaemond.log in the base system installation. When mydaemon is packaged as a software_collection Software Collection and the nfsmountable macro is defined, the path to the log file in software_collection is as follows: For more information on using the nfsmountable macro, see Section 3.1, "Using Software Collections over NFS" .
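A minimal sketch of how this fits together, assuming the same illustrative names used above ( software_collection , provider , and the mydaemon package): the nfsmountable macro is defined in the Software Collection metapackage spec file, and packages in the Collection then derive their log paths from the redefined _localstatedir macro.
# In the Software Collection metapackage spec file, enable the macro:
%global nfsmountable 1
# Packages in the Collection can then refer to the redefined macro, for example:
#   %{_localstatedir}/log/mydaemon/mydaemond.log
# which expands to /var/opt/provider/software_collection/log/mydaemon/mydaemond.log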
|
[
"/var/opt/ provider / software_collection /log/ mydaemon / mydaemond.log"
] |
https://docs.redhat.com/en/documentation/red_hat_software_collections/3/html/packaging_guide/sect-software_collection_log_file_support
|
Chapter 67. ListenerStatus schema reference
|
Chapter 67. ListenerStatus schema reference Used in: KafkaStatus Property Property type Description type string The type property has been deprecated. The type property is not used anymore. Use the name property with the same value. The name of the listener. name string The name of the listener. addresses ListenerAddress array A list of the addresses for this listener. bootstrapServers string A comma-separated list of host:port pairs for connecting to the Kafka cluster using this listener. certificates string array A list of TLS certificates which can be used to verify the identity of the server when connecting to the given listener. Set only for tls and external listeners.
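A minimal sketch of how these properties can be read from a running cluster, assuming a Kafka resource named my-cluster in the kafka namespace (illustrative names only):
# Print the listener status entries, including addresses and bootstrapServers
kubectl get kafka my-cluster -n kafka -o jsonpath='{.status.listeners}'
# Or view the full status section in YAML form
kubectl get kafka my-cluster -n kafka -o yaml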
| null |
https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.7/html/streams_for_apache_kafka_api_reference/type-ListenerStatus-reference
|
Preface
|
Preface Preface
| null |
https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.7/html/kafka_configuration_tuning/preface
|
Preface
|
Preface Red Hat Ansible Automation Platform is a unified automation solution that automates a variety of IT processes, including provisioning, configuration management, application deployment, orchestration, and security and compliance changes (including patching systems). Ansible Automation Platform features a platform interface where you can set up centralized authentication, configure access management, and execute automation tasks from a single location. This guide will help you get started with Ansible Automation Platform by introducing three central concepts: automation execution, automation decisions, and automation content.
| null |
https://docs.redhat.com/en/documentation/red_hat_ansible_automation_platform/2.5/html/getting_started_with_ansible_automation_platform/pr01
|
4.6. Securing Virtual Private Networks (VPNs) Using Libreswan
|
4.6. Securing Virtual Private Networks (VPNs) Using Libreswan In Red Hat Enterprise Linux 7, a Virtual Private Network ( VPN ) can be configured using the IPsec protocol which is supported by the Libreswan application. Libreswan is a continuation of the Openswan application and many examples from the Openswan documentation are interchangeable with Libreswan . The NetworkManager IPsec plug-in is called NetworkManager-libreswan . Users of GNOME Shell should install the NetworkManager-libreswan-gnome package, which has NetworkManager-libreswan as a dependency. Note that the NetworkManager-libreswan-gnome package is only available from the Optional channel. See Enabling Supplementary and Optional Repositories . The IPsec protocol for VPN is itself configured using the Internet Key Exchange ( IKE ) protocol. The terms IPsec and IKE are used interchangeably. An IPsec VPN is also called an IKE VPN, IKEv2 VPN, XAUTH VPN, Cisco VPN or IKE/IPsec VPN. A variant of an IPsec VPN that also uses the Level 2 Tunneling Protocol ( L2TP ) is usually called an L2TP/IPsec VPN, which requires the Optional channel xl2tpd application. Libreswan is an open-source, user-space IKE implementation available in Red Hat Enterprise Linux 7. IKE versions 1 and 2 are implemented as a user-level daemon. The IKE protocol itself is also encrypted. The IPsec protocol is implemented by the Linux kernel and Libreswan configures the kernel to add and remove VPN tunnel configurations. The IKE protocol uses UDP ports 500 and 4500. The IPsec protocol consists of two different protocols, Encapsulated Security Payload ( ESP ) which has protocol number 50, and Authenticated Header ( AH ) which has protocol number 51. The AH protocol is not recommended for use. Users of AH are recommended to migrate to ESP with null encryption. The IPsec protocol has two different modes of operation, Tunnel Mode (the default) and Transport Mode . It is possible to configure the kernel with IPsec without IKE. This is called Manual Keying . It is possible to configure manual keying using the ip xfrm commands; however, this is strongly discouraged for security reasons. Libreswan interfaces with the Linux kernel using netlink. Packet encryption and decryption happen in the Linux kernel. Libreswan uses the Network Security Services ( NSS ) cryptographic library. Both libreswan and NSS are certified for use with the Federal Information Processing Standard ( FIPS ) Publication 140-2. Important IKE / IPsec VPNs, implemented by Libreswan and the Linux kernel, are the only VPN technology recommended for use in Red Hat Enterprise Linux 7. Do not use any other VPN technology without understanding the risks of doing so. 4.6.1. Installing Libreswan To install Libreswan , enter the following command as root : To check that Libreswan is installed: After a new installation of Libreswan , the NSS database should be initialized as part of the installation process. Before you start a new database, remove the old database as follows: Then, to initialize a new NSS database, enter the following command as root : Only when operating in FIPS mode is it necessary to protect the NSS database with a password.
To initialize the database for FIPS mode, instead of the previous command, use: To start the ipsec daemon provided by Libreswan , issue the following command as root : To confirm that the daemon is now running: To ensure that Libreswan will start when the system starts, issue the following command as root : Configure any intermediate as well as host-based firewalls to permit the ipsec service. See Chapter 5, Using Firewalls for information on firewalls and allowing specific services to pass through. Libreswan requires the firewall to allow the following packets: UDP ports 500 and 4500 for the Internet Key Exchange ( IKE ) protocol Protocol 50 for Encapsulated Security Payload ( ESP ) IPsec packets Protocol 51 for Authenticated Header ( AH ) IPsec packets (uncommon) A firewalld example covering these rules is sketched at the end of this section. We present three examples of using Libreswan to set up an IPsec VPN. The first example is for connecting two hosts together so that they may communicate securely. The second example is connecting two sites together to form one network. The third example is supporting remote users, known as road warriors in this context. 4.6.2. Creating VPN Configurations Using Libreswan Libreswan does not use the terms " source " and " destination " or " server " and " client " since IKE/IPsec are peer-to-peer protocols. Instead, it uses the terms " left " and " right " to refer to end points (the hosts). This also allows the same configuration to be used on both end points in most cases, although a lot of administrators choose to always use " left " for the local host and " right " for the remote host. There are four commonly used methods for authentication of endpoints: Pre-Shared Keys ( PSK ) is the simplest authentication method. PSKs should consist of random characters and have a length of at least 20 characters. In FIPS mode, PSKs need to comply with a minimum strength requirement depending on the integrity algorithm used. It is recommended not to use PSKs shorter than 64 random characters. Raw RSA keys are commonly used for static host-to-host or subnet-to-subnet IPsec configurations. The hosts are manually configured with each other's public RSA key. This method does not scale well when dozens or more hosts all need to set up IPsec tunnels to each other. X.509 certificates are commonly used for large-scale deployments where there are many hosts that need to connect to a common IPsec gateway. A central certificate authority ( CA ) is used to sign RSA certificates for hosts or users. This central CA is responsible for relaying trust, including the revocations of individual hosts or users. NULL Authentication is used to gain mesh encryption without authentication. It protects against passive attacks but does not protect against active attacks. However, since IKEv2 allows asymmetrical authentication methods, NULL Authentication can also be used for internet scale Opportunistic IPsec, where clients authenticate the server, but servers do not authenticate the client. This model is similar to secure websites using TLS (also known as https:// websites). In addition to these authentication methods, an additional authentication can be added to protect against possible attacks by quantum computers. This additional authentication method is called Postquantum Preshared Keys ( PPK ). Individual clients or groups of clients can use their own PPK by specifying a PPK ID ( PPKID ) that corresponds to an out-of-band configured PreShared Key. See Section 4.6.9, "Using the Protection against Quantum Computers" . 4.6.3.
Creating Host-To-Host VPN Using Libreswan To configure Libreswan to create a host-to-host IPsec VPN, between two hosts referred to as " left " and " right " , enter the following commands as root on both of the hosts ( " left " and " right " ) to create new raw RSA key pairs: This generates an RSA key pair for the host. The process of generating RSA keys can take many minutes, especially on virtual machines with low entropy. To view the host public key so it can be specified in a configuration as the " left " side, issue the following command as root on the host where the new hostkey was added, using the CKAID returned by the " newhostkey " command: You will need this key to add to the configuration file on both hosts as explained below. If you forgot the CKAID, you can obtain a list of all host keys on a machine using: The secret part of the keypair is stored inside the " NSS database " which resides in /etc/ipsec.d/*.db . To make a configuration file for this host-to-host tunnel, the lines leftrsasigkey= and rightrsasigkey= from above are added to a custom configuration file placed in the /etc/ipsec.d/ directory. Using an editor running as root , create a file with a suitable name in the following format: /etc/ipsec.d/my_host-to-host.conf Edit the file as follows: Public keys can also be configured by their CKAID instead of by their RSAID. In that case use " leftckaid= " instead of " leftrsasigkey= " You can use the identical configuration file on both left and right hosts. Libreswan automatically detects if it is " left " or " right " based on the specified IP addresses or hostnames. If one of the hosts is a mobile host, which implies the IP address is not known in advance, then on the mobile client use %defaultroute as its IP address. This will pick up the dynamic IP address automatically. On the static server host that accepts connections from incoming mobile hosts, specify the mobile host using %any for its IP address. Ensure the leftrsasigkey value is obtained from the " left " host and the rightrsasigkey value is obtained from the " right " host. The same applies when using leftckaid and rightckaid . Restart ipsec to ensure it reads the new configuration and if configured to start on boot, to confirm that the tunnels establish: When using the auto=start option, the IPsec tunnel should be established within a few seconds. You can manually load and start the tunnel by entering the following commands as root : 4.6.3.1. Verifying Host-To-Host VPN Using Libreswan The IKE negotiation takes place on UDP ports 500 and 4500. IPsec packets show up as Encapsulated Security Payload (ESP) packets. The ESP protocol has no ports. When the VPN connection needs to pass through a NAT router, the ESP packets are encapsulated in UDP packets on port 4500. To verify that packets are being sent through the VPN tunnel, issue a command as root in the following format: Where interface is the interface known to carry the traffic. To end the capture with tcpdump , press Ctrl + C . Note The tcpdump command interacts a little unexpectedly with IPsec . It only sees the outgoing encrypted packet, not the outgoing plaintext packet. It does see the encrypted incoming packet, as well as the decrypted incoming packet. If possible, run tcpdump on a router between the two machines and not on one of the endpoints itself. When using the Virtual Tunnel Interface (VTI), tcpdump on the physical interface shows ESP packets, while tcpdump on the VTI interface shows the cleartext traffic. 
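A minimal sketch for generating traffic to observe, using the example addresses from the host-to-host configuration in this section; the interface name eth0 is an assumption and should be replaced with the interface that carries the traffic:
# On the "left" host (192.1.2.23), send traffic to the "right" host
ping -c 4 192.1.2.45
# In a second terminal, the capture described above should show ESP packets
# instead of plaintext ICMP
tcpdump -n -i eth0 esp or udp port 500 or udp port 4500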
To check the tunnel is succesfully established, and additionally see how much traffic has gone through the tunnel, enter the following command as root : 4.6.4. Configuring Site-to-Site VPN Using Libreswan In order for Libreswan to create a site-to-site IPsec VPN, joining together two networks, an IPsec tunnel is created between two hosts, endpoints, which are configured to permit traffic from one or more subnets to pass through. They can therefore be thought of as gateways to the remote portion of the network. The configuration of the site-to-site VPN only differs from the host-to-host VPN in that one or more networks or subnets must be specified in the configuration file. To configure Libreswan to create a site-to-site IPsec VPN, first configure a host-to-host IPsec VPN as described in Section 4.6.3, "Creating Host-To-Host VPN Using Libreswan" and then copy or move the file to a file with a suitable name, such as /etc/ipsec.d/my_site-to-site.conf . Using an editor running as root , edit the custom configuration file /etc/ipsec.d/my_site-to-site.conf as follows: To bring the tunnels up, restart Libreswan or manually load and initiate all the connections using the following commands as root : 4.6.4.1. Verifying Site-to-Site VPN Using Libreswan Verifying that packets are being sent through the VPN tunnel is the same procedure as explained in Section 4.6.3.1, "Verifying Host-To-Host VPN Using Libreswan" . 4.6.5. Configuring Site-to-Site Single Tunnel VPN Using Libreswan Often, when a site-to-site tunnel is built, the gateways need to communicate with each other using their internal IP addresses instead of their public IP addresses. This can be accomplished using a single tunnel. If the left host, with host name west , has internal IP address 192.0.1.254 and the right host, with host name east , has internal IP address 192.0.2.254 , store the following configuration using a single tunnel to the /etc/ipsec.d/myvpn.conf file on both servers: 4.6.6. Configuring Subnet Extrusion Using Libreswan IPsec is often deployed in a hub-and-spoke architecture. Each leaf node has an IP range that is part of a larger range. Leaves communicate with each other through the hub. This is called subnet extrusion . Example 4.2. Configuring Simple Subnet Extrusion Setup In the following example, we configure the head office with 10.0.0.0/8 and two branches that use a smaller /24 subnet. At the head office: At the " branch1 " office, we use the same connection. Additionally, we use a pass-through connection to exclude our local LAN traffic from being sent through the tunnel: 4.6.7. Configuring IKEv2 Remote Access VPN Libreswan Road warriors are traveling users with mobile clients with a dynamically assigned IP address, such as laptops. These are authenticated using certificates. To avoid needing to use the old IKEv1 XAUTH protocol, IKEv2 is used in the following example: On the server: Where: left= 1.2.3.4 The 1.2.3.4 value specifies the actual IP address or host name of your server. leftcert=vpn-server.example.com This option specifies a certificate referring to its friendly name or nickname that has been used to import the certificate. Usually, the name is generated as a part of a PKCS #12 certificate bundle in the form of a .p12 file. See the pkcs12(1) and pk12util(1) man pages for more information. On the mobile client, the road warrior's device, use a slight variation of the configuration: Where: auto=start This option enables the user to connect to the VPN whenever the ipsec system service is started. 
Replace it with the auto=add if you want to establish the connection later. 4.6.8. Configuring IKEv1 Remote Access VPN Libreswan and XAUTH with X.509 Libreswan offers a method to natively assign IP address and DNS information to roaming VPN clients as the connection is established by using the XAUTH IPsec extension. Extended authentication (XAUTH) can be deployed using PSK or X.509 certificates. Deploying using X.509 is more secure. Client certificates can be revoked by a certificate revocation list or by Online Certificate Status Protocol ( OCSP ). With X.509 certificates, individual clients cannot impersonate the server. With a PSK, also called Group Password, this is theoretically possible. XAUTH requires the VPN client to additionally identify itself with a user name and password. For One time Passwords (OTP), such as Google Authenticator or RSA SecureID tokens, the one-time token is appended to the user password. There are three possible back ends for XAUTH: xauthby=pam This uses the configuration in /etc/pam.d/pluto to authenticate the user. Pluggable Authentication Modules (PAM) can be configured to use various back ends by itself. It can use the system account user-password scheme, an LDAP directory, a RADIUS server or a custom password authentication module. See the Using Pluggable Authentication Modules (PAM) chapter for more information. xauthby=file This uses the /etc/ipsec.d/passwd configuration file (it should not be confused with the /etc/ipsec.d/nsspassword file). The format of this file is similar to the Apache .htpasswd file and the Apache htpasswd command can be used to create entries in this file. However, after the user name and password, a third column is required with the connection name of the IPsec connection used, for example when using a conn remoteusers to offer VPN to remote users, a password file entry should look as follows: user1:USDapr1USDMIwQ3DHbUSD1I69LzTnZhnCT2DPQmAOK.:remoteusers Note When using the htpasswd command, the connection name has to be manually added after the user:password part on each line. xauthby=alwaysok The server always pretends the XAUTH user and password combination is correct. The client still has to specify a user name and a password, although the server ignores these. This should only be used when users are already identified by X.509 certificates, or when testing the VPN without needing an XAUTH back end. An example server configuration with X.509 certificates: When xauthfail is set to soft, instead of hard, authentication failures are ignored, and the VPN is set up as if the user authenticated properly. A custom updown script can be used to check for the environment variable XAUTH_FAILED . Such users can then be redirected, for example, using iptables DNAT, to a " walled garden " where they can contact the administrator or renew a paid subscription to the service. VPN clients use the modecfgdomain value and the DNS entries to redirect queries for the specified domain to these specified nameservers. This allows roaming users to access internal-only resources using the internal DNS names. Note that while IKEv2 supports a comma-separated list of domain names and nameserver IP addresses using modecfgdomains and modecfgdns , the IKEv1 protocol only supports one domain name, and libreswan only supports up to two nameserver IP addresses. Optionally, to send a banner text to VPN clients, use the modecfgbanner option. If leftsubnet is not 0.0.0.0/0 , split tunneling configuration requests are sent automatically to the client.
For example, when using leftsubnet=10.0.0.0/8 , the VPN client would only send traffic for 10.0.0.0/8 through the VPN. On the client, the user has to input a user password, which depends on the backend used. For example: xauthby=file The administrator generated the password and stored it in the /etc/ipsec.d/passwd file. xauthby=pam The password is obtained at the location specified in the PAM configuration in the /etc/pam.d/pluto file. xauthby=alwaysok The password is not checked and always accepted. Use this option for testing purposes or if you want to ensure compatibility for xauth-only clients. Additional Resources For more information about XAUTH, see the Extended Authentication within ISAKMP/Oakley (XAUTH) Internet-Draft document. 4.6.9. Using the Protection against Quantum Computers Using IKEv1 with PreShared Keys provided protection against quantum attackers. The redesign of IKEv2 does not offer this protection natively. Libreswan offers the use of Postquantum Preshared Keys ( PPK ) to protect IKEv2 connections against quantum attacks. To enable optional PPK support, add ppk=yes to the connection definition. To require PPK, add ppk=insist . Then, each client can be given a PPK ID with a secret value that is communicated out-of-band (and preferably quantum safe). The PPK's should be very strong in randomness and not be based on dictionary words. The PPK ID and PPK data itself are stored in ipsec.secrets , for example: The PPKS option refers to static PPKs. There is an experimental function to use one-time-pad based Dynamic PPKs. Upon each connection, a new part of a onetime pad is used as the PPK. When used, that part of the dynamic PPK inside the file is overwritten with zeroes to prevent re-use. If there is no more one time pad material left, the connection fails. See the ipsec.secrets(5) man page for more information. Warning The implementation of dynamic PPKs is provided as a Technology Preview and this functionality should be used with caution. See the 7.5 Release Notes for more information. 4.6.10. Additional Resources The following sources of information provide additional resources regarding Libreswan and the ipsec daemon. 4.6.10.1. Installed Documentation ipsec(8) man page - Describes command options for ipsec . ipsec.conf(5) man page - Contains information on configuring ipsec . ipsec.secrets(5) man page - Describes the format of the ipsec.secrets file. ipsec_auto(8) man page - Describes the use of the auto command line client for manipulating Libreswan IPsec connections established using automatic exchanges of keys. ipsec_rsasigkey(8) man page - Describes the tool used to generate RSA signature keys. /usr/share/doc/libreswan- version / 4.6.10.2. Online Documentation https://libreswan.org The website of the upstream project. https://libreswan.org/wiki The Libreswan Project Wiki. https://libreswan.org/man/ All Libreswan man pages. NIST Special Publication 800-77: Guide to IPsec VPNs Practical guidance to organizations on implementing security services based on IPsec.
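As referenced in Section 4.6.1, the following is a minimal firewalld sketch for permitting the required IKE, ESP, and AH traffic; it assumes the host runs firewalld and that the predefined ipsec service (covering UDP ports 500 and 4500 and the ESP and AH protocols) is available in your firewalld build:
# Permanently allow the IPsec-related traffic and reload the firewall
firewall-cmd --permanent --add-service="ipsec"
firewall-cmd --reload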
|
[
"~]# yum install libreswan",
"~]USD yum info libreswan",
"~]# systemctl stop ipsec ~]# rm /etc/ipsec.d/*db",
"~]# ipsec initnss Initializing NSS database",
"~]# certutil -N -d sql:/etc/ipsec.d Enter a password which will be used to encrypt your keys. The password should be at least 8 characters long, and should contain at least one non-alphabetic character. Enter new password: Re-enter password:",
"~]# systemctl start ipsec",
"~]USD systemctl status ipsec * ipsec.service - Internet Key Exchange (IKE) Protocol Daemon for IPsec Loaded: loaded (/usr/lib/systemd/system/ipsec.service; disabled; vendor preset: disabled) Active: active (running) since Sun 2018-03-18 18:44:43 EDT; 3s ago Docs: man:ipsec(8) man:pluto(8) man:ipsec.conf(5) Process: 20358 ExecStopPost=/usr/sbin/ipsec --stopnflog (code=exited, status=0/SUCCESS) Process: 20355 ExecStopPost=/sbin/ip xfrm state flush (code=exited, status=0/SUCCESS) Process: 20352 ExecStopPost=/sbin/ip xfrm policy flush (code=exited, status=0/SUCCESS) Process: 20347 ExecStop=/usr/libexec/ipsec/whack --shutdown (code=exited, status=0/SUCCESS) Process: 20634 ExecStartPre=/usr/sbin/ipsec --checknflog (code=exited, status=0/SUCCESS) Process: 20631 ExecStartPre=/usr/sbin/ipsec --checknss (code=exited, status=0/SUCCESS) Process: 20369 ExecStartPre=/usr/libexec/ipsec/_stackmanager start (code=exited, status=0/SUCCESS) Process: 20366 ExecStartPre=/usr/libexec/ipsec/addconn --config /etc/ipsec.conf --checkconfig (code=exited, status=0/SUCCESS) Main PID: 20646 (pluto) Status: \"Startup completed.\" CGroup: /system.slice/ipsec.service └─20646 /usr/libexec/ipsec/pluto --leak-detective --config /etc/ipsec.conf --nofork",
"~]# systemctl enable ipsec",
"~]# ipsec newhostkey --output /etc/ipsec.d/hostkey.secrets Generated RSA key pair with CKAID 14936e48e756eb107fa1438e25a345b46d80433f was stored in the NSS database",
"~]# ipsec showhostkey --left --ckaid 14936e48e756eb107fa1438e25a345b46d80433f # rsakey AQPFKElpV leftrsasigkey=0sAQPFKElpV2GdCF0Ux9Kqhcap53Kaa+uCgduoT2I3x6LkRK8N+GiVGkRH4Xg+WMrzRb94kDDD8m/BO/Md+A30u0NjDk724jWuUU215rnpwvbdAob8pxYc4ReSgjQ/DkqQvsemoeF4kimMU1OBPNU7lBw4hTBFzu+iVUYMELwQSXpremLXHBNIamUbe5R1+ibgxO19l/PAbZwxyGX/ueBMBvSQ+H0UqdGKbq7UgSEQTFa4/gqdYZDDzx55tpZk2Z3es+EWdURwJOgGiiiIFuBagasHFpeu9Teb1VzRyytnyNiJCBVhWVqsB4h6eaQ9RpAMmqBdBeNHfXwb6/hg+JIKJgjidXvGtgWBYNDpG40fEFh9USaFlSdiHO+dmGyZQ74Rg9sWLtiVdlH1YEBUtQb8f8FVry9wSn6AZqPlpGgUdtkTYUCaaifsYH4hoIA0nku4Fy/Ugej89ZdrSN7Lt+igns4FysMmBOl9Wi9+LWnfl+dm4Nc6UNgLE8kZc+8vMJGkLi4SYjk2/MFYgqGX/COxSCPBFUZFiNK7Wda0kWea/FqE1heem7rvKAPIiqMymjSmytZI9hhkCD16pCdgrO3fJXsfAUChYYSPyPQClkavvBL/wNK9zlaOwssTaKTj4Xn90SrZaxTEjpqUeQ==",
"~]# ipsec showhostkey --list < 1 > RSA keyid: AQPFKElpV ckaid: 14936e48e756eb107fa1438e25a345b46d80433f",
"conn mytunnel [email protected] left=192.1.2.23 leftrsasigkey=0sAQOrlo+hOafUZDlCQmXFrje/oZm [...] W2n417C/4urYHQkCvuIQ== [email protected] right=192.1.2.45 rightrsasigkey=0sAQO3fwC6nSSGgt64DWiYZzuHbc4 [...] D/v8t5YTQ== authby=rsasig # load and initiate automatically auto=start",
"~]# systemctl restart ipsec",
"~]# ipsec auto --add mytunnel ~]# ipsec auto --up mytunnel",
"~]# tcpdump -n -i interface esp or udp port 500 or udp port 4500 00:32:32.632165 IP 192.1.2.45 > 192.1.2.23: ESP(spi=0x63ad7e17,seq=0x1a), length 132 00:32:32.632592 IP 192.1.2.23 > 192.1.2.45: ESP(spi=0x4841b647,seq=0x1a), length 132 00:32:32.632592 IP 192.0.2.254 > 192.0.1.254: ICMP echo reply, id 2489, seq 7, length 64 00:32:33.632221 IP 192.1.2.45 > 192.1.2.23: ESP(spi=0x63ad7e17,seq=0x1b), length 132 00:32:33.632731 IP 192.1.2.23 > 192.1.2.45: ESP(spi=0x4841b647,seq=0x1b), length 132 00:32:33.632731 IP 192.0.2.254 > 192.0.1.254: ICMP echo reply, id 2489, seq 8, length 64 00:32:34.632183 IP 192.1.2.45 > 192.1.2.23: ESP(spi=0x63ad7e17,seq=0x1c), length 132 00:32:34.632607 IP 192.1.2.23 > 192.1.2.45: ESP(spi=0x4841b647,seq=0x1c), length 132 00:32:34.632607 IP 192.0.2.254 > 192.0.1.254: ICMP echo reply, id 2489, seq 9, length 64 00:32:35.632233 IP 192.1.2.45 > 192.1.2.23: ESP(spi=0x63ad7e17,seq=0x1d), length 132 00:32:35.632685 IP 192.1.2.23 > 192.1.2.45: ESP(spi=0x4841b647,seq=0x1d), length 132 00:32:35.632685 IP 192.0.2.254 > 192.0.1.254: ICMP echo reply, id 2489, seq 10, length 64",
"~]# ipsec whack --trafficstatus 006 #2: \"mytunnel\", type=ESP, add_time=1234567890, inBytes=336, outBytes=336, id='@east'",
"conn mysubnet also=mytunnel leftsubnet=192.0.1.0/24 rightsubnet=192.0.2.0/24 auto=start conn mysubnet6 also=mytunnel connaddrfamily=ipv6 leftsubnet=2001:db8:0:1::/64 rightsubnet=2001:db8:0:2::/64 auto=start conn mytunnel [email protected] left=192.1.2.23 leftrsasigkey=0sAQOrlo+hOafUZDlCQmXFrje/oZm [...] W2n417C/4urYHQkCvuIQ== [email protected] right=192.1.2.45 rightrsasigkey=0sAQO3fwC6nSSGgt64DWiYZzuHbc4 [...] D/v8t5YTQ== authby=rsasig",
"~]# ipsec auto --add mysubnet",
"~]# ipsec auto --add mysubnet6",
"~]# ipsec auto --up mysubnet 104 \"mysubnet\" #1: STATE_MAIN_I1: initiate 003 \"mysubnet\" #1: received Vendor ID payload [Dead Peer Detection] 003 \"mytunnel\" #1: received Vendor ID payload [FRAGMENTATION] 106 \"mysubnet\" #1: STATE_MAIN_I2: sent MI2, expecting MR2 108 \"mysubnet\" #1: STATE_MAIN_I3: sent MI3, expecting MR3 003 \"mysubnet\" #1: received Vendor ID payload [CAN-IKEv2] 004 \"mysubnet\" #1: STATE_MAIN_I4: ISAKMP SA established {auth=OAKLEY_RSA_SIG cipher=aes_128 prf=oakley_sha group=modp2048} 117 \"mysubnet\" #2: STATE_QUICK_I1: initiate 004 \"mysubnet\" #2: STATE_QUICK_I2: sent QI2, IPsec SA established tunnel mode {ESP=>0x9414a615 <0x1a8eb4ef xfrm=AES_128-HMAC_SHA1 NATOA=none NATD=none DPD=none}",
"~]# ipsec auto --up mysubnet6 003 \"mytunnel\" #1: received Vendor ID payload [FRAGMENTATION] 117 \"mysubnet\" #2: STATE_QUICK_I1: initiate 004 \"mysubnet\" #2: STATE_QUICK_I2: sent QI2, IPsec SA established tunnel mode {ESP=>0x06fe2099 <0x75eaa862 xfrm=AES_128-HMAC_SHA1 NATOA=none NATD=none DPD=none}",
"conn mysubnet [email protected] leftrsasigkey=0sAQOrlo+hOafUZDlCQmXFrje/oZm [...] W2n417C/4urYHQkCvuIQ== left=192.1.2.23 leftsourceip=192.0.1.254 leftsubnet=192.0.1.0/24 [email protected] rightrsasigkey=0sAQO3fwC6nSSGgt64DWiYZzuHbc4 [...] D/v8t5YTQ== right=192.1.2.45 rightsourceip=192.0.2.254 rightsubnet=192.0.2.0/24 auto=start authby=rsasig",
"conn branch1 left=1.2.3.4 leftid=@headoffice leftsubnet=0.0.0.0/0 leftrsasigkey=0sA[...] # right=5.6.7.8 rightid=@branch1 rightsubnet=10.0.1.0/24 rightrsasigkey=0sAXXXX[...] # auto=start authby=rsasig conn branch2 left=1.2.3.4 leftid=@headoffice leftsubnet=0.0.0.0/0 leftrsasigkey=0sA[...] # right=10.11.12.13 rightid=@branch2 rightsubnet=10.0.2.0/24 rightrsasigkey=0sAYYYY[...] # auto=start authby=rsasig",
"conn branch1 left=1.2.3.4 leftid=@headoffice leftsubnet=0.0.0.0/0 leftrsasigkey=0sA[...] # right=10.11.12.13 rightid=@branch2 rightsubnet=10.0.1.0/24 rightrsasigkey=0sAYYYY[...] # auto=start authby=rsasig conn passthrough left=1.2.3.4 right=0.0.0.0 leftsubnet=10.0.1.0/24 rightsubnet=10.0.1.0/24 authby=never type=passthrough auto=route",
"conn roadwarriors ikev2=insist # Support (roaming) MOBIKE clients (RFC 4555) mobike=yes fragmentation=yes left=1.2.3.4 # if access to the LAN is given, enable this, otherwise use 0.0.0.0/0 # leftsubnet=10.10.0.0/16 leftsubnet=0.0.0.0/0 leftcert=vpn-server.example.com leftid=%fromcert leftxauthserver=yes leftmodecfgserver=yes right=%any # trust our own Certificate Agency rightca=%same # pick an IP address pool to assign to remote users # 100.64.0.0/16 prevents RFC1918 clashes when remote users are behind NAT rightaddresspool=100.64.13.100-100.64.13.254 # if you want remote clients to use some local DNS zones and servers modecfgdns=\"1.2.3.4, 5.6.7.8\" modecfgdomains=\"internal.company.com, corp\" rightxauthclient=yes rightmodecfgclient=yes authby=rsasig # optionally, run the client X.509 ID through pam to allow/deny client # pam-authorize=yes # load connection, don't initiate auto=add # kill vanished roadwarriors dpddelay=1m dpdtimeout=5m dpdaction=%clear",
"conn to-vpn-server ikev2=insist # pick up our dynamic IP left=%defaultroute leftsubnet=0.0.0.0/0 leftcert=myname.example.com leftid=%fromcert leftmodecfgclient=yes # right can also be a DNS hostname right=1.2.3.4 # if access to the remote LAN is required, enable this, otherwise use 0.0.0.0/0 # rightsubnet=10.10.0.0/16 rightsubnet=0.0.0.0/0 # trust our own Certificate Agency rightca=%same authby=rsasig # allow narrowing to the server's suggested assigned IP and remote subnet narrowing=yes # Support (roaming) MOBIKE clients (RFC 4555) mobike=yes # Initiate connection auto=start",
"conn xauth-rsa ikev2=never auto=add authby=rsasig pfs=no rekey=no left=ServerIP leftcert=vpn.example.com #leftid=%fromcert leftid=vpn.example.com leftsendcert=always leftsubnet=0.0.0.0/0 rightaddresspool=10.234.123.2-10.234.123.254 right=%any rightrsasigkey=%cert modecfgdns=\"1.2.3.4,8.8.8.8\" modecfgdomains=example.com modecfgbanner=\"Authorized access is allowed\" leftxauthserver=yes rightxauthclient=yes leftmodecfgserver=yes rightmodecfgclient=yes modecfgpull=yes xauthby=pam dpddelay=30 dpdtimeout=120 dpdaction=clear ike_frag=yes # for walled-garden on xauth failure # xauthfail=soft # leftupdown=/custom/_updown",
"@west @east : PPKS \"user1\" \"thestringismeanttobearandomstr\""
] |
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/security_guide/sec-securing_virtual_private_networks
|
Chapter 6. Managing storage devices in the web console
|
Chapter 6. Managing storage devices in the web console You can use the web console to configure physical and virtual storage devices. This chapter provides instructions for these devices: Mounted NFS Logical Volumes RAID VDO 6.1. Prerequisites The web console has been installed. For details, see Installing the web console . 6.2. Managing NFS mounts in the web console The web console enables you to mount remote directories using the Network File System (NFS) protocol. NFS makes it possible to reach and mount remote directories located on the network and work with the files as if the directory was located on your physical drive. Prerequisites NFS server name or IP address. Path to the directory on the remote server. 6.2.1. Connecting NFS mounts in the web console The following steps help you connect a remote directory to your file system using NFS. Prerequisites NFS server name or IP address. Path to the directory on the remote server. Procedure Log in to the web console. For details, see Logging in to the web console . Click Storage . Click + in the NFS mounts section. In the New NFS Mount dialog box, enter the server name or IP address of the remote server. In the Path on Server field, enter the path to the directory you want to mount. In the Local Mount Point field, enter the path where you want to find the directory in your local system. Select Mount at boot . This ensures that the directory will also be reachable after the restart of the local system. Optionally, select Mount read only if you do not want to change the content. Click Add . At this point, you can open the mounted directory and verify that the content is accessible. To troubleshoot the connection, you can adjust it with the Custom Mount Options . 6.2.2. Customizing NFS mount options in the web console The following section provides information on how to edit an existing NFS mount and shows you where to add custom mount options. Custom mount options can help you to troubleshoot the connection or change parameters of the NFS mount, such as changing timeout limits or configuring authentication. Prerequisites NFS mount added. Procedure Log in to the web console. For details, see Logging in to the web console . Click Storage . Click on the NFS mount you want to adjust. If the remote directory is mounted, click Unmount . The directory must not be mounted during the custom mount options configuration. Otherwise the web console does not save the configuration and this will cause an error. Click Edit . In the NFS Mount dialog box, select Custom mount option . Enter mount options separated by a comma. For example: nfsvers=4 - the NFS protocol version number soft - type of recovery after an NFS request times out sec=krb5 - files on the NFS server can be secured by Kerberos authentication. Both the NFS client and server have to support Kerberos authentication. For a complete list of the NFS mount options, enter man nfs in the command line. Click Apply . Click Mount . Now you can open the mounted directory and verify that the content is accessible. 6.2.3. Related information For more details on NFS, see the Network File System (NFS) .
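For reference, the same NFS mount can also be set up from the command line. The following sketch is not part of the web console procedure and uses placeholder values (the server name server.example.com, the export /srv/share, and the local mount point /mnt/share) that you need to adapt to your environment:
# create the local mount point
mkdir -p /mnt/share
# make the mount persistent across reboots, using the options discussed above
echo 'server.example.com:/srv/share  /mnt/share  nfs  nfsvers=4,soft  0 0' >> /etc/fstab
# mount it now and verify that the content is accessible
mount /mnt/share
ls /mnt/share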
6.3. Managing Redundant Arrays of Independent Disks in the web console Redundant Arrays of Independent Disks (RAID) is a way to arrange multiple disks into one storage unit. RAID protects data stored in the disks against disk failure with the following data distribution strategies: Mirroring - data are copied to two different locations. If one disk fails, you have a copy and your data is not lost. Striping - data are evenly distributed among disks. The level of protection depends on the RAID level. The RHEL web console supports the following RAID levels: RAID 0 (Stripe) RAID 1 (Mirror) RAID 4 (Dedicated parity) RAID 5 (Distributed parity) RAID 6 (Double Distributed Parity) RAID 10 (Stripe of Mirrors) For more details, see RAID Levels and Linear Support . Before you can use disks in RAID, you need to: Create a RAID. Format it with a file system. Mount the RAID to the server. 6.3.1. Prerequisites The web console is running and accessible. For details, see Installing the web console . 6.3.2. Creating RAID in the web console This procedure helps you configure RAID in the web console. Prerequisites Physical disks connected to the system. Each RAID level requires a different number of disks. Procedure Open the web console. Click Storage . Click the + icon in the RAID Devices box. In the Create RAID Device dialog box, enter a name for a new RAID. In the RAID Level drop-down list, select the RAID level you want to use. For a detailed description of the RAID levels supported on the RHEL 7 system, see RAID Levels and Linear Support . In the Chunk Size drop-down list, leave the predefined value as it is. The Chunk Size value specifies how large each block is for data writing. If the chunk size is 512 KiB, the system writes the first 512 KiB to the first disk, the second 512 KiB is written to the second disk, and the third chunk will be written to the third disk. If you have three disks in your RAID, the fourth 512 KiB will be written to the first disk again. Select the disks you want to use for the RAID. Click Create . In the Storage section, you can see the new RAID in the RAID devices box and format it. Now you have the following options for formatting and mounting the new RAID in the web console: Formatting RAID Creating partitions on partition table Creating a volume group on top of RAID 6.3.3. Formatting RAID in the web console This section describes the formatting procedure for the new software RAID device created in the RHEL web interface. Prerequisites Physical disks are connected and visible by RHEL 7. RAID is created. Consider the file system which will be used for the RAID. Consider creating a partitioning table. Procedure Open the RHEL web console. Click Storage . In the RAID devices box, choose the RAID you want to format by clicking on it. In the RAID details screen, scroll down to the Content part. Click the newly created RAID. Click the Format button. In the Erase drop-down list, select: Don't overwrite existing data - the RHEL web console rewrites only the disk header. The advantage of this option is the speed of formatting. Overwrite existing data with zeros - the RHEL web console rewrites the whole disk with zeros. This option is slower because the program has to go through the whole disk. Use this option if the RAID includes any data and you need to rewrite it. In the Type drop-down list, select the XFS file system, if you do not have another strong preference. Enter a name of the file system. In the Mounting drop down list, select Custom . The Default option does not ensure that the file system will be mounted on the next boot. In the Mount Point field, add the mount path. Select Mount at boot . Click the Format button. Formatting can take several minutes depending on the used formatting options and the size of the RAID. After a successful finish, you can see the details of the formatted RAID on the Filesystem tab. To use the RAID, click Mount . At this point, the system uses the mounted and formatted RAID.
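If you prefer the command line, a minimal sketch of the same workflow is shown below. It is not part of the web console procedure; the device names /dev/sdb, /dev/sdc, and /dev/sdd, the array name /dev/md0, and the mount point /mnt/raid are placeholders to adapt to your system:
# create a RAID 5 array from three disks with a 512 KiB chunk size
mdadm --create /dev/md0 --level=5 --raid-devices=3 --chunk=512 /dev/sdb /dev/sdc /dev/sdd
# format the array with XFS and mount it persistently
mkfs.xfs /dev/md0
mkdir -p /mnt/raid
echo '/dev/md0  /mnt/raid  xfs  defaults  0 0' >> /etc/fstab
mount /mnt/raid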
6.3.4. Using the web console for creating a partition table on RAID RAID requires formatting like any other storage device. You have two options: Format the RAID device without partitions Create a partition table with partitions This section describes formatting RAID with the partition table on the new software RAID device created in the RHEL web interface. Prerequisites Physical disks are connected and visible by RHEL 7. RAID is created. Consider the file system used for the RAID. Consider creating a partitioning table. Procedure Open the RHEL web console. Click Storage . In the RAID devices box, select the RAID you want to edit. In the RAID details screen, scroll down to the Content part. Click the newly created RAID. Click the Create partition table button. In the Erase drop-down list, select: Don't overwrite existing data - the RHEL web console rewrites only the disk header. The advantage of this option is the speed of formatting. Overwrite existing data with zeros - the RHEL web console rewrites the whole RAID with zeros. This option is slower because the program has to go through the whole RAID. Use this option if the RAID includes any data and you need to rewrite it. In the Partitioning drop-down list, select: Compatible with modern system and hard disks > 2TB (GPT) - GUID Partition Table is a modern recommended partitioning system for large RAIDs with more than four partitions. Compatible with all systems and devices (MBR) - Master Boot Record works with disks up to 2 TB in size. MBR also supports a maximum of four primary partitions. Click Format . At this point, the partitioning table has been created and you can create partitions. For creating partitions, see Using the web console for creating partitions on RAID . 6.3.5. Using the web console for creating partitions on RAID This section describes creating a partition in the existing partition table. Prerequisites Partition table is created. For details, see Using the web console for creating a partition table on RAID . Procedure Open the web console. Click Storage . In the RAID devices box, click the RAID you want to edit. In the RAID details screen, scroll down to the Content part. Click the newly created RAID. Click Create Partition . In the Create partition dialog box, set up the size of the first partition. In the Erase drop-down list, select: Don't overwrite existing data - the RHEL web console rewrites only the disk header. The advantage of this option is the speed of formatting. Overwrite existing data with zeros - the RHEL web console rewrites the whole RAID with zeros. This option is slower because the program has to go through the whole RAID. Use this option if the RAID includes any data and you need to rewrite it. In the Type drop-down list, select the XFS file system, if you do not have another strong preference. For details about the XFS file system, see The XFS file system . Enter any name for the file system. Do not use spaces in the name. In the Mounting drop down list, select Custom . The Default option does not ensure that the file system will be mounted on the next boot. In the Mount Point field, add the mount path. Select Mount at boot . Click Create partition . Formatting can take several minutes depending on the used formatting options and the size of the RAID. After a successful finish, you can continue with creating other partitions. At this point, the system uses the mounted and formatted RAID.
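A command-line sketch of the same partitioning steps is shown below for reference. It is not part of the web console procedure; the array name /dev/md0, the partition size, and the mount point are placeholders:
# create a GPT partition table on the RAID device
parted --script /dev/md0 mklabel gpt
# create a first partition named data, from 1 MiB to 10 GiB
parted --script /dev/md0 mkpart data xfs 1MiB 10GiB
# format the new partition and mount it
mkfs.xfs /dev/md0p1
mkdir -p /mnt/raid-data
mount /dev/md0p1 /mnt/raid-data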
6.3.6. Using the web console for creating a volume group on top of RAID This section shows you how to build a volume group from software RAID. Prerequisites A RAID device that is not formatted and not mounted. Procedure Open the RHEL web console. Click Storage . Click the + icon in the Volume Groups box. In the Create Volume Group dialog box, enter a name for the new volume group. In the Disks list, select a RAID device. If you do not see the RAID in the list, unmount the RAID from the system. The RAID device must not be used by the RHEL system. Click Create . The new volume group has been created and you can continue with creating a logical volume. For details, see Creating logical volumes in the web console . 6.4. Using the web console for configuring LVM logical volumes Red Hat Enterprise Linux 7 supports the LVM logical volume manager. When you install Red Hat Enterprise Linux 7, the system is installed on LVM volumes that are automatically created during the installation. The screenshot shows you a clean installation of the RHEL system with two logical volumes in the web console, automatically created during the installation. To find out more about logical volumes, follow the sections describing: What is logical volume manager and when to use it. What are volume groups and how to create them. What are logical volumes and how to create them. How to format logical volumes. How to resize logical volumes. 6.4.1. Prerequisites Physical drives, RAID devices, or any other type of block device from which you can create the logical volume. 6.4.2. Logical Volume Manager in the web console The web console provides a graphical interface to create LVM volume groups and logical volumes. Volume groups create a layer between physical and logical volumes. This makes it possible to add or remove physical volumes without influencing the logical volume itself. A volume group appears as one drive whose capacity consists of the capacities of all physical drives included in the group. You can join physical drives into volume groups in the web console. A logical volume acts as a single physical drive, and it is built on top of a volume group in your system. The main advantages of logical volumes are: Better flexibility than the partitioning system used on your physical drive. Ability to combine multiple physical drives into one volume. Possibility of expanding (growing) or reducing (shrinking) the capacity of the volume on-line, without restart. Ability to create snapshots. Additional resources For details, see Logical volume manager administration . 6.4.3. Creating volume groups in the web console The following describes creating volume groups from one or more physical drives or other storage devices. Logical volumes are created from volume groups. Each volume group can include multiple logical volumes. For details, see Volume groups . Prerequisites Physical drives or other types of storage devices from which you want to create volume groups. Procedure Log in to the web console. Click Storage . Click the + icon in the Volume Groups box. In the Name field, enter a name for the group without spaces. Select the drives you want to combine to create the volume group. You might not see all the devices you expect. The RHEL web console displays only unused block devices. Used devices include, for example: Devices formatted with a file system Physical volumes in another volume group Physical volumes being a member of another software RAID device If you do not see the device, format it to be empty and unused. Click Create . The web console adds the volume group in the Volume Groups section. After clicking the group, you can create logical volumes that are allocated from that volume group.
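For reference, the command-line equivalent of creating a volume group is sketched below. It is not part of the web console procedure; the group name vg_data and the device names are placeholders:
# initialize the devices as LVM physical volumes
pvcreate /dev/md0 /dev/sdd
# combine them into one volume group
vgcreate vg_data /dev/md0 /dev/sdd
# verify the result
vgs vg_data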
6.4.4. Creating logical volumes in the web console The following steps describe how to create LVM logical volumes. Prerequisites A volume group is created. For details, see Creating volume groups in the web console . Procedure Log in to the web console. Click Storage . Click the volume group in which you want to create logical volumes. Click Create new Logical Volume . In the Name field, enter a name for the new logical volume without spaces. In the Purpose drop down menu, select Block device for filesystems . This configuration enables you to create a logical volume with the maximum volume size, which is equal to the sum of the capacities of all drives included in the volume group. Define the size of the logical volume. Consider: How much space the system using this logical volume will need. How many logical volumes you want to create. You do not have to use the whole space. If necessary, you can grow the logical volume later. Click Create . To verify the settings, click your logical volume and check the details. At this stage, the logical volume has been created and you need to create and mount a file system with the formatting process.
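A minimal command-line sketch of the same step is shown below for reference; the volume group name vg_data, the logical volume name lv_app, and the size are placeholders:
# create a 50 GiB logical volume in the volume group
lvcreate --name lv_app --size 50G vg_data
# verify the settings
lvs vg_data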
6.4.5. Formatting logical volumes in the web console Logical volumes act as physical drives. To use them, you need to format them with a file system. Warning Formatting logical volumes will erase all data on the volume. The file system you select determines the configuration parameters you can use for logical volumes. For example, the XFS file system does not support shrinking volumes. For details, see Resizing logical volumes in the web console . The following steps describe the procedure to format logical volumes. Prerequisites A logical volume is created. For details, see Creating volume groups in the web console . Procedure Log in to the RHEL web console. Click Storage . Click the volume group in which the logical volume is placed. Click the logical volume. Click on the Unrecognized Data tab. Click Format . In the Erase drop down menu, select: Don't overwrite existing data - the RHEL web console rewrites only the disk header. The advantage of this option is the speed of formatting. Overwrite existing data with zeros - the RHEL web console rewrites the whole disk with zeros. This option is slower because the program has to go through the whole disk. Use this option if the disk includes any data and you need to overwrite it. In the Type drop down menu, select a file system: XFS file system supports large logical volumes, switching physical drives online without outage, and growing an existing file system. Leave this file system selected if you do not have a different strong preference. XFS does not support reducing the size of a volume formatted with an XFS file system. ext4 file system supports: Logical volumes Switching physical drives online without outage Growing a file system Shrinking a file system You can also select a version with the LUKS (Linux Unified Key Setup) encryption, which allows you to encrypt the volume with a passphrase. In the Name field, enter the logical volume name. In the Mounting drop down menu, select Custom . The Default option does not ensure that the file system will be mounted on the next boot. In the Mount Point field, add the mount path. Select Mount at boot . Click Format . Formatting can take several minutes depending on the volume size and which formatting options are selected. After the formatting has completed successfully, you can see the details of the formatted logical volume on the Filesystem tab. To use the logical volume, click Mount . At this point, the system can use the mounted and formatted logical volume. 6.4.6. Resizing logical volumes in the web console This section describes how to resize logical volumes. You can extend or even reduce logical volumes. Whether you can resize a logical volume depends on which file system you are using. Most file systems enable you to extend (grow) the volume online (without outage). You can also reduce (shrink) the size of a logical volume if it contains a file system that supports shrinking, for example, the ext3 and ext4 file systems. Warning You cannot reduce volumes that contain a GFS2 or XFS file system. Prerequisites An existing logical volume containing a file system that supports resizing logical volumes. Procedure The following steps provide the procedure for growing a logical volume without taking the volume offline: Log in to the RHEL web console. Click Storage . Click the volume group in which the logical volume is placed. Click the logical volume. On the Volume tab, click Grow . In the Grow Logical Volume dialog box, adjust the volume space. Click Grow . LVM grows the logical volume without system outage. 6.4.7. Related information For more details on creating logical volumes, see Configuring and managing logical volumes . 6.5. Using the web console for configuring thin logical volumes Thinly-provisioned logical volumes enable you to allocate more space for designated applications or servers than the logical volumes actually contain. For details, see Thinly-provisioned logical volumes (thin volumes) . The following sections describe: Creating pools for the thinly provisioned logical volumes. Creating thin logical volumes. Formatting thin logical volumes. 6.5.1. Prerequisites Physical drives or other types of storage devices from which you want to create volume groups. 6.5.2. Creating pools for thin logical volumes in the web console The following steps show you how to create a pool for thinly provisioned volumes: Prerequisites A volume group is created. Procedure Log in to the web console. Click Storage . Click the volume group in which you want to create thin volumes. Click Create new Logical Volume . In the Name field, enter a name for the new pool of thin volumes without spaces. In the Purpose drop down menu, select Pool for thinly provisioned volumes . This configuration enables you to create the thin volume. Define the size of the pool of thin volumes. Consider: How many thin volumes will you need in this pool? What is the expected size of each thin volume? You do not have to use the whole space. If necessary, you can grow the pool later. Click Create . The pool for thin volumes has been created and you can add thin volumes.
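For reference, the command-line equivalent of creating such a pool is sketched below; the volume group name vg_data, the pool name thin_pool, and the size are placeholders:
# create a 20 GiB thin pool inside the volume group
lvcreate --size 20G --thinpool thin_pool vg_data
# check the pool and how much of its physical space is used
lvs vg_data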
6.5.3. Creating thin logical volumes in the web console The following text describes creating a thin logical volume in the pool. The pool can include multiple thin volumes and each thin volume can be as large as the pool for thin volumes itself. Important Using thin volumes requires a regular checkup of the actual free physical space of the logical volume. Prerequisites A pool for thin volumes is created. For details, see Creating volume groups in the web console . Procedure Log in to the web console. Click Storage . Click the volume group in which you want to create thin volumes. Click the desired pool. Click Create Thin Volume . In the Create Thin Volume dialog box, enter a name for the thin volume without spaces. Define the size of the thin volume. Click Create . At this stage, the thin logical volume has been created and you need to format it. 6.5.4. Formatting logical volumes in the web console Logical volumes act as physical drives. To use them, you need to format them with a file system. Warning Formatting logical volumes will erase all data on the volume. The file system you select determines the configuration parameters you can use for logical volumes. For example, the XFS file system does not support shrinking volumes. For details, see Resizing logical volumes in the web console . The following steps describe the procedure to format logical volumes. Prerequisites A logical volume is created. For details, see Creating volume groups in the web console . Procedure Log in to the RHEL web console. Click Storage . Click the volume group in which the logical volume is placed. Click the logical volume. Click on the Unrecognized Data tab. Click Format . In the Erase drop down menu, select: Don't overwrite existing data - the RHEL web console rewrites only the disk header. The advantage of this option is the speed of formatting. Overwrite existing data with zeros - the RHEL web console rewrites the whole disk with zeros. This option is slower because the program has to go through the whole disk. Use this option if the disk includes any data and you need to overwrite it. In the Type drop down menu, select a file system: XFS file system supports large logical volumes, switching physical drives online without outage, and growing an existing file system. Leave this file system selected if you do not have a different strong preference. XFS does not support reducing the size of a volume formatted with an XFS file system. ext4 file system supports: Logical volumes Switching physical drives online without outage Growing a file system Shrinking a file system You can also select a version with the LUKS (Linux Unified Key Setup) encryption, which allows you to encrypt the volume with a passphrase. In the Name field, enter the logical volume name. In the Mounting drop down menu, select Custom . The Default option does not ensure that the file system will be mounted on the next boot. In the Mount Point field, add the mount path. Select Mount at boot . Click Format . Formatting can take several minutes depending on the volume size and which formatting options are selected. After the formatting has completed successfully, you can see the details of the formatted logical volume on the Filesystem tab. To use the logical volume, click Mount . At this point, the system can use the mounted and formatted logical volume.
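The command-line equivalent of creating and formatting a thin volume is sketched below for reference; the names vg_data, thin_pool, and thin_vol, as well as the sizes and the mount point, are placeholders:
# create a thin volume with a 50 GiB virtual size in the pool
lvcreate --thin --virtualsize 50G --name thin_vol vg_data/thin_pool
# format and mount it
mkfs.xfs /dev/vg_data/thin_vol
mkdir -p /mnt/thin
mount /dev/vg_data/thin_vol /mnt/thin
# regularly check how full the pool really is (Data% column)
lvs vg_data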
6.6. Using the web console for changing physical drives in volume groups The following text describes how to change the drive in a volume group using the web console. The change of physical drives consists of the following procedures: Adding physical drives to logical volumes. Removing physical drives from logical volumes. 6.6.1. Prerequisites A new physical drive for replacing the old or broken one. The configuration expects that physical drives are organized in a volume group. 6.6.2. Adding physical drives to volume groups in the web console The web console enables you to add a new physical drive or other type of volume to the existing logical volume. Prerequisites A volume group must be created. A new drive connected to the machine. Procedure Log in to the web console. Click Storage . In the Volume Groups box, click the volume group in which you want to add a physical volume. In the Physical Volumes box, click the + icon. In the Add Disks dialog box, select the preferred drive and click Add . As a result, the web console adds the physical volume. You can see it in the Physical Volumes section, and the logical volume can immediately start to write on the drive. 6.6.3. Removing physical drives from volume groups in the web console If a logical volume includes multiple physical drives, you can remove one of the physical drives online. The system automatically moves all data from the drive to be removed to other drives during the removal process. Note that this can take some time. The web console also verifies whether there is enough space for removing the physical drive. Prerequisites A volume group with more than one physical drive connected. Procedure The following steps describe how to remove a drive from the volume group without causing an outage in the RHEL web console. Log in to the RHEL web console. Click Storage . Click the volume group in which you have the logical volume. In the Physical Volumes section, locate the preferred volume. Click the - icon. The RHEL web console verifies whether the logical volume has enough free space for removing the disk. If not, you cannot remove the disk and it is necessary to add another disk first. For details, see Adding physical drives to logical volumes in the web console . As a result, the RHEL web console removes the physical volume from the created logical volume without causing an outage. 6.7. Using the web console for managing Virtual Data Optimizer volumes This chapter describes the Virtual Data Optimizer (VDO) configuration using the web console. After reading it, you will be able to: Create VDO volumes Format VDO volumes Extend VDO volumes 6.7.1. Prerequisites The web console is installed and accessible. For details, see Installing the web console . 6.7.2. VDO volumes in the web console Red Hat Enterprise Linux 7 supports Virtual Data Optimizer (VDO). VDO is a block virtualization technology that combines: Compression For details, see Using Compression . Deduplication For details, see Disabling and Re-enabling deduplication . Thin provisioning For details, see Thinly-provisioned logical volumes (thin volumes) . Using these technologies, VDO: Saves storage space inline Compresses files Eliminates duplications Enables you to allocate more virtual space than the physical or logical storage provides Enables you to extend the virtual storage by growing VDO can be created on top of many types of storage. In the web console, you can configure VDO on top of: LVM Note It is not possible to configure VDO on top of thinly-provisioned volumes. Physical volume Software RAID For details about placement of VDO in the Storage Stack, see System Requirements . Additional resources For details about VDO, see Deduplication and compression with VDO . 6.7.3. Creating VDO volumes in the web console This section helps you to create a VDO volume in the RHEL web console. Prerequisites Physical drives, LVMs, or RAID from which you want to create VDO. Procedure Log in to the web console. For details, see Logging in to the web console . Click Storage . Click the + icon in the VDO Devices box. In the Name field, enter a name for the VDO volume without spaces. Select the drive that you want to use. In the Logical Size bar, set up the size of the VDO volume.
You can extend it more than ten times, but consider for what purpose you are creating the VDO volume: For active VMs or container storage, use logical size that is ten times the physical size of the volume. For object storage, use logical size that is three times the physical size of the volume. For details, see Getting started with VDO . In the Index Memory bar, allocate memory for the VDO volume. For details about VDO system requirements, see System Requirements . Select the Compression option. This option can efficiently reduce various file formats. For details, see Using Compression . Select the Deduplication option. This option reduces the consumption of storage resources by eliminating multiple copies of duplicate blocks. For details, see Disabling and Re-enabling deduplication . [Optional] If you want to use the VDO volume with applications that need a 512 bytes block size, select Use 512 Byte emulation . This reduces the performance of the VDO volume, but should be very rarely needed. If in doubt, leave it off. Click Create . If the process of creating the VDO volume succeeds, you can see the new VDO volume in the Storage section and format it with a file system. 6.7.4. Formatting VDO volumes in the web console VDO volumes act as physical drives. To use them, you need to format them with a file system. Warning Formatting VDO will erase all data on the volume. The following steps describe the procedure to format VDO volumes. Prerequisites A VDO volume is created. For details, see Section 6.7.3, "Creating VDO volumes in the web console" . Procedure Log in to the web console. For details, see Logging in to the web console . Click Storage . Click the VDO volume. Click on the Unrecognized Data tab. Click Format . In the Erase drop down menu, select: Don't overwrite existing data The RHEL web console rewrites only the disk header. The advantage of this option is the speed of formatting. Overwrite existing data with zeros The RHEL web console rewrites the whole disk with zeros. This option is slower because the program has to go through the whole disk. Use this option if the disk includes any data and you need to rewrite them. In the Type drop down menu, select a filesystem: The XFS file system supports large logical volumes, switching physical drives online without outage, and growing. Leave this file system selected if you do not have a different strong preference. XFS does not support shrinking volumes. Therefore, you will not be able to reduce volume formatted with XFS. The ext4 file system supports logical volumes, switching physical drives online without outage, growing, and shrinking. You can also select a version with the LUKS (Linux Unified Key Setup) encryption, which allows you to encrypt the volume with a passphrase. In the Name field, enter the logical volume name. In the Mounting drop down menu, select Custom . The Default option does not ensure that the file system will be mounted on the boot. In the Mount Point field, add the mount path. Select Mount at boot . Click Format . Formatting can take several minutes depending on the used formatting options and the volume size. After a successful finish, you can see the details of the formatted VDO volume on the Filesystem tab. To use the VDO volume, click Mount . At this point, the system uses the mounted and formatted VDO volume. 6.7.5. Extending VDO volumes in the web console This section describes extending VDO volumes in the web console. Prerequisites The VDO volume created. Procedure Log in to the web console. 
For details, see Logging in to the web console . Click Storage . Click your VDO volume in the VDO Devices box. In the VDO volume details, click the Grow button. In the Grow logical size of VDO dialog box, extend the logical size of the VDO volume. Original size of the logical volume from the screenshot was 6 GB. As you can see, the RHEL web console enables you to grow the volume to more than ten times the size and it works correctly because of the compression and deduplication. Click Grow . If the process of growing VDO succeeds, you can see the new size in the VDO volume details.
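For reference, the same VDO operations can also be performed on the command line with the vdo utility. The following sketch is not part of the web console procedure; the volume name myvdo, the backing device /dev/sdb, and the sizes are placeholders to adapt to your system:
# create a VDO volume with a 50 GiB logical size on top of /dev/sdb
vdo create --name=myvdo --device=/dev/sdb --vdoLogicalSize=50G
# format it with XFS, skipping the initial discard of the thinly provisioned device
mkfs.xfs -K /dev/mapper/myvdo
# later, grow the logical size of the volume
vdo growLogical --name=myvdo --vdoLogicalSize=100G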
| null |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/managing_systems_using_the_rhel_7_web_console/managing-storage-devices-in-the-web-console_system-management-using-the-RHEL-7-web-console
|
1.2.2. Technical Controls
|
1.2.2. Technical Controls Technical controls use technology as a basis for controlling the access and usage of sensitive data throughout a physical structure and over a network. Technical controls are far-reaching in scope and encompass such technologies as: Encryption Smart cards Network authentication Access control lists (ACLs) File integrity auditing software
| null |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/security_guide/s2-sgs-ov-ctrl-tech
|
Chapter 2. Configuring an Ethernet connection
|
Chapter 2. Configuring an Ethernet connection NetworkManager creates a connection profile for each Ethernet adapter that is installed in a host. By default, this profile uses DHCP for both IPv4 and IPv6 connections. Modify this automatically-created profile or add a new one in the following cases: The network requires custom settings, such as a static IP address configuration. You require multiple profiles because the host roams among different networks. Red Hat Enterprise Linux provides administrators different options to configure Ethernet connections. For example: Use nmcli to configure connections on the command line. Use nmtui to configure connections in a text-based user interface. Use the GNOME Settings menu or nm-connection-editor application to configure connections in a graphical interface. Use nmstatectl to configure connections through the Nmstate API. Use RHEL system roles to automate the configuration of connections on one or multiple hosts. Note If you want to manually configure Ethernet connections on hosts running in the Microsoft Azure cloud, disable the cloud-init service or configure it to ignore the network settings retrieved from the cloud environment. Otherwise, cloud-init will override on the reboot the network settings that you have manually configured. 2.1. Configuring an Ethernet connection by using nmcli If you connect a host to the network over Ethernet, you can manage the connection's settings on the command line by using the nmcli utility. Prerequisites A physical or virtual Ethernet Network Interface Controller (NIC) exists in the server's configuration. Procedure List the NetworkManager connection profiles: By default, NetworkManager creates a profile for each NIC in the host. If you plan to connect this NIC only to a specific network, adapt the automatically-created profile. If you plan to connect this NIC to networks with different settings, create individual profiles for each network. If you want to create an additional connection profile, enter: Skip this step to modify an existing profile. Optional: Rename the connection profile: On hosts with multiple profiles, a meaningful name makes it easier to identify the purpose of a profile. Display the current settings of the connection profile: Configure the IPv4 settings: To use DHCP, enter: Skip this step if ipv4.method is already set to auto (default). To set a static IPv4 address, network mask, default gateway, DNS servers, and search domain, enter: Configure the IPv6 settings: To use stateless address autoconfiguration (SLAAC), enter: Skip this step if ipv6.method is already set to auto (default). To set a static IPv6 address, network mask, default gateway, DNS servers, and search domain, enter: To customize other settings in the profile, use the following command: Enclose values with spaces or semicolons in quotes. Activate the profile: Verification Display the IP settings of the NIC: Display the IPv4 default gateway: Display the IPv6 default gateway: Display the DNS settings: If multiple connection profiles are active at the same time, the order of nameserver entries depend on the DNS priority values in these profiles and the connection types. Use the ping utility to verify that this host can send packets to other hosts: Troubleshooting Verify that the network cable is plugged-in to the host and a switch. Check whether the link failure exists only on this host or also on other hosts connected to the same switch. Verify that the network cable and the network interface are working as expected. 
Perform hardware diagnosis steps and replace defective cables and network interface cards. If the configuration on the disk does not match the configuration on the device, starting or restarting NetworkManager creates an in-memory connection that reflects the configuration of the device. For further details and how to avoid this problem, see the Red Hat Knowledgebase solution NetworkManager duplicates a connection after restart of NetworkManager service . Additional resources nm-settings(5) man page on your system 2.2. Configuring an Ethernet connection by using the nmcli interactive editor If you connect a host to the network over Ethernet, you can manage the connection's settings on the command line by using the nmcli utility. Prerequisites A physical or virtual Ethernet Network Interface Controller (NIC) exists in the server's configuration. Procedure List the NetworkManager connection profiles: By default, NetworkManager creates a profile for each NIC in the host. If you plan to connect this NIC only to a specific network, adapt the automatically-created profile. If you plan to connect this NIC to networks with different settings, create individual profiles for each network. Start nmcli in interactive mode: To create an additional connection profile, enter: To modify an existing connection profile, enter: Optional: Rename the connection profile: On hosts with multiple profiles, a meaningful name makes it easier to identify the purpose of a profile. Do not use quotes to set an ID that contains spaces to avoid that nmcli makes the quotes part of the name. For example, to set Example Connection as ID, enter set connection.id Example Connection . Display the current settings of the connection profile: If you create a new connection profile, set the network interface: Configure the IPv4 settings: To use DHCP, enter: Skip this step if ipv4.method is already set to auto (default). To set a static IPv4 address, network mask, default gateway, DNS servers, and search domain, enter: Configure the IPv6 settings: To use stateless address autoconfiguration (SLAAC), enter: Skip this step if ipv6.method is already set to auto (default). To set a static IPv6 address, network mask, default gateway, DNS servers, and search domain, enter: Save and activate the connection: Leave the interactive mode: Verification Display the IP settings of the NIC: Display the IPv4 default gateway: Display the IPv6 default gateway: Display the DNS settings: If multiple connection profiles are active at the same time, the order of nameserver entries depend on the DNS priority values in these profiles and the connection types. Use the ping utility to verify that this host can send packets to other hosts: Troubleshooting Verify that the network cable is plugged-in to the host and a switch. Check whether the link failure exists only on this host or also on other hosts connected to the same switch. Verify that the network cable and the network interface are working as expected. Perform hardware diagnosis steps and replace defective cables and network interface cards. If the configuration on the disk does not match the configuration on the device, starting or restarting NetworkManager creates an in-memory connection that reflects the configuration of the device. For further details and how to avoid this problem, see the Red Hat Knowledgebase solution NetworkManager duplicates a connection after restart of NetworkManager service . Additional resources nm-settings(5) and nmcli(1) man pages on your system 2.3. 
Configuring an Ethernet connection by using nmtui If you connect a host to the network over Ethernet, you can manage the connection's settings in a text-based user interface by using the nmtui application. Use nmtui to create new profiles and to update existing ones on a host without a graphical interface. Note In nmtui : Navigate by using the cursor keys. Press a button by selecting it and hitting Enter . Select and clear checkboxes by using Space . To return to the screen, use ESC . Prerequisites A physical or virtual Ethernet Network Interface Controller (NIC) exists in the server's configuration. Procedure If you do not know the network device name you want to use in the connection, display the available devices: Start nmtui : Select Edit a connection , and press Enter . Choose whether to add a new connection profile or to modify an existing one: To create a new profile: Press Add . Select Ethernet from the list of network types, and press Enter . To modify an existing profile, select the profile from the list, and press Enter . Optional: Update the name of the connection profile. On hosts with multiple profiles, a meaningful name makes it easier to identify the purpose of a profile. If you create a new connection profile, enter the network device name into the Device field. Depending on your environment, configure the IP address settings in the IPv4 configuration and IPv6 configuration areas accordingly. For this, press the button to these areas, and select: Disabled , if this connection does not require an IP address. Automatic , if a DHCP server dynamically assigns an IP address to this NIC. Manual , if the network requires static IP address settings. In this case, you must fill further fields: Press Show to the protocol you want to configure to display additional fields. Press Add to Addresses , and enter the IP address and the subnet mask in Classless Inter-Domain Routing (CIDR) format. If you do not specify a subnet mask, NetworkManager sets a /32 subnet mask for IPv4 addresses and /64 for IPv6 addresses. Enter the address of the default gateway. Press Add to DNS servers , and enter the DNS server address. Press Add to Search domains , and enter the DNS search domain. Figure 2.1. Example of an Ethernet connection with static IP address settings Press OK to create and automatically activate the new connection. Press Back to return to the main menu. Select Quit , and press Enter to close the nmtui application. Verification Display the IP settings of the NIC: Display the IPv4 default gateway: Display the IPv6 default gateway: Display the DNS settings: If multiple connection profiles are active at the same time, the order of nameserver entries depend on the DNS priority values in these profiles and the connection types. Use the ping utility to verify that this host can send packets to other hosts: Troubleshooting Verify that the network cable is plugged-in to the host and a switch. Check whether the link failure exists only on this host or also on other hosts connected to the same switch. Verify that the network cable and the network interface are working as expected. Perform hardware diagnosis steps and replace defective cables and network interface cards. If the configuration on the disk does not match the configuration on the device, starting or restarting NetworkManager creates an in-memory connection that reflects the configuration of the device. 
For further details and how to avoid this problem, see the Red Hat Knowledgebase solution NetworkManager duplicates a connection after restart of NetworkManager service . Additional resources Configuring NetworkManager to avoid using a specific profile to provide a default gateway Configuring the order of DNS servers 2.4. Configuring an Ethernet connection by using control-center If you connect a host to the network over Ethernet, you can manage the connection's settings with a graphical interface by using the GNOME Settings menu. Note that control-center does not support as many configuration options as the nm-connection-editor application or the nmcli utility. Prerequisites A physical or virtual Ethernet Network Interface Controller (NIC) exists in the server's configuration. GNOME is installed. Procedure Press the Super key, enter Settings , and press Enter . Select Network in the navigation on the left. Choose whether to add a new connection profile or to modify an existing one: To create a new profile, click the + button to the Ethernet entry. To modify an existing profile, click the gear icon to the profile entry. Optional: On the Identity tab, update the name of the connection profile. On hosts with multiple profiles, a meaningful name makes it easier to identify the purpose of a profile. Depending on your environment, configure the IP address settings on the IPv4 and IPv6 tabs accordingly: To use DHCP or IPv6 stateless address autoconfiguration (SLAAC), select Automatic (DHCP) as method (default). To set a static IP address, network mask, default gateway, DNS servers, and search domain, select Manual as method, and fill the fields on the tabs: Depending on whether you add or modify a connection profile, click the Add or Apply button to save the connection. The GNOME control-center automatically activates the connection. Verification Display the IP settings of the NIC: Display the IPv4 default gateway: Display the IPv6 default gateway: Display the DNS settings: If multiple connection profiles are active at the same time, the order of nameserver entries depend on the DNS priority values in these profiles and the connection types. Use the ping utility to verify that this host can send packets to other hosts: Troubleshooting steps Verify that the network cable is plugged-in to the host and a switch. Check whether the link failure exists only on this host or also on other hosts connected to the same switch. Verify that the network cable and the network interface are working as expected. Perform hardware diagnosis steps and replace defective cables and network interface cards. If the configuration on the disk does not match the configuration on the device, starting or restarting NetworkManager creates an in-memory connection that reflects the configuration of the device. For further details and how to avoid this problem, see the Red Hat Knowledgebase solution NetworkManager duplicates a connection after restart of NetworkManager service . 2.5. Configuring an Ethernet connection by using nm-connection-editor If you connect a host to the network over Ethernet, you can manage the connection's settings with a graphical interface by using the nm-connection-editor application. Prerequisites A physical or virtual Ethernet Network Interface Controller (NIC) exists in the server's configuration. GNOME is installed. 
Procedure Open a terminal, and enter: Choose whether to add a new connection profile or to modify an existing one: To create a new profile: Click the + button Select Ethernet as connection type, and click Create . To modify an existing profile, double-click the profile entry. Optional: Update the name of the profile in the Connection Name field. On hosts with multiple profiles, a meaningful name makes it easier to identify the purpose of a profile. If you create a new profile, select the device on the Ethernet tab: Depending on your environment, configure the IP address settings on the IPv4 Settings and IPv6 Settings tabs accordingly: To use DHCP or IPv6 stateless address autoconfiguration (SLAAC), select Automatic (DHCP) as method (default). To set a static IP address, network mask, default gateway, DNS servers, and search domain, select Manual as method, and fill the fields on the tabs: Click Save . Close nm-connection-editor . Verification Display the IP settings of the NIC: Display the IPv4 default gateway: Display the IPv6 default gateway: Display the DNS settings: If multiple connection profiles are active at the same time, the order of nameserver entries depend on the DNS priority values in these profiles and the connection types. Use the ping utility to verify that this host can send packets to other hosts: Troubleshooting steps Verify that the network cable is plugged-in to the host and a switch. Check whether the link failure exists only on this host or also on other hosts connected to the same switch. Verify that the network cable and the network interface are working as expected. Perform hardware diagnosis steps and replace defective cables and network interface cards. If the configuration on the disk does not match the configuration on the device, starting or restarting NetworkManager creates an in-memory connection that reflects the configuration of the device. For further details and how to avoid this problem, see the Red Hat Knowledgebase solution NetworkManager duplicates a connection after restart of NetworkManager service . Additional Resources Configuring NetworkManager to avoid using a specific profile to provide a default gateway Configuring the order of DNS servers 2.6. Configuring an Ethernet connection with a static IP address by using nmstatectl Use the nmstatectl utility to configure an Ethernet connection through the Nmstate API. The Nmstate API ensures that, after setting the configuration, the result matches the configuration file. If anything fails, nmstatectl automatically rolls back the changes to avoid leaving the system in an incorrect state. Prerequisites A physical or virtual Ethernet Network Interface Controller (NIC) exists in the server's configuration. The nmstate package is installed. 
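The nmstate package provides the nmstatectl utility that the following procedure uses. If the package is not installed yet, you can typically add it with the system package manager, for example:
# install the nmstate package that provides nmstatectl
yum install -y nmstate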
Procedure Create a YAML file, for example ~/create-ethernet-profile.yml , with the following content: --- interfaces: - name: enp1s0 type: ethernet state: up ipv4: enabled: true address: - ip: 192.0.2.1 prefix-length: 24 dhcp: false ipv6: enabled: true address: - ip: 2001:db8:1::1 prefix-length: 64 autoconf: false dhcp: false routes: config: - destination: 0.0.0.0/0 next-hop-address: 192.0.2.254 next-hop-interface: enp1s0 - destination: ::/0 next-hop-address: 2001:db8:1::fffe next-hop-interface: enp1s0 dns-resolver: config: search: - example.com server: - 192.0.2.200 - 2001:db8:1::ffbb These settings define an Ethernet connection profile for the enp1s0 device with the following settings: A static IPv4 address - 192.0.2.1 with the /24 subnet mask A static IPv6 address - 2001:db8:1::1 with the /64 subnet mask An IPv4 default gateway - 192.0.2.254 An IPv6 default gateway - 2001:db8:1::fffe An IPv4 DNS server - 192.0.2.200 An IPv6 DNS server - 2001:db8:1::ffbb A DNS search domain - example.com Optional: You can define the identifier: mac-address and mac-address: <mac_address> properties in the interfaces property to identify the network interface card by its MAC address instead of its name, for example: --- interfaces: - name: <profile_name> type: ethernet identifier: mac-address mac-address: <mac_address> ... Apply the settings to the system: Verification Display the current state in YAML format: Display the IP settings of the NIC: Display the IPv4 default gateway: Display the IPv6 default gateway: Display the DNS settings: If multiple connection profiles are active at the same time, the order of nameserver entries depends on the DNS priority values in these profiles and the connection types. Use the ping utility to verify that this host can send packets to other hosts: Additional resources nmstatectl(8) man page on your system /usr/share/doc/nmstate/examples/ directory 2.7. Configuring an Ethernet connection with a static IP address by using the network RHEL system role with an interface name To connect a Red Hat Enterprise Linux host to an Ethernet network, create a NetworkManager connection profile for the network device. By using Ansible and the network RHEL system role, you can automate this process and remotely configure connection profiles on the hosts defined in a playbook. You can use the network RHEL system role to configure an Ethernet connection with static IP addresses, gateways, and DNS settings, and assign them to a specified interface name. Typically, administrators want to reuse a playbook and not maintain individual playbooks for each host to which Ansible should assign static IP addresses. In this case, you can use variables in the playbook and maintain the settings in the inventory. As a result, you need only one playbook to dynamically assign individual settings to multiple hosts. Prerequisites You have prepared the control node and the managed nodes You are logged in to the control node as a user who can run playbooks on the managed nodes. The account you use to connect to the managed nodes has sudo permissions on them. A physical or virtual Ethernet device exists in the server configuration. The managed nodes use NetworkManager to configure the network.
Procedure Edit the ~/inventory file, and append the host-specific settings to the host entries: managed-node-01.example.com interface=enp1s0 ip_v4=192.0.2.1/24 ip_v6=2001:db8:1::1/64 gateway_v4=192.0.2.254 gateway_v6=2001:db8:1::fffe managed-node-02.example.com interface=enp1s0 ip_v4=192.0.2.2/24 ip_v6=2001:db8:1::2/64 gateway_v4=192.0.2.254 gateway_v6=2001:db8:1::fffe Create a playbook file, for example ~/playbook.yml , with the following content: --- - name: Configure the network hosts: managed-node-01.example.com,managed-node-02.example.com tasks: - name: Ethernet connection profile with static IP address settings ansible.builtin.include_role: name: rhel-system-roles.network vars: network_connections: - name: "{{ interface }}" interface_name: "{{ interface }}" type: ethernet autoconnect: yes ip: address: - "{{ ip_v4 }}" - "{{ ip_v6 }}" gateway4: "{{ gateway_v4 }}" gateway6: "{{ gateway_v6 }}" dns: - 192.0.2.200 - 2001:db8:1::ffbb dns_search: - example.com state: up This playbook reads certain values dynamically for each host from the inventory file and uses static values in the playbook for settings which are the same for all hosts. For details about all variables used in the playbook, see the /usr/share/ansible/roles/rhel-system-roles.network/README.md file on the control node. Validate the playbook syntax: Note that this command only validates the syntax and does not protect against a wrong but valid configuration. Run the playbook: Verification Query the Ansible facts of the managed node and verify the active network settings: Additional resources /usr/share/ansible/roles/rhel-system-roles.network/README.md file /usr/share/doc/rhel-system-roles/network/ directory 2.8. Configuring an Ethernet connection with a static IP address by using the network RHEL system role with a device path To connect a Red Hat Enterprise Linux host to an Ethernet network, create a NetworkManager connection profile for the network device. By using Ansible and the network RHEL system role, you can automate this process and remotely configure connection profiles on the hosts defined in a playbook. You can use the network RHEL system role to configure an Ethernet connection with static IP addresses, gateways, and DNS settings, and assign them to a device based on its path instead of its name. Prerequisites You have prepared the control node and the managed nodes You are logged in to the control node as a user who can run playbooks on the managed nodes. The account you use to connect to the managed nodes has sudo permissions on them. A physical or virtual Ethernet device exists in the server's configuration. The managed nodes use NetworkManager to configure the network. You know the path of the device. You can display the device path by using the udevadm info /sys/class/net/ <device_name> | grep ID_PATH= command. 
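For example, to look up the persistent path of the enp1s0 interface before you write the playbook, you can run the command from the prerequisite above. The interface name and the returned path are illustrative only:
udevadm info /sys/class/net/enp1s0 | grep ID_PATH=
E: ID_PATH=pci-0000:00:01.0
Use the reported ID_PATH value, or a wildcard expression that matches it, in the match and path settings of the playbook.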
Procedure Create a playbook file, for example ~/playbook.yml , with the following content: --- - name: Configure the network hosts: managed-node-01.example.com tasks: - name: Ethernet connection profile with static IP address settings ansible.builtin.include_role: name: rhel-system-roles.network vars: network_connections: - name: example match: path: - pci-0000:00:0[1-3].0 - &!pci-0000:00:02.0 type: ethernet autoconnect: yes ip: address: - 192.0.2.1/24 - 2001:db8:1::1/64 gateway4: 192.0.2.254 gateway6: 2001:db8:1::fffe dns: - 192.0.2.200 - 2001:db8:1::ffbb dns_search: - example.com state: up The settings specified in the example playbook include the following: match Defines that a condition must be met in order to apply the settings. You can only use this variable with the path option. path Defines the persistent path of a device. You can set it as a fixed path or an expression. Its value can contain modifiers and wildcards. The example applies the settings to devices that match PCI ID 0000:00:0[1-3].0 , but not 0000:00:02.0 . For details about all variables used in the playbook, see the /usr/share/ansible/roles/rhel-system-roles.network/README.md file on the control node. Validate the playbook syntax: Note that this command only validates the syntax and does not protect against a wrong but valid configuration. Run the playbook: Verification Query the Ansible facts of the managed node and verify the active network settings: Additional resources /usr/share/ansible/roles/rhel-system-roles.network/README.md file /usr/share/doc/rhel-system-roles/network/ directory 2.9. Configuring an Ethernet connection with a dynamic IP address by using nmstatectl Use the nmstatectl utility to configure an Ethernet connection through the Nmstate API. The Nmstate API ensures that, after setting the configuration, the result matches the configuration file. If anything fails, nmstatectl automatically rolls back the changes to avoid leaving the system in an incorrect state. Prerequisites A physical or virtual Ethernet Network Interface Controller (NIC) exists in the server's configuration. A DHCP server is available in the network. The nmstate package is installed. Procedure Create a YAML file, for example ~/create-ethernet-profile.yml , with the following content: --- interfaces: - name: enp1s0 type: ethernet state: up ipv4: enabled: true auto-dns: true auto-gateway: true auto-routes: true dhcp: true ipv6: enabled: true auto-dns: true auto-gateway: true auto-routes: true autoconf: true dhcp: true These settings define an Ethernet connection profile for the enp1s0 device. The connection retrieves IPv4 addresses, IPv6 addresses, default gateway, routes, DNS servers, and search domains from a DHCP server and IPv6 stateless address autoconfiguration (SLAAC). Optional: You can define the identifier: mac-address and mac-address: <mac_address> properties in the interfaces property to identify the network interface card by its MAC address instead of its name, for example: --- interfaces: - name: <profile_name> type: ethernet identifier: mac-address mac-address: <mac_address> ... Apply the settings to the system: Verification Display the current state in YAML format: Display the IP settings of the NIC: Display the IPv4 default gateway: Display the IPv6 default gateway: Display the DNS settings: If multiple connection profiles are active at the same time, the order of nameserver entries depend on the DNS priority values in these profiles and the connection types. 
Use the ping utility to verify that this host can send packets to other hosts: Additional resources nmstatectl(8) man page on your system /usr/share/doc/nmstate/examples/ directory 2.10. Configuring an Ethernet connection with a dynamic IP address by using the network RHEL system role with an interface name To connect a Red Hat Enterprise Linux host to an Ethernet network, create a NetworkManager connection profile for the network device. By using Ansible and the network RHEL system role, you can automate this process and remotely configure connection profiles on the hosts defined in a playbook. You can use the network RHEL system role to configure an Ethernet connection that retrieves its IP addresses, gateways, and DNS settings from a DHCP server and IPv6 stateless address autoconfiguration (SLAAC). With this role you can assign the connection profile to the specified interface name. Prerequisites You have prepared the control node and the managed nodes You are logged in to the control node as a user who can run playbooks on the managed nodes. The account you use to connect to the managed nodes has sudo permissions on them. A physical or virtual Ethernet device exists in the servers' configuration. A DHCP server and SLAAC are available in the network. The managed nodes use the NetworkManager service to configure the network. Procedure Create a playbook file, for example ~/playbook.yml , with the following content: --- - name: Configure the network hosts: managed-node-01.example.com tasks: - name: Ethernet connection profile with dynamic IP address settings ansible.builtin.include_role: name: rhel-system-roles.network vars: network_connections: - name: enp1s0 interface_name: enp1s0 type: ethernet autoconnect: yes ip: dhcp4: yes auto6: yes state: up The settings specified in the example playbook include the following: dhcp4: yes Enables automatic IPv4 address assignment from DHCP, PPP, or similar services. auto6: yes Enables IPv6 auto-configuration. By default, NetworkManager uses Router Advertisements. If the router announces the managed flag, NetworkManager requests an IPv6 address and prefix from a DHCPv6 server. For details about all variables used in the playbook, see the /usr/share/ansible/roles/rhel-system-roles.network/README.md file on the control node. Validate the playbook syntax: Note that this command only validates the syntax and does not protect against a wrong but valid configuration. Run the playbook: Verification Query the Ansible facts of the managed node and verify that the interface received IP addresses and DNS settings: Additional resources /usr/share/ansible/roles/rhel-system-roles.network/README.md file /usr/share/doc/rhel-system-roles/network/ directory 2.11. Configuring an Ethernet connection with a dynamic IP address by using the network RHEL system role with a device path To connect a Red Hat Enterprise Linux host to an Ethernet network, create a NetworkManager connection profile for the network device. By using Ansible and the network RHEL system role, you can automate this process and remotely configure connection profiles on the hosts defined in a playbook. You can use the network RHEL system role to configure an Ethernet connection that retrieves its IP addresses, gateways, and DNS settings from a DHCP server and IPv6 stateless address autoconfiguration (SLAAC). The role can assign the connection profile to a device based on its path instead of an interface name. 
Prerequisites You have prepared the control node and the managed nodes You are logged in to the control node as a user who can run playbooks on the managed nodes. The account you use to connect to the managed nodes has sudo permissions on them. A physical or virtual Ethernet device exists in the server's configuration. A DHCP server and SLAAC are available in the network. The managed hosts use NetworkManager to configure the network. You know the path of the device. You can display the device path by using the udevadm info /sys/class/net/ <device_name> | grep ID_PATH= command. Procedure Create a playbook file, for example ~/playbook.yml , with the following content: --- - name: Configure the network hosts: managed-node-01.example.com tasks: - name: Ethernet connection profile with dynamic IP address settings ansible.builtin.include_role: name: rhel-system-roles.network vars: network_connections: - name: example match: path: - pci-0000:00:0[1-3].0 - &!pci-0000:00:02.0 type: ethernet autoconnect: yes ip: dhcp4: yes auto6: yes state: up The settings specified in the example playbook include the following: match: path Defines that a condition must be met in order to apply the settings. You can only use this variable with the path option. path: <path_and_expressions> Defines the persistent path of a device. You can set it as a fixed path or an expression. Its value can contain modifiers and wildcards. The example applies the settings to devices that match PCI ID 0000:00:0[1-3].0 , but not 0000:00:02.0 . dhcp4: yes Enables automatic IPv4 address assignment from DHCP, PPP, or similar services. auto6: yes Enables IPv6 auto-configuration. By default, NetworkManager uses Router Advertisements. If the router announces the managed flag, NetworkManager requests an IPv6 address and prefix from a DHCPv6 server. For details about all variables used in the playbook, see the /usr/share/ansible/roles/rhel-system-roles.network/README.md file on the control node. Validate the playbook syntax: Note that this command only validates the syntax and does not protect against a wrong but valid configuration. Run the playbook: Verification Query the Ansible facts of the managed node and verify that the interface received IP addresses and DNS settings: Additional resources /usr/share/ansible/roles/rhel-system-roles.network/README.md file /usr/share/doc/rhel-system-roles/network/ directory 2.12. Configuring multiple Ethernet interfaces by using a single connection profile by interface name In most cases, one connection profile contains the settings of one network device. However, NetworkManager also supports wildcards when you set the interface name in connection profiles. If a host roams between Ethernet networks with dynamic IP address assignment, you can use this feature to create a single connection profile that you can use for multiple Ethernet interfaces. Prerequisites Multiple physical or virtual Ethernet devices exist in the server's configuration. A DHCP server is available in the network. No connection profile exists on the host. Procedure Add a connection profile that applies to all interface names starting with enp : Verification Display all settings of the single connection profile: 3 indicates that the interface can be active multiple times at a particular moment. The connection profile uses all devices that match the pattern in the match.interface-name parameter and, therefore, the connection profiles have the same Universally Unique Identifier (UUID). 
Display the status of the connections: Additional resources nmcli(1) man page on your system nm-settings(5) man page 2.13. Configuring a single connection profile for multiple Ethernet interfaces using PCI IDs The PCI ID is a unique identifier of the devices connected to the system. The connection profile adds multiple devices by matching interfaces based on a list of PCI IDs. You can use this procedure to connect multiple device PCI IDs to the single connection profile. Prerequisites Multiple physical or virtual Ethernet devices exist in the server's configuration. A DHCP server is available in the network. No connection profile exists on the host. Procedure Identify the device path. For example, to display the device paths of all interfaces starting with enp , enter : Add a connection profile that applies to all PCI IDs matching the 0000:00:0[7-8].0 expression: Verification Display the status of the connection: To display all settings of the connection profile: This connection profile uses all devices with a PCI ID which match the pattern in the match.path parameter and, therefore, the connection profiles have the same Universally Unique Identifier (UUID). Additional resources nmcli(1) man page on your system nm-settings(5) man page
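As an additional, optional check for profiles that match several devices, you can list only the active connections together with the devices they are bound to. This is a generic nmcli query and not specific to the examples above:
nmcli -f NAME,UUID,DEVICE connection show --active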
|
[
"nmcli connection show NAME UUID TYPE DEVICE Wired connection 1 a5eb6490-cc20-3668-81f8-0314a27f3f75 ethernet enp1s0",
"nmcli connection add con-name <connection-name> ifname <device-name> type ethernet",
"nmcli connection modify \"Wired connection 1\" connection.id \"Internal-LAN\"",
"nmcli connection show Internal-LAN connection.interface-name: enp1s0 connection.autoconnect: yes ipv4.method: auto ipv6.method: auto",
"nmcli connection modify Internal-LAN ipv4.method auto",
"nmcli connection modify Internal-LAN ipv4.method manual ipv4.addresses 192.0.2.1/24 ipv4.gateway 192.0.2.254 ipv4.dns 192.0.2.200 ipv4.dns-search example.com",
"nmcli connection modify Internal-LAN ipv6.method auto",
"nmcli connection modify Internal-LAN ipv6.method manual ipv6.addresses 2001:db8:1::fffe/64 ipv6.gateway 2001:db8:1::fffe ipv6.dns 2001:db8:1::ffbb ipv6.dns-search example.com",
"nmcli connection modify <connection-name> <setting> <value>",
"nmcli connection up Internal-LAN",
"ip address show enp1s0 2: enp1s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000 link/ether 52:54:00:17:b8:b6 brd ff:ff:ff:ff:ff:ff inet 192.0.2.1/24 brd 192.0.2.255 scope global noprefixroute enp1s0 valid_lft forever preferred_lft forever inet6 2001:db8:1::fffe/64 scope global noprefixroute valid_lft forever preferred_lft forever",
"ip route show default default via 192.0.2.254 dev enp1s0 proto static metric 102",
"ip -6 route show default default via 2001:db8:1::ffee dev enp1s0 proto static metric 102 pref medium",
"cat /etc/resolv.conf search example.com nameserver 192.0.2.200 nameserver 2001:db8:1::ffbb",
"ping <host-name-or-IP-address>",
"nmcli connection show NAME UUID TYPE DEVICE Wired connection 1 a5eb6490-cc20-3668-81f8-0314a27f3f75 ethernet enp1s0",
"nmcli connection edit type ethernet con-name \" <connection-name> \"",
"nmcli connection edit con-name \" <connection-name> \"",
"nmcli> set connection.id Internal-LAN",
"nmcli> print connection.interface-name: enp1s0 connection.autoconnect: yes ipv4.method: auto ipv6.method: auto",
"nmcli> set connection.interface-name enp1s0",
"nmcli> set ipv4.method auto",
"nmcli> ipv4.addresses 192.0.2.1/24 Do you also want to set 'ipv4.method' to 'manual'? [yes]: yes nmcli> ipv4.gateway 192.0.2.254 nmcli> ipv4.dns 192.0.2.200 nmcli> ipv4.dns-search example.com",
"nmcli> set ipv6.method auto",
"nmcli> ipv6.addresses 2001:db8:1::fffe/64 Do you also want to set 'ipv6.method' to 'manual'? [yes]: yes nmcli> ipv6.gateway 2001:db8:1::fffe nmcli> ipv6.dns 2001:db8:1::ffbb nmcli> ipv6.dns-search example.com",
"nmcli> save persistent",
"nmcli> quit",
"ip address show enp1s0 2: enp1s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000 link/ether 52:54:00:17:b8:b6 brd ff:ff:ff:ff:ff:ff inet 192.0.2.1/24 brd 192.0.2.255 scope global noprefixroute enp1s0 valid_lft forever preferred_lft forever inet6 2001:db8:1::fffe/64 scope global noprefixroute valid_lft forever preferred_lft forever",
"ip route show default default via 192.0.2.254 dev enp1s0 proto static metric 102",
"ip -6 route show default default via 2001:db8:1::ffee dev enp1s0 proto static metric 102 pref medium",
"cat /etc/resolv.conf search example.com nameserver 192.0.2.200 nameserver 2001:db8:1::ffbb",
"ping <host-name-or-IP-address>",
"nmcli device status DEVICE TYPE STATE CONNECTION enp1s0 ethernet unavailable --",
"nmtui",
"ip address show enp1s0 2: enp1s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000 link/ether 52:54:00:17:b8:b6 brd ff:ff:ff:ff:ff:ff inet 192.0.2.1/24 brd 192.0.2.255 scope global noprefixroute enp1s0 valid_lft forever preferred_lft forever inet6 2001:db8:1::fffe/64 scope global noprefixroute valid_lft forever preferred_lft forever",
"ip route show default default via 192.0.2.254 dev enp1s0 proto static metric 102",
"ip -6 route show default default via 2001:db8:1::ffee dev enp1s0 proto static metric 102 pref medium",
"cat /etc/resolv.conf search example.com nameserver 192.0.2.200 nameserver 2001:db8:1::ffbb",
"ping <host-name-or-IP-address>",
"ip address show enp1s0 2: enp1s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000 link/ether 52:54:00:17:b8:b6 brd ff:ff:ff:ff:ff:ff inet 192.0.2.1/24 brd 192.0.2.255 scope global noprefixroute enp1s0 valid_lft forever preferred_lft forever inet6 2001:db8:1::fffe/64 scope global noprefixroute valid_lft forever preferred_lft forever",
"ip route show default default via 192.0.2.254 dev enp1s0 proto static metric 102",
"ip -6 route show default default via 2001:db8:1::ffee dev enp1s0 proto static metric 102 pref medium",
"cat /etc/resolv.conf search example.com nameserver 192.0.2.200 nameserver 2001:db8:1::ffbb",
"ping <host-name-or-IP-address>",
"nm-connection-editor",
"ip address show enp1s0 2: enp1s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000 link/ether 52:54:00:17:b8:b6 brd ff:ff:ff:ff:ff:ff inet 192.0.2.1/24 brd 192.0.2.255 scope global noprefixroute enp1s0 valid_lft forever preferred_lft forever inet6 2001:db8:1::fffe/64 scope global noprefixroute valid_lft forever preferred_lft forever",
"ip route show default default via 192.0.2.254 dev enp1s0 proto static metric 102",
"ip -6 route show default default via 2001:db8:1::ffee dev enp1s0 proto static metric 102 pref medium",
"cat /etc/resolv.conf search example.com nameserver 192.0.2.200 nameserver 2001:db8:1::ffbb",
"ping <host-name-or-IP-address>",
"--- interfaces: - name: enp1s0 type: ethernet state: up ipv4: enabled: true address: - ip: 192.0.2.1 prefix-length: 24 dhcp: false ipv6: enabled: true address: - ip: 2001:db8:1::1 prefix-length: 64 autoconf: false dhcp: false routes: config: - destination: 0.0.0.0/0 next-hop-address: 192.0.2.254 next-hop-interface: enp1s0 - destination: ::/0 next-hop-address: 2001:db8:1::fffe next-hop-interface: enp1s0 dns-resolver: config: search: - example.com server: - 192.0.2.200 - 2001:db8:1::ffbb",
"--- interfaces: - name: <profile_name> type: ethernet identifier: mac-address mac-address: <mac_address>",
"nmstatectl apply ~/create-ethernet-profile.yml",
"nmstatectl show enp1s0",
"ip address show enp1s0 2: enp1s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000 link/ether 52:54:00:17:b8:b6 brd ff:ff:ff:ff:ff:ff inet 192.0.2.1/24 brd 192.0.2.255 scope global noprefixroute enp1s0 valid_lft forever preferred_lft forever inet6 2001:db8:1::fffe/64 scope global noprefixroute valid_lft forever preferred_lft forever",
"ip route show default default via 192.0.2.254 dev enp1s0 proto static metric 102",
"ip -6 route show default default via 2001:db8:1::ffee dev enp1s0 proto static metric 102 pref medium",
"cat /etc/resolv.conf search example.com nameserver 192.0.2.200 nameserver 2001:db8:1::ffbb",
"ping <host-name-or-IP-address>",
"managed-node-01.example.com interface=enp1s0 ip_v4=192.0.2.1/24 ip_v6=2001:db8:1::1/64 gateway_v4=192.0.2.254 gateway_v6=2001:db8:1::fffe managed-node-02.example.com interface=enp1s0 ip_v4=192.0.2.2/24 ip_v6=2001:db8:1::2/64 gateway_v4=192.0.2.254 gateway_v6=2001:db8:1::fffe",
"--- - name: Configure the network hosts: managed-node-01.example.com,managed-node-02.example.com tasks: - name: Ethernet connection profile with static IP address settings ansible.builtin.include_role: name: rhel-system-roles.network vars: network_connections: - name: \"{{ interface }}\" interface_name: \"{{ interface }}\" type: ethernet autoconnect: yes ip: address: - \"{{ ip_v4 }}\" - \"{{ ip_v6 }}\" gateway4: \"{{ gateway_v4 }}\" gateway6: \"{{ gateway_v6 }}\" dns: - 192.0.2.200 - 2001:db8:1::ffbb dns_search: - example.com state: up",
"ansible-playbook --syntax-check ~/playbook.yml",
"ansible-playbook ~/playbook.yml",
"ansible managed-node-01.example.com -m ansible.builtin.setup \"ansible_default_ipv4\": { \"address\": \"192.0.2.1\", \"alias\": \"enp1s0\", \"broadcast\": \"192.0.2.255\", \"gateway\": \"192.0.2.254\", \"interface\": \"enp1s0\", \"macaddress\": \"52:54:00:17:b8:b6\", \"mtu\": 1500, \"netmask\": \"255.255.255.0\", \"network\": \"192.0.2.0\", \"prefix\": \"24\", \"type\": \"ether\" }, \"ansible_default_ipv6\": { \"address\": \"2001:db8:1::1\", \"gateway\": \"2001:db8:1::fffe\", \"interface\": \"enp1s0\", \"macaddress\": \"52:54:00:17:b8:b6\", \"mtu\": 1500, \"prefix\": \"64\", \"scope\": \"global\", \"type\": \"ether\" }, \"ansible_dns\": { \"nameservers\": [ \"192.0.2.1\", \"2001:db8:1::ffbb\" ], \"search\": [ \"example.com\" ] },",
"--- - name: Configure the network hosts: managed-node-01.example.com tasks: - name: Ethernet connection profile with static IP address settings ansible.builtin.include_role: name: rhel-system-roles.network vars: network_connections: - name: example match: path: - pci-0000:00:0[1-3].0 - &!pci-0000:00:02.0 type: ethernet autoconnect: yes ip: address: - 192.0.2.1/24 - 2001:db8:1::1/64 gateway4: 192.0.2.254 gateway6: 2001:db8:1::fffe dns: - 192.0.2.200 - 2001:db8:1::ffbb dns_search: - example.com state: up",
"ansible-playbook --syntax-check ~/playbook.yml",
"ansible-playbook ~/playbook.yml",
"ansible managed-node-01.example.com -m ansible.builtin.setup \"ansible_default_ipv4\": { \"address\": \"192.0.2.1\", \"alias\": \"enp1s0\", \"broadcast\": \"192.0.2.255\", \"gateway\": \"192.0.2.254\", \"interface\": \"enp1s0\", \"macaddress\": \"52:54:00:17:b8:b6\", \"mtu\": 1500, \"netmask\": \"255.255.255.0\", \"network\": \"192.0.2.0\", \"prefix\": \"24\", \"type\": \"ether\" }, \"ansible_default_ipv6\": { \"address\": \"2001:db8:1::1\", \"gateway\": \"2001:db8:1::fffe\", \"interface\": \"enp1s0\", \"macaddress\": \"52:54:00:17:b8:b6\", \"mtu\": 1500, \"prefix\": \"64\", \"scope\": \"global\", \"type\": \"ether\" }, \"ansible_dns\": { \"nameservers\": [ \"192.0.2.1\", \"2001:db8:1::ffbb\" ], \"search\": [ \"example.com\" ] },",
"--- interfaces: - name: enp1s0 type: ethernet state: up ipv4: enabled: true auto-dns: true auto-gateway: true auto-routes: true dhcp: true ipv6: enabled: true auto-dns: true auto-gateway: true auto-routes: true autoconf: true dhcp: true",
"--- interfaces: - name: <profile_name> type: ethernet identifier: mac-address mac-address: <mac_address>",
"nmstatectl apply ~/create-ethernet-profile.yml",
"nmstatectl show enp1s0",
"ip address show enp1s0 2: enp1s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000 link/ether 52:54:00:17:b8:b6 brd ff:ff:ff:ff:ff:ff inet 192.0.2.1/24 brd 192.0.2.255 scope global noprefixroute enp1s0 valid_lft forever preferred_lft forever inet6 2001:db8:1::fffe/64 scope global noprefixroute valid_lft forever preferred_lft forever",
"ip route show default default via 192.0.2.254 dev enp1s0 proto static metric 102",
"ip -6 route show default default via 2001:db8:1::ffee dev enp1s0 proto static metric 102 pref medium",
"cat /etc/resolv.conf search example.com nameserver 192.0.2.200 nameserver 2001:db8:1::ffbb",
"ping <host-name-or-IP-address>",
"--- - name: Configure the network hosts: managed-node-01.example.com tasks: - name: Ethernet connection profile with dynamic IP address settings ansible.builtin.include_role: name: rhel-system-roles.network vars: network_connections: - name: enp1s0 interface_name: enp1s0 type: ethernet autoconnect: yes ip: dhcp4: yes auto6: yes state: up",
"ansible-playbook --syntax-check ~/playbook.yml",
"ansible-playbook ~/playbook.yml",
"ansible managed-node-01.example.com -m ansible.builtin.setup \"ansible_default_ipv4\": { \"address\": \"192.0.2.1\", \"alias\": \"enp1s0\", \"broadcast\": \"192.0.2.255\", \"gateway\": \"192.0.2.254\", \"interface\": \"enp1s0\", \"macaddress\": \"52:54:00:17:b8:b6\", \"mtu\": 1500, \"netmask\": \"255.255.255.0\", \"network\": \"192.0.2.0\", \"prefix\": \"24\", \"type\": \"ether\" }, \"ansible_default_ipv6\": { \"address\": \"2001:db8:1::1\", \"gateway\": \"2001:db8:1::fffe\", \"interface\": \"enp1s0\", \"macaddress\": \"52:54:00:17:b8:b6\", \"mtu\": 1500, \"prefix\": \"64\", \"scope\": \"global\", \"type\": \"ether\" }, \"ansible_dns\": { \"nameservers\": [ \"192.0.2.1\", \"2001:db8:1::ffbb\" ], \"search\": [ \"example.com\" ] },",
"--- - name: Configure the network hosts: managed-node-01.example.com tasks: - name: Ethernet connection profile with dynamic IP address settings ansible.builtin.include_role: name: rhel-system-roles.network vars: network_connections: - name: example match: path: - pci-0000:00:0[1-3].0 - &!pci-0000:00:02.0 type: ethernet autoconnect: yes ip: dhcp4: yes auto6: yes state: up",
"ansible-playbook --syntax-check ~/playbook.yml",
"ansible-playbook ~/playbook.yml",
"ansible managed-node-01.example.com -m ansible.builtin.setup \"ansible_default_ipv4\": { \"address\": \"192.0.2.1\", \"alias\": \"enp1s0\", \"broadcast\": \"192.0.2.255\", \"gateway\": \"192.0.2.254\", \"interface\": \"enp1s0\", \"macaddress\": \"52:54:00:17:b8:b6\", \"mtu\": 1500, \"netmask\": \"255.255.255.0\", \"network\": \"192.0.2.0\", \"prefix\": \"24\", \"type\": \"ether\" }, \"ansible_default_ipv6\": { \"address\": \"2001:db8:1::1\", \"gateway\": \"2001:db8:1::fffe\", \"interface\": \"enp1s0\", \"macaddress\": \"52:54:00:17:b8:b6\", \"mtu\": 1500, \"prefix\": \"64\", \"scope\": \"global\", \"type\": \"ether\" }, \"ansible_dns\": { \"nameservers\": [ \"192.0.2.1\", \"2001:db8:1::ffbb\" ], \"search\": [ \"example.com\" ] },",
"nmcli connection add con-name \"Wired connection 1\" connection.multi-connect multiple match.interface-name enp* type ethernet",
"nmcli connection show \"Wired connection 1\" connection.id: Wired connection 1 connection.multi-connect: 3 (multiple) match.interface-name: enp*",
"nmcli connection show NAME UUID TYPE DEVICE Wired connection 1 6f22402e-c0cc-49cf-b702-eaf0cd5ea7d1 ethernet enp7s0 Wired connection 1 6f22402e-c0cc-49cf-b702-eaf0cd5ea7d1 ethernet enp8s0 Wired connection 1 6f22402e-c0cc-49cf-b702-eaf0cd5ea7d1 ethernet enp9s0",
"udevadm info /sys/class/net/enp* | grep ID_PATH= E: ID_PATH=pci-0000:07:00.0 E: ID_PATH=pci-0000:08:00.0",
"nmcli connection add type ethernet connection.multi-connect multiple match.path \"pci-0000:07:00.0 pci-0000:08:00.0\" con-name \"Wired connection 1\"",
"nmcli connection show NAME UUID TYPE DEVICE Wired connection 1 9cee0958-512f-4203-9d3d-b57af1d88466 ethernet enp7s0 Wired connection 1 9cee0958-512f-4203-9d3d-b57af1d88466 ethernet enp8s0",
"nmcli connection show \"Wired connection 1\" connection.id: Wired connection 1 connection.multi-connect: 3 (multiple) match.path: pci-0000:07:00.0,pci-0000:08:00.0"
] |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/9/html/configuring_and_managing_networking/configuring-an-ethernet-connection_configuring-and-managing-networking
|
Chapter 5. Using Red Hat Quay
|
Chapter 5. Using Red Hat Quay The following steps show you how to use the interface and create new organizations and repositories , and to search and browse existing repositories. Following step 3, you can use the command line interface to interact with the registry, and to push and pull images. Use your browser to access the user interface for the Red Hat Quay registry at http://quay-server.example.com , assuming you have configured quay-server.example.com as your hostname in your /etc/hosts file and in your config.yaml file. Click Create Account and add a user, for example, quayadmin with a password password . From the command line, log in to the registry: USD sudo podman login --tls-verify=false quay-server.example.com Example output Username: quayadmin Password: password Login Succeeded! 5.1. Pushing and pulling images on Red Hat Quay Use the following procedure to push and pull images to your Red Hat Quay registry. Procedure To test pushing and pulling images from the Red Hat Quay registry, first pull a sample image from an external registry: USD sudo podman pull busybox Example output Trying to pull docker.io/library/busybox... Getting image source signatures Copying blob 4c892f00285e done Copying config 22667f5368 done Writing manifest to image destination Storing signatures 22667f53682a2920948d19c7133ab1c9c3f745805c14125859d20cede07f11f9 Enter the following command to see the local copy of the image: USD sudo podman images Example output REPOSITORY TAG IMAGE ID CREATED SIZE docker.io/library/busybox latest 22667f53682a 14 hours ago 1.45 MB Enter the following command to tag this image, which prepares the image for pushing it to the registry: USD sudo podman tag docker.io/library/busybox quay-server.example.com/quayadmin/busybox:test Push the image to your registry. Following this step, you can use your browser to see the tagged image in your repository. USD sudo podman push --tls-verify=false quay-server.example.com/quayadmin/busybox:test Example output Getting image source signatures Copying blob 6b245f040973 done Copying config 22667f5368 done Writing manifest to image destination Storing signatures To test access to the image from the command line, first delete the local copy of the image: USD sudo podman rmi quay-server.example.com/quayadmin/busybox:test Untagged: quay-server.example.com/quayadmin/busybox:test Pull the image again, this time from your Red Hat Quay registry: USD sudo podman pull --tls-verify=false quay-server.example.com/quayadmin/busybox:test Example output Trying to pull quay-server.example.com/quayadmin/busybox:test... Getting image source signatures Copying blob 6ef22a7134ba [--------------------------------------] 0.0b / 0.0b Copying config 22667f5368 done Writing manifest to image destination Storing signatures 22667f53682a2920948d19c7133ab1c9c3f745805c14125859d20cede07f11f9
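As an optional final check, you can confirm that the pulled image is present in local storage again. The following command is a standard podman query and uses the same repository as the example above:
sudo podman images quay-server.example.com/quayadmin/busybox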
|
[
"sudo podman login --tls-verify=false quay-server.example.com",
"Username: quayadmin Password: password Login Succeeded!",
"sudo podman pull busybox",
"Trying to pull docker.io/library/busybox Getting image source signatures Copying blob 4c892f00285e done Copying config 22667f5368 done Writing manifest to image destination Storing signatures 22667f53682a2920948d19c7133ab1c9c3f745805c14125859d20cede07f11f9",
"sudo podman images",
"REPOSITORY TAG IMAGE ID CREATED SIZE docker.io/library/busybox latest 22667f53682a 14 hours ago 1.45 MB",
"sudo podman tag docker.io/library/busybox quay-server.example.com/quayadmin/busybox:test",
"sudo podman push --tls-verify=false quay-server.example.com/quayadmin/busybox:test",
"Getting image source signatures Copying blob 6b245f040973 done Copying config 22667f5368 done Writing manifest to image destination Storing signatures",
"sudo podman rmi quay-server.example.com/quayadmin/busybox:test Untagged: quay-server.example.com/quayadmin/busybox:test",
"sudo podman pull --tls-verify=false quay-server.example.com/quayadmin/busybox:test",
"Trying to pull quay-server.example.com/quayadmin/busybox:test Getting image source signatures Copying blob 6ef22a7134ba [--------------------------------------] 0.0b / 0.0b Copying config 22667f5368 done Writing manifest to image destination Storing signatures 22667f53682a2920948d19c7133ab1c9c3f745805c14125859d20cede07f11f9"
] |
https://docs.redhat.com/en/documentation/red_hat_quay/3.10/html/proof_of_concept_-_deploying_red_hat_quay/use-quay-poc
|
Chapter 4. Telemetry data collection
|
Chapter 4. Telemetry data collection The telemetry data collection feature helps in collecting and analyzing the telemetry data to improve your experience with Red Hat Developer Hub. This feature is enabled by default. Important As an administrator, you can disable the telemetry data collection feature based on your needs. For example, in an air-gapped environment, you can disable this feature to avoid needless outbound requests affecting the responsiveness of the RHDH application. For more details, see the Disabling telemetry data collection in RHDH section. Red Hat collects and analyzes the following data: Events of page visits and clicks on links or buttons. System-related information, for example, locale, timezone, user agent including browser and OS details. Page-related information, for example, title, category, extension name, URL, path, referrer, and search parameters. Anonymized IP addresses, recorded as 0.0.0.0 . Anonymized username hashes, which are unique identifiers used solely to identify the number of unique users of the RHDH application. With RHDH, you can customize the telemetry data collection feature and the telemetry Segment source configuration based on your needs. 4.1. Disabling telemetry data collection in RHDH To disable telemetry data collection, you must disable the analytics-provider-segment plugin either using the Helm Chart or the Red Hat Developer Hub Operator configuration. 4.1.1. Disabling telemetry data collection using the Helm Chart You can disable the telemetry data collection feature by using the Helm Chart. Prerequisites You have logged in as an administrator in the OpenShift Container Platform web console. You have installed Red Hat Developer Hub on OpenShift Container Platform using the Helm Chart. For more details, see the Installing Red Hat Developer Hub on OpenShift Container Platform with the Helm chart section. Procedure In the Developer perspective of the OpenShift Container Platform web console, go to the Helm view to see the list of Helm releases. Click the overflow menu on the Helm release that you want to use and select Upgrade . Note You can also create a new Helm release by clicking the Create button and edit the configuration to disable telemetry. Use either the Form view or YAML view to edit the Helm configuration: Using Form view Expand Root Schema global Dynamic plugins configuration. List of dynamic plugins that should be installed in the backstage application . Click the Add list of dynamic plugins that should be installed in the backstage application. link. Perform one of the following steps: If you have not configured the plugin, add the following value in the Package specification of the dynamic plugin to install. It should be usable by the npm pack command. field: ./dynamic-plugins/dist/janus-idp-backstage-plugin-analytics-provider-segment If you have configured the plugin, find the Package specification of the dynamic plugin to install. It should be usable by the npm pack command. field with the ./dynamic-plugins/dist/janus-idp-backstage-plugin-analytics-provider-segment value. Select the Disable the plugin checkbox. Click Upgrade . Using YAML view Perform one of the following steps: If you have not configured the plugin, add the following YAML code in your values.yaml Helm configuration file: # ... global: dynamic: plugins: - package: './dynamic-plugins/dist/janus-idp-backstage-plugin-analytics-provider-segment' disabled: true # ... 
If you have configured the plugin, search it in your Helm configuration and set the value of the plugins.disabled parameter to true . Click Upgrade . 4.1.2. Disabling telemetry data collection using the Operator You can disable the telemetry data collection feature by using the Operator. Prerequisites You have logged in as an administrator in the OpenShift Container Platform web console. You have installed Red Hat Developer Hub on OpenShift Container Platform using the Operator. For more details, see the Installing Red Hat Developer Hub on OpenShift Container Platform with the Operator section. Procedure Perform one of the following steps: If you have created the dynamic-plugins-rhdh ConfigMap file and not configured the analytics-provider-segment plugin, add the plugin to the list of plugins and set its plugins.disabled parameter to true . If you have created the dynamic-plugins-rhdh ConfigMap file and configured the analytics-provider-segment plugin, search the plugin in the list of plugins and set its plugins.disabled parameter to true . If you have not created the ConfigMap file, create it with the following YAML code: kind: ConfigMap apiVersion: v1 metadata: name: dynamic-plugins-rhdh data: dynamic-plugins.yaml: | includes: - dynamic-plugins.default.yaml plugins: - package: './dynamic-plugins/dist/janus-idp-backstage-plugin-analytics-provider-segment' disabled: true Set the value of the dynamicPluginsConfigMapName parameter to the name of the ConfigMap file in your Backstage custom resource: # ... spec: application: dynamicPluginsConfigMapName: dynamic-plugins-rhdh # ... Save the configuration changes. 4.2. Enabling telemetry data collection in RHDH The telemetry data collection feature is enabled by default. However, if you have disabled the feature and want to re-enable it, you must enable the analytics-provider-segment plugin either by using the Helm Chart or the Red Hat Developer Hub Operator configuration. 4.2.1. Enabling telemetry data collection using the Helm Chart You can enable the telemetry data collection feature by using the Helm Chart. Prerequisites You have logged in as an administrator in the OpenShift Container Platform web console. You have installed Red Hat Developer Hub on OpenShift Container Platform using the Helm Chart. For more details, see the Installing Red Hat Developer Hub on OpenShift Container Platform with the Helm chart section. Procedure In the Developer perspective of the OpenShift Container Platform web console, go to the Helm view to see the list of Helm releases. Click the overflow menu on the Helm release that you want to use and select Upgrade . Note You can also create a new Helm release by clicking the Create button and edit the configuration to enable telemetry. Use either the Form view or YAML view to edit the Helm configuration: Using Form view Expand Root Schema global Dynamic plugins configuration. List of dynamic plugins that should be installed in the backstage application . Click the Add list of dynamic plugins that should be installed in the backstage application. link. Perform one of the following steps: If you have not configured the plugin, add the following value in the Package specification of the dynamic plugin to install. It should be usable by the npm pack command. field: ./dynamic-plugins/dist/janus-idp-backstage-plugin-analytics-provider-segment If you have configured the plugin, find the Package specification of the dynamic plugin to install. It should be usable by the npm pack command. 
field with the ./dynamic-plugins/dist/janus-idp-backstage-plugin-analytics-provider-segment value. Clear the Disable the plugin checkbox. Click Upgrade . Using YAML view Perform one of the following steps: If you have not configured the plugin, add the following YAML code in your Helm configuration file: # ... global: dynamic: plugins: - package: './dynamic-plugins/dist/janus-idp-backstage-plugin-analytics-provider-segment' disabled: false # ... If you have configured the plugin, search it in your Helm configuration and set the value of the plugins.disabled parameter to false . Click Upgrade . 4.2.2. Enabling telemetry data collection using the Operator You can enable the telemetry data collection feature by using the Operator. Prerequisites You have logged in as an administrator in the OpenShift Container Platform web console. You have installed Red Hat Developer Hub on OpenShift Container Platform using the Operator. For more details, see the Installing Red Hat Developer Hub on OpenShift Container Platform with the Operator section. Procedure Perform one of the following steps: If you have created the dynamic-plugins-rhdh ConfigMap file and not configured the analytics-provider-segment plugin, add the plugin to the list of plugins and set its plugins.disabled parameter to false . If you have created the dynamic-plugins-rhdh ConfigMap file and configured the analytics-provider-segment plugin, search the plugin in the list of plugins and set its plugins.disabled parameter to false . If you have not created the ConfigMap file, create it with the following YAML code: kind: ConfigMap apiVersion: v1 metadata: name: dynamic-plugins-rhdh data: dynamic-plugins.yaml: | includes: - dynamic-plugins.default.yaml plugins: - package: './dynamic-plugins/dist/janus-idp-backstage-plugin-analytics-provider-segment' disabled: false Set the value of the dynamicPluginsConfigMapName parameter to the name of the ConfigMap file in your Backstage custom resource: # ... spec: application: dynamicPluginsConfigMapName: dynamic-plugins-rhdh # ... Save the configuration changes. 4.3. Customizing telemetry Segment source The analytics-provider-segment plugin sends the collected telemetry data to Red Hat by default. However, you can configure a new Segment source that receives telemetry data based on your needs. For configuration, you need a unique Segment write key that points to the Segment source. Note By configuring a new Segment source, you can collect and analyze the same set of data that is mentioned in the Telemetry data collection section. You might also require to create your own telemetry data collection notice for your application users. 4.3.1. Customizing telemetry Segment source using the Helm Chart You can configure integration with your Segment source by using the Helm Chart. Prerequisites You have logged in as an administrator in the OpenShift Container Platform web console. You have installed Red Hat Developer Hub on OpenShift Container Platform using the Helm Chart. For more details, see the Installing Red Hat Developer Hub on OpenShift Container Platform with the Helm chart section. Procedure In the Developer perspective of the OpenShift Container Platform web console, go to the Helm view to see the list of Helm releases. Click the overflow menu on the Helm release that you want to use and select Upgrade . Use either the Form view or YAML view to edit the Helm configuration: Using Form view Expand Root Schema Backstage Chart Schema Backstage Parameters Backstage container environment variables . 
Click the Add Backstage container environment variables link. Enter the name and value of the Segment key. Click Upgrade . Using YAML view Add the following YAML code in your Helm configuration file: # ... upstream: backstage: extraEnvVars: - name: SEGMENT_WRITE_KEY value: <segment_key> 1 # ... 1 Replace <segment_key> with a unique identifier for your Segment source. Click Upgrade . 4.3.2. Customizing telemetry Segment source using the Operator You can configure integration with your Segment source by using the Operator. Prerequisites You have logged in as an administrator in the OpenShift Container Platform web console. You have installed Red Hat Developer Hub on OpenShift Container Platform using the Operator. For more details, see the Installing Red Hat Developer Hub on OpenShift Container Platform with the Operator section. Procedure Add the following YAML code in your Backstage custom resource (CR): # ... spec: application: extraEnvs: envs: - name: SEGMENT_WRITE_KEY value: <segment_key> 1 # ... 1 Replace <segment_key> with a unique identifier for your Segment source. Save the configuration changes.
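After the change is rolled out, you can optionally verify that the environment variable reached the Developer Hub container. The deployment name in the following command is an assumption; replace it with the actual name of your Backstage deployment:
oc set env deployment/<developer_hub_deployment> --list | grep SEGMENT_WRITE_KEY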
|
[
"global: dynamic: plugins: - package: './dynamic-plugins/dist/janus-idp-backstage-plugin-analytics-provider-segment' disabled: true",
"kind: ConfigMap apiVersion: v1 metadata: name: dynamic-plugins-rhdh data: dynamic-plugins.yaml: | includes: - dynamic-plugins.default.yaml plugins: - package: './dynamic-plugins/dist/janus-idp-backstage-plugin-analytics-provider-segment' disabled: true",
"spec: application: dynamicPluginsConfigMapName: dynamic-plugins-rhdh",
"global: dynamic: plugins: - package: './dynamic-plugins/dist/janus-idp-backstage-plugin-analytics-provider-segment' disabled: false",
"kind: ConfigMap apiVersion: v1 metadata: name: dynamic-plugins-rhdh data: dynamic-plugins.yaml: | includes: - dynamic-plugins.default.yaml plugins: - package: './dynamic-plugins/dist/janus-idp-backstage-plugin-analytics-provider-segment' disabled: false",
"spec: application: dynamicPluginsConfigMapName: dynamic-plugins-rhdh",
"upstream: backstage: extraEnvVars: - name: SEGMENT_WRITE_KEY value: <segment_key> 1",
"spec: application: extraEnvs: envs: - name: SEGMENT_WRITE_KEY value: <segment_key> 1"
] |
https://docs.redhat.com/en/documentation/red_hat_developer_hub/1.2/html/administration_guide_for_red_hat_developer_hub/assembly-rhdh-telemetry
|
4.3.10. Combining Volume Groups
|
4.3.10. Combining Volume Groups To combine two volume groups into a single volume group, use the vgmerge command. You can merge an inactive "source" volume group into an active or inactive "destination" volume group if the physical extent sizes of the volumes are equal and the physical and logical volume summaries of both volume groups fit within the destination volume group's limits. The following command merges the inactive volume group my_vg into the active or inactive volume group databases, giving verbose runtime information.
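Before running it, note that the merge only succeeds while the source volume group is inactive; if my_vg is still active, you can deactivate it first with a command like the following (a generic example):
vgchange -a n my_vg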
|
[
"vgmerge -v databases my_vg"
] |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/cluster_logical_volume_manager/vg_combine
|
Chapter 1. Overview
|
Chapter 1. Overview The RHEL for SAP Applications or RHEL for SAP Solutions subscriptions provide additional RHEL repositories that contain additional RPM packages required for running SAP applications like S/4HANA, SAP HANA or SAP NetWeaver based SAP products (like ERP or CRM) on RHEL and to use additional features provided by Red Hat specifically for SAP, like the HA solutions for managing S/4HANA, SAP HANA and SAP NetWeaver . Red Hat offers a new RHEL minor release every 6 months. A fix for a problem which has been reported for a given RHEL minor release might be available in a package which is part of one of the following RHEL minor releases. For customers who have to, or want to, keep their system(s) on a certain RHEL minor release for more than 6 months, Red Hat is offering Red Hat Enterprise Linux Extended Maintenance as an Extended Update Support (EUS) Add-On or as Update Services for SAP Solutions (E4S) . These repositories receive important fixes for up to two years (EUS) or four years (E4S) after the release of the corresponding RHEL minor release. The EUS and E4S repositories are only available for certain RHEL minor releases. See the Red Hat Enterprise Linux Life Cycle page for more information on the RHEL release schedule. SAP validates SAP NetWeaver/SAP ABAP Application Platform once per RHEL major release (e. g. RHEL 9), so you can run it on any RHEL minor release once it has been validated on the corresponding RHEL major release (e.g. 9.0, 9.1, 9.2, ... ). In contrast, SAP validates SAP HANA only for specific RHEL minor releases - typically for those RHEL minor releases for which E4S repositories are available. This document provides: instructions for registering your RHEL system to use RHEL for SAP Applications or RHEL for SAP Solutions subscriptions an overview of the repositories that must be enabled based on the combination of SAP products and the RHEL release, and the procedure for enabling the repositories. Note Always verify with SAP and with your hardware partner or infrastructure provider if the SAP product you are planning to use is supported for the RHEL release that is going to be used. When using the EUS or E4S repos, the targeted RHEL minor release must be set via subscription-manager, to ensure that the system does not get updated to a higher RHEL minor release than desired. This document applies only to on-premise systems and to "bring your own subscription" (BYOS) systems on any public cloud platform using Red Hat Subscription Manager (RHSM). This document is not applicable to "pay as you go" (PAYG) instances using RHUI on public cloud platforms. In the case of PAYG images, the repositories are defined by the preinstalled RHUI client rpm and should not be configured manually. For the RHEL for SAP Solutions with HA and US subscription, the following E4S repositories will be present on the virtual machine: AppStream, BaseOS, High Availability, SAP NetWeaver and SAP Solutions. For the RHEL for SAP Applications subscription, the following EUS repositories will be present on the virtual machine: AppStream, BaseOS, and SAP NetWeaver. For a final minor release of a given major release, for example 7.9, 8.10 and 9.10, the non EUS/E4S repositories will be, for RHEL for SAP Solutions with HA and US: Common, Extras, High Availability, SAP, SAP HANA and Server, and for RHEL for SAP Applications: Common, Extras, SAP and Server. Every cloud provider also has custom cloud-specific repositories.
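As a brief illustration of the points above, pinning a system to a minor release and enabling the E4S repositories with subscription-manager might look like the following example. The minor release, architecture, and repository IDs shown here are examples only and depend on your subscription; the exact procedures are described in the following chapters:
subscription-manager release --set=9.2
subscription-manager repos \
  --enable=rhel-9-for-x86_64-baseos-e4s-rpms \
  --enable=rhel-9-for-x86_64-appstream-e4s-rpms \
  --enable=rhel-9-for-x86_64-sap-solutions-e4s-rpms \
  --enable=rhel-9-for-x86_64-sap-netweaver-e4s-rpms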
| null |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux_for_sap_solutions/9/html/rhel_for_sap_subscriptions_and_repositories/asmb_overview_rhel-for-sap-subscriptions-and-repositories-9
|
Appendix B. Troubleshooting
|
Appendix B. Troubleshooting The troubleshooting information in the following sections might be helpful when diagnosing issues after the installation process. The following sections are for all supported architectures. However, if an issue is for a particular architecture, it is specified at the start of the section. B.1. Resuming an interrupted download attempt You can resume an interrupted download using the curl command. Prerequisite You have navigated to the Product Downloads section of the Red Hat Customer Portal at https://access.redhat.com/downloads , and selected the required variant, version, and architecture. You have right-clicked on the required ISO file, and selected Copy Link Location to copy the URL of the ISO image file to your clipboard. Procedure Download the ISO image from the new link. Add the --continue-at - option to automatically resume the download: Use a checksum utility such as sha256sum to verify the integrity of the image file after the download finishes: Compare the output with reference checksums provided on the Red Hat Enterprise Linux Product Download web page. Example B.1. Resuming an interrupted download attempt The following is an example of a curl command for a partially downloaded ISO image: B.2. Disks are not detected If the installation program cannot find a writable storage device to install to, it returns the following error message in the Installation Destination window: No disks detected. Please shut down the computer, connect at least one disk, and restart to complete installation. Check the following items: Your system has at least one storage device attached. If your system uses a hardware RAID controller; verify that the controller is properly configured and working as expected. See your controller's documentation for instructions. If you are installing into one or more iSCSI devices and there is no local storage present on the system, verify that all required LUNs are presented to the appropriate Host Bus Adapter (HBA). If the error message is still displayed after rebooting the system and starting the installation process, the installation program failed to detect the storage. In many cases the error message is a result of attempting to install on an iSCSI device that is not recognized by the installation program. In this scenario, you must perform a driver update before starting the installation. Check your hardware vendor's website to determine if a driver update is available. For more general information about driver updates, see the Updating drivers during installation . You can also consult the Red Hat Hardware Compatibility List, available at https://access.redhat.com/ecosystem/search/#/category/Server . B.3. Cannot boot with a RAID card If you cannot boot your system after the installation, you might need to reinstall and repartition your system's storage. Some BIOS types do not support booting from RAID cards. After you finish the installation and reboot the system for the first time, a text-based screen displays the boot loader prompt (for example, grub> ) and a flashing cursor might be displayed. If this is the case, you must repartition your system and move your /boot partition and the boot loader outside of the RAID array. The /boot partition and the boot loader must be on the same drive. Once these changes have been made, you should be able to finish your installation and boot the system properly. B.4. 
Graphical boot sequence is not responding When rebooting your system for the first time after installation, the system might be unresponsive during the graphical boot sequence. If this occurs, a reset is required. In this scenario, the boot loader menu is displayed successfully, but selecting any entry and attempting to boot the system results in a halt. This usually indicates that there is a problem with the graphical boot sequence. To resolve the issue, you must disable the graphical boot by temporarily altering the setting at boot time before changing it permanently. Procedure: Disabling the graphical boot temporarily Start your system and wait until the boot loader menu is displayed. If you set your boot timeout period to 0 , press the Esc key to access it. From the boot loader menu, use your cursor keys to highlight the entry you want to boot. Press the Tab key on BIOS-based systems or the e key on UEFI-based systems to edit the selected entry options. In the list of options, find the kernel line - that is, the line beginning with the keyword linux . On this line, locate and delete rhgb . Press F10 or Ctrl + X to boot your system with the edited options. If the system started successfully, you can log in normally. However, if you do not disable graphical boot permanently, you must perform this procedure every time the system boots. Procedure: Disabling the graphical boot permanently Log in to the root account on your system. Use the grubby tool to find the default GRUB kernel: Use the grubby tool to remove the rhgb boot option from the default kernel in your GRUB configuration. For example: Reboot the system. The graphical boot sequence is no longer used. If you want to enable the graphical boot sequence, follow the same procedure, replacing the --remove-args="rhgb" parameter with the --args="rhgb" parameter. This restores the rhgb boot option to the default kernel in your GRUB configuration. B.5. X server fails after log in An X server is a program in the X Window System that runs on local machines, that is, the computers used directly by users. X server handles all access to the graphics cards, display screens and input devices, typically a keyboard and mouse on those computers. The X Window System, often referred to as X, is a complete, cross-platform and free client-server system for managing GUIs on single computers and on networks of computers. The client-server model is an architecture that divides the work between two separate but linked applications, referred to as clients and servers.* If X server crashes after login, one or more of the file systems might be full. To troubleshoot the issue, execute the following command: The output verifies which partition is full - in most cases, the problem is on the /home partition. The following is a sample output of the df command: In the example, you can see that the /home partition is full, which causes the failure. Remove any unwanted files. After you free up some disk space, start X using the startx command. For additional information about df and an explanation of the options available, such as the -h option used in this example, see the df(1) man page on your system. *Source: http://www.linfo.org/x_server.html B.6. RAM is not recognized In some scenarios, the kernel does not recognize all memory (RAM), which causes the system to use less memory than is installed. If the total amount of memory that your system reports does not match your expectations, it is likely that at least one of your memory modules is faulty. 
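For example, to display the amount of memory the kernel currently reports in MiB, you can use the free utility; the exact figures vary per system:
free -m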
On BIOS-based systems, you can use the Memtest86+ utility to test your system's memory. Some hardware configurations have part of the system's RAM reserved, and as a result, it is unavailable to the system. Some laptop computers with integrated graphics cards reserve a portion of memory for the GPU. For example, a laptop with 4 GiB of RAM and an integrated Intel graphics card shows roughly 3.7 GiB of available memory. Additionally, the kdump crash kernel dumping mechanism, which is enabled by default on most Red Hat Enterprise Linux systems, reserves some memory for the secondary kernel used in case of a primary kernel failure. This reserved memory is not displayed as available. Use this procedure to manually set the amount of memory. Procedure Check the amount of memory that your system currently reports in MiB: Reboot your system and wait until the boot loader menu is displayed. If your boot timeout period is set to 0 , press the Esc key to access the menu. From the boot loader menu, use your cursor keys to highlight the entry you want to boot, and press the Tab key on BIOS-based systems or the e key on UEFI-based systems to edit the selected entry options. In the list of options, find the kernel line: that is, the line beginning with the keyword linux . Append the following option to the end of this line: Replace xx with the amount of RAM you have in MiB. Press F10 or Ctrl + X to boot your system with the edited options. Wait for the system to boot, log in, and open a command line. Check the amount of memory that your system reports in MiB: If the total amount of RAM displayed by the command now matches your expectations, make the change permanent: B.7. System is displaying signal 11 errors A signal 11 error, commonly known as a segmentation fault, means that a program accessed a memory location that it was not assigned. A signal 11 error can occur due to a bug in one of the software programs that are installed, or faulty hardware. If you receive a signal 11 error during the installation process, verify that you are using the most recent installation images and prompt the installation program to verify them to ensure they are not corrupt. For more information, see Verifying Boot media . Faulty installation media (such as an improperly burned or scratched optical disk) are a common cause of signal 11 errors. Verify the integrity of the installation media before every installation. For information about obtaining the most recent installation media, refer to the Product Downloads page. To perform a media check before the installation starts, append the rd.live.check boot option at the boot menu. If you performed a media check without any errors and you still have issues with segmentation faults, it usually indicates that your system encountered a hardware error. In this scenario, the problem is most likely in the system's memory (RAM). This can be a problem even if you previously used a different operating system on the same computer without any errors. Note For AMD and Intel 64-bit and 64-bit ARM architectures: On BIOS-based systems, you can use the Memtest86+ memory testing module included on the installation media to perform a thorough test of your system's memory. For more information, see Detecting memory faults using the Memtest86 application . Other possible causes are beyond this document's scope. Consult your hardware manufacturer's documentation and also see the Red Hat Hardware Compatibility List, available online at https://access.redhat.com/ecosystem/search/#/category/Server . 
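If segmentation faults continue on an installed system even though the media check passes, the kernel log frequently records the underlying hardware event. A minimal check, run as root, is shown below; the grep patterns are only a starting point and do not replace a full memory test:
dmesg | grep -i -e 'machine check' -e edac
journalctl -k -b | grep -i -e 'machine check' -e edac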
B.8. Unable to IPL from network storage space on IBM Power Systems If you experience difficulties when trying to IPL from Network Storage Space (*NWSSTG), it is most likely due to a missing PReP partition. In this scenario, you must reinstall the system and create this partition during the partitioning phase or in the Kickstart file. B.9. Using XDMCP There are scenarios where you have installed the X Window System and want to log in to your Red Hat Enterprise Linux system using a graphical login manager. Use this procedure to enable the X Display Manager Control Protocol (XDMCP) and remotely log in to a desktop environment from any X-compatible client, such as a network-connected workstation or X11 terminal. Note XDMCP is not supported by the Wayland protocol. Procedure Open the /etc/gdm/custom.conf configuration file in a plain text editor such as vi or nano . In the custom.conf file, locate the section starting with [xdmcp] . In this section, add the following line: If you are using XDMCP, ensure that WaylandEnable=false is present in the /etc/gdm/custom.conf file. Save the file and exit the text editor. Restart the X Window System. To do this, either reboot the system, or restart the GNOME Display Manager using the following command as root: Warning Restarting the gdm service terminates all currently running GNOME sessions of all desktop users who are logged in. This might result in users losing unsaved data. Wait for the login prompt and log in using your user name and password. The X Window System is now configured for XDMCP. You can connect to it from another workstation (client) by starting a remote X session using the X command on the client workstation. For example: Replace address with the host name of the remote X11 server. The command connects to the remote X11 server using XDMCP and displays the remote graphical login screen on display :1 of the X11 server system (usually accessible by pressing Ctrl-Alt-F8 ). You can also access remote desktop sessions using a nested X11 server, which opens the remote desktop as a window in your current X11 session. You can use Xnest to open a remote desktop nested in a local X11 session. For example, run Xnest using the following command, replacing address with the host name of the remote X11 server: Additional resources X Window System documentation B.10. Using rescue mode The installation program's rescue mode is a minimal Linux environment that can be booted from the Red Hat Enterprise Linux DVD or other boot media. It contains command-line utilities for repairing a wide variety of issues. Rescue mode can be accessed from the Troubleshooting menu of the boot menu. In this mode, you can mount file systems as read-only, blacklist or add a driver provided on a driver disc, install or upgrade system packages, or manage partitions. Note The installation program's rescue mode is different from rescue mode (an equivalent to single-user mode) and emergency mode, which are provided as parts of the systemd system and service manager. To boot into rescue mode, you must be able to boot the system using one of the Red Hat Enterprise Linux boot media, such as a minimal boot disc or USB drive, or a full installation DVD. Important Advanced storage, such as iSCSI or zFCP devices, must be configured either using dracut boot options such as rd.zfcp= or root=iscsi: options , or in the CMS configuration file on 64-bit IBM Z. It is not possible to configure these storage devices interactively after booting into rescue mode. 
For information about dracut boot options, see the dracut.cmdline(7) man page on your system. B.10.1. Booting into rescue mode This procedure describes how to boot into rescue mode. Procedure Boot the system from either minimal boot media, or a full installation DVD or USB drive, and wait for the boot menu to be displayed. From the boot menu, either select Troubleshooting > Rescue a Red Hat Enterprise Linux system option, or append the inst.rescue option to the boot command line. To enter the boot command line, press the Tab key on BIOS-based systems or the e key on UEFI-based systems. Optional: If your system requires a third-party driver provided on a driver disc to boot, append the inst.dd=driver_name to the boot command line: Optional: If a driver that is part of the Red Hat Enterprise Linux distribution prevents the system from booting, append the modprobe.blacklist= option to the boot command line: Press Enter (BIOS-based systems) or Ctrl + X (UEFI-based systems) to boot the modified option. Wait until the following message is displayed: If you select 1 , the installation program attempts to mount your file system under the directory /mnt/sysroot/ . You are notified if it fails to mount a partition. If you select 2 , it attempts to mount your file system under the directory /mnt/sysroot/ , but in read-only mode. If you select 3 , your file system is not mounted. For the system root, the installer supports two mount points /mnt/sysimage and /mnt/sysroot . The /mnt/sysroot path is used to mount / of the target system. Usually, the physical root and the system root are the same, so /mnt/sysroot is attached to the same file system as /mnt/sysimage . The only exceptions are rpm-ostree systems, where the system root changes based on the deployment. Then, /mnt/sysroot is attached to a subdirectory of /mnt/sysimage . Use /mnt/sysroot for chroot. Select 1 to continue. Once your system is in rescue mode, a prompt appears on VC (virtual console) 1 and VC 2. Use the Ctrl+Alt+F1 key combination to access VC 1 and Ctrl+Alt+F2 to access VC 2: Even if your file system is mounted, the default root partition while in rescue mode is a temporary root partition, not the root partition of the file system used during normal user mode ( multi-user.target or graphical.target ). If you selected to mount your file system and it mounted successfully, you can change the root partition of the rescue mode environment to the root partition of your file system by executing the following command: This is useful if you need to run commands, such as rpm , that require your root partition to be mounted as / . To exit the chroot environment, type exit to return to the prompt. If you selected 3 , you can still try to mount a partition or LVM2 logical volume manually inside rescue mode by creating a directory, such as /directory/ , and typing the following command: In the above command, /directory/ is the directory that you created and /dev/mapper/VolGroup00-LogVol02 is the LVM2 logical volume you want to mount. If the partition is a different type than XFS, replace the xfs string with the correct type (such as ext4). If you do not know the names of all physical partitions, use the following command to list them: If you do not know the names of all LVM2 physical volumes, volume groups, or logical volumes, use the pvdisplay , vgdisplay or lvdisplay commands. B.10.2. 
Using an SOS report in rescue mode The sosreport command-line utility collects configuration and diagnostic information, such as the running kernel version, loaded modules, and system and service configuration files from the system. The utility output is stored in a tar archive in the /var/tmp/ directory. The sosreport utility is useful for analyzing system errors and troubleshooting. Use this procedure to capture an sosreport output in rescue mode. Prerequisites You have booted into rescue mode. You have mounted the installed system / (root) partition in read-write mode. You have contacted Red Hat Support about your case and received a case number. Procedure Change the root directory to the /mnt/sysroot/ directory: Execute sosreport to generate an archive with system configuration and diagnostic information: sosreport prompts you to enter your name and the case number you received from Red Hat Support. Use only letters and numbers because adding any of the following characters or spaces could render the report unusable: # % & { } \ < > > * ? / USD ~ ' " : @ + ` | = Optional: If you want to transfer the generated archive to a new location using the network, it is necessary to have a network interface configured. In this scenario, use the dynamic IP addressing as no other steps required. However, when using static addressing, enter the following command to assign an IP address (for example 10.13.153.64/23) to a network interface, for example dev eth0: Exit the chroot environment: Store the generated archive in a new location, from where it can be easily accessible: For transferring the archive through the network, use the scp utility: Additional resources What is an sosreport and how to create one in Red Hat Enterprise Linux? (Red Hat Knowledgebase) How to generate sosreport from the rescue environment (Red Hat Knowledgebase) How do I make sosreport write to an alternative location? (Red Hat Knowledgebase) Sosreport fails. What data should I provide in its place? (Red Hat Knowledgebase) B.10.3. Reinstalling the GRUB boot loader In some scenarios, the GRUB boot loader is mistakenly deleted, corrupted, or replaced by other operating systems. In that case, reinstall GRUB on the master boot record (MBR) on AMD64 and Intel 64 systems with BIOS. Prerequisites You have booted into rescue mode. You have mounted the installed system / (root) partition in read-write mode. You have mounted the /boot mount point in read-write mode. Procedure Change the root partition: Reinstall the GRUB boot loader, where the install_device block device was installed: Important Running the grub2-install command could lead to the machine being unbootable if all the following conditions apply: The system is an AMD64 or Intel 64 with Extensible Firmware Interface (EFI). Secure Boot is enabled. After you run the grub2-install command, you cannot boot the AMD64 or Intel 64 systems that have Extensible Firmware Interface (EFI) and Secure Boot enabled. This issue occurs because the grub2-install command installs an unsigned GRUB image that boots directly instead of using the shim application. When the system boots, the shim application validates the image signature, which when not found fails to boot the system. Reboot the system. B.10.4. Using yum to add or remove a driver Missing or malfunctioning drivers cause problems when booting the system. Rescue mode provides an environment in which you can add or remove a driver even when the system fails to boot. 
Wherever possible, use the yum package manager to remove malfunctioning drivers or to add updated or missing drivers. Important When you install a driver from a driver disc, the driver disc updates all initramfs images on the system to use this driver. If a problem with a driver prevents a system from booting, you cannot rely on booting the system from another initramfs image. B.10.4.1. Adding a driver using yum Use this procedure to add a driver. Prerequisites You have booted into rescue mode. You have mounted the installed system in read-write mode. Procedure Make the RPM package that contains the driver available. For example, mount a CD or USB flash drive and copy the RPM package to a location of your choice under /mnt/sysroot/ , for example: /mnt/sysroot/root/drivers/ . Change the root directory to /mnt/sysroot/ : Use the yum install command to install the driver package. For example, run the following command to install the xorg-x11-drv-wacom driver package from /root/drivers/ : Note The /root/drivers/ directory in this chroot environment is the /mnt/sysroot/root/drivers/ directory in the original rescue environment. Exit the chroot environment: B.10.4.2. Removing a driver using yum Use this procedure to remove a driver. Prerequisites You have booted into rescue mode. You have mounted the installed system in read-write mode. Procedure Change the root directory to the /mnt/sysroot/ directory: Use the yum remove command to remove the driver package. For example, to remove the xorg-x11-drv-wacom driver package, run: Exit the chroot environment: If you cannot remove a malfunctioning driver for some reason, you can instead blocklist the driver so that it does not load at boot time. When you have finished adding and removing drivers, reboot the system. B.11. ip= boot option returns an error Using the ip= boot option format ip=[ip address] for example, ip=192.168.1.1 returns the error message Fatal for argument 'ip=[insert ip here]'\n sorry, unknown value [ip address] refusing to continue . In releases of Red Hat Enterprise Linux, the boot option format was: However, in Red Hat Enterprise Linux 8, the boot option format is: To resolve the issue, use the format: ip=ip::gateway:netmask:hostname:interface:none where: ip specifies the client ip address. You can specify IPv6 addresses in square brackets, for example, [2001:DB8::1] . gateway is the default gateway. IPv6 addresses are also accepted. netmask is the netmask to be used. This can be either a full netmask, for example, 255.255.255.0, or a prefix, for example, 64 . hostname is the host name of the client system. This parameter is optional. Additional resources Network boot options B.12. Cannot boot into the graphical installation on iLO or iDRAC devices The graphical installer for a remote ISO installation on iLO or iDRAC devices may not be available due to a slow internet connection. To proceed with the installation in this case, you can choose one of the following methods: Avoid the timeout. To do so: Press the Tab key in case of BIOS usage, or the e key in case of UEFI usage when booting from an installation media. That will allow you to modify the kernel command line arguments. To proceed with the installation, append the rd.live.ram=1 and press Enter in case of BIOS usage, or Ctrl+x in case of UEFI usage. This might take longer to load the installation program. Another option to extend the loading time for the graphical installer is to set the inst.xtimeout kernel argument in seconds. You can install the system in text mode. 
For more details, see Installing RHEL8 in text mode . In the remote management console, such as iLO or iDRAC, instead of a local media source, use the direct URL to the installation ISO file from the Download center on the Red Hat Customer Portal. You must be logged in to access this section. B.13. Rootfs image is not initramfs If you get the following message on the console during booting the installer, the transfer of the installer initrd.img might have had errors: To resolve this issue, download initrd again or run the sha256sum with initrd.img and compare it with the checksum stored in the .treeinfo file on the installation medium, for example, To view the checksum in .treeinfo : Despite having correct initrd.img , if you get the following kernel messages during booting the installer, often a boot parameter is missing or mis-spelled, and the installer could not load stage2 , typically referred to by the inst.repo= parameter, providing the full installer initial ramdisk for its in-memory root file system: To resolve this issue, check if the installation source specified is correct on the kernel command line ( inst.repo= ) or in the kickstart file the network configuration is specified on the kernel command line (if the installation source is specified as network) the network installation source is accessible from another system
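To confirm the last point, you can request the .treeinfo file of the installation tree from another system. The URL below is a hypothetical HTTP source and must be replaced with the value you pass to inst.repo= :
curl -I http://server.example.com/rhel8-install/.treeinfo
A 200 OK response shows that the tree is reachable over the network; any other response usually points at the repository path, firewall, or network configuration.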
|
[
"curl --output directory-path/filename.iso 'new_copied_link_location' --continue-at -",
"sha256sum rhel-x.x-x86_64-dvd.iso `85a...46c rhel-x.x-x86_64-dvd.iso`",
"curl --output _rhel-x.x-x86_64-dvd.iso 'https://access.cdn.redhat.com//content/origin/files/sha256/85/85a...46c/rhel-x.x-x86_64-dvd.iso?_auth =141...963' --continue-at -",
"grubby --default-kernel /boot/vmlinuz-4.18.0-94.el8.x86_64",
"grubby --remove-args=\"rhgb\" --update-kernel /boot/vmlinuz-4.18.0-94.el8.x86_64",
"df -h",
"Filesystem Size Used Avail Use% Mounted on devtmpfs 396M 0 396M 0% /dev tmpfs 411M 0 411M 0% /dev/shm tmpfs 411M 6.7M 405M 2% /run tmpfs 411M 0 411M 0% /sys/fs/cgroup /dev/mapper/rhel-root 17G 4.1G 13G 25% / /dev/sda1 1014M 173M 842M 17% /boot tmpfs 83M 20K 83M 1% /run/user/42 tmpfs 83M 84K 83M 1% /run/user/1000 /dev/dm-4 90G 90G 0 100% /home",
"free -m",
"mem= xx M",
"free -m",
"grubby --update-kernel=ALL --args=\"mem= xx M\"",
"Enable=true",
"systemctl restart gdm.service",
"X :1 -query address",
"Xnest :1 -query address",
"inst.rescue inst.dd=driver_name",
"inst.rescue modprobe.blacklist=driver_name",
"The rescue environment will now attempt to find your Linux installation and mount it under the directory: /mnt/sysroot/. You can then make any changes required to your system. Choose 1 to proceed with this step. You can choose to mount your file systems read-only instead of read-write by choosing 2 . If for some reason this process does not work choose 3 to skip directly to a shell. 1) Continue 2) Read-only mount 3) Skip to shell 4) Quit (Reboot)",
"sh-4.2#",
"sh-4.2# chroot /mnt/sysroot",
"sh-4.2# mount -t xfs /dev/mapper/VolGroup00-LogVol02 /directory",
"sh-4.2# fdisk -l",
"sh-4.2# chroot /mnt/sysroot/",
"sh-4.2# sosreport",
"bash-4.2# ip addr add 10.13.153.64/23 dev eth0",
"sh-4.2# exit",
"sh-4.2# cp /mnt/sysroot/var/tmp/sosreport new_location",
"sh-4.2# scp /mnt/sysroot/var/tmp/sosreport username@hostname:sosreport",
"sh-4.2# chroot /mnt/sysroot/",
"sh-4.2# /sbin/grub2-install install_device",
"sh-4.2# chroot /mnt/sysroot/",
"sh-4.2# yum install /root/drivers/xorg-x11-drv-wacom-0.23.0-6.el7.x86_64.rpm",
"sh-4.2# exit",
"sh-4.2# chroot /mnt/sysroot/",
"sh-4.2# yum remove xorg-x11-drv-wacom",
"sh-4.2# exit",
"ip=192.168.1.15 netmask=255.255.255.0 gateway=192.168.1.254 nameserver=192.168.1.250 hostname=myhost1",
"ip=192.168.1.15::192.168.1.254:255.255.255.0:myhost1::none: nameserver=192.168.1.250",
"inst.xtimeout= N",
"[ ...] rootfs image is not initramfs",
"sha256sum dvd/images/pxeboot/initrd.img fdb1a70321c06e25a1ed6bf3d8779371b768d5972078eb72b2c78c925067b5d8 dvd/images/pxeboot/initrd.img",
"grep sha256 dvd/.treeinfo images/efiboot.img = sha256: d357d5063b96226d643c41c9025529554a422acb43a4394e4ebcaa779cc7a917 images/install.img = sha256: 8c0323572f7fc04e34dd81c97d008a2ddfc2cfc525aef8c31459e21bf3397514 images/pxeboot/initrd.img = sha256: fdb1a70321c06e25a1ed6bf3d8779371b768d5972078eb72b2c78c925067b5d8 images/pxeboot/vmlinuz = sha256: b9510ea4212220e85351cbb7f2ebc2b1b0804a6d40ccb93307c165e16d1095db",
"[ ...] No filesystem could mount root, tried: [ ...] Kernel panic - not syncing: VFS: Unable to mount root fs on unknown-block(1,0) [ ...] CPU: 0 PID: 1 Comm: swapper/0 Not tainted 5.14.0-55.el9.s390x #1 [ ...] [ ...] Call Trace: [ ...] ([<...>] show_trace+0x.../0x...) [ ...] [<...>] show_stack+0x.../0x [ ...] [<...>] panic+0x.../0x [ ...] [<...>] mount_block_root+0x.../0x [ ...] [<...>] prepare_namespace+0x.../0x [ ...] [<...>] kernel_init_freeable+0x.../0x [ ...] [<...>] kernel_init+0x.../0x [ ...] [<...>] kernel_thread_starter+0x.../0x [ ...] [<...>] kernel_thread_starter+0x.../0x..."
] |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/interactively_installing_rhel_from_installation_media/troubleshooting-after-installation_rhel-installer
|
Chapter 1. Enabling observability for Red Hat Developer Hub on OpenShift Container Platform
|
Chapter 1. Enabling observability for Red Hat Developer Hub on OpenShift Container Platform In OpenShift Container Platform, metrics are exposed through an HTTP service endpoint under the /metrics canonical name. You can create a ServiceMonitor custom resource (CR) to scrape metrics from a service endpoint in a user-defined project. 1.1. Enabling metrics monitoring in a Red Hat Developer Hub Operator installation on an OpenShift Container Platform cluster You can enable and view metrics for an Operator-installed Red Hat Developer Hub instance from the Developer perspective of the OpenShift Container Platform web console. Prerequisites Your OpenShift Container Platform cluster has monitoring for user-defined projects enabled. You have installed Red Hat Developer Hub on OpenShift Container Platform using the Red Hat Developer Hub Operator. You have installed the OpenShift CLI ( oc ). Procedure Currently, the Red Hat Developer Hub Operator does not support creating a ServiceMonitor custom resource (CR) by default. You must complete the following steps to create a ServiceMonitor CR to scrape metrics from the endpoint. Create the ServiceMonitor CR as a YAML file: apiVersion: monitoring.coreos.com/v1 kind: ServiceMonitor metadata: name: <custom_resource_name> 1 namespace: <project_name> 2 labels: app.kubernetes.io/instance: <custom_resource_name> app.kubernetes.io/name: backstage spec: namespaceSelector: matchNames: - <project_name> selector: matchLabels: rhdh.redhat.com/app: backstage-<custom_resource_name> endpoints: - port: backend path: '/metrics' 1 Replace <custom_resource_name> with the name of your Red Hat Developer Hub CR. 2 Replace <project_name> with the name of the OpenShift Container Platform project where your Red Hat Developer Hub instance is running. Apply the ServiceMonitor CR by running the following command: oc apply -f <filename> Verification From the Developer perspective in the OpenShift Container Platform web console, select the Observe view. Click the Metrics tab to view metrics for Red Hat Developer Hub pods. 1.2. Enabling metrics monitoring in a Helm chart installation on an OpenShift Container Platform cluster You can enable and view metrics for a Red Hat Developer Hub Helm deployment from the Developer perspective of the OpenShift Container Platform web console. Prerequisites Your OpenShift Container Platform cluster has monitoring for user-defined projects enabled. You have installed Red Hat Developer Hub on OpenShift Container Platform using the Helm chart. Procedure From the Developer perspective in the OpenShift Container Platform web console, select the Topology view. Click the overflow menu of the Red Hat Developer Hub Helm chart, and select Upgrade . On the Upgrade Helm Release page, select the YAML view option in Configure via , then configure the metrics section in the YAML, as shown in the following example: upstream: # ... metrics: serviceMonitor: enabled: true path: /metrics # ... Click Upgrade . Verification From the Developer perspective in the OpenShift Container Platform web console, select the Observe view. Click the Metrics tab to view metrics for Red Hat Developer Hub pods. Additional resources OpenShift Container Platform - Managing metrics
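As an additional verification, you can confirm that the ServiceMonitor resource exists before checking the Metrics tab; the resource and project names below assume the values used earlier in this chapter:
oc get servicemonitor <custom_resource_name> -n <project_name>
oc get servicemonitor <custom_resource_name> -n <project_name> -o yaml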
|
[
"apiVersion: monitoring.coreos.com/v1 kind: ServiceMonitor metadata: name: <custom_resource_name> 1 namespace: <project_name> 2 labels: app.kubernetes.io/instance: <custom_resource_name> app.kubernetes.io/name: backstage spec: namespaceSelector: matchNames: - <project_name> selector: matchLabels: rhdh.redhat.com/app: backstage-<custom_resource_name> endpoints: - port: backend path: '/metrics'",
"apply -f <filename>",
"upstream: metrics: serviceMonitor: enabled: true path: /metrics"
] |
https://docs.redhat.com/en/documentation/red_hat_developer_hub/1.3/html/monitoring_and_logging/assembly-rhdh-observability
|
Chapter 16. SNMP Information Tapset
|
Chapter 16. SNMP Information Tapset This family of probe points is used to probe socket activities to provide SNMP type information. It contains the following functions and probe points:
| null |
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/systemtap_tapset_reference/snmp-dot-stp
|
Chapter 26. OpenShift SDN network plugin
|
Chapter 26. OpenShift SDN network plugin 26.1. About the OpenShift SDN network plugin Part of Red Hat OpenShift Networking, OpenShift SDN is a network plugin that uses a software-defined networking (SDN) approach to provide a unified cluster network that enables communication between pods across the OpenShift Container Platform cluster. This pod network is established and maintained by OpenShift SDN, which configures an overlay network by using Open vSwitch (OVS). Important For a cloud controller manager (CCM) with the --cloud-provider=external option set to cloud-provider-vsphere , a known issue exists for a cluster that operates in a networking environment with multiple subnets. When you upgrade your cluster from OpenShift Container Platform 4.12 to OpenShift Container Platform 4.13, the CCM selects a wrong node IP address and this operation generates an error message in the namespaces/openshift-cloud-controller-manager/pods/vsphere-cloud-controller-manager logs. The error message indicates a mismatch with the node IP address and the vsphere-cloud-controller-manager pod IP address in your cluster. The known issue might not impact the cluster upgrade operation, but you can set the correct IP address in both the nodeNetworking.external.networkSubnetCidr and the nodeNetworking.internal.networkSubnetCidr parameters for the nodeNetworking object that your cluster uses for its networking requirements. 26.1.1. OpenShift SDN network isolation modes OpenShift SDN provides three SDN modes for configuring the pod network: Network policy mode allows project administrators to configure their own isolation policies using NetworkPolicy objects. Network policy is the default mode in OpenShift Container Platform 4.13. Multitenant mode provides project-level isolation for pods and services. Pods from different projects cannot send packets to or receive packets from pods and services of a different project. You can disable isolation for a project, allowing it to send network traffic to all pods and services in the entire cluster and receive network traffic from those pods and services. Subnet mode provides a flat pod network where every pod can communicate with every other pod and service. The network policy mode provides the same functionality as subnet mode. 26.1.2. Supported network plugin feature matrix Red Hat OpenShift Networking offers two options for the network plugin, OpenShift SDN and OVN-Kubernetes, for the network plugin. The following table summarizes the current feature support for both network plugins: Table 26.1. Default CNI network plugin feature comparison Feature OpenShift SDN OVN-Kubernetes Egress IPs Supported Supported Egress firewall Supported Supported [1] Egress router Supported Supported [2] Hybrid networking Not supported Supported IPsec encryption for intra-cluster communication Not supported Supported IPv4 single-stack Supported Supported IPv6 single-stack Not supported Supported [3] IPv4/IPv6 dual-stack Not Supported Supported [4] IPv6/IPv4 dual-stack Not supported Supported [5] Kubernetes network policy Supported Supported Kubernetes network policy logs Not supported Supported Hardware offloading Not supported Supported Multicast Supported Supported Egress firewall is also known as egress network policy in OpenShift SDN. This is not the same as network policy egress. Egress router for OVN-Kubernetes supports only redirect mode. IPv6 single-stack networking on a bare-metal platform. 
IPv4/IPv6 dual-stack networking on bare-metal, VMware vSphere (installer-provisioned infrastructure installations only), IBM Power(R), and IBM Z(R) platforms. On VMware vSphere, dual-stack networking limitations exist. IPv6/IPv4 dual-stack networking on bare-metal and IBM Power(R) platforms. Additional resources For more information about dual-stack networking limitations on VMware vSphere, see Optional: Deploying with dual-stack networking . 26.2. Migrating to the OpenShift SDN network plugin As a cluster administrator, you can migrate to the OpenShift SDN network plugin from the OVN-Kubernetes network plugin. To learn more about OpenShift SDN, read About the OpenShift SDN network plugin . 26.2.1. How the migration process works The following table summarizes the migration process by segmenting between the user-initiated steps in the process and the actions that the migration performs in response. Table 26.2. Migrating to OpenShift SDN from OVN-Kubernetes User-initiated steps Migration activity Set the migration field of the Network.operator.openshift.io custom resource (CR) named cluster to OpenShiftSDN . Make sure the migration field is null before setting it to a value. Cluster Network Operator (CNO) Updates the status of the Network.config.openshift.io CR named cluster accordingly. Machine Config Operator (MCO) Rolls out an update to the systemd configuration necessary for OpenShift SDN; the MCO updates a single machine per pool at a time by default, causing the total time the migration takes to increase with the size of the cluster. Update the networkType field of the Network.config.openshift.io CR. CNO Performs the following actions: Destroys the OVN-Kubernetes control plane pods. Deploys the OpenShift SDN control plane pods. Updates the Multus objects to reflect the new network plugin. Reboot each node in the cluster. Cluster As nodes reboot, the cluster assigns IP addresses to pods on the OpenShift SDN cluster network. 26.2.2. Migrating to the OpenShift SDN network plugin Cluster administrators can roll back to the OpenShift SDN Container Network Interface (CNI) network plugin by using the offline migration method. During the migration you must manually reboot every node in your cluster. With the offline migration method, there is some downtime, during which your cluster is unreachable. Prerequisites Install the OpenShift CLI ( oc ). Access to the cluster as a user with the cluster-admin role. A cluster installed on infrastructure configured with the OVN-Kubernetes network plugin. A recent backup of the etcd database is available. A reboot can be triggered manually for each node. The cluster is in a known good state, without any errors. Procedure Stop all of the machine configuration pools managed by the Machine Config Operator (MCO): Stop the master configuration pool by entering the following command in your CLI: USD oc patch MachineConfigPool master --type='merge' --patch \ '{ "spec": { "paused": true } }' Stop the worker machine configuration pool by entering the following command in your CLI: USD oc patch MachineConfigPool worker --type='merge' --patch \ '{ "spec":{ "paused": true } }' To prepare for the migration, set the migration field to null by entering the following command in your CLI: USD oc patch Network.operator.openshift.io cluster --type='merge' \ --patch '{ "spec": { "migration": null } }' Check that the migration status is empty for the Network.config.openshift.io object by entering the following command in your CLI. 
Empty command output indicates that the object is not in a migration operation. USD oc get Network.config cluster -o jsonpath='{.status.migration}' Apply the patch to the Network.operator.openshift.io object to set the network plugin back to OpenShift SDN by entering the following command in your CLI: USD oc patch Network.operator.openshift.io cluster --type='merge' \ --patch '{ "spec": { "migration": { "networkType": "OpenShiftSDN" } } }' Important If you applied the patch to the Network.config.openshift.io object before the patch operation finalizes on the Network.operator.openshift.io object, the Cluster Network Operator (CNO) enters into a degradation state and this causes a slight delay until the CNO recovers from the degraded state. Confirm that the migration status of the network plugin for the Network.config.openshift.io cluster object is OpenShiftSDN by entering the following command in your CLI: USD oc get Network.config cluster -o jsonpath='{.status.migration.networkType}' Apply the patch to the Network.config.openshift.io object to set the network plugin back to OpenShift SDN by entering the following command in your CLI: USD oc patch Network.config.openshift.io cluster --type='merge' \ --patch '{ "spec": { "networkType": "OpenShiftSDN" } }' Optional: Disable automatic migration of several OVN-Kubernetes capabilities to the OpenShift SDN equivalents: Egress IPs Egress firewall Multicast To disable automatic migration of the configuration for any of the previously noted OpenShift SDN features, specify the following keys: USD oc patch Network.operator.openshift.io cluster --type='merge' \ --patch '{ "spec": { "migration": { "networkType": "OpenShiftSDN", "features": { "egressIP": <bool>, "egressFirewall": <bool>, "multicast": <bool> } } } }' where: bool : Specifies whether to enable migration of the feature. The default is true . Optional: You can customize the following settings for OpenShift SDN to meet your network infrastructure requirements: Maximum transmission unit (MTU) VXLAN port To customize either or both of the previously noted settings, customize and enter the following command in your CLI. If you do not need to change the default value, omit the key from the patch. USD oc patch Network.operator.openshift.io cluster --type=merge \ --patch '{ "spec":{ "defaultNetwork":{ "openshiftSDNConfig":{ "mtu":<mtu>, "vxlanPort":<port> }}}}' mtu The MTU for the VXLAN overlay network. This value is normally configured automatically, but if the nodes in your cluster do not all use the same MTU, then you must set this explicitly to 50 less than the smallest node MTU value. port The UDP port for the VXLAN overlay network. If a value is not specified, the default is 4789 . The port cannot be the same as the Geneve port that is used by OVN-Kubernetes. The default value for the Geneve port is 6081 . Example patch command USD oc patch Network.operator.openshift.io cluster --type=merge \ --patch '{ "spec":{ "defaultNetwork":{ "openshiftSDNConfig":{ "mtu":1200 }}}}' Reboot each node in your cluster. 
You can reboot the nodes in your cluster with either of the following approaches: With the oc rsh command, you can use a bash script similar to the following: #!/bin/bash readarray -t POD_NODES <<< "USD(oc get pod -n openshift-machine-config-operator -o wide| grep daemon|awk '{print USD1" "USD7}')" for i in "USD{POD_NODES[@]}" do read -r POD NODE <<< "USDi" until oc rsh -n openshift-machine-config-operator "USDPOD" chroot /rootfs shutdown -r +1 do echo "cannot reboot node USDNODE, retry" && sleep 3 done done With the ssh command, you can use a bash script similar to the following. The script assumes that you have configured sudo to not prompt for a password. #!/bin/bash for ip in USD(oc get nodes -o jsonpath='{.items[*].status.addresses[?(@.type=="InternalIP")].address}') do echo "reboot node USDip" ssh -o StrictHostKeyChecking=no core@USDip sudo shutdown -r -t 3 done Wait until the Multus daemon set rollout completes. Run the following command to see your rollout status: USD oc -n openshift-multus rollout status daemonset/multus The name of the Multus pods is in the form of multus-<xxxxx> where <xxxxx> is a random sequence of letters. It might take several moments for the pods to restart. Example output Waiting for daemon set "multus" rollout to finish: 1 out of 6 new pods have been updated... ... Waiting for daemon set "multus" rollout to finish: 5 of 6 updated pods are available... daemon set "multus" successfully rolled out After the nodes in your cluster have rebooted and the multus pods are rolled out, start all of the machine configuration pools by running the following commands:: Start the master configuration pool: USD oc patch MachineConfigPool master --type='merge' --patch \ '{ "spec": { "paused": false } }' Start the worker configuration pool: USD oc patch MachineConfigPool worker --type='merge' --patch \ '{ "spec": { "paused": false } }' As the MCO updates machines in each config pool, it reboots each node. By default the MCO updates a single machine per pool at a time, so the time that the migration requires to complete grows with the size of the cluster. Confirm the status of the new machine configuration on the hosts: To list the machine configuration state and the name of the applied machine configuration, enter the following command in your CLI: USD oc describe node | egrep "hostname|machineconfig" Example output kubernetes.io/hostname=master-0 machineconfiguration.openshift.io/currentConfig: rendered-master-c53e221d9d24e1c8bb6ee89dd3d8ad7b machineconfiguration.openshift.io/desiredConfig: rendered-master-c53e221d9d24e1c8bb6ee89dd3d8ad7b machineconfiguration.openshift.io/reason: machineconfiguration.openshift.io/state: Done Verify that the following statements are true: The value of machineconfiguration.openshift.io/state field is Done . The value of the machineconfiguration.openshift.io/currentConfig field is equal to the value of the machineconfiguration.openshift.io/desiredConfig field. To confirm that the machine config is correct, enter the following command in your CLI: USD oc get machineconfig <config_name> -o yaml where <config_name> is the name of the machine config from the machineconfiguration.openshift.io/currentConfig field. Confirm that the migration succeeded: To confirm that the network plugin is OpenShift SDN, enter the following command in your CLI. The value of status.networkType must be OpenShiftSDN . 
USD oc get Network.config/cluster -o jsonpath='{.status.networkType}{"\n"}' To confirm that the cluster nodes are in the Ready state, enter the following command in your CLI: USD oc get nodes If a node is stuck in the NotReady state, investigate the machine config daemon pod logs and resolve any errors. To list the pods, enter the following command in your CLI: USD oc get pod -n openshift-machine-config-operator Example output NAME READY STATUS RESTARTS AGE machine-config-controller-75f756f89d-sjp8b 1/1 Running 0 37m machine-config-daemon-5cf4b 2/2 Running 0 43h machine-config-daemon-7wzcd 2/2 Running 0 43h machine-config-daemon-fc946 2/2 Running 0 43h machine-config-daemon-g2v28 2/2 Running 0 43h machine-config-daemon-gcl4f 2/2 Running 0 43h machine-config-daemon-l5tnv 2/2 Running 0 43h machine-config-operator-79d9c55d5-hth92 1/1 Running 0 37m machine-config-server-bsc8h 1/1 Running 0 43h machine-config-server-hklrm 1/1 Running 0 43h machine-config-server-k9rtx 1/1 Running 0 43h The names for the config daemon pods are in the following format: machine-config-daemon-<seq> . The <seq> value is a random five character alphanumeric sequence. To display the pod log for each machine config daemon pod shown in the output, enter the following command in your CLI: USD oc logs <pod> -n openshift-machine-config-operator where pod is the name of a machine config daemon pod. Resolve any errors in the logs shown by the output from the command. To confirm that your pods are not in an error state, enter the following command in your CLI: USD oc get pods --all-namespaces -o wide --sort-by='{.spec.nodeName}' If pods on a node are in an error state, reboot that node. Complete the following steps only if the migration succeeds and your cluster is in a good state: To remove the migration configuration from the Cluster Network Operator configuration object, enter the following command in your CLI: USD oc patch Network.operator.openshift.io cluster --type='merge' \ --patch '{ "spec": { "migration": null } }' To remove the OVN-Kubernetes configuration, enter the following command in your CLI: USD oc patch Network.operator.openshift.io cluster --type='merge' \ --patch '{ "spec": { "defaultNetwork": { "ovnKubernetesConfig":null } } }' To remove the OVN-Kubernetes network provider namespace, enter the following command in your CLI: USD oc delete namespace openshift-ovn-kubernetes 26.2.3. Additional resources Configuration parameters for the OpenShift SDN network plugin Backing up etcd About network policy OpenShift SDN capabilities Configuring egress IPs for a project Configuring an egress firewall for a project Enabling multicast for a project Network [operator.openshift.io/v1 ] 26.3. Rolling back to the OVN-Kubernetes network plugin As a cluster administrator, you can rollback to the OVN-Kubernetes network plugin from the OpenShift SDN network plugin if the migration to OpenShift SDN is unsuccessful. To learn more about OVN-Kubernetes, read About the OVN-Kubernetes network plugin . 26.3.1. Migrating to the OVN-Kubernetes network plugin As a cluster administrator, you can change the network plugin for your cluster to OVN-Kubernetes. During the migration, you must reboot every node in your cluster. Important While performing the migration, your cluster is unavailable and workloads might be interrupted. Perform the migration only when an interruption in service is acceptable. Prerequisites You have a cluster configured with the OpenShift SDN CNI network plugin in the network policy isolation mode. 
You installed the OpenShift CLI ( oc ). You have access to the cluster as a user with the cluster-admin role. You have a recent backup of the etcd database. You can manually reboot each node. You checked that your cluster is in a known good state without any errors. You created a security group rule that allows User Datagram Protocol (UDP) packets on port 6081 for all nodes on all cloud platforms. Procedure To backup the configuration for the cluster network, enter the following command: USD oc get Network.config.openshift.io cluster -o yaml > cluster-openshift-sdn.yaml Verify that the OVN_SDN_MIGRATION_TIMEOUT environment variable is set and is equal to 0s by running the following command: #!/bin/bash if [ -n "USDOVN_SDN_MIGRATION_TIMEOUT" ] && [ "USDOVN_SDN_MIGRATION_TIMEOUT" = "0s" ]; then unset OVN_SDN_MIGRATION_TIMEOUT fi #loops the timeout command of the script to repeatedly check the cluster Operators until all are available. co_timeout=USD{OVN_SDN_MIGRATION_TIMEOUT:-1200s} timeout "USDco_timeout" bash <<EOT until oc wait co --all --for='condition=AVAILABLE=True' --timeout=10s && \ oc wait co --all --for='condition=PROGRESSING=False' --timeout=10s && \ oc wait co --all --for='condition=DEGRADED=False' --timeout=10s; do sleep 10 echo "Some ClusterOperators Degraded=False,Progressing=True,or Available=False"; done EOT Remove the configuration from the Cluster Network Operator (CNO) configuration object by running the following command: USD oc patch Network.operator.openshift.io cluster --type='merge' \ --patch '{"spec":{"migration":null}}' Delete the NodeNetworkConfigurationPolicy (NNCP) custom resource (CR) that defines the primary network interface for the OpenShift SDN network plugin by completing the following steps: Check that the existing NNCP CR bonded the primary interface to your cluster by entering the following command: USD oc get nncp Example output NAME STATUS REASON bondmaster0 Available SuccessfullyConfigured Network Manager stores the connection profile for the bonded primary interface in the /etc/NetworkManager/system-connections system path. Remove the NNCP from your cluster: USD oc delete nncp <nncp_manifest_filename> To prepare all the nodes for the migration, set the migration field on the CNO configuration object by running the following command: USD oc patch Network.operator.openshift.io cluster --type='merge' \ --patch '{ "spec": { "migration": { "networkType": "OVNKubernetes" } } }' Note This step does not deploy OVN-Kubernetes immediately. Instead, specifying the migration field triggers the Machine Config Operator (MCO) to apply new machine configs to all the nodes in the cluster in preparation for the OVN-Kubernetes deployment. Check that the reboot is finished by running the following command: USD oc get mcp Check that all cluster Operators are available by running the following command: USD oc get co Alternatively: You can disable automatic migration of several OpenShift SDN capabilities to the OVN-Kubernetes equivalents: Egress IPs Egress firewall Multicast To disable automatic migration of the configuration for any of the previously noted OpenShift SDN features, specify the following keys: USD oc patch Network.operator.openshift.io cluster --type='merge' \ --patch '{ "spec": { "migration": { "networkType": "OVNKubernetes", "features": { "egressIP": <bool>, "egressFirewall": <bool>, "multicast": <bool> } } } }' where: bool : Specifies whether to enable migration of the feature. The default is true . 
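For example, a patch that keeps automatic migration of egress IPs and the egress firewall but skips multicast looks like the following; the boolean values are illustrative only:
oc patch Network.operator.openshift.io cluster --type='merge' --patch '{ "spec": { "migration": { "networkType": "OVNKubernetes", "features": { "egressIP": true, "egressFirewall": true, "multicast": false } } } }'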
Optional: You can customize the following settings for OVN-Kubernetes to meet your network infrastructure requirements: Maximum transmission unit (MTU). Consider the following before customizing the MTU for this optional step: If you use the default MTU, and you want to keep the default MTU during migration, this step can be ignored. If you used a custom MTU, and you want to keep the custom MTU during migration, you must declare the custom MTU value in this step. This step does not work if you want to change the MTU value during migration. Instead, you must first follow the instructions for "Changing the cluster MTU". You can then keep the custom MTU value by performing this procedure and declaring the custom MTU value in this step. Note OpenShift-SDN and OVN-Kubernetes have different overlay overhead. MTU values should be selected by following the guidelines found on the "MTU value selection" page. Geneve (Generic Network Virtualization Encapsulation) overlay network port OVN-Kubernetes IPv4 internal subnet To customize either of the previously noted settings, enter and customize the following command. If you do not need to change the default value, omit the key from the patch. USD oc patch Network.operator.openshift.io cluster --type=merge \ --patch '{ "spec":{ "defaultNetwork":{ "ovnKubernetesConfig":{ "mtu":<mtu>, "genevePort":<port>, "v4InternalSubnet":"<ipv4_subnet>" }}}}' where: mtu The MTU for the Geneve overlay network. This value is normally configured automatically, but if the nodes in your cluster do not all use the same MTU, then you must set this explicitly to 100 less than the smallest node MTU value. port The UDP port for the Geneve overlay network. If a value is not specified, the default is 6081 . The port cannot be the same as the VXLAN port that is used by OpenShift SDN. The default value for the VXLAN port is 4789 . ipv4_subnet An IPv4 address range for internal use by OVN-Kubernetes. You must ensure that the IP address range does not overlap with any other subnet used by your OpenShift Container Platform installation. The IP address range must be larger than the maximum number of nodes that can be added to the cluster. The default value is 100.64.0.0/16 . Example patch command to update mtu field USD oc patch Network.operator.openshift.io cluster --type=merge \ --patch '{ "spec":{ "defaultNetwork":{ "ovnKubernetesConfig":{ "mtu":1200 }}}}' As the MCO updates machines in each machine config pool, it reboots each node one by one. You must wait until all the nodes are updated. Check the machine config pool status by entering the following command: USD oc get mcp A successfully updated node has the following status: UPDATED=true , UPDATING=false , DEGRADED=false . Note By default, the MCO updates one machine per pool at a time, causing the total time the migration takes to increase with the size of the cluster. 
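If you prefer to wait for the pools from a script instead of polling oc get mcp manually, a minimal sketch that assumes the default master and worker pool names is:
oc wait mcp/master --for=condition=Updated --timeout=60m
oc wait mcp/worker --for=condition=Updated --timeout=60m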
Confirm the status of the new machine configuration on the hosts: To list the machine configuration state and the name of the applied machine configuration, enter the following command: USD oc describe node | egrep "hostname|machineconfig" Example output kubernetes.io/hostname=master-0 machineconfiguration.openshift.io/currentConfig: rendered-master-c53e221d9d24e1c8bb6ee89dd3d8ad7b machineconfiguration.openshift.io/desiredConfig: rendered-master-c53e221d9d24e1c8bb6ee89dd3d8ad7b machineconfiguration.openshift.io/reason: machineconfiguration.openshift.io/state: Done Verify that the following statements are true: The value of machineconfiguration.openshift.io/state field is Done . The value of the machineconfiguration.openshift.io/currentConfig field is equal to the value of the machineconfiguration.openshift.io/desiredConfig field. To confirm that the machine config is correct, enter the following command: USD oc get machineconfig <config_name> -o yaml | grep ExecStart where <config_name> is the name of the machine config from the machineconfiguration.openshift.io/currentConfig field. The machine config must include the following update to the systemd configuration: ExecStart=/usr/local/bin/configure-ovs.sh OVNKubernetes If a node is stuck in the NotReady state, investigate the machine config daemon pod logs and resolve any errors. To list the pods, enter the following command: USD oc get pod -n openshift-machine-config-operator Example output NAME READY STATUS RESTARTS AGE machine-config-controller-75f756f89d-sjp8b 1/1 Running 0 37m machine-config-daemon-5cf4b 2/2 Running 0 43h machine-config-daemon-7wzcd 2/2 Running 0 43h machine-config-daemon-fc946 2/2 Running 0 43h machine-config-daemon-g2v28 2/2 Running 0 43h machine-config-daemon-gcl4f 2/2 Running 0 43h machine-config-daemon-l5tnv 2/2 Running 0 43h machine-config-operator-79d9c55d5-hth92 1/1 Running 0 37m machine-config-server-bsc8h 1/1 Running 0 43h machine-config-server-hklrm 1/1 Running 0 43h machine-config-server-k9rtx 1/1 Running 0 43h The names for the config daemon pods are in the following format: machine-config-daemon-<seq> . The <seq> value is a random five character alphanumeric sequence. Display the pod log for the first machine config daemon pod shown in the output by enter the following command: USD oc logs <pod> -n openshift-machine-config-operator where pod is the name of a machine config daemon pod. Resolve any errors in the logs shown by the output from the command. To start the migration, configure the OVN-Kubernetes network plugin by using one of the following commands: To specify the network provider without changing the cluster network IP address block, enter the following command: USD oc patch Network.config.openshift.io cluster \ --type='merge' --patch '{ "spec": { "networkType": "OVNKubernetes" } }' To specify a different cluster network IP address block, enter the following command: USD oc patch Network.config.openshift.io cluster \ --type='merge' --patch '{ "spec": { "clusterNetwork": [ { "cidr": "<cidr>", "hostPrefix": <prefix> } ], "networkType": "OVNKubernetes" } }' where cidr is a CIDR block and prefix is the slice of the CIDR block apportioned to each node in your cluster. You cannot use any CIDR block that overlaps with the 100.64.0.0/16 CIDR block because the OVN-Kubernetes network provider uses this block internally. Important You cannot change the service network address block during the migration. 
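For example, declaring an illustrative 10.128.0.0/14 cluster network with a /23 host prefix while switching the plugin looks like the following; substitute values that match your environment and that do not overlap the 100.64.0.0/16 block:
oc patch Network.config.openshift.io cluster --type='merge' --patch '{ "spec": { "clusterNetwork": [ { "cidr": "10.128.0.0/14", "hostPrefix": 23 } ], "networkType": "OVNKubernetes" } }'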
Verify that the Multus daemon set rollout is complete before continuing with subsequent steps: USD oc -n openshift-multus rollout status daemonset/multus The name of the Multus pods is in the form of multus-<xxxxx> where <xxxxx> is a random sequence of letters. It might take several moments for the pods to restart. Example output Waiting for daemon set "multus" rollout to finish: 1 out of 6 new pods have been updated... ... Waiting for daemon set "multus" rollout to finish: 5 of 6 updated pods are available... daemon set "multus" successfully rolled out To complete changing the network plugin, reboot each node in your cluster. You can reboot the nodes in your cluster with either of the following approaches: Important The following scripts reboot all of the nodes in the cluster at the same time. This can cause your cluster to be unstable. Another option is to reboot your nodes manually one at a time. Rebooting nodes one-by-one causes considerable downtime in a cluster with many nodes. Cluster Operators will not work correctly before you reboot the nodes. With the oc rsh command, you can use a bash script similar to the following: #!/bin/bash readarray -t POD_NODES <<< "USD(oc get pod -n openshift-machine-config-operator -o wide| grep daemon|awk '{print USD1" "USD7}')" for i in "USD{POD_NODES[@]}" do read -r POD NODE <<< "USDi" until oc rsh -n openshift-machine-config-operator "USDPOD" chroot /rootfs shutdown -r +1 do echo "cannot reboot node USDNODE, retry" && sleep 3 done done With the ssh command, you can use a bash script similar to the following. The script assumes that you have configured sudo to not prompt for a password. #!/bin/bash for ip in USD(oc get nodes -o jsonpath='{.items[*].status.addresses[?(@.type=="InternalIP")].address}') do echo "reboot node USDip" ssh -o StrictHostKeyChecking=no core@USDip sudo shutdown -r -t 3 done Confirm that the migration succeeded: To confirm that the network plugin is OVN-Kubernetes, enter the following command. The value of status.networkType must be OVNKubernetes . USD oc get network.config/cluster -o jsonpath='{.status.networkType}{"\n"}' To confirm that the cluster nodes are in the Ready state, enter the following command: USD oc get nodes To confirm that your pods are not in an error state, enter the following command: USD oc get pods --all-namespaces -o wide --sort-by='{.spec.nodeName}' If pods on a node are in an error state, reboot that node. To confirm that all of the cluster Operators are not in an abnormal state, enter the following command: USD oc get co The status of every cluster Operator must be the following: AVAILABLE="True" , PROGRESSING="False" , DEGRADED="False" . If a cluster Operator is not available or degraded, check the logs for the cluster Operator for more information. 
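To see why a particular Operator is degraded, read its status conditions; for example, for the network Operator:
oc describe co network
oc get co network -o jsonpath='{.status.conditions[?(@.type=="Degraded")].message}{"\n"}'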
Complete the following steps only if the migration succeeds and your cluster is in a good state: To remove the migration configuration from the CNO configuration object, enter the following command: USD oc patch Network.operator.openshift.io cluster --type='merge' \ --patch '{ "spec": { "migration": null } }' To remove custom configuration for the OpenShift SDN network provider, enter the following command: USD oc patch Network.operator.openshift.io cluster --type='merge' \ --patch '{ "spec": { "defaultNetwork": { "openshiftSDNConfig": null } } }' To remove the OpenShift SDN network provider namespace, enter the following command: USD oc delete namespace openshift-sdn steps Optional: After cluster migration, you can convert your IPv4 single-stack cluster to a dual-network cluster network that supports IPv4 and IPv6 address families. For more information, see "Converting to IPv4/IPv6 dual-stack networking". 26.4. Configuring egress IPs for a project As a cluster administrator, you can configure the OpenShift SDN Container Network Interface (CNI) network plugin to assign one or more egress IP addresses to a project. 26.4.1. Egress IP address architectural design and implementation The OpenShift Container Platform egress IP address functionality allows you to ensure that the traffic from one or more pods in one or more namespaces has a consistent source IP address for services outside the cluster network. For example, you might have a pod that periodically queries a database that is hosted on a server outside of your cluster. To enforce access requirements for the server, a packet filtering device is configured to allow traffic only from specific IP addresses. To ensure that you can reliably allow access to the server from only that specific pod, you can configure a specific egress IP address for the pod that makes the requests to the server. An egress IP address assigned to a namespace is different from an egress router, which is used to send traffic to specific destinations. In some cluster configurations, application pods and ingress router pods run on the same node. If you configure an egress IP address for an application project in this scenario, the IP address is not used when you send a request to a route from the application project. An egress IP address is implemented as an additional IP address on the primary network interface of a node and must be in the same subnet as the primary IP address of the node. The additional IP address must not be assigned to any other node in the cluster. Important Egress IP addresses must not be configured in any Linux network configuration files, such as ifcfg-eth0 . 26.4.1.1. Platform support Support for the egress IP address functionality on various platforms is summarized in the following table: Platform Supported Bare metal Yes VMware vSphere Yes Red Hat OpenStack Platform (RHOSP) Yes Amazon Web Services (AWS) Yes Google Cloud Platform (GCP) Yes Microsoft Azure Yes IBM Z and IBM(R) LinuxONE Yes IBM Z and IBM(R) LinuxONE for Red Hat Enterprise Linux (RHEL) KVM Yes IBM Power Yes Important The assignment of egress IP addresses to control plane nodes with the EgressIP feature is not supported on a cluster provisioned on Amazon Web Services (AWS). ( BZ#2039656 ) 26.4.1.2. Public cloud platform considerations For clusters provisioned on public cloud infrastructure, there is a constraint on the absolute number of assignable IP addresses per node. 
The maximum number of assignable IP addresses per node, or the IP capacity , can be described in the following formula: IP capacity = public cloud default capacity - sum(current IP assignments) While the Egress IPs capability manages the IP address capacity per node, it is important to plan for this constraint in your deployments. For example, for a cluster installed on bare-metal infrastructure with 8 nodes you can configure 150 egress IP addresses. However, if a public cloud provider limits IP address capacity to 10 IP addresses per node, the total number of assignable IP addresses is only 80. To achieve the same IP address capacity in this example cloud provider, you would need to allocate 7 additional nodes. To confirm the IP capacity and subnets for any node in your public cloud environment, you can enter the oc get node <node_name> -o yaml command. The cloud.network.openshift.io/egress-ipconfig annotation includes capacity and subnet information for the node. The annotation value is an array with a single object with fields that provide the following information for the primary network interface: interface : Specifies the interface ID on AWS and Azure and the interface name on GCP. ifaddr : Specifies the subnet mask for one or both IP address families. capacity : Specifies the IP address capacity for the node. On AWS, the IP address capacity is provided per IP address family. On Azure and GCP, the IP address capacity includes both IPv4 and IPv6 addresses. Automatic attachment and detachment of egress IP addresses for traffic between nodes are available. This allows for traffic from many pods in namespaces to have a consistent source IP address to locations outside of the cluster. This also supports OpenShift SDN and OVN-Kubernetes, which is the default networking plugin in Red Hat OpenShift Networking in OpenShift Container Platform 4.13. Note The RHOSP egress IP address feature creates a Neutron reservation port called egressip-<IP address> . Using the same RHOSP user as the one used for the OpenShift Container Platform cluster installation, you can assign a floating IP address to this reservation port to have a predictable SNAT address for egress traffic. When an egress IP address on an RHOSP network is moved from one node to another, because of a node failover, for example, the Neutron reservation port is removed and recreated. This means that the floating IP association is lost and you need to manually reassign the floating IP address to the new reservation port. Note When an RHOSP cluster administrator assigns a floating IP to the reservation port, OpenShift Container Platform cannot delete the reservation port. The CloudPrivateIPConfig object cannot perform delete and move operations until an RHOSP cluster administrator unassigns the floating IP from the reservation port. The following examples illustrate the annotation from nodes on several public cloud providers. The annotations are indented for readability. Example cloud.network.openshift.io/egress-ipconfig annotation on AWS cloud.network.openshift.io/egress-ipconfig: [ { "interface":"eni-078d267045138e436", "ifaddr":{"ipv4":"10.0.128.0/18"}, "capacity":{"ipv4":14,"ipv6":15} } ] Example cloud.network.openshift.io/egress-ipconfig annotation on GCP cloud.network.openshift.io/egress-ipconfig: [ { "interface":"nic0", "ifaddr":{"ipv4":"10.0.128.0/18"}, "capacity":{"ip":14} } ] The following sections describe the IP address capacity for supported public cloud environments for use in your capacity calculation. 26.4.1.2.1. 
Amazon Web Services (AWS) IP address capacity limits On AWS, constraints on IP address assignments depend on the instance type configured. For more information, see IP addresses per network interface per instance type 26.4.1.2.2. Google Cloud Platform (GCP) IP address capacity limits On GCP, the networking model implements additional node IP addresses through IP address aliasing, rather than IP address assignments. However, IP address capacity maps directly to IP aliasing capacity. The following capacity limits exist for IP aliasing assignment: Per node, the maximum number of IP aliases, both IPv4 and IPv6, is 100. Per VPC, the maximum number of IP aliases is unspecified, but OpenShift Container Platform scalability testing reveals the maximum to be approximately 15,000. For more information, see Per instance quotas and Alias IP ranges overview . 26.4.1.2.3. Microsoft Azure IP address capacity limits On Azure, the following capacity limits exist for IP address assignment: Per NIC, the maximum number of assignable IP addresses, for both IPv4 and IPv6, is 256. Per virtual network, the maximum number of assigned IP addresses cannot exceed 65,536. For more information, see Networking limits . 26.4.1.3. Limitations The following limitations apply when using egress IP addresses with the OpenShift SDN network plugin: You cannot use manually assigned and automatically assigned egress IP addresses on the same nodes. If you manually assign egress IP addresses from an IP address range, you must not make that range available for automatic IP assignment. You cannot share egress IP addresses across multiple namespaces using the OpenShift SDN egress IP address implementation. If you need to share IP addresses across namespaces, the OVN-Kubernetes network plugin egress IP address implementation allows you to span IP addresses across multiple namespaces. Note If you use OpenShift SDN in multitenant mode, you cannot use egress IP addresses with any namespace that is joined to another namespace by the projects that are associated with them. For example, if project1 and project2 are joined by running the oc adm pod-network join-projects --to=project1 project2 command, neither project can use an egress IP address. For more information, see BZ#1645577 . 26.4.1.4. IP address assignment approaches You can assign egress IP addresses to namespaces by setting the egressIPs parameter of the NetNamespace object. After an egress IP address is associated with a project, OpenShift SDN allows you to assign egress IP addresses to hosts in two ways: In the automatically assigned approach, an egress IP address range is assigned to a node. In the manually assigned approach, a list of one or more egress IP address is assigned to a node. Namespaces that request an egress IP address are matched with nodes that can host those egress IP addresses, and then the egress IP addresses are assigned to those nodes. If the egressIPs parameter is set on a NetNamespace object, but no node hosts that egress IP address, then egress traffic from the namespace will be dropped. High availability of nodes is automatic. If a node that hosts an egress IP address is unreachable and there are nodes that are able to host that egress IP address, then the egress IP address will move to a new node. When the unreachable node comes back online, the egress IP address automatically moves to balance egress IP addresses across nodes. 26.4.1.4.1. 
Considerations when using automatically assigned egress IP addresses When using the automatic assignment approach for egress IP addresses, the following considerations apply: You set the egressCIDRs parameter of each node's HostSubnet resource to indicate the range of egress IP addresses that can be hosted by a node. OpenShift Container Platform sets the egressIPs parameter of the HostSubnet resource based on the IP address range you specify. If the node hosting the namespace's egress IP address is unreachable, OpenShift Container Platform will reassign the egress IP address to another node with a compatible egress IP address range. The automatic assignment approach works best for clusters installed in environments with flexibility in associating additional IP addresses with nodes. 26.4.1.4.2. Considerations when using manually assigned egress IP addresses This approach allows you to control which nodes can host an egress IP address. Note If your cluster is installed on public cloud infrastructure, you must ensure that each node that you assign egress IP addresses to has sufficient spare capacity to host the IP addresses. For more information, see the "Platform considerations" section. When using the manual assignment approach for egress IP addresses, the following considerations apply: You set the egressIPs parameter of each node's HostSubnet resource to indicate the IP addresses that can be hosted by a node. Multiple egress IP addresses per namespace are supported. If a namespace has multiple egress IP addresses and those addresses are hosted on multiple nodes, the following additional considerations apply: If a pod is on a node that is hosting an egress IP address, that pod always uses the egress IP address on the node. If a pod is not on a node that is hosting an egress IP address, that pod uses an egress IP address at random. 26.4.2. Configuring automatically assigned egress IP addresses for a namespace In OpenShift Container Platform you can enable automatic assignment of an egress IP address for a specific namespace across one or more nodes. Prerequisites You have access to the cluster as a user with the cluster-admin role. You have installed the OpenShift CLI ( oc ). Procedure Update the NetNamespace object with the egress IP address using the following JSON: USD oc patch netnamespace <project_name> --type=merge -p \ '{ "egressIPs": [ "<ip_address>" ] }' where: <project_name> Specifies the name of the project. <ip_address> Specifies one or more egress IP addresses for the egressIPs array. For example, to assign project1 to an IP address of 192.168.1.100 and project2 to an IP address of 192.168.1.101: USD oc patch netnamespace project1 --type=merge -p \ '{"egressIPs": ["192.168.1.100"]}' USD oc patch netnamespace project2 --type=merge -p \ '{"egressIPs": ["192.168.1.101"]}' Note Because OpenShift SDN manages the NetNamespace object, you can make changes only by modifying the existing NetNamespace object. Do not create a new NetNamespace object. Indicate which nodes can host egress IP addresses by setting the egressCIDRs parameter for each host using the following JSON: USD oc patch hostsubnet <node_name> --type=merge -p \ '{ "egressCIDRs": [ "<ip_address_range>", "<ip_address_range>" ] }' where: <node_name> Specifies a node name. <ip_address_range> Specifies an IP address range in CIDR format. You can specify more than one address range for the egressCIDRs array.
For example, to set node1 and node2 to host egress IP addresses in the range 192.168.1.0 to 192.168.1.255: USD oc patch hostsubnet node1 --type=merge -p \ '{"egressCIDRs": ["192.168.1.0/24"]}' USD oc patch hostsubnet node2 --type=merge -p \ '{"egressCIDRs": ["192.168.1.0/24"]}' OpenShift Container Platform automatically assigns specific egress IP addresses to available nodes in a balanced way. In this case, it assigns the egress IP address 192.168.1.100 to node1 and the egress IP address 192.168.1.101 to node2 or vice versa. 26.4.3. Configuring manually assigned egress IP addresses for a namespace In OpenShift Container Platform you can associate one or more egress IP addresses with a namespace. Prerequisites You have access to the cluster as a user with the cluster-admin role. You have installed the OpenShift CLI ( oc ). Procedure Update the NetNamespace object by specifying the following JSON object with the desired IP addresses: USD oc patch netnamespace <project_name> --type=merge -p \ '{ "egressIPs": [ "<ip_address>" ] }' where: <project_name> Specifies the name of the project. <ip_address> Specifies one or more egress IP addresses for the egressIPs array. For example, to assign the project1 project to the IP addresses 192.168.1.100 and 192.168.1.101 : USD oc patch netnamespace project1 --type=merge \ -p '{"egressIPs": ["192.168.1.100","192.168.1.101"]}' To provide high availability, set the egressIPs value to two or more IP addresses on different nodes. If multiple egress IP addresses are set, then pods use all egress IP addresses roughly equally. Note Because OpenShift SDN manages the NetNamespace object, you can make changes only by modifying the existing NetNamespace object. Do not create a new NetNamespace object. Manually assign the egress IP address to the node hosts. If your cluster is installed on public cloud infrastructure, you must confirm that the node has available IP address capacity. Set the egressIPs parameter on the HostSubnet object on the node host. Using the following JSON, include as many IP addresses as you want to assign to that node host: USD oc patch hostsubnet <node_name> --type=merge -p \ '{ "egressIPs": [ "<ip_address>", "<ip_address>" ] }' where: <node_name> Specifies a node name. <ip_address> Specifies an IP address. You can specify more than one IP address for the egressIPs array. For example, to specify that node1 should have the egress IPs 192.168.1.100 , 192.168.1.101 , and 192.168.1.102 : USD oc patch hostsubnet node1 --type=merge -p \ '{"egressIPs": ["192.168.1.100", "192.168.1.101", "192.168.1.102"]}' In the example, all egress traffic for project1 will be routed to the node hosting the specified egress IP, and then connected through Network Address Translation (NAT) to that IP address. 26.4.4. Additional resources If you are configuring manual egress IP address assignment, see Platform considerations for information about IP capacity planning. 26.5. Configuring an egress firewall for a project As a cluster administrator, you can create an egress firewall for a project that restricts egress traffic leaving your OpenShift Container Platform cluster. 26.5.1. How an egress firewall works in a project As a cluster administrator, you can use an egress firewall to limit the external hosts that some or all pods can access from within the cluster. An egress firewall supports the following scenarios: A pod can only connect to internal hosts and cannot initiate connections to the public internet. 
A pod can only connect to the public internet and cannot initiate connections to internal hosts that are outside the OpenShift Container Platform cluster. A pod cannot reach specified internal subnets or hosts outside the OpenShift Container Platform cluster. A pod can connect to only specific external hosts. For example, you can allow one project access to a specified IP range but deny the same access to a different project. Or you can restrict application developers from updating from Python pip mirrors, and force updates to come only from approved sources. Note Egress firewall does not apply to the host network namespace. Pods with host networking enabled are unaffected by egress firewall rules. You configure an egress firewall policy by creating an EgressNetworkPolicy custom resource (CR) object. The egress firewall matches network traffic that meets any of the following criteria: An IP address range in CIDR format A DNS name that resolves to an IP address Important If your egress firewall includes a deny rule for 0.0.0.0/0 , access to your OpenShift Container Platform API servers is blocked. You must either add allow rules for each IP address or use the nodeSelector type allow rule in your egress policy rules to connect to API servers. The following example illustrates the order of the egress firewall rules necessary to ensure API server access: apiVersion: network.openshift.io/v1 kind: EgressNetworkPolicy metadata: name: default namespace: <namespace> 1 spec: egress: - to: cidrSelector: <api_server_address_range> 2 type: Allow # ... - to: cidrSelector: 0.0.0.0/0 3 type: Deny 1 The namespace for the egress firewall. 2 The IP address range that includes your OpenShift Container Platform API servers. 3 A global deny rule prevents access to the OpenShift Container Platform API servers. To find the IP address for your API servers, run oc get ep kubernetes -n default . For more information, see BZ#1988324 . Important You must have OpenShift SDN configured to use either the network policy or multitenant mode to configure an egress firewall. If you use network policy mode, an egress firewall is compatible with only one policy per namespace and will not work with projects that share a network, such as global projects. Warning Egress firewall rules do not apply to traffic that goes through routers. Any user with permission to create a Route CR object can bypass egress firewall policy rules by creating a route that points to a forbidden destination. 26.5.1.1. Limitations of an egress firewall An egress firewall has the following limitations: No project can have more than one EgressNetworkPolicy object. Important The creation of more than one EgressNetworkPolicy object is allowed, however it should not be done. When you create more than one EgressNetworkPolicy object, the following message is returned: dropping all rules . In actuality, all external traffic is dropped, which can cause security risks for your organization. A maximum of one EgressNetworkPolicy object with a maximum of 1,000 rules can be defined per project. The default project cannot use an egress firewall. When using the OpenShift SDN network plugin in multitenant mode, the following limitations apply: Global projects cannot use an egress firewall. You can make a project global by using the oc adm pod-network make-projects-global command. Projects merged by using the oc adm pod-network join-projects command cannot use an egress firewall in any of the joined projects. 
If you create a selectorless service and manually define endpoints or EndpointSlices that point to external IPs, traffic to the service IP might still be allowed, even if your EgressNetworkPolicy is configured to deny all egress traffic. This occurs because OpenShift SDN does not fully enforce egress network policies for these external endpoints. Consequently, this might result in unexpected access to external services. Violating any of these restrictions results in a broken egress firewall for the project. Consequently, all external network traffic is dropped, which can cause security risks for your organization. An Egress Firewall resource can be created in the kube-node-lease , kube-public , kube-system , openshift and openshift- projects. 26.5.1.2. Matching order for egress firewall policy rules The egress firewall policy rules are evaluated in the order that they are defined, from first to last. The first rule that matches an egress connection from a pod applies. Any subsequent rules are ignored for that connection. 26.5.1.3. How Domain Name Server (DNS) resolution works If you use DNS names in any of your egress firewall policy rules, proper resolution of the domain names is subject to the following restrictions: Domain name updates are polled based on a time-to-live (TTL) duration. By default, the duration is 30 seconds. When the egress firewall controller queries the local name servers for a domain name, if the response includes a TTL that is less than 30 seconds, the controller sets the duration to the returned value. If the TTL in the response is greater than 30 minutes, the controller sets the duration to 30 minutes. If the TTL is between 30 seconds and 30 minutes, the controller ignores the value and sets the duration to 30 seconds. The pod must resolve the domain from the same local name servers when necessary. Otherwise the IP addresses for the domain known by the egress firewall controller and the pod can be different. If the IP addresses for a hostname differ, the egress firewall might not be enforced consistently. Because the egress firewall controller and pods asynchronously poll the same local name server, the pod might obtain the updated IP address before the egress controller does, which causes a race condition. Due to this current limitation, domain name usage in EgressNetworkPolicy objects is only recommended for domains with infrequent IP address changes. Note Using DNS names in your egress firewall policy does not affect local DNS resolution through CoreDNS. However, if your egress firewall policy uses domain names, and an external DNS server handles DNS resolution for an affected pod, you must include egress firewall rules that permit access to the IP addresses of your DNS server. 26.5.2. EgressNetworkPolicy custom resource (CR) object You can define one or more rules for an egress firewall. A rule is either an Allow rule or a Deny rule, with a specification for the traffic that the rule applies to. The following YAML describes an EgressNetworkPolicy CR object: EgressNetworkPolicy object apiVersion: network.openshift.io/v1 kind: EgressNetworkPolicy metadata: name: <name> 1 spec: egress: 2 ... 1 A name for your egress firewall policy. 2 A collection of one or more egress network policy rules as described in the following section. 26.5.2.1. EgressNetworkPolicy rules The following YAML describes an egress firewall rule object. The user can select either an IP address range in CIDR format, a domain name, or use the nodeSelector to allow or deny egress traffic. 
The egress stanza expects an array of one or more objects. Egress policy rule stanza egress: - type: <type> 1 to: 2 cidrSelector: <cidr> 3 dnsName: <dns_name> 4 1 The type of rule. The value must be either Allow or Deny . 2 A stanza describing an egress traffic match rule. A value for either the cidrSelector field or the dnsName field for the rule. You cannot use both fields in the same rule. 3 An IP address range in CIDR format. 4 A domain name. 26.5.2.2. Example EgressNetworkPolicy CR objects The following example defines several egress firewall policy rules: apiVersion: network.openshift.io/v1 kind: EgressNetworkPolicy metadata: name: default spec: egress: 1 - type: Allow to: cidrSelector: 1.2.3.0/24 - type: Allow to: dnsName: www.example.com - type: Deny to: cidrSelector: 0.0.0.0/0 1 A collection of egress firewall policy rule objects. 26.5.3. Creating an egress firewall policy object As a cluster administrator, you can create an egress firewall policy object for a project. Important If the project already has an EgressNetworkPolicy object defined, you must edit the existing policy to make changes to the egress firewall rules. Prerequisites A cluster that uses the OpenShift SDN network plugin. Install the OpenShift CLI ( oc ). You must log in to the cluster as a cluster administrator. Procedure Create a policy rule: Create a <policy_name>.yaml file where <policy_name> describes the egress policy rules. In the file you created, define an egress policy object. Enter the following command to create the policy object. Replace <policy_name> with the name of the policy and <project> with the project that the rule applies to. USD oc create -f <policy_name>.yaml -n <project> In the following example, a new EgressNetworkPolicy object is created in a project named project1 : USD oc create -f default.yaml -n project1 Example output egressnetworkpolicy.network.openshift.io/v1 created Optional: Save the <policy_name>.yaml file so that you can make changes later. 26.6. Viewing an egress firewall for a project As a cluster administrator, you can view the network traffic rules of an existing egress firewall. 26.6.1. Viewing an EgressNetworkPolicy object You can view an EgressNetworkPolicy object in your cluster. Prerequisites A cluster using the OpenShift SDN network plugin. Install the OpenShift Command-line Interface (CLI), commonly known as oc . You must log in to the cluster. Procedure Optional: To view the names of the EgressNetworkPolicy objects defined in your cluster, enter the following command: USD oc get egressnetworkpolicy --all-namespaces To inspect a policy, enter the following command. Replace <policy_name> with the name of the policy to inspect. USD oc describe egressnetworkpolicy <policy_name> Example output Name: default Namespace: project1 Created: 20 minutes ago Labels: <none> Annotations: <none> Rule: Allow to 1.2.3.0/24 Rule: Allow to www.example.com Rule: Deny to 0.0.0.0/0 26.7. Editing an egress firewall for a project As a cluster administrator, you can modify network traffic rules for an existing egress firewall. 26.7.1. Editing an EgressNetworkPolicy object As a cluster administrator, you can update the egress firewall for a project. Prerequisites A cluster using the OpenShift SDN network plugin. Install the OpenShift CLI ( oc ). You must log in to the cluster as a cluster administrator. Procedure Find the name of the EgressNetworkPolicy object for the project. Replace <project> with the name of the project.
USD oc get -n <project> egressnetworkpolicy Optional: If you did not save a copy of the EgressNetworkPolicy object when you created the egress network firewall, enter the following command to create a copy. USD oc get -n <project> egressnetworkpolicy <name> -o yaml > <filename>.yaml Replace <project> with the name of the project. Replace <name> with the name of the object. Replace <filename> with the name of the file to save the YAML to. After making changes to the policy rules, enter the following command to replace the EgressNetworkPolicy object. Replace <filename> with the name of the file containing the updated EgressNetworkPolicy object. USD oc replace -f <filename>.yaml 26.8. Removing an egress firewall from a project As a cluster administrator, you can remove an egress firewall from a project to remove all restrictions on network traffic from the project that leaves the OpenShift Container Platform cluster. 26.8.1. Removing an EgressNetworkPolicy object As a cluster administrator, you can remove an egress firewall from a project. Prerequisites A cluster using the OpenShift SDN network plugin. Install the OpenShift CLI ( oc ). You must log in to the cluster as a cluster administrator. Procedure Find the name of the EgressNetworkPolicy object for the project. Replace <project> with the name of the project. USD oc get -n <project> egressnetworkpolicy Enter the following command to delete the EgressNetworkPolicy object. Replace <project> with the name of the project and <name> with the name of the object. USD oc delete -n <project> egressnetworkpolicy <name> 26.9. Considerations for the use of an egress router pod 26.9.1. About an egress router pod The OpenShift Container Platform egress router pod redirects traffic to a specified remote server from a private source IP address that is not used for any other purpose. An egress router pod can send network traffic to servers that are set up to allow access only from specific IP addresses. Note The egress router pod is not intended for every outgoing connection. Creating large numbers of egress router pods can exceed the limits of your network hardware. For example, creating an egress router pod for every project or application could exceed the number of local MAC addresses that the network interface can handle before reverting to filtering MAC addresses in software. Important The egress router image is not compatible with Amazon AWS, Azure Cloud, or any other cloud platform that does not support layer 2 manipulations due to their incompatibility with macvlan traffic. 26.9.1.1. Egress router modes In redirect mode , an egress router pod configures iptables rules to redirect traffic from its own IP address to one or more destination IP addresses. Client pods that need to use the reserved source IP address must be configured to access the service for the egress router rather than connecting directly to the destination IP. You can access the destination service and port from the application pod by using the curl command. For example: USD curl <router_service_IP> <port> In HTTP proxy mode , an egress router pod runs as an HTTP proxy on port 8080 . This mode only works for clients that are connecting to HTTP-based or HTTPS-based services, but usually requires fewer changes to the client pods to get them to work. Many programs can be told to use an HTTP proxy by setting an environment variable. In DNS proxy mode , an egress router pod runs as a DNS proxy for TCP-based services from its own IP address to one or more destination IP addresses. 
To make use of the reserved source IP address, client pods must be modified to connect to the egress router pod rather than connecting directly to the destination IP address. This modification ensures that external destinations treat traffic as though it were coming from a known source. Redirect mode works for all services except for HTTP and HTTPS. For HTTP and HTTPS services, use HTTP proxy mode. For TCP-based services with IP addresses or domain names, use DNS proxy mode. 26.9.1.2. Egress router pod implementation The egress router pod setup is performed by an initialization container. That container runs in a privileged context so that it can configure the macvlan interface and set up iptables rules. After the initialization container finishes setting up the iptables rules, it exits. The egress router pod then executes the container to handle the egress router traffic. The image used varies depending on the egress router mode. The environment variables determine which addresses the egress-router image uses. The image configures the macvlan interface to use EGRESS_SOURCE as its IP address, with EGRESS_GATEWAY as the IP address for the gateway. Network Address Translation (NAT) rules are set up so that connections to the cluster IP address of the pod on any TCP or UDP port are redirected to the same port on IP address specified by the EGRESS_DESTINATION variable. If only some of the nodes in your cluster are capable of claiming the specified source IP address and using the specified gateway, you can specify a nodeName or nodeSelector to identify which nodes are acceptable. 26.9.1.3. Deployment considerations An egress router pod adds an additional IP address and MAC address to the primary network interface of the node. As a result, you might need to configure your hypervisor or cloud provider to allow the additional address. Red Hat OpenStack Platform (RHOSP) If you deploy OpenShift Container Platform on RHOSP, you must allow traffic from the IP and MAC addresses of the egress router pod on your OpenStack environment. If you do not allow the traffic, then communication will fail : USD openstack port set --allowed-address \ ip_address=<ip_address>,mac_address=<mac_address> <neutron_port_uuid> Red Hat Virtualization (RHV) If you are using RHV , you must select No Network Filter for the Virtual network interface controller (vNIC). VMware vSphere If you are using VMware vSphere, see the VMware documentation for securing vSphere standard switches . View and change VMware vSphere default settings by selecting the host virtual switch from the vSphere Web Client. Specifically, ensure that the following are enabled: MAC Address Changes Forged Transmits Promiscuous Mode Operation 26.9.1.4. Failover configuration To avoid downtime, you can deploy an egress router pod with a Deployment resource, as in the following example. To create a new Service object for the example deployment, use the oc expose deployment/egress-demo-controller command. apiVersion: apps/v1 kind: Deployment metadata: name: egress-demo-controller spec: replicas: 1 1 selector: matchLabels: name: egress-router template: metadata: name: egress-router labels: name: egress-router annotations: pod.network.openshift.io/assign-macvlan: "true" spec: 2 initContainers: ... containers: ... 1 Ensure that replicas is set to 1 , because only one pod can use a given egress source IP address at any time. This means that only a single copy of the router runs on a node. 2 Specify the Pod object template for the egress router pod. 26.9.2.
Additional resources Deploying an egress router in redirection mode Deploying an egress router in HTTP proxy mode Deploying an egress router in DNS proxy mode 26.10. Deploying an egress router pod in redirect mode As a cluster administrator, you can deploy an egress router pod that is configured to redirect traffic to specified destination IP addresses. 26.10.1. Egress router pod specification for redirect mode Define the configuration for an egress router pod in the Pod object. The following YAML describes the fields for the configuration of an egress router pod in redirect mode: apiVersion: v1 kind: Pod metadata: name: egress-1 labels: name: egress-1 annotations: pod.network.openshift.io/assign-macvlan: "true" 1 spec: initContainers: - name: egress-router image: registry.redhat.io/openshift4/ose-egress-router securityContext: privileged: true env: - name: EGRESS_SOURCE 2 value: <egress_router> - name: EGRESS_GATEWAY 3 value: <egress_gateway> - name: EGRESS_DESTINATION 4 value: <egress_destination> - name: EGRESS_ROUTER_MODE value: init containers: - name: egress-router-wait image: registry.redhat.io/openshift4/ose-pod 1 The annotation tells OpenShift Container Platform to create a macvlan network interface on the primary network interface controller (NIC) and move that macvlan interface into the pod's network namespace. You must include the quotation marks around the "true" value. To have OpenShift Container Platform create the macvlan interface on a different NIC interface, set the annotation value to the name of that interface. For example, eth1 . 2 IP address from the physical network that the node is on that is reserved for use by the egress router pod. Optional: You can include the subnet length, the /24 suffix, so that a proper route to the local subnet is set. If you do not specify a subnet length, then the egress router can access only the host specified with the EGRESS_GATEWAY variable and no other hosts on the subnet. 3 Same value as the default gateway used by the node. 4 External server to direct traffic to. Using this example, connections to the pod are redirected to 203.0.113.25 , with a source IP address of 192.168.12.99 . Example egress router pod specification apiVersion: v1 kind: Pod metadata: name: egress-multi labels: name: egress-multi annotations: pod.network.openshift.io/assign-macvlan: "true" spec: initContainers: - name: egress-router image: registry.redhat.io/openshift4/ose-egress-router securityContext: privileged: true env: - name: EGRESS_SOURCE value: 192.168.12.99/24 - name: EGRESS_GATEWAY value: 192.168.12.1 - name: EGRESS_DESTINATION value: | 80 tcp 203.0.113.25 8080 tcp 203.0.113.26 80 8443 tcp 203.0.113.26 443 203.0.113.27 - name: EGRESS_ROUTER_MODE value: init containers: - name: egress-router-wait image: registry.redhat.io/openshift4/ose-pod 26.10.2. Egress destination configuration format When an egress router pod is deployed in redirect mode, you can specify redirection rules by using one or more of the following formats: <port> <protocol> <ip_address> - Incoming connections to the given <port> should be redirected to the same port on the given <ip_address> . <protocol> is either tcp or udp . <port> <protocol> <ip_address> <remote_port> - As above, except that the connection is redirected to a different <remote_port> on <ip_address> . <ip_address> - If the last line is a single IP address, then any connections on any other port will be redirected to the corresponding port on that IP address. 
If there is no fallback IP address then connections on other ports are rejected. In the example that follows several rules are defined: The first line redirects traffic from local port 80 to port 80 on 203.0.113.25 . The second and third lines redirect local ports 8080 and 8443 to remote ports 80 and 443 on 203.0.113.26 . The last line matches traffic for any ports not specified in the rules. Example configuration 80 tcp 203.0.113.25 8080 tcp 203.0.113.26 80 8443 tcp 203.0.113.26 443 203.0.113.27 26.10.3. Deploying an egress router pod in redirect mode In redirect mode , an egress router pod sets up iptables rules to redirect traffic from its own IP address to one or more destination IP addresses. Client pods that need to use the reserved source IP address must be configured to access the service for the egress router rather than connecting directly to the destination IP. You can access the destination service and port from the application pod by using the curl command. For example: USD curl <router_service_IP> <port> Prerequisites Install the OpenShift CLI ( oc ). Log in as a user with cluster-admin privileges. Procedure Create an egress router pod. To ensure that other pods can find the IP address of the egress router pod, create a service to point to the egress router pod, as in the following example: apiVersion: v1 kind: Service metadata: name: egress-1 spec: ports: - name: http port: 80 - name: https port: 443 type: ClusterIP selector: name: egress-1 Your pods can now connect to this service. Their connections are redirected to the corresponding ports on the external server, using the reserved egress IP address. 26.10.4. Additional resources Configuring an egress router destination mappings with a ConfigMap 26.11. Deploying an egress router pod in HTTP proxy mode As a cluster administrator, you can deploy an egress router pod configured to proxy traffic to specified HTTP and HTTPS-based services. 26.11.1. Egress router pod specification for HTTP mode Define the configuration for an egress router pod in the Pod object. The following YAML describes the fields for the configuration of an egress router pod in HTTP mode: apiVersion: v1 kind: Pod metadata: name: egress-1 labels: name: egress-1 annotations: pod.network.openshift.io/assign-macvlan: "true" 1 spec: initContainers: - name: egress-router image: registry.redhat.io/openshift4/ose-egress-router securityContext: privileged: true env: - name: EGRESS_SOURCE 2 value: <egress-router> - name: EGRESS_GATEWAY 3 value: <egress-gateway> - name: EGRESS_ROUTER_MODE value: http-proxy containers: - name: egress-router-pod image: registry.redhat.io/openshift4/ose-egress-http-proxy env: - name: EGRESS_HTTP_PROXY_DESTINATION 4 value: |- ... ... 1 The annotation tells OpenShift Container Platform to create a macvlan network interface on the primary network interface controller (NIC) and move that macvlan interface into the pod's network namespace. You must include the quotation marks around the "true" value. To have OpenShift Container Platform create the macvlan interface on a different NIC interface, set the annotation value to the name of that interface. For example, eth1 . 2 IP address from the physical network that the node is on that is reserved for use by the egress router pod. Optional: You can include the subnet length, the /24 suffix, so that a proper route to the local subnet is set. 
If you do not specify a subnet length, then the egress router can access only the host specified with the EGRESS_GATEWAY variable and no other hosts on the subnet. 3 Same value as the default gateway used by the node. 4 A string or YAML multi-line string specifying how to configure the proxy. Note that this is specified as an environment variable in the HTTP proxy container, not with the other environment variables in the init container. 26.11.2. Egress destination configuration format When an egress router pod is deployed in HTTP proxy mode, you can specify redirection rules by using one or more of the following formats. Each line in the configuration specifies one group of connections to allow or deny: An IP address allows connections to that IP address, such as 192.168.1.1 . A CIDR range allows connections to that CIDR range, such as 192.168.1.0/24 . A hostname allows proxying to that host, such as www.example.com . A domain name preceded by *. allows proxying to that domain and all of its subdomains, such as *.example.com . A ! followed by any of the match expressions denies the connection instead. If the last line is * , then anything that is not explicitly denied is allowed. Otherwise, anything that is not allowed is denied. You can also use * to allow connections to all remote destinations. Example configuration !*.example.com !192.168.1.0/24 192.168.2.1 * 26.11.3. Deploying an egress router pod in HTTP proxy mode In HTTP proxy mode , an egress router pod runs as an HTTP proxy on port 8080 . This mode only works for clients that are connecting to HTTP-based or HTTPS-based services, but usually requires fewer changes to the client pods to get them to work. Many programs can be told to use an HTTP proxy by setting an environment variable. Prerequisites Install the OpenShift CLI ( oc ). Log in as a user with cluster-admin privileges. Procedure Create an egress router pod. To ensure that other pods can find the IP address of the egress router pod, create a service to point to the egress router pod, as in the following example: apiVersion: v1 kind: Service metadata: name: egress-1 spec: ports: - name: http-proxy port: 8080 1 type: ClusterIP selector: name: egress-1 1 Ensure the http port is set to 8080 . To configure the client pod (not the egress proxy pod) to use the HTTP proxy, set the http_proxy or https_proxy variables: apiVersion: v1 kind: Pod metadata: name: app-1 labels: name: app-1 spec: containers: env: - name: http_proxy value: http://egress-1:8080/ 1 - name: https_proxy value: http://egress-1:8080/ ... 1 The service created in the step. Note Using the http_proxy and https_proxy environment variables is not necessary for all setups. If the above does not create a working setup, then consult the documentation for the tool or software you are running in the pod. 26.11.4. Additional resources Configuring an egress router destination mappings with a ConfigMap 26.12. Deploying an egress router pod in DNS proxy mode As a cluster administrator, you can deploy an egress router pod configured to proxy traffic to specified DNS names and IP addresses. 26.12.1. Egress router pod specification for DNS mode Define the configuration for an egress router pod in the Pod object. 
The following YAML describes the fields for the configuration of an egress router pod in DNS mode: apiVersion: v1 kind: Pod metadata: name: egress-1 labels: name: egress-1 annotations: pod.network.openshift.io/assign-macvlan: "true" 1 spec: initContainers: - name: egress-router image: registry.redhat.io/openshift4/ose-egress-router securityContext: privileged: true env: - name: EGRESS_SOURCE 2 value: <egress-router> - name: EGRESS_GATEWAY 3 value: <egress-gateway> - name: EGRESS_ROUTER_MODE value: dns-proxy containers: - name: egress-router-pod image: registry.redhat.io/openshift4/ose-egress-dns-proxy securityContext: privileged: true env: - name: EGRESS_DNS_PROXY_DESTINATION 4 value: |- ... - name: EGRESS_DNS_PROXY_DEBUG 5 value: "1" ... 1 The annotation tells OpenShift Container Platform to create a macvlan network interface on the primary network interface controller (NIC) and move that macvlan interface into the pod's network namespace. You must include the quotation marks around the "true" value. To have OpenShift Container Platform create the macvlan interface on a different NIC interface, set the annotation value to the name of that interface. For example, eth1 . 2 IP address from the physical network that the node is on that is reserved for use by the egress router pod. Optional: You can include the subnet length, the /24 suffix, so that a proper route to the local subnet is set. If you do not specify a subnet length, then the egress router can access only the host specified with the EGRESS_GATEWAY variable and no other hosts on the subnet. 3 Same value as the default gateway used by the node. 4 Specify a list of one or more proxy destinations. 5 Optional: Specify to output the DNS proxy log output to stdout . 26.12.2. Egress destination configuration format When the router is deployed in DNS proxy mode, you specify a list of port and destination mappings. A destination may be either an IP address or a DNS name. An egress router pod supports the following formats for specifying port and destination mappings: Port and remote address You can specify a source port and a destination host by using the two field format: <port> <remote_address> . The host can be an IP address or a DNS name. If a DNS name is provided, DNS resolution occurs at runtime. For a given host, the proxy connects to the specified source port on the destination host when connecting to the destination host IP address. Port and remote address pair example 80 172.16.12.11 100 example.com Port, remote address, and remote port You can specify a source port, a destination host, and a destination port by using the three field format: <port> <remote_address> <remote_port> . The three field format behaves identically to the two field version, with the exception that the destination port can be different than the source port. Port, remote address, and remote port example 8080 192.168.60.252 80 8443 web.example.com 443 26.12.3. Deploying an egress router pod in DNS proxy mode In DNS proxy mode , an egress router pod acts as a DNS proxy for TCP-based services from its own IP address to one or more destination IP addresses. Prerequisites Install the OpenShift CLI ( oc ). Log in as a user with cluster-admin privileges. Procedure Create an egress router pod. Create a service for the egress router pod: Create a file named egress-router-service.yaml that contains the following YAML. Set spec.ports to the list of ports that you defined previously for the EGRESS_DNS_PROXY_DESTINATION environment variable. 
apiVersion: v1 kind: Service metadata: name: egress-dns-svc spec: ports: ... type: ClusterIP selector: name: egress-dns-proxy For example: apiVersion: v1 kind: Service metadata: name: egress-dns-svc spec: ports: - name: con1 protocol: TCP port: 80 targetPort: 80 - name: con2 protocol: TCP port: 100 targetPort: 100 type: ClusterIP selector: name: egress-dns-proxy To create the service, enter the following command: USD oc create -f egress-router-service.yaml Pods can now connect to this service. The connections are proxied to the corresponding ports on the external server, using the reserved egress IP address. 26.12.4. Additional resources Configuring an egress router destination mappings with a ConfigMap 26.13. Configuring an egress router pod destination list from a config map As a cluster administrator, you can define a ConfigMap object that specifies destination mappings for an egress router pod. The specific format of the configuration depends on the type of egress router pod. For details on the format, refer to the documentation for the specific egress router pod. 26.13.1. Configuring an egress router destination mappings with a config map For a large or frequently-changing set of destination mappings, you can use a config map to externally maintain the list. An advantage of this approach is that permission to edit the config map can be delegated to users without cluster-admin privileges. Because the egress router pod requires a privileged container, it is not possible for users without cluster-admin privileges to edit the pod definition directly. Note The egress router pod does not automatically update when the config map changes. You must restart the egress router pod to get updates. Prerequisites Install the OpenShift CLI ( oc ). Log in as a user with cluster-admin privileges. Procedure Create a file containing the mapping data for the egress router pod, as in the following example: You can put blank lines and comments into this file. Create a ConfigMap object from the file: USD oc delete configmap egress-routes --ignore-not-found USD oc create configmap egress-routes \ --from-file=destination=my-egress-destination.txt In the command, the egress-routes value is the name of the ConfigMap object to create and my-egress-destination.txt is the name of the file that the data is read from. Tip You can alternatively apply the following YAML to create the config map: apiVersion: v1 kind: ConfigMap metadata: name: egress-routes data: destination: | # Egress routes for Project "Test", version 3 80 tcp 203.0.113.25 8080 tcp 203.0.113.26 80 8443 tcp 203.0.113.26 443 # Fallback 203.0.113.27 Create an egress router pod definition and specify the configMapKeyRef stanza for the EGRESS_DESTINATION field in the environment stanza: ... env: - name: EGRESS_DESTINATION valueFrom: configMapKeyRef: name: egress-routes key: destination ... 26.13.2. Additional resources Redirect mode HTTP proxy mode DNS proxy mode 26.14. Enabling multicast for a project 26.14.1. About multicast With IP multicast, data is broadcast to many IP addresses simultaneously. Important At this time, multicast is best used for low-bandwidth coordination or service discovery and not a high-bandwidth solution. By default, network policies affect all connections in a namespace. However, multicast is unaffected by network policies. If multicast is enabled in the same namespace as your network policies, it is always allowed, even if there is a deny-all network policy. 
Cluster administrators should consider the implications to the exemption of multicast from network policies before enabling it. Multicast traffic between OpenShift Container Platform pods is disabled by default. If you are using the OpenShift SDN network plugin, you can enable multicast on a per-project basis. When using the OpenShift SDN network plugin in networkpolicy isolation mode: Multicast packets sent by a pod will be delivered to all other pods in the project, regardless of NetworkPolicy objects. Pods might be able to communicate over multicast even when they cannot communicate over unicast. Multicast packets sent by a pod in one project will never be delivered to pods in any other project, even if there are NetworkPolicy objects that allow communication between the projects. When using the OpenShift SDN network plugin in multitenant isolation mode: Multicast packets sent by a pod will be delivered to all other pods in the project. Multicast packets sent by a pod in one project will be delivered to pods in other projects only if each project is joined together and multicast is enabled in each joined project. 26.14.2. Enabling multicast between pods You can enable multicast between pods for your project. Prerequisites Install the OpenShift CLI ( oc ). You must log in to the cluster with a user that has the cluster-admin role. Procedure Run the following command to enable multicast for a project. Replace <namespace> with the namespace for the project you want to enable multicast for. USD oc annotate netnamespace <namespace> \ netnamespace.network.openshift.io/multicast-enabled=true Verification To verify that multicast is enabled for a project, complete the following procedure: Change your current project to the project that you enabled multicast for. Replace <project> with the project name. USD oc project <project> Create a pod to act as a multicast receiver: USD cat <<EOF| oc create -f - apiVersion: v1 kind: Pod metadata: name: mlistener labels: app: multicast-verify spec: containers: - name: mlistener image: registry.access.redhat.com/ubi9 command: ["/bin/sh", "-c"] args: ["dnf -y install socat hostname && sleep inf"] ports: - containerPort: 30102 name: mlistener protocol: UDP EOF Create a pod to act as a multicast sender: USD cat <<EOF| oc create -f - apiVersion: v1 kind: Pod metadata: name: msender labels: app: multicast-verify spec: containers: - name: msender image: registry.access.redhat.com/ubi9 command: ["/bin/sh", "-c"] args: ["dnf -y install socat && sleep inf"] EOF In a new terminal window or tab, start the multicast listener. Get the IP address for the Pod: USD POD_IP=USD(oc get pods mlistener -o jsonpath='{.status.podIP}') Start the multicast listener by entering the following command: USD oc exec mlistener -i -t -- \ socat UDP4-RECVFROM:30102,ip-add-membership=224.1.0.1:USDPOD_IP,fork EXEC:hostname Start the multicast transmitter. Get the pod network IP address range: USD CIDR=USD(oc get Network.config.openshift.io cluster \ -o jsonpath='{.status.clusterNetwork[0].cidr}') To send a multicast message, enter the following command: USD oc exec msender -i -t -- \ /bin/bash -c "echo | socat STDIO UDP4-DATAGRAM:224.1.0.1:30102,range=USDCIDR,ip-multicast-ttl=64" If multicast is working, the command returns the following output: mlistener 26.15. Disabling multicast for a project 26.15.1. Disabling multicast between pods You can disable multicast between pods for your project. Prerequisites Install the OpenShift CLI ( oc ). 
You must log in to the cluster with a user that has the cluster-admin role. Procedure Disable multicast by running the following command: USD oc annotate netnamespace <namespace> \ 1 netnamespace.network.openshift.io/multicast-enabled- 1 The namespace for the project you want to disable multicast for. 26.16. Configuring network isolation using OpenShift SDN When your cluster is configured to use the multitenant isolation mode for the OpenShift SDN network plugin, each project is isolated by default. Network traffic is not allowed between pods or services in different projects in multitenant isolation mode. You can change the behavior of multitenant isolation for a project in two ways: You can join one or more projects, allowing network traffic between pods and services in different projects. You can disable network isolation for a project. It will be globally accessible, accepting network traffic from pods and services in all other projects. A globally accessible project can access pods and services in all other projects. 26.16.1. Prerequisites You must have a cluster configured to use the OpenShift SDN network plugin in multitenant isolation mode. 26.16.2. Joining projects You can join two or more projects to allow network traffic between pods and services in different projects. Prerequisites Install the OpenShift CLI ( oc ). You must log in to the cluster with a user that has the cluster-admin role. Procedure Use the following command to join projects to an existing project network: USD oc adm pod-network join-projects --to=<project1> <project2> <project3> Alternatively, instead of specifying specific project names, you can use the --selector=<project_selector> option to specify projects based upon an associated label. Optional: Run the following command to view the pod networks that you have joined together: USD oc get netnamespaces Projects in the same pod-network have the same network ID in the NETID column. 26.16.3. Isolating a project You can isolate a project so that pods and services in other projects cannot access its pods and services. Prerequisites Install the OpenShift CLI ( oc ). You must log in to the cluster with a user that has the cluster-admin role. Procedure To isolate the projects in the cluster, run the following command: USD oc adm pod-network isolate-projects <project1> <project2> Alternatively, instead of specifying specific project names, you can use the --selector=<project_selector> option to specify projects based upon an associated label. 26.16.4. Disabling network isolation for a project You can disable network isolation for a project. Prerequisites Install the OpenShift CLI ( oc ). You must log in to the cluster with a user that has the cluster-admin role. Procedure Run the following command for the project: USD oc adm pod-network make-projects-global <project1> <project2> Alternatively, instead of specifying specific project names, you can use the --selector=<project_selector> option to specify projects based upon an associated label. 26.17. Configuring kube-proxy The Kubernetes network proxy (kube-proxy) runs on each node and is managed by the Cluster Network Operator (CNO). kube-proxy maintains network rules for forwarding connections for endpoints associated with services. 26.17.1. About iptables rules synchronization The synchronization period determines how frequently the Kubernetes network proxy (kube-proxy) syncs the iptables rules on a node. 
A sync begins when either of the following events occurs: An event occurs, such as service or endpoint is added to or removed from the cluster. The time since the last sync exceeds the sync period defined for kube-proxy. 26.17.2. kube-proxy configuration parameters You can modify the following kubeProxyConfig parameters. Note Because of performance improvements introduced in OpenShift Container Platform 4.3 and greater, adjusting the iptablesSyncPeriod parameter is no longer necessary. Table 26.3. Parameters Parameter Description Values Default iptablesSyncPeriod The refresh period for iptables rules. A time interval, such as 30s or 2m . Valid suffixes include s , m , and h and are described in the Go time package documentation. 30s proxyArguments.iptables-min-sync-period The minimum duration before refreshing iptables rules. This parameter ensures that the refresh does not happen too frequently. By default, a refresh starts as soon as a change that affects iptables rules occurs. A time interval, such as 30s or 2m . Valid suffixes include s , m , and h and are described in the Go time package 0s 26.17.3. Modifying the kube-proxy configuration You can modify the Kubernetes network proxy configuration for your cluster. Prerequisites Install the OpenShift CLI ( oc ). Log in to a running cluster with the cluster-admin role. Procedure Edit the Network.operator.openshift.io custom resource (CR) by running the following command: USD oc edit network.operator.openshift.io cluster Modify the kubeProxyConfig parameter in the CR with your changes to the kube-proxy configuration, such as in the following example CR: apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec: kubeProxyConfig: iptablesSyncPeriod: 30s proxyArguments: iptables-min-sync-period: ["30s"] Save the file and exit the text editor. The syntax is validated by the oc command when you save the file and exit the editor. If your modifications contain a syntax error, the editor opens the file and displays an error message. Enter the following command to confirm the configuration update: USD oc get networks.operator.openshift.io -o yaml Example output apiVersion: v1 items: - apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 defaultNetwork: type: OpenShiftSDN kubeProxyConfig: iptablesSyncPeriod: 30s proxyArguments: iptables-min-sync-period: - 30s serviceNetwork: - 172.30.0.0/16 status: {} kind: List Optional: Enter the following command to confirm that the Cluster Network Operator accepted the configuration change: USD oc get clusteroperator network Example output NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE network 4.1.0-0.9 True False False 1m The AVAILABLE field is True when the configuration update is applied successfully.
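As an alternative to editing the CR interactively, you can apply the same kube-proxy change non-interactively with a patch. The following command is a minimal sketch that sets iptablesSyncPeriod to 60s ; the field path mirrors the example CR shown above, and you should adjust the value for your environment before applying it: oc patch Network.operator.openshift.io cluster --type=merge -p '{"spec":{"kubeProxyConfig":{"iptablesSyncPeriod":"60s"}}}' After the patch is applied, you can rerun oc get networks.operator.openshift.io -o yaml to confirm that the new value appears under kubeProxyConfig .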
|
[
"oc patch MachineConfigPool master --type='merge' --patch '{ \"spec\": { \"paused\": true } }'",
"oc patch MachineConfigPool worker --type='merge' --patch '{ \"spec\":{ \"paused\": true } }'",
"oc patch Network.operator.openshift.io cluster --type='merge' --patch '{ \"spec\": { \"migration\": null } }'",
"oc get Network.config cluster -o jsonpath='{.status.migration}'",
"oc patch Network.operator.openshift.io cluster --type='merge' --patch '{ \"spec\": { \"migration\": { \"networkType\": \"OpenShiftSDN\" } } }'",
"oc get Network.config cluster -o jsonpath='{.status.migration.networkType}'",
"oc patch Network.config.openshift.io cluster --type='merge' --patch '{ \"spec\": { \"networkType\": \"OpenShiftSDN\" } }'",
"oc patch Network.operator.openshift.io cluster --type='merge' --patch '{ \"spec\": { \"migration\": { \"networkType\": \"OpenShiftSDN\", \"features\": { \"egressIP\": <bool>, \"egressFirewall\": <bool>, \"multicast\": <bool> } } } }'",
"oc patch Network.operator.openshift.io cluster --type=merge --patch '{ \"spec\":{ \"defaultNetwork\":{ \"openshiftSDNConfig\":{ \"mtu\":<mtu>, \"vxlanPort\":<port> }}}}'",
"oc patch Network.operator.openshift.io cluster --type=merge --patch '{ \"spec\":{ \"defaultNetwork\":{ \"openshiftSDNConfig\":{ \"mtu\":1200 }}}}'",
"#!/bin/bash readarray -t POD_NODES <<< \"USD(oc get pod -n openshift-machine-config-operator -o wide| grep daemon|awk '{print USD1\" \"USD7}')\" for i in \"USD{POD_NODES[@]}\" do read -r POD NODE <<< \"USDi\" until oc rsh -n openshift-machine-config-operator \"USDPOD\" chroot /rootfs shutdown -r +1 do echo \"cannot reboot node USDNODE, retry\" && sleep 3 done done",
"#!/bin/bash for ip in USD(oc get nodes -o jsonpath='{.items[*].status.addresses[?(@.type==\"InternalIP\")].address}') do echo \"reboot node USDip\" ssh -o StrictHostKeyChecking=no core@USDip sudo shutdown -r -t 3 done",
"oc -n openshift-multus rollout status daemonset/multus",
"Waiting for daemon set \"multus\" rollout to finish: 1 out of 6 new pods have been updated Waiting for daemon set \"multus\" rollout to finish: 5 of 6 updated pods are available daemon set \"multus\" successfully rolled out",
"oc patch MachineConfigPool master --type='merge' --patch '{ \"spec\": { \"paused\": false } }'",
"oc patch MachineConfigPool worker --type='merge' --patch '{ \"spec\": { \"paused\": false } }'",
"oc describe node | egrep \"hostname|machineconfig\"",
"kubernetes.io/hostname=master-0 machineconfiguration.openshift.io/currentConfig: rendered-master-c53e221d9d24e1c8bb6ee89dd3d8ad7b machineconfiguration.openshift.io/desiredConfig: rendered-master-c53e221d9d24e1c8bb6ee89dd3d8ad7b machineconfiguration.openshift.io/reason: machineconfiguration.openshift.io/state: Done",
"oc get machineconfig <config_name> -o yaml",
"oc get Network.config/cluster -o jsonpath='{.status.networkType}{\"\\n\"}'",
"oc get nodes",
"oc get pod -n openshift-machine-config-operator",
"NAME READY STATUS RESTARTS AGE machine-config-controller-75f756f89d-sjp8b 1/1 Running 0 37m machine-config-daemon-5cf4b 2/2 Running 0 43h machine-config-daemon-7wzcd 2/2 Running 0 43h machine-config-daemon-fc946 2/2 Running 0 43h machine-config-daemon-g2v28 2/2 Running 0 43h machine-config-daemon-gcl4f 2/2 Running 0 43h machine-config-daemon-l5tnv 2/2 Running 0 43h machine-config-operator-79d9c55d5-hth92 1/1 Running 0 37m machine-config-server-bsc8h 1/1 Running 0 43h machine-config-server-hklrm 1/1 Running 0 43h machine-config-server-k9rtx 1/1 Running 0 43h",
"oc logs <pod> -n openshift-machine-config-operator",
"oc get pods --all-namespaces -o wide --sort-by='{.spec.nodeName}'",
"oc patch Network.operator.openshift.io cluster --type='merge' --patch '{ \"spec\": { \"migration\": null } }'",
"oc patch Network.operator.openshift.io cluster --type='merge' --patch '{ \"spec\": { \"defaultNetwork\": { \"ovnKubernetesConfig\":null } } }'",
"oc delete namespace openshift-ovn-kubernetes",
"oc get Network.config.openshift.io cluster -o yaml > cluster-openshift-sdn.yaml",
"#!/bin/bash if [ -n \"USDOVN_SDN_MIGRATION_TIMEOUT\" ] && [ \"USDOVN_SDN_MIGRATION_TIMEOUT\" = \"0s\" ]; then unset OVN_SDN_MIGRATION_TIMEOUT fi #loops the timeout command of the script to repeatedly check the cluster Operators until all are available. co_timeout=USD{OVN_SDN_MIGRATION_TIMEOUT:-1200s} timeout \"USDco_timeout\" bash <<EOT until oc wait co --all --for='condition=AVAILABLE=True' --timeout=10s && oc wait co --all --for='condition=PROGRESSING=False' --timeout=10s && oc wait co --all --for='condition=DEGRADED=False' --timeout=10s; do sleep 10 echo \"Some ClusterOperators Degraded=False,Progressing=True,or Available=False\"; done EOT",
"oc patch Network.operator.openshift.io cluster --type='merge' --patch '{\"spec\":{\"migration\":null}}'",
"oc get nncp",
"NAME STATUS REASON bondmaster0 Available SuccessfullyConfigured",
"oc delete nncp <nncp_manifest_filename>",
"oc patch Network.operator.openshift.io cluster --type='merge' --patch '{ \"spec\": { \"migration\": { \"networkType\": \"OVNKubernetes\" } } }'",
"oc get mcp",
"oc get co",
"oc patch Network.operator.openshift.io cluster --type='merge' --patch '{ \"spec\": { \"migration\": { \"networkType\": \"OVNKubernetes\", \"features\": { \"egressIP\": <bool>, \"egressFirewall\": <bool>, \"multicast\": <bool> } } } }'",
"oc patch Network.operator.openshift.io cluster --type=merge --patch '{ \"spec\":{ \"defaultNetwork\":{ \"ovnKubernetesConfig\":{ \"mtu\":<mtu>, \"genevePort\":<port>, \"v4InternalSubnet\":\"<ipv4_subnet>\" }}}}'",
"oc patch Network.operator.openshift.io cluster --type=merge --patch '{ \"spec\":{ \"defaultNetwork\":{ \"ovnKubernetesConfig\":{ \"mtu\":1200 }}}}'",
"oc get mcp",
"oc describe node | egrep \"hostname|machineconfig\"",
"kubernetes.io/hostname=master-0 machineconfiguration.openshift.io/currentConfig: rendered-master-c53e221d9d24e1c8bb6ee89dd3d8ad7b machineconfiguration.openshift.io/desiredConfig: rendered-master-c53e221d9d24e1c8bb6ee89dd3d8ad7b machineconfiguration.openshift.io/reason: machineconfiguration.openshift.io/state: Done",
"oc get machineconfig <config_name> -o yaml | grep ExecStart",
"ExecStart=/usr/local/bin/configure-ovs.sh OVNKubernetes",
"oc get pod -n openshift-machine-config-operator",
"NAME READY STATUS RESTARTS AGE machine-config-controller-75f756f89d-sjp8b 1/1 Running 0 37m machine-config-daemon-5cf4b 2/2 Running 0 43h machine-config-daemon-7wzcd 2/2 Running 0 43h machine-config-daemon-fc946 2/2 Running 0 43h machine-config-daemon-g2v28 2/2 Running 0 43h machine-config-daemon-gcl4f 2/2 Running 0 43h machine-config-daemon-l5tnv 2/2 Running 0 43h machine-config-operator-79d9c55d5-hth92 1/1 Running 0 37m machine-config-server-bsc8h 1/1 Running 0 43h machine-config-server-hklrm 1/1 Running 0 43h machine-config-server-k9rtx 1/1 Running 0 43h",
"oc logs <pod> -n openshift-machine-config-operator",
"oc patch Network.config.openshift.io cluster --type='merge' --patch '{ \"spec\": { \"networkType\": \"OVNKubernetes\" } }'",
"oc patch Network.config.openshift.io cluster --type='merge' --patch '{ \"spec\": { \"clusterNetwork\": [ { \"cidr\": \"<cidr>\", \"hostPrefix\": <prefix> } ], \"networkType\": \"OVNKubernetes\" } }'",
"oc -n openshift-multus rollout status daemonset/multus",
"Waiting for daemon set \"multus\" rollout to finish: 1 out of 6 new pods have been updated Waiting for daemon set \"multus\" rollout to finish: 5 of 6 updated pods are available daemon set \"multus\" successfully rolled out",
"#!/bin/bash readarray -t POD_NODES <<< \"USD(oc get pod -n openshift-machine-config-operator -o wide| grep daemon|awk '{print USD1\" \"USD7}')\" for i in \"USD{POD_NODES[@]}\" do read -r POD NODE <<< \"USDi\" until oc rsh -n openshift-machine-config-operator \"USDPOD\" chroot /rootfs shutdown -r +1 do echo \"cannot reboot node USDNODE, retry\" && sleep 3 done done",
"#!/bin/bash for ip in USD(oc get nodes -o jsonpath='{.items[*].status.addresses[?(@.type==\"InternalIP\")].address}') do echo \"reboot node USDip\" ssh -o StrictHostKeyChecking=no core@USDip sudo shutdown -r -t 3 done",
"oc get network.config/cluster -o jsonpath='{.status.networkType}{\"\\n\"}'",
"oc get nodes",
"oc get pods --all-namespaces -o wide --sort-by='{.spec.nodeName}'",
"oc get co",
"oc patch Network.operator.openshift.io cluster --type='merge' --patch '{ \"spec\": { \"migration\": null } }'",
"oc patch Network.operator.openshift.io cluster --type='merge' --patch '{ \"spec\": { \"defaultNetwork\": { \"openshiftSDNConfig\": null } } }'",
"oc delete namespace openshift-sdn",
"IP capacity = public cloud default capacity - sum(current IP assignments)",
"cloud.network.openshift.io/egress-ipconfig: [ { \"interface\":\"eni-078d267045138e436\", \"ifaddr\":{\"ipv4\":\"10.0.128.0/18\"}, \"capacity\":{\"ipv4\":14,\"ipv6\":15} } ]",
"cloud.network.openshift.io/egress-ipconfig: [ { \"interface\":\"nic0\", \"ifaddr\":{\"ipv4\":\"10.0.128.0/18\"}, \"capacity\":{\"ip\":14} } ]",
"oc patch netnamespace <project_name> --type=merge -p '{ \"egressIPs\": [ \"<ip_address>\" ] }'",
"oc patch netnamespace project1 --type=merge -p '{\"egressIPs\": [\"192.168.1.100\"]}' oc patch netnamespace project2 --type=merge -p '{\"egressIPs\": [\"192.168.1.101\"]}'",
"oc patch hostsubnet <node_name> --type=merge -p '{ \"egressCIDRs\": [ \"<ip_address_range>\", \"<ip_address_range>\" ] }'",
"oc patch hostsubnet node1 --type=merge -p '{\"egressCIDRs\": [\"192.168.1.0/24\"]}' oc patch hostsubnet node2 --type=merge -p '{\"egressCIDRs\": [\"192.168.1.0/24\"]}'",
"oc patch netnamespace <project_name> --type=merge -p '{ \"egressIPs\": [ \"<ip_address>\" ] }'",
"oc patch netnamespace project1 --type=merge -p '{\"egressIPs\": [\"192.168.1.100\",\"192.168.1.101\"]}'",
"oc patch hostsubnet <node_name> --type=merge -p '{ \"egressIPs\": [ \"<ip_address>\", \"<ip_address>\" ] }'",
"oc patch hostsubnet node1 --type=merge -p '{\"egressIPs\": [\"192.168.1.100\", \"192.168.1.101\", \"192.168.1.102\"]}'",
"apiVersion: network.openshift.io/v1 kind: EgressNetworkPolicy metadata: name: default namespace: <namespace> 1 spec: egress: - to: cidrSelector: <api_server_address_range> 2 type: Allow - to: cidrSelector: 0.0.0.0/0 3 type: Deny",
"apiVersion: network.openshift.io/v1 kind: EgressNetworkPolicy metadata: name: <name> 1 spec: egress: 2",
"egress: - type: <type> 1 to: 2 cidrSelector: <cidr> 3 dnsName: <dns_name> 4",
"apiVersion: network.openshift.io/v1 kind: EgressNetworkPolicy metadata: name: default spec: egress: 1 - type: Allow to: cidrSelector: 1.2.3.0/24 - type: Allow to: dnsName: www.example.com - type: Deny to: cidrSelector: 0.0.0.0/0",
"oc create -f <policy_name>.yaml -n <project>",
"oc create -f default.yaml -n project1",
"egressnetworkpolicy.network.openshift.io/v1 created",
"oc get egressnetworkpolicy --all-namespaces",
"oc describe egressnetworkpolicy <policy_name>",
"Name: default Namespace: project1 Created: 20 minutes ago Labels: <none> Annotations: <none> Rule: Allow to 1.2.3.0/24 Rule: Allow to www.example.com Rule: Deny to 0.0.0.0/0",
"oc get -n <project> egressnetworkpolicy",
"oc get -n <project> egressnetworkpolicy <name> -o yaml > <filename>.yaml",
"oc replace -f <filename>.yaml",
"oc get -n <project> egressnetworkpolicy",
"oc delete -n <project> egressnetworkpolicy <name>",
"curl <router_service_IP> <port>",
"openstack port set --allowed-address ip_address=<ip_address>,mac_address=<mac_address> <neutron_port_uuid>",
"apiVersion: apps/v1 kind: Deployment metadata: name: egress-demo-controller spec: replicas: 1 1 selector: matchLabels: name: egress-router template: metadata: name: egress-router labels: name: egress-router annotations: pod.network.openshift.io/assign-macvlan: \"true\" spec: 2 initContainers: containers:",
"apiVersion: v1 kind: Pod metadata: name: egress-1 labels: name: egress-1 annotations: pod.network.openshift.io/assign-macvlan: \"true\" 1 spec: initContainers: - name: egress-router image: registry.redhat.io/openshift4/ose-egress-router securityContext: privileged: true env: - name: EGRESS_SOURCE 2 value: <egress_router> - name: EGRESS_GATEWAY 3 value: <egress_gateway> - name: EGRESS_DESTINATION 4 value: <egress_destination> - name: EGRESS_ROUTER_MODE value: init containers: - name: egress-router-wait image: registry.redhat.io/openshift4/ose-pod",
"apiVersion: v1 kind: Pod metadata: name: egress-multi labels: name: egress-multi annotations: pod.network.openshift.io/assign-macvlan: \"true\" spec: initContainers: - name: egress-router image: registry.redhat.io/openshift4/ose-egress-router securityContext: privileged: true env: - name: EGRESS_SOURCE value: 192.168.12.99/24 - name: EGRESS_GATEWAY value: 192.168.12.1 - name: EGRESS_DESTINATION value: | 80 tcp 203.0.113.25 8080 tcp 203.0.113.26 80 8443 tcp 203.0.113.26 443 203.0.113.27 - name: EGRESS_ROUTER_MODE value: init containers: - name: egress-router-wait image: registry.redhat.io/openshift4/ose-pod",
"80 tcp 203.0.113.25 8080 tcp 203.0.113.26 80 8443 tcp 203.0.113.26 443 203.0.113.27",
"curl <router_service_IP> <port>",
"apiVersion: v1 kind: Service metadata: name: egress-1 spec: ports: - name: http port: 80 - name: https port: 443 type: ClusterIP selector: name: egress-1",
"apiVersion: v1 kind: Pod metadata: name: egress-1 labels: name: egress-1 annotations: pod.network.openshift.io/assign-macvlan: \"true\" 1 spec: initContainers: - name: egress-router image: registry.redhat.io/openshift4/ose-egress-router securityContext: privileged: true env: - name: EGRESS_SOURCE 2 value: <egress-router> - name: EGRESS_GATEWAY 3 value: <egress-gateway> - name: EGRESS_ROUTER_MODE value: http-proxy containers: - name: egress-router-pod image: registry.redhat.io/openshift4/ose-egress-http-proxy env: - name: EGRESS_HTTP_PROXY_DESTINATION 4 value: |-",
"!*.example.com !192.168.1.0/24 192.168.2.1 *",
"apiVersion: v1 kind: Service metadata: name: egress-1 spec: ports: - name: http-proxy port: 8080 1 type: ClusterIP selector: name: egress-1",
"apiVersion: v1 kind: Pod metadata: name: app-1 labels: name: app-1 spec: containers: env: - name: http_proxy value: http://egress-1:8080/ 1 - name: https_proxy value: http://egress-1:8080/",
"apiVersion: v1 kind: Pod metadata: name: egress-1 labels: name: egress-1 annotations: pod.network.openshift.io/assign-macvlan: \"true\" 1 spec: initContainers: - name: egress-router image: registry.redhat.io/openshift4/ose-egress-router securityContext: privileged: true env: - name: EGRESS_SOURCE 2 value: <egress-router> - name: EGRESS_GATEWAY 3 value: <egress-gateway> - name: EGRESS_ROUTER_MODE value: dns-proxy containers: - name: egress-router-pod image: registry.redhat.io/openshift4/ose-egress-dns-proxy securityContext: privileged: true env: - name: EGRESS_DNS_PROXY_DESTINATION 4 value: |- - name: EGRESS_DNS_PROXY_DEBUG 5 value: \"1\"",
"80 172.16.12.11 100 example.com",
"8080 192.168.60.252 80 8443 web.example.com 443",
"apiVersion: v1 kind: Service metadata: name: egress-dns-svc spec: ports: type: ClusterIP selector: name: egress-dns-proxy",
"apiVersion: v1 kind: Service metadata: name: egress-dns-svc spec: ports: - name: con1 protocol: TCP port: 80 targetPort: 80 - name: con2 protocol: TCP port: 100 targetPort: 100 type: ClusterIP selector: name: egress-dns-proxy",
"oc create -f egress-router-service.yaml",
"Egress routes for Project \"Test\", version 3 80 tcp 203.0.113.25 8080 tcp 203.0.113.26 80 8443 tcp 203.0.113.26 443 Fallback 203.0.113.27",
"oc delete configmap egress-routes --ignore-not-found",
"oc create configmap egress-routes --from-file=destination=my-egress-destination.txt",
"apiVersion: v1 kind: ConfigMap metadata: name: egress-routes data: destination: | # Egress routes for Project \"Test\", version 3 80 tcp 203.0.113.25 8080 tcp 203.0.113.26 80 8443 tcp 203.0.113.26 443 # Fallback 203.0.113.27",
"env: - name: EGRESS_DESTINATION valueFrom: configMapKeyRef: name: egress-routes key: destination",
"oc annotate netnamespace <namespace> netnamespace.network.openshift.io/multicast-enabled=true",
"oc project <project>",
"cat <<EOF| oc create -f - apiVersion: v1 kind: Pod metadata: name: mlistener labels: app: multicast-verify spec: containers: - name: mlistener image: registry.access.redhat.com/ubi9 command: [\"/bin/sh\", \"-c\"] args: [\"dnf -y install socat hostname && sleep inf\"] ports: - containerPort: 30102 name: mlistener protocol: UDP EOF",
"cat <<EOF| oc create -f - apiVersion: v1 kind: Pod metadata: name: msender labels: app: multicast-verify spec: containers: - name: msender image: registry.access.redhat.com/ubi9 command: [\"/bin/sh\", \"-c\"] args: [\"dnf -y install socat && sleep inf\"] EOF",
"POD_IP=USD(oc get pods mlistener -o jsonpath='{.status.podIP}')",
"oc exec mlistener -i -t -- socat UDP4-RECVFROM:30102,ip-add-membership=224.1.0.1:USDPOD_IP,fork EXEC:hostname",
"CIDR=USD(oc get Network.config.openshift.io cluster -o jsonpath='{.status.clusterNetwork[0].cidr}')",
"oc exec msender -i -t -- /bin/bash -c \"echo | socat STDIO UDP4-DATAGRAM:224.1.0.1:30102,range=USDCIDR,ip-multicast-ttl=64\"",
"mlistener",
"oc annotate netnamespace <namespace> \\ 1 netnamespace.network.openshift.io/multicast-enabled-",
"oc adm pod-network join-projects --to=<project1> <project2> <project3>",
"oc get netnamespaces",
"oc adm pod-network isolate-projects <project1> <project2>",
"oc adm pod-network make-projects-global <project1> <project2>",
"oc edit network.operator.openshift.io cluster",
"apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec: kubeProxyConfig: iptablesSyncPeriod: 30s proxyArguments: iptables-min-sync-period: [\"30s\"]",
"oc get networks.operator.openshift.io -o yaml",
"apiVersion: v1 items: - apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 defaultNetwork: type: OpenShiftSDN kubeProxyConfig: iptablesSyncPeriod: 30s proxyArguments: iptables-min-sync-period: - 30s serviceNetwork: - 172.30.0.0/16 status: {} kind: List",
"oc get clusteroperator network",
"NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE network 4.1.0-0.9 True False False 1m"
] |
https://docs.redhat.com/en/documentation/openshift_container_platform/4.13/html/networking/openshift-sdn-network-plugin
|
function::proc_mem_data
|
function::proc_mem_data Name function::proc_mem_data - Program data size (data + stack) in pages Synopsis Arguments None Description Returns the current process data size (data + stack) in pages, or zero when there is no current process or the number of pages couldn't be retrieved.
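As a quick, hedged illustration of how this tapset function can be called (the probe point and script are assumptions, not part of the reference entry), the following SystemTap one-liner prints the data size of the process that triggers a brk system call and then exits:

stap -e 'probe syscall.brk { printf("%s: data + stack = %d pages\n", execname(), proc_mem_data()); exit() }'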
|
[
"function proc_mem_data:long()"
] |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/systemtap_tapset_reference/api-proc-mem-data
|
Chapter 3. Migrating from Internal Satellite Databases to External Databases
|
Chapter 3. Migrating from Internal Satellite Databases to External Databases When you install Red Hat Satellite, the satellite-installer command installs PostgreSQL databases on the same server as Satellite. If you are using the default internal databases but want to start using external databases to help with the server load, you can migrate your internal databases to external databases. To confirm whether your Satellite Server has internal or external databases, you can query the status of your databases: For PostgreSQL, enter the following command: Red Hat does not provide support or tools for external database maintenance. This includes backups, upgrades, and database tuning. You must have your own database administrator to support and maintain external databases. To migrate from the default internal databases to external databases, you must complete the following procedures: Section 3.2, "Preparing a Host for External Databases" . Prepare a Red Hat Enterprise Linux 8 or Red Hat Enterprise Linux 7 server to host the external databases. Section 3.3, "Installing PostgreSQL" . Prepare PostgreSQL with databases for Satellite, Pulp and Candlepin with dedicated users owning them. Section 3.4, "Migrating to External Databases" . Edit the parameters of satellite-installer to point to the new databases, and run satellite-installer . 3.1. PostgreSQL as an External Database Considerations Foreman, Katello, and Candlepin use the PostgreSQL database. If you want to use PostgreSQL as an external database, the following information can help you decide if this option is right for your Satellite configuration. Satellite supports PostgreSQL version 12. Advantages of External PostgreSQL: Increase in free memory and free CPU on Satellite Flexibility to set shared_buffers on the PostgreSQL database to a high number without the risk of interfering with other services on Satellite Flexibility to tune the PostgreSQL server's system without adversely affecting Satellite operations Disadvantages of External PostgreSQL Increase in deployment complexity that can make troubleshooting more difficult The external PostgreSQL server is an additional system to patch and maintain If either Satellite or the PostgreSQL database server suffers a hardware or storage failure, Satellite is not operational If there is latency between the Satellite server and database server, performance can suffer If you suspect that the PostgreSQL database on your Satellite is causing performance problems, use the information in Satellite 6: How to enable postgres query logging to detect slow running queries to determine if you have slow queries. Queries that take longer than one second are typically caused by performance issues with large installations, and moving to an external database might not help. If you have slow queries, contact Red Hat Support. 3.2. Preparing a Host for External Databases Install a freshly provisioned system with the latest Red Hat Enterprise Linux 8 or Red Hat Enterprise Linux 7 server to host the external databases. Subscriptions for Red Hat Software Collections and Red Hat Enterprise Linux do not provide the correct service level agreement for using Satellite with external databases. You must also attach a Satellite subscription to the base operating system that you want to use for the external databases. Prerequisites The prepared host must meet Satellite's Storage Requirements . 
Procedure Use the instructions in Attaching the Satellite Infrastructure Subscription to attach a Satellite subscription to your server. Disable all repositories and enable only the following repositories: For Red Hat Enterprise Linux 7: For Red Hat Enterprise Linux 8: On Red Hat Enterprise Linux 8, enable the following modules: Note Enablement of the module satellite:el8 warns about a conflict with postgresql:10 and ruby:2.5 as these modules are set to the default module versions on Red Hat Enterprise Linux 8. The module satellite:el8 has a dependency on the modules postgresql:12 and ruby:2.7 that will be enabled with the satellite:el8 module. These warnings do not cause the installation process to fail and can be safely ignored. For more information about modules and lifecycle streams on Red Hat Enterprise Linux 8, see Red Hat Enterprise Linux Application Streams Life Cycle . 3.3. Installing PostgreSQL You can install only the same version of PostgreSQL that is installed with the satellite-installer tool during an internal database installation. You can install PostgreSQL using Red Hat Enterprise Linux 8 or Red Hat Enterprise Linux Server 7 repositories. Satellite supports PostgreSQL version 12. Installing PostgreSQL on Red Hat Enterprise Linux 8 Installing PostgreSQL on Red Hat Enterprise Linux 7 3.3.1. Installing PostgreSQL on Red Hat Enterprise Linux 8 Procedure To install PostgreSQL, enter the following command: To initialize PostgreSQL, enter the following command: Edit the /var/lib/pgsql/data/postgresql.conf file: Remove the # and edit the line so that PostgreSQL listens for inbound connections: Edit the /var/lib/pgsql/data/pg_hba.conf file: Add the following line to the file: To start and enable the PostgreSQL service, enter the following commands: Open the postgresql port on the external PostgreSQL server: Switch to the postgres user and start the PostgreSQL client: Create three databases and dedicated roles: one for Satellite, one for Candlepin, and one for Pulp: Exit the postgres user: From Satellite Server, test that you can access the database. If the connection succeeds, the commands return 1 . 3.3.2. Installing PostgreSQL on Red Hat Enterprise Linux 7 Procedure To install PostgreSQL, enter the following command: To initialize PostgreSQL, enter the following command: Edit the /var/opt/rh/rh-postgresql12/lib/pgsql/data/postgresql.conf file: Remove the # and edit the line so that PostgreSQL listens for inbound connections: Edit the /var/opt/rh/rh-postgresql12/lib/pgsql/data/pg_hba.conf file: Add the following line to the file: To start and enable the PostgreSQL service, enter the following commands: Open the postgresql port on the external PostgreSQL server: Switch to the postgres user and start the PostgreSQL client: Create three databases and dedicated roles: one for Satellite, one for Candlepin, and one for Pulp: Exit the postgres user: From Satellite Server, test that you can access the database. If the connection succeeds, the commands return 1 . 3.4. Migrating to External Databases Back up and transfer existing data, then use the satellite-installer command to configure Satellite to connect to an external PostgreSQL database server. Prerequisites You have installed and configured a PostgreSQL server on a Red Hat Enterprise Linux server. Procedure On Satellite Server, stop Satellite services: Start the PostgreSQL services: Back up the internal databases: Transfer the data to the new external databases: Use the satellite-installer command to update Satellite to point to the new databases:
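The considerations in Section 3.1 mention detecting slow queries before deciding whether an external database will help. One hedged way to do this on the PostgreSQL host (shown here as a sketch using the RHEL 8 default data directory, not as a Red Hat-provided procedure) is to log any statement that runs longer than one second and then review the PostgreSQL log:

# Log statements that take longer than 1000 ms (1 second)
echo "log_min_duration_statement = 1000" >> /var/lib/pgsql/data/postgresql.conf
systemctl reload postgresql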
|
[
"satellite-maintain service status --only postgresql",
"subscription-manager repos --disable '*' subscription-manager repos --enable=rhel-server-rhscl-7-rpms --enable=rhel-7-server-rpms --enable=rhel-7-server-satellite-6.11-rpms",
"subscription-manager repos --disable '*' subscription-manager repos --enable=satellite-6.11-for-rhel-8-x86_64-rpms --enable=rhel-8-for-x86_64-baseos-rpms --enable=rhel-8-for-x86_64-appstream-rpms",
"dnf module enable satellite:el8",
"dnf install postgresql-server postgresql-evr",
"postgresql-setup initdb",
"vi /var/lib/pgsql/data/postgresql.conf",
"listen_addresses = '*'",
"vi /var/lib/pgsql/data/pg_hba.conf",
"host all all Satellite_ip /24 md5",
"systemctl start postgresql systemctl enable postgresql",
"firewall-cmd --add-service=postgresql firewall-cmd --runtime-to-permanent",
"su - postgres -c psql",
"CREATE USER \"foreman\" WITH PASSWORD ' Foreman_Password '; CREATE USER \"candlepin\" WITH PASSWORD ' Candlepin_Password '; CREATE USER \"pulp\" WITH PASSWORD ' Pulpcore_Password '; CREATE DATABASE foreman OWNER foreman; CREATE DATABASE candlepin OWNER candlepin; CREATE DATABASE pulpcore OWNER pulp;",
"\\q",
"PGPASSWORD=' Foreman_Password ' psql -h postgres.example.com -p 5432 -U foreman -d foreman -c \"SELECT 1 as ping\" PGPASSWORD=' Candlepin_Password ' psql -h postgres.example.com -p 5432 -U candlepin -d candlepin -c \"SELECT 1 as ping\" PGPASSWORD=' Pulpcore_Password ' psql -h postgres.example.com -p 5432 -U pulp -d pulpcore -c \"SELECT 1 as ping\"",
"yum install rh-postgresql12-postgresql-server rh-postgresql12-syspaths rh-postgresql12-postgresql-evr",
"postgresql-setup initdb",
"vi /var/opt/rh/rh-postgresql12/lib/pgsql/data/postgresql.conf",
"listen_addresses = '*'",
"vi /var/opt/rh/rh-postgresql12/lib/pgsql/data/pg_hba.conf",
"host all all Satellite_ip /24 md5",
"systemctl start postgresql systemctl enable postgresql",
"firewall-cmd --add-service=postgresql firewall-cmd --runtime-to-permanent",
"su - postgres -c psql",
"CREATE USER \"foreman\" WITH PASSWORD ' Foreman_Password '; CREATE USER \"candlepin\" WITH PASSWORD ' Candlepin_Password '; CREATE USER \"pulp\" WITH PASSWORD ' Pulpcore_Password '; CREATE DATABASE foreman OWNER foreman; CREATE DATABASE candlepin OWNER candlepin; CREATE DATABASE pulpcore OWNER pulp;",
"\\q",
"PGPASSWORD=' Foreman_Password ' psql -h postgres.example.com -p 5432 -U foreman -d foreman -c \"SELECT 1 as ping\" PGPASSWORD=' Candlepin_Password ' psql -h postgres.example.com -p 5432 -U candlepin -d candlepin -c \"SELECT 1 as ping\" PGPASSWORD=' Pulpcore_Password ' psql -h postgres.example.com -p 5432 -U pulp -d pulpcore -c \"SELECT 1 as ping\"",
"satellite-maintain service stop",
"systemctl start postgresql",
"satellite-maintain backup online --skip-pulp-content --preserve-directory -y /var/migration_backup",
"PGPASSWORD=' Foreman_Password ' pg_restore -h postgres.example.com -U foreman -d foreman < /var/migration_backup/foreman.dump PGPASSWORD=' Candlepin_Password ' pg_restore -h postgres.example.com -U candlepin -d candlepin < /var/migration_backup/candlepin.dump PGPASSWORD=' Pulpcore_Password ' pg_restore -h postgres.example.com -U pulp -d pulpcore < /var/migration_backup/pulpcore.dump",
"satellite-installer --scenario satellite --foreman-db-host postgres.example.com --foreman-db-password Foreman_Password --foreman-db-database foreman --foreman-db-manage false --foreman-db-username foreman --katello-candlepin-db-host postgres.example.com --katello-candlepin-db-name candlepin --katello-candlepin-db-password Candlepin_Password --katello-candlepin-manage-db false --katello-candlepin-db-user candlepin --foreman-proxy-content-pulpcore-manage-postgresql false --foreman-proxy-content-pulpcore-postgresql-host postgres.example.com --foreman-proxy-content-pulpcore-postgresql-db-name pulpcore --foreman-proxy-content-pulpcore-postgresql-password Pulpcore_Password --foreman-proxy-content-pulpcore-postgresql-user pulp"
] |
https://docs.redhat.com/en/documentation/red_hat_satellite/6.11/html/administering_red_hat_satellite/Migrating_from_Internal_Databases_to_External_Databases_admin
|
22.3. JBoss Operations Network Server Installation
|
22.3. JBoss Operations Network Server Installation The core of JBoss Operations Network is the server, which communicates with agents, maintains the inventory, manages resource settings, interacts with content providers, and provides a central management UI. Note For more detailed information about configuring JBoss Operations Network, see the JBoss Operations Network Installation Guide .
| null |
https://docs.redhat.com/en/documentation/red_hat_data_grid/6.6/html/administration_and_configuration_guide/jboss_operations_network_server_installation
|
Chapter 5. Managing workflow persistence
|
Chapter 5. Managing workflow persistence You can configure a SonataFlow instance to use persistence and store workflow context in a relational database. By design, Kubernetes pods are stateless. This behavior can pose challenges for workloads that need to maintain the application state across pod restarts. In the case of OpenShift Serverless Logic, the workflow context is lost when the pod restarts by default. To ensure workflow recovery in such scenarios, you must configure workflow runtime persistence. Use the SonataFlowPlatform custom resource (CR) or the SonataFlow CR to provide this configuration. The scope of the configuration varies depending on which resource you use. 5.1. Configuring persistence using the SonataFlowPlatform CR The SonataFlowPlatform custom resource (CR) enables persistence configuration at the namespace level. This approach applies the persistence settings automatically to all workflows deployed in the namespace. It simplifies resource configuration, especially when multiple workflows in the namespace belong to the same application. While this configuration is applied by default, individual workflows in the namespace can override it using the SonataFlow CR. The OpenShift Serverless Logic Operator also uses this configuration to set up persistence for supporting services. Note The persistence configurations are applied only at the time of workflow deployment. Changes to the SonataFlowPlatform CR do not affect workflows that are already deployed. Procedure Define the SonataFlowPlatform CR. Specify the persistence settings in the persistence field under the SonataFlowPlatform CR spec. apiVersion: sonataflow.org/v1alpha08 kind: SonataFlowPlatform metadata: name: sonataflow-platform-example namespace: example-namespace spec: persistence: postgresql: serviceRef: name: postgres-example 1 namespace: postgres-example-namespace 2 databaseName: example-database 3 port: 1234 4 secretRef: name: postgres-secrets-example 5 userKey: POSTGRESQL_USER 6 passwordKey: POSTGRESQL_PASSWORD 7 1 Name of the Kubernetes Service connecting to the PostgreSQL database. 2 Optional: Namespace of the PostgreSQL Service. Defaults to the namespace of the SonataFlowPlatform . 3 Name of the PostgreSQL database for storing workflow data. 4 Optional: Port number to connect to the PostgreSQL service. Defaults to 5432 . 5 Name of the Kubernetes Secret containing database credentials. 6 Key in the Secret object that contains the database username. 7 Key in the Secret object that contains the database password. View the generated environment variables for the workflow. The following example shows the generated environment variables for a workflow named example-workflow deployed with the earlier SonataFlowPlatform configuration. These configurations specifically relate to persistence and are managed by the OpenShift Serverless Logic Operator. You cannot modify these settings once you have applied them. Note When you use the SonataFlowPlatform persistence, every workflow is configured to use a PostgreSQL schema name equal to the workflow name. 
env: - name: QUARKUS_DATASOURCE_USERNAME valueFrom: secretKeyRef: name: postgres-secrets-example key: POSTGRESQL_USER - name: QUARKUS_DATASOURCE_PASSWORD valueFrom: secretKeyRef: name: postgres-secrets-example key: POSTGRESQL_PASSWORD - name: QUARKUS_DATASOURCE_DB_KIND value: postgresql - name: QUARKUS_DATASOURCE_JDBC_URL value: >- jdbc:postgresql://postgres-example.postgres-example-namespace:1234/example-database?currentSchema=example-workflow - name: KOGITO_PERSISTENCE_TYPE value: jdbc When this persistence configuration is in place, the OpenShift Serverless Logic Operator configures every workflow deployed in this namespace using the preview or gitops profile, to connect with the PostgreSQL database by injecting relevant JDBC connection parameters as environment variables. Note PostgreSQL is currently the only supported database for persistence. For SonataFlow CR deployments using the preview profile, the OpenShift Serverless Logic build system automatically includes specific Quarkus extensions required for enabling persistence. This ensures compatibility with persistence mechanisms, streamlining the workflow deployment process. 5.2. Configuring persistence using the SonataFlow CR The SonataFlow custom resource (CR) enables workflow-specific persistence configuration. You can use this configuration independently, even if SonataFlowPlatform persistence is already set up in the current namespace. Procedure Configure persistence by using the persistence field in the SonataFlow CR specification as shown in the following example: apiVersion: sonataflow.org/v1alpha08 kind: SonataFlow metadata: name: example-workflow annotations: sonataflow.org/description: Example Workflow sonataflow.org/version: 0.0.1 spec: persistence: postgresql: serviceRef: name: postgres-example 1 namespace: postgres-example-namespace 2 databaseName: example-database 3 databaseSchema: example-schema 4 port: 1234 5 secretRef: name: postgres-secrets-example 6 userKey: POSTGRESQL_USER 7 passwordKey: POSTGRESQL_PASSWORD 8 flow: 1 Name of the Kubernetes Service that connects to the PostgreSQL database server. 2 Optional: Namespace containing the PostgreSQL Service. Defaults to the workflow namespace. 3 Name of the PostgreSQL database where workflow data is stored. 4 Optional: Name of the database schema for workflow data. Defaults to the workflow name. 5 Optional: Port to connect to the PostgreSQL Service. Defaults to 5432 . 6 Name of the Kubernetes Secret containing database credentials. 7 Key in the Secret object containing the database username. 8 Key in the Secret object containing the database password. This configuration informs the OpenShift Serverless Logic Operator that the workflow must connect to the specified PostgreSQL database server when deployed. The OpenShift Serverless Logic Operator adds the relevant JDBC connection parameters as environment variables to the workflow container. Note PostgreSQL is currently the only supported database for persistence. For SonataFlow CR deployments using the preview profile, the OpenShift Serverless Logic build system includes the required Quarkus extensions to enable persistence automatically. 5.3. Persistence configuration precedence rules You can use SonataFlow custom resource (CR) persistence independently or alongside SonataFlowPlatform CR persistence. 
If a SonataFlowPlatform CR persistence configuration exists in the current namespace, the following rules determine which persistence configuration applies: If the SonataFlow CR includes a persistence configuration, that configuration takes precedence and applies to the workflow. If the SonataFlow CR does not include a persistence configuration and the spec.persistence field is absent, the OpenShift Serverless Logic Operator uses the persistence configuration from the current SonataFlowPlatform if any. To disable persistence for the workflow, explicitly set spec.persistence: {} in the SonataFlow CR. This configuration ensures the workflow does not inherit persistence settings from the SonataFlowPlatform CR. 5.4. Profile specific persistence requirements The persistence configurations provided for both SonataFlowPlatform custom resource (CR) and SonataFlow CR apply equally to the preview and gitops profiles. However, you must avoid using these configurations with the dev profile, as this profile ignores them entirely. The primary difference between the preview and gitops profiles lies in the build process. When using the gitops profile, ensure that the following Quarkus extensions are included in the workflow image during the build process. groupId artifactId version io.quarkus quarkus-agroal 3.8.6.redhat-00004 io.quarkus quarkus-jdbc-postgresql 3.8.6.redhat-00004 org.kie kie-addons-quarkus-persistence-jdbc 9.102.0.redhat-00005 If you are using the registry.redhat.io/openshift-serverless-1/logic-swf-builder-rhel8:1.35.0 to generate your images, you can pass the following build argument to include these extensions: USD QUARKUS_EXTENSIONS=io.quarkus:quarkus-agroal:3.8.6.redhat-00004,io.quarkus:quarkus-jdbc-postgresql:3.8.6.redhat-00004,org.kie:kie-addons-quarkus-persistence-jdbc:9.102.0.redhat-00005 5.5. Database schema initialization When you are using SonataFlow with PostgreSQL persistence, you can initialize the database schema either by enabling Flyway or by manually applying database schema updates using Data Definition Language (DDL) scripts. Flyway is managed by the kie-addons-quarkus-flyway runtime module and it is disabled by default. To enable Flyway, you must configure it using one of the following methods: 5.5.1. Flyway configuration in the workflow ConfigMap To enable Flyway in the workflow ConfigMap , you can add the following property: Example of enabling Flyway in the workflow ConfigMap apiVersion: v1 kind: ConfigMap metadata: labels: app: example-workflow name: example-workflow-props data: application.properties: | kie.flyway.enabled = true 5.5.2. Flyway configuration using environment variables in the workflow container You can enable Flyway by adding an environment variable to the spec.podTemplate.container field in the SonataFlow CR by using the following example: Example of enabling Flyway by using the workflow container environment variable apiVersion: sonataflow.org/v1alpha08 kind: SonataFlow metadata: name: example-workflow annotations: sonataflow.org/description: Example Workflow sonataflow.org/version: 0.0.1 spec: podTemplate: container: env: - name: KIE_FLYWAY_ENABLED value: 'true' flow: ... 5.5.3. Flyway configuration using SonataFlowPlatform properties To apply a common Flyway configuration to all workflows within a namespace, you can add the property to the spec.properties.flow field of the SonataFlowPlatform CR shown in the following example: Note This configuration is applied during workflow deployment. 
Ensure the Flyway property is set before deploying workflows. Example of enabling Flyway by using the SonataFlowPlatform properties apiVersion: sonataflow.org/v1alpha08 kind: SonataFlowPlatform metadata: name: sonataflow-platform spec: properties: flow: - name: kie.flyway.enabled value: true 5.5.4. Initializing a manual database using DDL scripts If you prefer manual initialization, you must disable Flyway by ensuring the kie.flyway.enabled property is either not configured or explicitly set to false . By default, each workflow uses a schema name equal to the workflow name. Ensure that you manually apply the schema initialization for each workflow. If you are using the SonataFlow custom resource (CR) persistence configuration, you can specify a custom schema name. Procedure Download the DDL scripts from the kogito-ddl-9.102.0.redhat-00005-db-scripts.zip location. Extract the files. Run the .sql files located in the root directory on the target PostgreSQL database. Ensure that the files are executed in the order of their version numbers. For example: V1.35.0__create_runtime_PostgreSQL.sql V10.0.0__add_business_key_PostgreSQL.sql V10.0.1__alter_correlation_PostgreSQL.sql Note The file version numbers are not associated with the OpenShift Serverless Logic Operator versioning.
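The persistence examples above reference a Kubernetes Secret named postgres-secrets-example that contains the POSTGRESQL_USER and POSTGRESQL_PASSWORD keys. A minimal way to create such a Secret in the namespace of the SonataFlowPlatform or SonataFlow resource is shown below; the names are taken from the examples and the credential values are placeholders you must replace:

oc create secret generic postgres-secrets-example \
  -n example-namespace \
  --from-literal=POSTGRESQL_USER=<database_user> \
  --from-literal=POSTGRESQL_PASSWORD=<database_password>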
|
[
"apiVersion: sonataflow.org/v1alpha08 kind: SonataFlowPlatform metadata: name: sonataflow-platform-example namespace: example-namespace spec: persistence: postgresql: serviceRef: name: postgres-example 1 namespace: postgres-example-namespace 2 databaseName: example-database 3 port: 1234 4 secretRef: name: postgres-secrets-example 5 userKey: POSTGRESQL_USER 6 passwordKey: POSTGRESQL_PASSWORD 7",
"env: - name: QUARKUS_DATASOURCE_USERNAME valueFrom: secretKeyRef: name: postgres-secrets-example key: POSTGRESQL_USER - name: QUARKUS_DATASOURCE_PASSWORD valueFrom: secretKeyRef: name: postgres-secrets-example key: POSTGRESQL_PASSWORD - name: QUARKUS_DATASOURCE_DB_KIND value: postgresql - name: QUARKUS_DATASOURCE_JDBC_URL value: >- jdbc:postgresql://postgres-example.postgres-example-namespace:1234/example-database?currentSchema=example-workflow - name: KOGITO_PERSISTENCE_TYPE value: jdbc",
"apiVersion: sonataflow.org/v1alpha08 kind: SonataFlow metadata: name: example-workflow annotations: sonataflow.org/description: Example Workflow sonataflow.org/version: 0.0.1 spec: persistence: postgresql: serviceRef: name: postgres-example 1 namespace: postgres-example-namespace 2 databaseName: example-database 3 databaseSchema: example-schema 4 port: 1234 5 secretRef: name: postgres-secrets-example 6 userKey: POSTGRESQL_USER 7 passwordKey: POSTGRESQL_PASSWORD 8 flow:",
"QUARKUS_EXTENSIONS=io.quarkus:quarkus-agroal:3.8.6.redhat-00004,io.quarkus:quarkus-jdbc-postgresql:3.8.6.redhat-00004,org.kie:kie-addons-quarkus-persistence-jdbc:9.102.0.redhat-00005",
"apiVersion: v1 kind: ConfigMap metadata: labels: app: example-workflow name: example-workflow-props data: application.properties: | kie.flyway.enabled = true",
"apiVersion: sonataflow.org/v1alpha08 kind: SonataFlow metadata: name: example-workflow annotations: sonataflow.org/description: Example Workflow sonataflow.org/version: 0.0.1 spec: podTemplate: container: env: - name: KIE_FLYWAY_ENABLED value: 'true' flow:",
"apiVersion: sonataflow.org/v1alpha08 kind: SonataFlowPlatform metadata: name: sonataflow-platform spec: properties: flow: - name: kie.flyway.enabled value: true"
] |
https://docs.redhat.com/en/documentation/red_hat_openshift_serverless/1.35/html/serverless_logic/serverless-logic-managing-persistence
|
Chapter 20. Atomic Host and Containers
|
Chapter 20. Atomic Host and Containers Red Hat Enterprise Linux Atomic Host Red Hat Enterprise Linux Atomic Host is a secure, lightweight, and minimal-footprint operating system optimized to run Linux containers.
| null |
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/7.4_release_notes/atomic_host_and_containers
|
Chapter 17. Recovering and restoring a system
|
Chapter 17. Recovering and restoring a system To recover and restore a system using an existing backup, Red Hat Enterprise Linux provides the Relax-and-Recover (ReaR) utility. You can use the utility as a disaster recovery solution and also for system migration. The utility enables you to perform the following tasks: Produce a bootable image and restore the system from an existing backup, using the image. Replicate the original storage layout. Restore user and system files. Restore the system to different hardware. Additionally, for disaster recovery, you can also integrate certain backup software with ReaR. 17.1. Setting up ReaR and manually creating a backup Use the following steps to install the package for using the Relax-and-Recover (ReaR) utility, create a rescue system, configure and generate a backup. Prerequisites Necessary configurations as per the backup restore plan are ready. Note that you can use the NETFS backup method, a fully-integrated and built-in method with ReaR. Procedure Install the ReaR utility: Modify the ReaR configuration file in an editor of your choice, for example: Add the backup setting details to /etc/rear/local.conf . For example, in the case of the NETFS backup method, add the following lines: Replace backup.location with the URL of your backup location. To configure ReaR to keep the backup archive when the new one is created, also add the following line to the configuration file: To make the backups incremental, meaning that only the changed files are backed up on each run, add the following line: Create a rescue system: Create a backup as per the restore plan. For example, in the case of the NETFS backup method, run the following command: Alternatively, you can create the rescue system and the backup in a single step by running the following command: This command combines the functionality of the rear mkrescue and rear mkbackuponly commands. 17.2. Scheduling ReaR The /etc/cron.d/rear crontab file in the rear package runs the rear mkrescue command automatically at 1:30 AM every day to schedule the Relax-and-Recover (ReaR) utility for regularly creating a rescue system. The command only creates a rescue system and not the backup of the data. You still need to schedule a periodic backup of the data yourself. For example: Procedure You can add another crontab that will schedule the rear mkbackuponly command. You can also change the existing crontab to run the rear mkbackup command instead of the default /usr/sbin/rear checklayout || /usr/sbin/rear mkrescue command. You can schedule an external backup if an external backup method is in use. The details depend on the backup method that you are using in ReaR. Note The /etc/cron.d/rear crontab file provided in the rear package is considered deprecated, see Deprecated functionality shell and command line , because it is not sufficient by default to perform a backup. 17.3. Using a ReaR rescue image on the 64-bit IBM Z architecture Basic Relax and Recover (ReaR) functionality is now available on the 64-bit IBM Z architecture and is fully supported since RHEL 8.8. You can create a ReaR rescue image on IBM Z only in the z/VM environment. Backing up and recovering logical partitions (LPARs) has not been tested. Important ReaR on the 64-bit IBM Z architecture is supported only with the rear package version 2.6-9.el8 or later. Earlier versions are available as a Technology Preview feature only.
For more information about the support scope of Red Hat Technology Preview features, see https://access.redhat.com/support/offerings/techpreview . The only output method currently available is Initial Program Load (IPL). IPL produces a kernel and an initial RAM disk (initrd) that can be used with the zIPL boot loader. Prerequisites ReaR is installed. To install ReaR, run the yum install rear command. Procedure Add the following variables to the /etc/rear/local.conf file to configure ReaR for producing a rescue image on the 64-bit IBM Z architecture: To configure the IPL output method, add OUTPUT=IPL . To configure the backup method and destination, add the BACKUP and BACKUP_URL variables. For example: Important The local backup storage is currently not supported on the 64-bit IBM Z architecture. Optional: You can also configure the OUTPUT_URL variable to save the kernel and initrd files. By default, the OUTPUT_URL is aligned with BACKUP_URL . To perform backup and rescue image creation: This creates the kernel and initrd files at the location specified by the BACKUP_URL or OUTPUT_URL (if set) variable, and a backup using the specified backup method. To recover the system, use the ReaR kernel and initrd files created in step 3, and boot from a Direct Attached Storage Device (DASD) or a Fibre Channel Protocol (FCP)-attached SCSI device prepared with the zipl boot loader, kernel, and initrd . For more information, see Using a Prepared DASD . When the rescue kernel and initrd are booted, the ReaR rescue environment starts. Proceed with system recovery. Warning Currently, the rescue process reformats all the DASDs (Direct Attached Storage Devices) connected to the system. Do not attempt a system recovery if there is any valuable data present on the system storage devices. This also includes the device prepared with the zipl boot loader, ReaR kernel, and initrd that were used to boot into the rescue environment. Ensure that you keep a copy. Additional resources Installing under z/VM Using a Prepared DASD .
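Section 17.2 notes that you can add another crontab to schedule the rear mkbackuponly command. The following /etc/cron.d entry is only an example of what such a schedule might look like (it is not shipped with the rear package); it runs a backup nightly at 2:00 AM, after the default 1:30 AM rescue-image run:

# /etc/cron.d/rear-backup (example)
0 2 * * * root /usr/sbin/rear mkbackuponly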
|
[
"yum install rear",
"vi /etc/rear/local.conf",
"BACKUP=NETFS BACKUP_URL= backup.location",
"NETFS_KEEP_OLD_BACKUP_COPY=y",
"BACKUP_TYPE=incremental",
"rear mkrescue",
"rear mkbackuponly",
"rear mkbackup",
"BACKUP=NETFS BACKUP_URL=nfs:// <nfsserver name> / <share path>",
"rear mkbackup"
] |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/configuring_basic_system_settings/assembly_recovering-and-restoring-a-system_configuring-basic-system-settings
|
20.4. Retrieving ACLs
|
20.4. Retrieving ACLs To determine the existing ACLs for a file or directory, use the getfacl command. In the example below, getfacl is used to determine the existing ACLs for a file. Example 20.4. Retrieving ACLs The above command returns the following output: If a directory with a default ACL is specified, the default ACL is also displayed as illustrated below. For example, getfacl home/sales/ displays similar output:
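If you need the ACLs for an entire directory tree rather than a single file or directory, the getfacl command in the acl package also accepts a recursive option; treat the following as an illustrative example:

getfacl -R home/sales/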
|
[
"getfacl home/john/picture.png",
"file: home/john/picture.png owner: john group: john user::rw- group::r-- other::r--",
"file: home/sales/ owner: john group: john user::rw- user:barryg:r-- group::r-- mask::r-- other::r-- default:user::rwx default:user:john:rwx default:group::r-x default:mask::rwx default:other::r-x"
] |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/storage_administration_guide/acls-retrieving
|
Chapter 8. ResourceQuota [v1]
|
Chapter 8. ResourceQuota [v1] Description ResourceQuota sets aggregate quota restrictions enforced per namespace Type object 8.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata spec object ResourceQuotaSpec defines the desired hard limits to enforce for Quota. status object ResourceQuotaStatus defines the enforced hard limits and observed use. 8.1.1. .spec Description ResourceQuotaSpec defines the desired hard limits to enforce for Quota. Type object Property Type Description hard object (Quantity) hard is the set of desired hard limits for each named resource. More info: https://kubernetes.io/docs/concepts/policy/resource-quotas/ scopeSelector object A scope selector represents the AND of the selectors represented by the scoped-resource selector requirements. scopes array (string) A collection of filters that must match each object tracked by a quota. If not specified, the quota matches all objects. 8.1.2. .spec.scopeSelector Description A scope selector represents the AND of the selectors represented by the scoped-resource selector requirements. Type object Property Type Description matchExpressions array A list of scope selector requirements by scope of the resources. matchExpressions[] object A scoped-resource selector requirement is a selector that contains values, a scope name, and an operator that relates the scope name and values. 8.1.3. .spec.scopeSelector.matchExpressions Description A list of scope selector requirements by scope of the resources. Type array 8.1.4. .spec.scopeSelector.matchExpressions[] Description A scoped-resource selector requirement is a selector that contains values, a scope name, and an operator that relates the scope name and values. Type object Required scopeName operator Property Type Description operator string Represents a scope's relationship to a set of values. Valid operators are In, NotIn, Exists, DoesNotExist. Possible enum values: - "DoesNotExist" - "Exists" - "In" - "NotIn" scopeName string The name of the scope that the selector applies to. Possible enum values: - "BestEffort" Match all pod objects that have best effort quality of service - "CrossNamespacePodAffinity" Match all pod objects that have cross-namespace pod (anti)affinity mentioned. - "NotBestEffort" Match all pod objects that do not have best effort quality of service - "NotTerminating" Match all pod objects where spec.activeDeadlineSeconds is nil - "PriorityClass" Match all pod objects that have priority class mentioned - "Terminating" Match all pod objects where spec.activeDeadlineSeconds >=0 values array (string) An array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch. 
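To make the spec fields above concrete, the following is a hedged example manifest (the quota name, namespace, limits, and priority class value are illustrative, not taken from the reference) that combines hard limits with a scopeSelector using the PriorityClass scope and the In operator:

cat <<EOF | oc create -f -
apiVersion: v1
kind: ResourceQuota
metadata:
  name: example-quota
  namespace: example-namespace
spec:
  hard:
    pods: "10"
    requests.cpu: "4"
    requests.memory: 8Gi
  scopeSelector:
    matchExpressions:
    - scopeName: PriorityClass
      operator: In
      values:
      - high
EOF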
8.1.5. .status Description ResourceQuotaStatus defines the enforced hard limits and observed use. Type object Property Type Description hard object (Quantity) Hard is the set of enforced hard limits for each named resource. More info: https://kubernetes.io/docs/concepts/policy/resource-quotas/ used object (Quantity) Used is the current observed total usage of the resource in the namespace. 8.2. API endpoints The following API endpoints are available: /api/v1/resourcequotas GET : list or watch objects of kind ResourceQuota /api/v1/watch/resourcequotas GET : watch individual changes to a list of ResourceQuota. deprecated: use the 'watch' parameter with a list operation instead. /api/v1/namespaces/{namespace}/resourcequotas DELETE : delete collection of ResourceQuota GET : list or watch objects of kind ResourceQuota POST : create a ResourceQuota /api/v1/watch/namespaces/{namespace}/resourcequotas GET : watch individual changes to a list of ResourceQuota. deprecated: use the 'watch' parameter with a list operation instead. /api/v1/namespaces/{namespace}/resourcequotas/{name} DELETE : delete a ResourceQuota GET : read the specified ResourceQuota PATCH : partially update the specified ResourceQuota PUT : replace the specified ResourceQuota /api/v1/watch/namespaces/{namespace}/resourcequotas/{name} GET : watch changes to an object of kind ResourceQuota. deprecated: use the 'watch' parameter with a list operation instead, filtered to a single item with the 'fieldSelector' parameter. /api/v1/namespaces/{namespace}/resourcequotas/{name}/status GET : read status of the specified ResourceQuota PATCH : partially update status of the specified ResourceQuota PUT : replace status of the specified ResourceQuota 8.2.1. /api/v1/resourcequotas HTTP method GET Description list or watch objects of kind ResourceQuota Table 8.1. HTTP responses HTTP code Reponse body 200 - OK ResourceQuotaList schema 401 - Unauthorized Empty 8.2.2. /api/v1/watch/resourcequotas HTTP method GET Description watch individual changes to a list of ResourceQuota. deprecated: use the 'watch' parameter with a list operation instead. Table 8.2. HTTP responses HTTP code Reponse body 200 - OK WatchEvent schema 401 - Unauthorized Empty 8.2.3. /api/v1/namespaces/{namespace}/resourcequotas HTTP method DELETE Description delete collection of ResourceQuota Table 8.3. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed Table 8.4. HTTP responses HTTP code Reponse body 200 - OK Status schema 401 - Unauthorized Empty HTTP method GET Description list or watch objects of kind ResourceQuota Table 8.5. HTTP responses HTTP code Reponse body 200 - OK ResourceQuotaList schema 401 - Unauthorized Empty HTTP method POST Description create a ResourceQuota Table 8.6. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. 
Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 8.7. Body parameters Parameter Type Description body ResourceQuota schema Table 8.8. HTTP responses HTTP code Reponse body 200 - OK ResourceQuota schema 201 - Created ResourceQuota schema 202 - Accepted ResourceQuota schema 401 - Unauthorized Empty 8.2.4. /api/v1/watch/namespaces/{namespace}/resourcequotas HTTP method GET Description watch individual changes to a list of ResourceQuota. deprecated: use the 'watch' parameter with a list operation instead. Table 8.9. HTTP responses HTTP code Reponse body 200 - OK WatchEvent schema 401 - Unauthorized Empty 8.2.5. /api/v1/namespaces/{namespace}/resourcequotas/{name} Table 8.10. Global path parameters Parameter Type Description name string name of the ResourceQuota HTTP method DELETE Description delete a ResourceQuota Table 8.11. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed Table 8.12. HTTP responses HTTP code Reponse body 200 - OK ResourceQuota schema 202 - Accepted ResourceQuota schema 401 - Unauthorized Empty HTTP method GET Description read the specified ResourceQuota Table 8.13. HTTP responses HTTP code Reponse body 200 - OK ResourceQuota schema 401 - Unauthorized Empty HTTP method PATCH Description partially update the specified ResourceQuota Table 8.14. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. 
The error returned from the server will contain all unknown and duplicate fields encountered. Table 8.15. HTTP responses HTTP code Reponse body 200 - OK ResourceQuota schema 201 - Created ResourceQuota schema 401 - Unauthorized Empty HTTP method PUT Description replace the specified ResourceQuota Table 8.16. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 8.17. Body parameters Parameter Type Description body ResourceQuota schema Table 8.18. HTTP responses HTTP code Reponse body 200 - OK ResourceQuota schema 201 - Created ResourceQuota schema 401 - Unauthorized Empty 8.2.6. /api/v1/watch/namespaces/{namespace}/resourcequotas/{name} Table 8.19. Global path parameters Parameter Type Description name string name of the ResourceQuota HTTP method GET Description watch changes to an object of kind ResourceQuota. deprecated: use the 'watch' parameter with a list operation instead, filtered to a single item with the 'fieldSelector' parameter. Table 8.20. HTTP responses HTTP code Reponse body 200 - OK WatchEvent schema 401 - Unauthorized Empty 8.2.7. /api/v1/namespaces/{namespace}/resourcequotas/{name}/status Table 8.21. Global path parameters Parameter Type Description name string name of the ResourceQuota HTTP method GET Description read status of the specified ResourceQuota Table 8.22. HTTP responses HTTP code Reponse body 200 - OK ResourceQuota schema 401 - Unauthorized Empty HTTP method PATCH Description partially update status of the specified ResourceQuota Table 8.23. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. 
- Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 8.24. HTTP responses HTTP code Reponse body 200 - OK ResourceQuota schema 201 - Created ResourceQuota schema 401 - Unauthorized Empty HTTP method PUT Description replace status of the specified ResourceQuota Table 8.25. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 8.26. Body parameters Parameter Type Description body ResourceQuota schema Table 8.27. HTTP responses HTTP code Reponse body 200 - OK ResourceQuota schema 201 - Created ResourceQuota schema 401 - Unauthorized Empty
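As a sketch of how the PATCH endpoint above is typically exercised, the following command applies a merge patch that raises a hard limit; the quota name, namespace, and new value are assumptions used for illustration only.

oc patch resourcequota besteffort-pod-quota -n example-project \
  --type=merge -p '{"spec":{"hard":{"pods":"20"}}}'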
| null |
https://docs.redhat.com/en/documentation/openshift_container_platform/4.18/html/schedule_and_quota_apis/resourcequota-v1
|
Chapter 10. Configuring TLS security profiles
|
Chapter 10. Configuring TLS security profiles TLS security profiles provide a way for servers to regulate which ciphers a client can use when connecting to the server. This ensures that OpenShift Container Platform components use cryptographic libraries that do not allow known insecure protocols, ciphers, or algorithms. Cluster administrators can choose which TLS security profile to use for each of the following components: the Ingress Controller the control plane This includes the Kubernetes API server, Kubernetes controller manager, Kubernetes scheduler, OpenShift API server, OpenShift OAuth API server, OpenShift OAuth server, and etcd. the kubelet, when it acts as an HTTP server for the Kubernetes API server 10.1. Understanding TLS security profiles You can use a TLS (Transport Layer Security) security profile to define which TLS ciphers are required by various OpenShift Container Platform components. The OpenShift Container Platform TLS security profiles are based on Mozilla recommended configurations . You can specify one of the following TLS security profiles for each component: Table 10.1. TLS security profiles Profile Description Old This profile is intended for use with legacy clients or libraries. The profile is based on the Old backward compatibility recommended configuration. The Old profile requires a minimum TLS version of 1.0. Note For the Ingress Controller, the minimum TLS version is converted from 1.0 to 1.1. Intermediate This profile is the recommended configuration for the majority of clients. It is the default TLS security profile for the Ingress Controller, kubelet, and control plane. The profile is based on the Intermediate compatibility recommended configuration. The Intermediate profile requires a minimum TLS version of 1.2. Modern This profile is intended for use with modern clients that have no need for backwards compatibility. This profile is based on the Modern compatibility recommended configuration. The Modern profile requires a minimum TLS version of 1.3. Custom This profile allows you to define the TLS version and ciphers to use. Warning Use caution when using a Custom profile, because invalid configurations can cause problems. Note When using one of the predefined profile types, the effective profile configuration is subject to change between releases. For example, given a specification to use the Intermediate profile deployed on release X.Y.Z, an upgrade to release X.Y.Z+1 might cause a new profile configuration to be applied, resulting in a rollout. 10.2. Viewing TLS security profile details You can view the minimum TLS version and ciphers for the predefined TLS security profiles for each of the following components: Ingress Controller, control plane, and kubelet. Important The effective configuration of minimum TLS version and list of ciphers for a profile might differ between components. Procedure View details for a specific TLS security profile: USD oc explain <component>.spec.tlsSecurityProfile.<profile> 1 1 For <component> , specify ingresscontroller , apiserver , or kubeletconfig . For <profile> , specify old , intermediate , or custom . 
For example, to check the ciphers included for the intermediate profile for the control plane: USD oc explain apiserver.spec.tlsSecurityProfile.intermediate Example output KIND: APIServer VERSION: config.openshift.io/v1 DESCRIPTION: intermediate is a TLS security profile based on: https://wiki.mozilla.org/Security/Server_Side_TLS#Intermediate_compatibility_.28recommended.29 and looks like this (yaml): ciphers: - TLS_AES_128_GCM_SHA256 - TLS_AES_256_GCM_SHA384 - TLS_CHACHA20_POLY1305_SHA256 - ECDHE-ECDSA-AES128-GCM-SHA256 - ECDHE-RSA-AES128-GCM-SHA256 - ECDHE-ECDSA-AES256-GCM-SHA384 - ECDHE-RSA-AES256-GCM-SHA384 - ECDHE-ECDSA-CHACHA20-POLY1305 - ECDHE-RSA-CHACHA20-POLY1305 - DHE-RSA-AES128-GCM-SHA256 - DHE-RSA-AES256-GCM-SHA384 minTLSVersion: TLSv1.2 View all details for the tlsSecurityProfile field of a component: USD oc explain <component>.spec.tlsSecurityProfile 1 1 For <component> , specify ingresscontroller , apiserver , or kubeletconfig . For example, to check all details for the tlsSecurityProfile field for the Ingress Controller: USD oc explain ingresscontroller.spec.tlsSecurityProfile Example output KIND: IngressController VERSION: operator.openshift.io/v1 RESOURCE: tlsSecurityProfile <Object> DESCRIPTION: ... FIELDS: custom <> custom is a user-defined TLS security profile. Be extremely careful using a custom profile as invalid configurations can be catastrophic. An example custom profile looks like this: ciphers: - ECDHE-ECDSA-CHACHA20-POLY1305 - ECDHE-RSA-CHACHA20-POLY1305 - ECDHE-RSA-AES128-GCM-SHA256 - ECDHE-ECDSA-AES128-GCM-SHA256 minTLSVersion: TLSv1.1 intermediate <> intermediate is a TLS security profile based on: https://wiki.mozilla.org/Security/Server_Side_TLS#Intermediate_compatibility_.28recommended.29 and looks like this (yaml): ... 1 modern <> modern is a TLS security profile based on: https://wiki.mozilla.org/Security/Server_Side_TLS#Modern_compatibility and looks like this (yaml): ... 2 NOTE: Currently unsupported. old <> old is a TLS security profile based on: https://wiki.mozilla.org/Security/Server_Side_TLS#Old_backward_compatibility and looks like this (yaml): ... 3 type <string> ... 1 Lists ciphers and minimum version for the intermediate profile here. 2 Lists ciphers and minimum version for the modern profile here. 3 Lists ciphers and minimum version for the old profile here. 10.3. Configuring the TLS security profile for the Ingress Controller To configure a TLS security profile for an Ingress Controller, edit the IngressController custom resource (CR) to specify a predefined or custom TLS security profile. If a TLS security profile is not configured, the default value is based on the TLS security profile set for the API server. Sample IngressController CR that configures the Old TLS security profile apiVersion: operator.openshift.io/v1 kind: IngressController ... spec: tlsSecurityProfile: old: {} type: Old ... The TLS security profile defines the minimum TLS version and the TLS ciphers for TLS connections for Ingress Controllers. You can see the ciphers and the minimum TLS version of the configured TLS security profile in the IngressController custom resource (CR) under Status.Tls Profile and the configured TLS security profile under Spec.Tls Security Profile . For the Custom TLS security profile, the specific ciphers and minimum TLS version are listed under both parameters. Note The HAProxy Ingress Controller image supports TLS 1.3 and the Modern profile. The Ingress Operator also converts the TLS 1.0 of an Old or Custom profile to 1.1 . 
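One way to inspect the effective profile that the Ingress Operator reports under Status.Tls Profile is a jsonpath query; this is a sketch that assumes the default Ingress Controller in the openshift-ingress-operator namespace.

oc get ingresscontroller default -n openshift-ingress-operator \
  -o jsonpath='{.status.tlsProfile.minTLSVersion}{"\n"}{.status.tlsProfile.ciphers}{"\n"}'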
Prerequisites You have access to the cluster as a user with the cluster-admin role. Procedure Edit the IngressController CR in the openshift-ingress-operator project to configure the TLS security profile: USD oc edit IngressController default -n openshift-ingress-operator Add the spec.tlsSecurityProfile field: Sample IngressController CR for a Custom profile apiVersion: operator.openshift.io/v1 kind: IngressController ... spec: tlsSecurityProfile: type: Custom 1 custom: 2 ciphers: 3 - ECDHE-ECDSA-CHACHA20-POLY1305 - ECDHE-RSA-CHACHA20-POLY1305 - ECDHE-RSA-AES128-GCM-SHA256 - ECDHE-ECDSA-AES128-GCM-SHA256 minTLSVersion: VersionTLS11 ... 1 Specify the TLS security profile type ( Old , Intermediate , or Custom ). The default is Intermediate . 2 Specify the appropriate field for the selected type: old: {} intermediate: {} custom: 3 For the custom type, specify a list of TLS ciphers and minimum accepted TLS version. Save the file to apply the changes. Verification Verify that the profile is set in the IngressController CR: USD oc describe IngressController default -n openshift-ingress-operator Example output Name: default Namespace: openshift-ingress-operator Labels: <none> Annotations: <none> API Version: operator.openshift.io/v1 Kind: IngressController ... Spec: ... Tls Security Profile: Custom: Ciphers: ECDHE-ECDSA-CHACHA20-POLY1305 ECDHE-RSA-CHACHA20-POLY1305 ECDHE-RSA-AES128-GCM-SHA256 ECDHE-ECDSA-AES128-GCM-SHA256 Min TLS Version: VersionTLS11 Type: Custom ... 10.4. Configuring the TLS security profile for the control plane To configure a TLS security profile for the control plane, edit the APIServer custom resource (CR) to specify a predefined or custom TLS security profile. Setting the TLS security profile in the APIServer CR propagates the setting to the following control plane components: Kubernetes API server Kubernetes controller manager Kubernetes scheduler OpenShift API server OpenShift OAuth API server OpenShift OAuth server etcd If a TLS security profile is not configured, the default TLS security profile is Intermediate . Note The default TLS security profile for the Ingress Controller is based on the TLS security profile set for the API server. Sample APIServer CR that configures the Old TLS security profile apiVersion: config.openshift.io/v1 kind: APIServer ... spec: tlsSecurityProfile: old: {} type: Old ... The TLS security profile defines the minimum TLS version and the TLS ciphers required to communicate with the control plane components. You can see the configured TLS security profile in the APIServer custom resource (CR) under Spec.Tls Security Profile . For the Custom TLS security profile, the specific ciphers and minimum TLS version are listed. Note The control plane does not support TLS 1.3 as the minimum TLS version; the Modern profile is not supported because it requires TLS 1.3 . Prerequisites You have access to the cluster as a user with the cluster-admin role. Procedure Edit the default APIServer CR to configure the TLS security profile: USD oc edit APIServer cluster Add the spec.tlsSecurityProfile field: Sample APIServer CR for a Custom profile apiVersion: config.openshift.io/v1 kind: APIServer metadata: name: cluster spec: tlsSecurityProfile: type: Custom 1 custom: 2 ciphers: 3 - ECDHE-ECDSA-CHACHA20-POLY1305 - ECDHE-RSA-CHACHA20-POLY1305 - ECDHE-RSA-AES128-GCM-SHA256 - ECDHE-ECDSA-AES128-GCM-SHA256 minTLSVersion: VersionTLS11 1 Specify the TLS security profile type ( Old , Intermediate , or Custom ). The default is Intermediate . 
2 Specify the appropriate field for the selected type: old: {} intermediate: {} custom: 3 For the custom type, specify a list of TLS ciphers and minimum accepted TLS version. Save the file to apply the changes. Verification Verify that the TLS security profile is set in the APIServer CR: USD oc describe apiserver cluster Example output Name: cluster Namespace: ... API Version: config.openshift.io/v1 Kind: APIServer ... Spec: Audit: Profile: Default Tls Security Profile: Custom: Ciphers: ECDHE-ECDSA-CHACHA20-POLY1305 ECDHE-RSA-CHACHA20-POLY1305 ECDHE-RSA-AES128-GCM-SHA256 ECDHE-ECDSA-AES128-GCM-SHA256 Min TLS Version: VersionTLS11 Type: Custom ... Verify that the TLS security profile is set in the etcd CR: USD oc describe etcd cluster Example output Name: cluster Namespace: ... API Version: operator.openshift.io/v1 Kind: Etcd ... Spec: Log Level: Normal Management State: Managed Observed Config: Serving Info: Cipher Suites: TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256 TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256 TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384 TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384 TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256 TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256 Min TLS Version: VersionTLS12 ... 10.5. Configuring the TLS security profile for the kubelet To configure a TLS security profile for the kubelet when it is acting as an HTTP server, create a KubeletConfig custom resource (CR) to specify a predefined or custom TLS security profile for specific nodes. If a TLS security profile is not configured, the default TLS security profile is Intermediate . The kubelet uses its HTTP/GRPC server to communicate with the Kubernetes API server, which sends commands to pods, gathers logs, and run exec commands on pods through the kubelet. Sample KubeletConfig CR that configures the Old TLS security profile on worker nodes apiVersion: config.openshift.io/v1 kind: KubeletConfig ... spec: tlsSecurityProfile: old: {} type: Old machineConfigPoolSelector: matchLabels: pools.operator.machineconfiguration.openshift.io/worker: "" #... You can see the ciphers and the minimum TLS version of the configured TLS security profile in the kubelet.conf file on a configured node. Prerequisites You have access to the cluster as a user with the cluster-admin role. Procedure Create a KubeletConfig CR to configure the TLS security profile: Sample KubeletConfig CR for a Custom profile apiVersion: machineconfiguration.openshift.io/v1 kind: KubeletConfig metadata: name: set-kubelet-tls-security-profile spec: tlsSecurityProfile: type: Custom 1 custom: 2 ciphers: 3 - ECDHE-ECDSA-CHACHA20-POLY1305 - ECDHE-RSA-CHACHA20-POLY1305 - ECDHE-RSA-AES128-GCM-SHA256 - ECDHE-ECDSA-AES128-GCM-SHA256 minTLSVersion: VersionTLS11 machineConfigPoolSelector: matchLabels: pools.operator.machineconfiguration.openshift.io/worker: "" 4 #... 1 Specify the TLS security profile type ( Old , Intermediate , or Custom ). The default is Intermediate . 2 Specify the appropriate field for the selected type: old: {} intermediate: {} custom: 3 For the custom type, specify a list of TLS ciphers and minimum accepted TLS version. 4 Optional: Specify the machine config pool label for the nodes you want to apply the TLS security profile. Create the KubeletConfig object: USD oc create -f <filename> Depending on the number of worker nodes in the cluster, wait for the configured nodes to be rebooted one by one. 
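To follow the rollout while the nodes reboot, you can watch the targeted machine config pool; this is a minimal sketch that assumes the profile was applied to the worker pool, as in the sample above.

oc get mcp worker -w    # wait until UPDATED is True and UPDATING is False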
Verification To verify that the profile is set, perform the following steps after the nodes are in the Ready state: Start a debug session for a configured node: USD oc debug node/<node_name> Set /host as the root directory within the debug shell: sh-4.4# chroot /host View the kubelet.conf file: sh-4.4# cat /etc/kubernetes/kubelet.conf Example output "kind": "KubeletConfiguration", "apiVersion": "kubelet.config.k8s.io/v1beta1", #... "tlsCipherSuites": [ "TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256", "TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256", "TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384", "TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384", "TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256", "TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256" ], "tlsMinVersion": "VersionTLS12", #...
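To spot-check the setting on every worker node rather than in a single debug session, a loop such as the following can be used; this is only a sketch and assumes oc debug access to the nodes.

for node in $(oc get nodes -l node-role.kubernetes.io/worker -o name); do
  echo "== ${node}"
  oc debug "${node}" -- chroot /host grep tlsMinVersion /etc/kubernetes/kubelet.conf
done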
|
[
"oc explain <component>.spec.tlsSecurityProfile.<profile> 1",
"oc explain apiserver.spec.tlsSecurityProfile.intermediate",
"KIND: APIServer VERSION: config.openshift.io/v1 DESCRIPTION: intermediate is a TLS security profile based on: https://wiki.mozilla.org/Security/Server_Side_TLS#Intermediate_compatibility_.28recommended.29 and looks like this (yaml): ciphers: - TLS_AES_128_GCM_SHA256 - TLS_AES_256_GCM_SHA384 - TLS_CHACHA20_POLY1305_SHA256 - ECDHE-ECDSA-AES128-GCM-SHA256 - ECDHE-RSA-AES128-GCM-SHA256 - ECDHE-ECDSA-AES256-GCM-SHA384 - ECDHE-RSA-AES256-GCM-SHA384 - ECDHE-ECDSA-CHACHA20-POLY1305 - ECDHE-RSA-CHACHA20-POLY1305 - DHE-RSA-AES128-GCM-SHA256 - DHE-RSA-AES256-GCM-SHA384 minTLSVersion: TLSv1.2",
"oc explain <component>.spec.tlsSecurityProfile 1",
"oc explain ingresscontroller.spec.tlsSecurityProfile",
"KIND: IngressController VERSION: operator.openshift.io/v1 RESOURCE: tlsSecurityProfile <Object> DESCRIPTION: FIELDS: custom <> custom is a user-defined TLS security profile. Be extremely careful using a custom profile as invalid configurations can be catastrophic. An example custom profile looks like this: ciphers: - ECDHE-ECDSA-CHACHA20-POLY1305 - ECDHE-RSA-CHACHA20-POLY1305 - ECDHE-RSA-AES128-GCM-SHA256 - ECDHE-ECDSA-AES128-GCM-SHA256 minTLSVersion: TLSv1.1 intermediate <> intermediate is a TLS security profile based on: https://wiki.mozilla.org/Security/Server_Side_TLS#Intermediate_compatibility_.28recommended.29 and looks like this (yaml): ... 1 modern <> modern is a TLS security profile based on: https://wiki.mozilla.org/Security/Server_Side_TLS#Modern_compatibility and looks like this (yaml): ... 2 NOTE: Currently unsupported. old <> old is a TLS security profile based on: https://wiki.mozilla.org/Security/Server_Side_TLS#Old_backward_compatibility and looks like this (yaml): ... 3 type <string>",
"apiVersion: operator.openshift.io/v1 kind: IngressController spec: tlsSecurityProfile: old: {} type: Old",
"oc edit IngressController default -n openshift-ingress-operator",
"apiVersion: operator.openshift.io/v1 kind: IngressController spec: tlsSecurityProfile: type: Custom 1 custom: 2 ciphers: 3 - ECDHE-ECDSA-CHACHA20-POLY1305 - ECDHE-RSA-CHACHA20-POLY1305 - ECDHE-RSA-AES128-GCM-SHA256 - ECDHE-ECDSA-AES128-GCM-SHA256 minTLSVersion: VersionTLS11",
"oc describe IngressController default -n openshift-ingress-operator",
"Name: default Namespace: openshift-ingress-operator Labels: <none> Annotations: <none> API Version: operator.openshift.io/v1 Kind: IngressController Spec: Tls Security Profile: Custom: Ciphers: ECDHE-ECDSA-CHACHA20-POLY1305 ECDHE-RSA-CHACHA20-POLY1305 ECDHE-RSA-AES128-GCM-SHA256 ECDHE-ECDSA-AES128-GCM-SHA256 Min TLS Version: VersionTLS11 Type: Custom",
"apiVersion: config.openshift.io/v1 kind: APIServer spec: tlsSecurityProfile: old: {} type: Old",
"oc edit APIServer cluster",
"apiVersion: config.openshift.io/v1 kind: APIServer metadata: name: cluster spec: tlsSecurityProfile: type: Custom 1 custom: 2 ciphers: 3 - ECDHE-ECDSA-CHACHA20-POLY1305 - ECDHE-RSA-CHACHA20-POLY1305 - ECDHE-RSA-AES128-GCM-SHA256 - ECDHE-ECDSA-AES128-GCM-SHA256 minTLSVersion: VersionTLS11",
"oc describe apiserver cluster",
"Name: cluster Namespace: API Version: config.openshift.io/v1 Kind: APIServer Spec: Audit: Profile: Default Tls Security Profile: Custom: Ciphers: ECDHE-ECDSA-CHACHA20-POLY1305 ECDHE-RSA-CHACHA20-POLY1305 ECDHE-RSA-AES128-GCM-SHA256 ECDHE-ECDSA-AES128-GCM-SHA256 Min TLS Version: VersionTLS11 Type: Custom",
"oc describe etcd cluster",
"Name: cluster Namespace: API Version: operator.openshift.io/v1 Kind: Etcd Spec: Log Level: Normal Management State: Managed Observed Config: Serving Info: Cipher Suites: TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256 TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256 TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384 TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384 TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256 TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256 Min TLS Version: VersionTLS12",
"apiVersion: config.openshift.io/v1 kind: KubeletConfig spec: tlsSecurityProfile: old: {} type: Old machineConfigPoolSelector: matchLabels: pools.operator.machineconfiguration.openshift.io/worker: \"\" #",
"apiVersion: machineconfiguration.openshift.io/v1 kind: KubeletConfig metadata: name: set-kubelet-tls-security-profile spec: tlsSecurityProfile: type: Custom 1 custom: 2 ciphers: 3 - ECDHE-ECDSA-CHACHA20-POLY1305 - ECDHE-RSA-CHACHA20-POLY1305 - ECDHE-RSA-AES128-GCM-SHA256 - ECDHE-ECDSA-AES128-GCM-SHA256 minTLSVersion: VersionTLS11 machineConfigPoolSelector: matchLabels: pools.operator.machineconfiguration.openshift.io/worker: \"\" 4 #",
"oc create -f <filename>",
"oc debug node/<node_name>",
"sh-4.4# chroot /host",
"sh-4.4# cat /etc/kubernetes/kubelet.conf",
"\"kind\": \"KubeletConfiguration\", \"apiVersion\": \"kubelet.config.k8s.io/v1beta1\", # \"tlsCipherSuites\": [ \"TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256\", \"TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256\", \"TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384\", \"TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384\", \"TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256\", \"TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256\" ], \"tlsMinVersion\": \"VersionTLS12\", #"
] |
https://docs.redhat.com/en/documentation/openshift_container_platform/4.10/html/security_and_compliance/tls-security-profiles
|
10.3. Modifying Quorum Options (Red Hat Enterprise Linux 7.3 and later)
|
10.3. Modifying Quorum Options (Red Hat Enterprise Linux 7.3 and later) As of Red Hat Enterprise Linux 7.3, you can modify general quorum options for your cluster with the pcs quorum update command. Executing this command requires that the cluster be stopped. For information on the quorum options, see the votequorum (5) man page. The format of the pcs quorum update command is as follows. The following series of commands modifies the wait_for_all quorum option and displays the updated status of the option. Note that the system does not allow you to execute this command while the cluster is running.
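Once the option has been updated, the cluster can be started again and the live quorum state confirmed; a brief sketch follows, and the exact output varies by cluster.

pcs cluster start --all
pcs quorum status    # reports the runtime corosync quorum information, including the quorum flags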
|
[
"pcs quorum update [auto_tie_breaker=[0|1]] [last_man_standing=[0|1]] [last_man_standing_window=[ time-in-ms ] [wait_for_all=[0|1]]",
"pcs quorum update wait_for_all=1 Checking corosync is not running on nodes Error: node1: corosync is running Error: node2: corosync is running pcs cluster stop --all node2: Stopping Cluster (pacemaker) node1: Stopping Cluster (pacemaker) node1: Stopping Cluster (corosync) node2: Stopping Cluster (corosync) pcs quorum update wait_for_all=1 Checking corosync is not running on nodes node2: corosync is not running node1: corosync is not running Sending updated corosync.conf to nodes node1: Succeeded node2: Succeeded pcs quorum config Options: wait_for_all: 1"
] |
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/high_availability_add-on_reference/s1-quorumoptmodify-HAAR
|
29.3. Protecting Keytabs
|
29.3. Protecting Keytabs To protect Kerberos keytabs from other users with access to the server, restrict access to the keytab to only the keytab owner. It is recommended to protect the keytabs right after they are retrieved. For example, to protect the Apache keytab at /etc/httpd/conf/ipa.keytab : Set the owner of the file to apache . Set the permissions for the file to 0600 . This grants read and write permissions to the owner only.
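To confirm the result, the ownership and mode can be checked with stat; the output line shown is illustrative.

stat -c '%U %a %n' /etc/httpd/conf/ipa.keytab
apache 600 /etc/httpd/conf/ipa.keytab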
|
[
"chown apache /etc/httpd/conf/ipa.keytab",
"chmod 0600 /etc/httpd/conf/ipa.keytab"
] |
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/linux_domain_identity_authentication_and_policy_guide/kerberos-protecting-keytabs
|
Chapter 8. Important links
|
Chapter 8. Important links Red Hat AMQ Broker 7.11 Release Notes Red Hat AMQ Broker 7.10 Release Notes Red Hat AMQ Broker 7.9 Release Notes Red Hat AMQ Broker 7.8 Release Notes Red Hat AMQ Broker 7.7 Release Notes Red Hat AMQ Broker 7.6 Release Notes Red Hat AMQ Broker 7.1 to 7.5 Release Notes (aggregated) Red Hat AMQ 7 Supported Configurations Red Hat AMQ 7 Component Details Revised on 2025-03-18 14:05:28 UTC
| null |
https://docs.redhat.com/en/documentation/red_hat_amq_broker/7.12/html/release_notes_for_red_hat_amq_broker_7.12/links
|
probe::signal.syskill
|
probe::signal.syskill Name probe::signal.syskill - Sending kill signal to a process Synopsis signal.syskill Values sig_pid The PID of the process receiving the signal sig The specific signal sent to the process name Name of the probe point pid_name The name of the signal recipient sig_name A string representation of the signal task A task handle to the signal recipient
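As an illustration of how these values can be consumed, the following is a hypothetical SystemTap script (not part of the tapset documentation) that logs every kill signal sent on the system.

# kill-trace.stp -- hypothetical example; run with: stap kill-trace.stp
probe signal.syskill {
  printf("%s(%d) sent %s to %s (PID %d)\n",
         execname(), pid(), sig_name, pid_name, sig_pid)
}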
| null |
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/systemtap_tapset_reference/api-signal-syskill
|
Operating
|
Operating Red Hat Advanced Cluster Security for Kubernetes 4.6 Operating Red Hat Advanced Cluster Security for Kubernetes Red Hat OpenShift Documentation Team
| null |
https://docs.redhat.com/en/documentation/red_hat_advanced_cluster_security_for_kubernetes/4.6/html/operating/index
|
Appendix E. Messaging Journal Configuration Elements
|
Appendix E. Messaging Journal Configuration Elements The table below lists all of the configuration elements related to the AMQ Broker messaging journal. Table E.1. Messaging Journal Configuration Elements Name Description journal-directory The directory where the message journal is located. The default value is <broker_instance_dir> /data/journal . For the best performance, the journal should be located on its own physical volume in order to minimize disk head movement. If the journal is on a volume that is shared with other processes that may be writing other files (for example, bindings journal, database, or transaction coordinator) then the disk head may well be moving rapidly between these files as it writes them, thus drastically reducing performance. When using a SAN, each journal instance should be given its own LUN (logical unit). create-journal-dir If set to true , the journal directory will be automatically created at the location specified in journal-directory if it does not already exist. The default value is true . journal-type Valid values are NIO or ASYNCIO . If set to NIO , the broker uses the Java NIO interface for its journal. If set to ASYNCIO , the broker uses the Linux asynchronous IO journal. If you choose ASYNCIO but are not running Linux or do not have libaio installed, the broker detects this and automatically falls back to using NIO . journal-sync-transactional If set to true , the broker flushes all transaction data to disk on transaction boundaries (that is, commit, prepare, and rollback). The default value is true . journal-sync-non-transactional If set to true , the broker flushes non-transactional message data (sends and acknowledgements) to disk each time. The default value is true . journal-file-size The size of each journal file in bytes. The default value is 10485760 bytes (10MiB). journal-min-files The minimum number of files the broker pre-creates when starting. Files are pre-created only if there is no existing message data. Depending on how much data you expect your queues to contain at steady state, you should tune this number of files to match the total amount of data expected. journal-pool-files The system will create as many files as needed; however, when reclaiming files it will shrink back to journal-pool-files . The default value is -1 , meaning it will never delete files on the journal once created. The system cannot grow infinitely, however, as you are still required to use paging for destinations that can grow indefinitely. journal-max-io Controls the maximum number of write requests that can be in the IO queue at any one time. If the queue becomes full then writes will block until space is freed up. When using NIO, this value should always be 1 . When using AIO, the default value is 500 . The total max AIO cannot be higher than the value set at the OS level ( /proc/sys/fs/aio-max-nr ), which is usually 65536. journal-buffer-timeout Controls the timeout for when the buffer will be flushed. AIO can typically sustain a higher flush rate than NIO, so the system maintains different default values for both NIO and AIO. The default value for NIO is 3333333 nanoseconds, or 300 times per second, and the default value for AIO is 500000 nanoseconds, or 2000 times per second. Note By increasing the timeout value, you might be able to increase system throughput at the expense of latency, since the default values are chosen to give a reasonable balance between throughput and latency. journal-buffer-size The size of the timed buffer on AIO.
The default value is 490KiB . journal-compact-min-files The minimal number of files necessary before the broker compacts the journal. The compacting algorithm will not start until you have at least journal-compact-min-files . The default value is 10 . Note Setting the value to 0 will disable compacting and could be dangerous because the journal could grow indefinitely. journal-compact-percentage The threshold to start compacting. Journal data will be compacted if less than journal-compact-percentage is determined to be live data. Note also that compacting will not start until you have at least journal-compact-min-files data files on the journal. The default value is 30 .
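These elements are set in the <core> section of <broker_instance_dir>/etc/broker.xml. The following is a minimal sketch only; the values shown are illustrative, not recommendations.

<core xmlns="urn:activemq:core">
  <journal-type>ASYNCIO</journal-type>
  <journal-directory>data/journal</journal-directory>
  <journal-min-files>10</journal-min-files>
  <journal-pool-files>10</journal-pool-files>
  <journal-file-size>10485760</journal-file-size>
  <journal-max-io>500</journal-max-io>
  <journal-buffer-timeout>500000</journal-buffer-timeout>
</core>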
| null |
https://docs.redhat.com/en/documentation/red_hat_amq/2021.q3/html/configuring_amq_broker/configuring_message_journal
|
Chapter 131. KafkaConnectorSpec schema reference
|
Chapter 131. KafkaConnectorSpec schema reference Used in: KafkaConnector Property Description class The Class for the Kafka Connector. string tasksMax The maximum number of tasks for the Kafka Connector. integer autoRestart Automatic restart of connector and tasks configuration. AutoRestart config The Kafka Connector configuration. The following properties cannot be set: connector.class, tasks.max. map pause Whether the connector should be paused. Defaults to false. boolean
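A minimal sketch of a KafkaConnector resource that uses these properties follows; the connector class, cluster label, and config keys are illustrative assumptions.

apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaConnector
metadata:
  name: my-source-connector          # hypothetical name
  labels:
    strimzi.io/cluster: my-connect   # ties the connector to a KafkaConnect cluster
spec:
  class: org.apache.kafka.connect.file.FileStreamSourceConnector
  tasksMax: 2
  autoRestart:
    enabled: true
  config:
    file: /opt/kafka/LICENSE
    topic: my-topic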
| null |
https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.5/html/amq_streams_api_reference/type-KafkaConnectorSpec-reference
|
Providing feedback on Red Hat documentation
|
Providing feedback on Red Hat documentation We appreciate your input on our documentation. Please let us know how we could make it better. To do so: For simple comments on specific passages, make sure you are viewing the documentation in the HTML format. Highlight the part of text that you want to comment on. Then, click the Add Feedback pop-up that appears below the highlighted text, and follow the displayed instructions. For submitting more complex feedback, create a Bugzilla ticket: Go to the Bugzilla website. As the Component, use Documentation . Fill in the Description field with your suggestion for improvement. Include a link to the relevant part(s) of documentation. Click Submit Bug .
| null |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/managing_systems_using_the_rhel_7_web_console/proc_providing-feedback-on-red-hat-documentation_system-management-using-the-rhel-7-web-console
|
2.7. Design-Time and Runtime Metadata
|
2.7. Design-Time and Runtime Metadata Teiid Designer software distinguishes between design-time metadata and runtime metadata. This distinction becomes important if you use the JBoss Data Virtualization Server. Design-time data is laden with details and representations that help the user understand and efficiently organize metadata. Much of that detail is unnecessary to the underlying system that runs the Virtual Database that you will create. Any information that is not absolutely necessary to running the Virtual Database is stripped out of the runtime metadata to ensure maximum system performance.
| null |
https://docs.redhat.com/en/documentation/red_hat_jboss_data_virtualization/6.4/html/user_guide_volume_1_teiid_designer/Design-Time_and_Runtime_Metadata
|
probe::nfs.aop.write_begin
|
probe::nfs.aop.write_begin Name probe::nfs.aop.write_begin - NFS client begins to write data Synopsis nfs.aop.write_begin Values __page the address of the page page_index offset within the mapping; can be used as a page identifier and position identifier in the page frame size write bytes to the end address of this write operation ino inode number offset start address of this write operation dev device identifier Description Occurs when a write operation occurs on NFS. It prepares a page for writing and looks for a request corresponding to the page. If there is one, and it belongs to another file, it flushes it out before it tries to copy anything into the page. It also does the same if it finds a request from an existing dropped page.
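A hypothetical SystemTap one-liner shows how the probe values might be printed while tracing NFS client writes.

# nfs-write-trace.stp -- hypothetical example; run with: stap nfs-write-trace.stp
probe nfs.aop.write_begin {
  printf("dev=%d ino=%d offset=%d size=%d\n", dev, ino, offset, size)
}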
| null |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/systemtap_tapset_reference/api-nfs-aop-write-begin
|
Part V. Deprecated Functionality
|
Part V. Deprecated Functionality This part provides an overview of functionality that has been deprecated in all minor releases up to Red Hat Enterprise Linux 7.4. Deprecated functionality continues to be supported until the end of life of Red Hat Enterprise Linux 7. Deprecated functionality will likely not be supported in future major releases of this product and is not recommended for new deployments. For the most recent list of deprecated functionality within a particular major release, refer to the latest version of release documentation. Deprecated hardware components are not recommended for new deployments on the current or future major releases. Hardware driver updates are limited to security and critical fixes only. Red Hat recommends replacing this hardware as soon as reasonably feasible. A package can be deprecated and not recommended for further use. Under certain circumstances, a package can be removed from a product. Product documentation then identifies more recent packages that offer functionality similar, identical, or more advanced to the one deprecated, and provides further recommendations.
| null |
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/7.4_release_notes/part-red_hat_enterprise_linux-7.4_release_notes-deprecated_functionality
|