Release notes
Release notes Red Hat Satellite 6.16 New features, deprecated and removed features, Technology Previews, known issues, and bug fixes Red Hat Satellite Documentation Team [email protected]
[ "SELECT rolname,rolpassword FROM pg_authid WHERE rolpassword != '';", "hammer capsule content verify-checksum --id My_Capsule_ID", "Unable to load certs Neither PUB key nor PRIV key", "Unable to load certs Neither PUB key nor PRIV key" ]
https://docs.redhat.com/en/documentation/red_hat_satellite/6.16/html-single/release_notes/index
Chapter 8. Triggering updates on image stream changes
Chapter 8. Triggering updates on image stream changes When an image stream tag is updated to point to a new image, OpenShift Container Platform can automatically take action to roll the new image out to resources that were using the old image. You configure this behavior in different ways depending on the type of resource that references the image stream tag. 8.1. OpenShift Container Platform resources OpenShift Container Platform deployment configurations and build configurations can be automatically triggered by changes to image stream tags. The triggered action can be run using the new value of the image referenced by the updated image stream tag. 8.2. Triggering Kubernetes resources Kubernetes resources do not have fields for triggering, unlike deployment and build configurations, which include as part of their API definition a set of fields for controlling triggers. Instead, you can use annotations in OpenShift Container Platform to request triggering. The annotation is defined as follows: Key: image.openshift.io/triggers Value: [ { "from": { "kind": "ImageStreamTag", 1 "name": "example:latest", 2 "namespace": "myapp" 3 }, "fieldPath": "spec.template.spec.containers[?(@.name==\"web\")].image", 4 "paused": false 5 }, ... ] 1 Required: kind is the resource to trigger from and must be ImageStreamTag . 2 Required: name must be the name of an image stream tag. 3 Optional: namespace defaults to the namespace of the object. 4 Required: fieldPath is the JSON path to change. This field is limited and accepts only a JSON path expression that precisely matches a container by ID or index. For pods, the JSON path is "spec.containers[?(@.name='web')].image". 5 Optional: paused indicates whether the trigger is paused; the default value is false . Set paused to true to temporarily disable this trigger. When one of the core Kubernetes resources contains both a pod template and this annotation, OpenShift Container Platform attempts to update the object by using the image currently associated with the image stream tag that is referenced by the trigger. The update is performed against the fieldPath specified. Examples of core Kubernetes resources that can contain both a pod template and this annotation include: CronJobs Deployments StatefulSets DaemonSets Jobs ReplicationControllers Pods 8.3. Setting the image trigger on Kubernetes resources When adding an image trigger to deployments, you can use the oc set triggers command. For example, the sample command in this procedure adds an image change trigger to the deployment named example so that when the example:latest image stream tag is updated, the web container inside the deployment updates with the new image value. This command sets the correct image.openshift.io/triggers annotation on the deployment resource. Procedure Trigger Kubernetes resources by entering the oc set triggers command: $ oc set triggers deploy/example --from-image=example:latest -c web Unless the deployment is paused, this pod template update automatically causes a deployment to occur with the new image value.
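The resulting annotation is easy to inspect after the fact. The commands below are a minimal sketch that assumes the example deployment from the procedure above: running oc set triggers with no modifier flags prints the triggers currently configured on the resource, and the jsonpath query reads the raw annotation value.
$ oc set triggers deploy/example
$ oc get deploy/example -o jsonpath='{.metadata.annotations.image\.openshift\.io/triggers}'
Checking the annotation this way is a quick confirmation that the trigger was recorded before you wait for the next image stream tag update.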
[ "Key: image.openshift.io/triggers Value: [ { \"from\": { \"kind\": \"ImageStreamTag\", 1 \"name\": \"example:latest\", 2 \"namespace\": \"myapp\" 3 }, \"fieldPath\": \"spec.template.spec.containers[?(@.name==\\\"web\\\")].image\", 4 \"paused\": false 5 }, ]", "oc set triggers deploy/example --from-image=example:latest -c web" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.10/html/images/triggering-updates-on-imagestream-changes
Providing feedback on Workload Availability for Red Hat OpenShift documentation
Providing feedback on Workload Availability for Red Hat OpenShift documentation We appreciate your feedback on our documentation. Let us know how we can improve it. To do so: Go to the JIRA website. Enter a descriptive title in the Summary field. Enter your suggestion for improvement in the Description field. Include links to the relevant parts of the documentation. Enter your username in the Reporter field. Enter the affected versions in the Affects Version/s field. Click Create at the bottom of the dialog.
null
https://docs.redhat.com/en/documentation/workload_availability_for_red_hat_openshift/25.1/html/remediation_fencing_and_maintenance/proc_providing-feedback-on-workload-availability-for-red-hat-openshift-documentation_preface
Chapter 6. Converting playbooks for AAP2
Chapter 6. Converting playbooks for AAP2 With Ansible Automation Platform 2 and its containerized execution environments, the usage of localhost has been altered. In earlier versions of Ansible Automation Platform, a job would run against localhost , which translated into running on the underlying Automation Controller host. This could be used to store data and persistent artifacts. With Ansible Automation Platform 2, localhost means you are running inside a container, which is ephemeral in nature. localhost is no longer tied to a particular host, and with portable execution environments, this means it can run anywhere with the right environment and software prerequisites already embedded into the execution environment container. 6.1. Persisting data from automation runs Using the local automation controller filesystem for persistent data is counterproductive, because it ties the data to that host. If you have a multi-node cluster, a job can land on a different host each time, which causes issues if you create workflows that depend on each other and on directories created by earlier jobs. For example, if a directory was created on only one node while another node runs the playbook, the results would be inconsistent. The solution is to use some form of shared storage, such as Amazon S3, Gist, or a role that rsyncs data to your data endpoint. Another option is to inject data or a configuration into a container at runtime. This can be achieved by using the automation controller's isolated jobs path option, which provides a way to mount directories and files into an execution environment at runtime. This is achieved through the automation mesh, using ansible-runner to inject them into a Podman container to start the automation. What follows are some of the use cases for using isolated job paths: Providing SSL certificates at runtime, rather than baking them into an execution environment. Passing runtime configuration data, such as SSH config settings, although this could be anything you want to use during automation. Reading and writing to files used before, during and after automation runs. There are caveats to using this feature: The volume mount has to pre-exist on all nodes capable of automation execution (so hybrid control plane nodes and all execution nodes). Where SELinux is enabled (the Ansible Automation Platform default), beware of file permissions. This is important since rootless Podman is run on non-OCP based installs. The caveats need to be carefully observed. It is highly recommended to read up on rootless Podman and the Podman volume mount runtime options, the [:OPTIONS] part of the isolated job paths, as this is what is used inside Ansible Automation Platform 2. Additional resources Understanding rootless Podman . Podman volume mount runtime options . 6.1.1. Converting playbook examples Examples This example uses a shared directory called /mydata in which we want to be able to read and write files during a job run. Remember that this directory has to already exist on the execution node we will be using for the automation run. You will target the aape1.local execution node to run this job, because the underlying host already has this directory in place. [awx@aape1 ~]$ ls -la /mydata/ total 4 drwxr-xr-x. 2 awx awx 41 Apr 28 09:27 . dr-xr-xr-x. 19 root root 258 Apr 11 15:16 .. -rw-r--r--. 1 awx awx 33 Apr 11 12:34 file_read -rw-r--r--. 1 awx awx 0 Apr 28 09:27 file_write You will use a simple playbook to launch the automation with a sleep defined to allow you access and time to understand the process, as well as to demonstrate reading and writing to files.
# vim:ft=ansible: - hosts: all gather_facts: false ignore_errors: yes vars: period: 120 myfile: /mydata/file tasks: - name: Collect only selected facts ansible.builtin.setup: filter: - 'ansible_distribution' - 'ansible_machine_id' - 'ansible_memtotal_mb' - 'ansible_memfree_mb' - name: "I'm feeling real sleepy..." ansible.builtin.wait_for: timeout: "{{ period }}" delegate_to: localhost - ansible.builtin.debug: msg: "Isolated paths mounted into execution node: {{ AWX_ISOLATIONS_PATHS }}" - name: "Read pre-existing file..." ansible.builtin.debug: msg: "{{ lookup('file', '{{ myfile }}_read') }}" - name: "Write to a new file..." ansible.builtin.copy: dest: "{{ myfile }}_write" content: | This is the file I've just written to. - name: "Read written out file..." ansible.builtin.debug: msg: "{{ lookup('file', '{{ myfile }}_write') }}" From the Ansible Automation Platform 2 navigation panel, select Settings . Then select Job settings from the Jobs option. Paths to expose isolated jobs: [ "/mydata:/mydata:rw" ] The volume mount is mapped with the same name in the container and has read-write capability. This will get used when you launch the job template. The prompt on launch should be set for extra_vars so you can adjust the sleep duration for each run; the default is 30 seconds. Once launched, and the wait_for module is invoked for the sleep, you can go onto the execution node and look at what is running. To inspect the job while it runs, run this command to get a shell inside the running execution environment container: $ podman exec -it $(podman ps -q) /bin/bash bash-4.4# You are now inside the running execution environment container. Look at the permissions: you will see that awx has become 'root', but this is not really root as in the superuser, because you are using rootless Podman, which maps users into a kernel namespace, similar to a sandbox. Learn more in How does rootless Podman work? and the documentation for shadow-utils. bash-4.4# ls -la /mydata/ total 4 drwxr-xr-x. 2 root root 41 Apr 28 09:27 . dr-xr-xr-x. 1 root root 77 Apr 28 09:40 .. -rw-r--r--. 1 root root 33 Apr 11 12:34 file_read -rw-r--r--. 1 root root 0 Apr 28 09:27 file_write According to the job output, this job failed. In order to understand why, the remaining output needs to be examined. TASK [Read pre-existing file...]******************************* 10:50:12 ok: [localhost] => { "msg": "This is the file I am reading in." } TASK [Write to a new file...]********************************* 10:50:12 An exception occurred during task execution. To see the full traceback, use -vvv. The error was: PermissionError: [Errno 13] Permission denied: b'/mydata/.ansible_tmpazyqyqdrfile_write' -> b'/mydata/file_write' fatal: [localhost]: FAILED! => {"changed": false, "checksum": "9f576o85d584287a3516ee8b3385cc6f69bf9ce", "msg": "Unable to make b'/root/.ansible/tmp/ansible-tmp-1651139412.9808054-40-91081834383738/source' into /mydata/file_write, failed final rename from b'/mydata/.ansible_tmpazyqyqdrfile_write': [Errno 13] Permission denied: b'/mydata/.ansible_tmpazyqyqdrfile_write' -> b'/mydata/file_write'"} ...ignoring TASK [Read written out file...] ****************************** 10:50:13 fatal: [localhost]: FAILED! => {"msg": "An unhandled exception occurred while running the lookup plugin 'file'. Error was a <class 'ansible.errors.AnsibleError'>, original message: could not locate file in lookup: /mydata/file_write. Could not locate file in lookup: /mydata/file_write"} ...ignoring The job failed, even though :rw is set, so it should have write capability.
The process was able to read the existing file, but not to write to it. This is due to SELinux protection, which requires proper labels to be placed on the volume content mounted into the container. If the label is missing, SELinux may prevent the process from running inside the container. Labels set by the OS are not changed by Podman. See the Podman documentation for more information. This could be a common misinterpretation. The default mount option is :z , which tells Podman to relabel file objects on shared volumes, so we can either add :z explicitly or leave the options off. Paths to expose isolated jobs: [ "/mydata:/mydata" ] The playbook will now work as expected: PLAY [all] **************************************************** 11:05:52 TASK [I'm feeling real sleepy. . .] *************************** 11:05:52 ok: [localhost] TASK [Read pre-existing file...] ****************************** 11:05:57 ok: [localhost] => { "msg": "This is the file I'm reading in." } TASK [Write to a new file...] ********************************** 11:05:57 ok: [localhost] TASK [Read written out file...] ******************************** 11:05:58 ok: [localhost] => { "msg": "This is the file I've just written to." } Back on the underlying execution node host, we have the newly written out contents. Note If you are using container groups to launch automation jobs inside Red Hat OpenShift, you can also tell Ansible Automation Platform 2 to expose the same paths to that environment, but you must toggle the default to On under settings. Once enabled, the paths are injected as volumeMounts and volumes inside the pod spec that is used for execution. It will look like this: apiVersion: v1 kind: Pod spec: containers: - image: registry.redhat.io/ansible-automation-platform-24/ee-minimal-rhel8 args: - ansible-runner - worker - --private-data-dir=/runner volumeMounts: - mountPath: /mnt2 name: volume-0 readOnly: true - mountPath: /mnt3 name: volume-1 readOnly: true - mountPath: /mnt4 name: volume-2 readOnly: true volumes: - hostPath: path: /mnt2 type: "" name: volume-0 - hostPath: path: /mnt3 type: "" name: volume-1 - hostPath: path: /mnt4 type: "" name: volume-2 Storage inside the running container uses the overlay file system. Any modifications inside the running container are destroyed after the job completes, much like a tmpfs being unmounted.
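You can reproduce the SELinux relabeling behavior outside of Ansible Automation Platform with Podman directly on the execution node. The commands below are only a sketch: the image is the execution environment image referenced above, the target path matches the example, and SELinux is assumed to be in enforcing mode with rootless Podman.
$ podman run --rm -v /mydata:/mydata registry.redhat.io/ansible-automation-platform-24/ee-minimal-rhel8 touch /mydata/no_label      # write is likely denied because the content is not labeled for containers
$ podman run --rm -v /mydata:/mydata:z registry.redhat.io/ansible-automation-platform-24/ee-minimal-rhel8 touch /mydata/with_label  # :z relabels the shared volume content, so the write succeeds
The :z option applies a shared container_file_t label to the host directory, which is the same relabeling that the default isolated job path mount performs.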
[ "[awx@aape1 ~]USD ls -la /mydata/ total 4 drwxr-xr-x. 2 awx awx 41 Apr 28 09:27 . dr-xr-xr-x. 19 root root 258 Apr 11 15:16 .. -rw-r--r--. 1 awx awx 33 Apr 11 12:34 file_read -rw-r--r--. 1 awx awx 0 Apr 28 09:27 file_write", "vim:ft=ansible:", "- hosts: all gather_facts: false ignore_errors: yes vars: period: 120 myfile: /mydata/file tasks: - name: Collect only selected facts ansible.builtin.setup: filter: - 'ansible_distribution' - 'ansible_machine_id' - 'ansible_memtotal_mb' - 'ansible_memfree_mb' - name: \"I'm feeling real sleepy...\" ansible.builtin.wait_for: timeout: \"{{ period }}\" delegate_to: localhost - ansible.builtin.debug: msg: \"Isolated paths mounted into execution node: {{ AWX_ISOLATIONS_PATHS }}\" - name: \"Read pre-existing file...\" ansible.builtin.debug: msg: \"{{ lookup('file', '{{ myfile }}_read' - name: \"Write to a new file...\" ansible.builtin.copy: dest: \"{{ myfile }}_write\" content: | This is the file I've just written to. - name: \"Read written out file...\" ansible.builtin.debug: msg: \"{{ lookup('file', '{{ myfile }}_write') }}\"", "[ \"/mydata:/mydata:rw\" ]", "podman exec -it 'podman ps -q' /bin/bash bash-4.4#", "bash-4.4# ls -la /mydata/ Total 4 drwxr-xr-x. 2 root root 41 Apr 28 09:27 . dr-xr-xr-x. 1 root root 77 Apr 28 09:40 .. -rw-r---r-. 1 root root 33 Apr 11 12:34 file_read -rw-r---r-. 1 root root 0 Apr 28 09:27 file_write", "TASK [Read pre-existing file...]******************************* 10:50:12 ok: [localhost] => { \"Msg\": \"This is the file I am reading in.\" TASK {Write to a new file...}********************************* 10:50:12 An exception occurred during task execution. To see the full traceback, use -vvv. The error was: PermissionError: [Errno 13] Permission denied: b'/mydata/.ansible_tmpazyqyqdrfile_write' -> b' /mydata/file_write' Fatal: [localhost]: FAILED! => {\"changed\": false, :checksum\": \"9f576o85d584287a3516ee8b3385cc6f69bf9ce\", \"msg\": \"Unable to make b'/root/.ansible/tmp/anisible-tim-1651139412.9808054-40-91081834383738/source' into /mydata/file_write, failed final rename from b'/mydata/.ansible_tmpazyqyqdrfile_write': [Errno 13] Permission denied: b'/mydata/.ansible_tmpazyqyqdrfile_write' -> b'/mydata/file_write} ...ignoring TASK [Read written out file...] ****************************** 10:50:13 Fatal: [localhost]: FAILED: => {\"msg\": \"An unhandled exception occurred while running the lookup plugin 'file'. Error was a <class 'ansible.errors.AnsibleError;>, original message: could not locate file in lookup: /mydate/file_write. Vould not locate file in lookup: /mydate/file_write\"} ...ignoring", "[ \"/mydata:/mydata\" ]", "PLAY [all] **************************************************** 11:05:52 TASK [I'm feeling real sleepy. . .] *************************** 11:05:52 ok: [localhost] TASK [Read pre-existing file...] ****************************** 11:05:57 ok: [localhost] => { \"Msg\": \"This is the file I'm reading in.\" } TASK [Write to a new file...] ********************************** 11:05:57 ok: [localhost] TASK [Read written out file...] 
******************************** 11:05:58 ok: [localhost] => { \"Msg\": \"This is the file I've just written to.\"", "apiVersion: v1 kind: Pod Spec: containers: - image: registry.redhat.io/ansible-automation-platform-24/ee-minimal-rhel8 args: - ansible runner - worker - -private-data-dir=/runner volumeMounts: mountPath: /mnt2 name: volume-0 readOnly: true mountPath: /mnt3 name: volume-1 readOnly: true mountPath: /mnt4 name: volume-2 readOnly: true volumes: hostPath: path: /mnt2 type: \"\" name: volume-0 hostPath: path: /mnt3 type: \"\" name: volume-1 hostPath: path: /mnt4 type: \"\" name: volume-2" ]
https://docs.redhat.com/en/documentation/red_hat_ansible_automation_platform/2.4/html/red_hat_ansible_automation_platform_upgrade_and_migration_guide/converting-playbooks-for-aap2
Chapter 3. Usage
Chapter 3. Usage This chapter describes the necessary steps for rebuilding and using Red Hat Software Collections 3.3, and deploying applications that use Red Hat Software Collections. 3.1. Using Red Hat Software Collections 3.1.1. Running an Executable from a Software Collection To run an executable from a particular Software Collection, type the following command at a shell prompt: scl enable software_collection ... 'command ...' Or, alternatively, use the following command: scl enable software_collection ... -- command ... Replace software_collection with a space-separated list of Software Collections you want to use and command with the command you want to run. For example, to execute a Perl program stored in a file named hello.pl with the Perl interpreter from the rh-perl526 Software Collection, type: You can execute any command using the scl utility, causing it to be run with the executables from a selected Software Collection in preference to their possible Red Hat Enterprise Linux system equivalents. For a complete list of Software Collections that are distributed with Red Hat Software Collections, see Table 1.1, "Red Hat Software Collections 3.3 Components" . 3.1.2. Running a Shell Session with a Software Collection as Default To start a new shell session with executables from a selected Software Collection in preference to their Red Hat Enterprise Linux equivalents, type the following at a shell prompt: scl enable software_collection ... bash Replace software_collection with a space-separated list of Software Collections you want to use. For example, to start a new shell session with the python27 and rh-postgresql10 Software Collections as default, type: The list of Software Collections that are enabled in the current session is stored in the $X_SCLS environment variable, for instance: For a complete list of Software Collections that are distributed with Red Hat Software Collections, see Table 1.1, "Red Hat Software Collections 3.3 Components" . 3.1.3. Running a System Service from a Software Collection Running a System Service from a Software Collection in Red Hat Enterprise Linux 6 Software Collections that include system services install corresponding init scripts in the /etc/rc.d/init.d/ directory. To start such a service in the current session, type the following at a shell prompt as root : service software_collection-service_name start Replace software_collection with the name of the Software Collection and service_name with the name of the service you want to start. To configure this service to start automatically at boot time, type the following command as root : chkconfig software_collection-service_name on For example, to start the postgresql service from the rh-postgresql96 Software Collection and enable it in runlevels 2, 3, 4, and 5, type as root : For more information on how to manage system services in Red Hat Enterprise Linux 6, refer to the Red Hat Enterprise Linux 6 Deployment Guide . For a complete list of Software Collections that are distributed with Red Hat Software Collections, see Table 1.1, "Red Hat Software Collections 3.3 Components" . Running a System Service from a Software Collection in Red Hat Enterprise Linux 7 In Red Hat Enterprise Linux 7, init scripts have been replaced by systemd service unit files, which end with the .service file extension and serve a similar purpose as init scripts.
To start a service in the current session, execute the following command as root : systemctl start software_collection-service_name.service Replace software_collection with the name of the Software Collection and service_name with the name of the service you want to start. To configure this service to start automatically at boot time, type the following command as root : systemctl enable software_collection-service_name.service For example, to start the postgresql service from the rh-postgresql10 Software Collection and enable it at boot time, type as root : For more information on how to manage system services in Red Hat Enterprise Linux 7, refer to the Red Hat Enterprise Linux 7 System Administrator's Guide . For a complete list of Software Collections that are distributed with Red Hat Software Collections, see Table 1.1, "Red Hat Software Collections 3.3 Components" . 3.2. Accessing a Manual Page from a Software Collection Every Software Collection contains a general manual page that describes the content of the component. Each manual page has the same name as the component and it is located in the /opt/rh directory. To read a manual page for a Software Collection, type the following command: scl enable software_collection 'man software_collection' Replace software_collection with the particular Red Hat Software Collections component. For example, to display the manual page for rh-mariadb102 , type: 3.3. Deploying Applications That Use Red Hat Software Collections In general, you can use one of the following two approaches to deploy an application that depends on a component from Red Hat Software Collections in production: Install all required Software Collections and packages manually and then deploy your application, or Create a new Software Collection for your application and specify all required Software Collections and other packages as dependencies. For more information on how to manually install individual Red Hat Software Collections components, see Section 2.2, "Installing Red Hat Software Collections" . For further details on how to use Red Hat Software Collections, see Section 3.1, "Using Red Hat Software Collections" . For a detailed explanation of how to create a custom Software Collection or extend an existing one, read the Red Hat Software Collections Packaging Guide . A scripted sketch of the first approach follows the list of container images below. 3.4. Red Hat Software Collections Container Images Container images based on Red Hat Software Collections include applications, daemons, and databases. The images can be run on Red Hat Enterprise Linux 7 Server and Red Hat Enterprise Linux Atomic Host. For information about their usage, see Using Red Hat Software Collections 3 Container Images . For details regarding container images based on Red Hat Software Collections versions 2.4 and earlier, see Using Red Hat Software Collections 2 Container Images .
The following container images are available with Red Hat Software Collections 3.3: rhscl/mariadb-103-rhel7 rhscl/redis-5-rhel7 rhscl/ruby-26-rhel7 rhscl/devtoolset-8-toolchain-rhel7 rhscl/devtoolset-8-perftools-rhel7 rhscl/varnish-6-rhel7 rhscl/httpd-24-rhel7 The following container images are based on Red Hat Software Collections 3.2: rhscl/mysql-80-rhel7 rhscl/nginx-114-rhel7 rhscl/php-72-rhel7 The following container images are based on Red Hat Software Collections 3.1: rhscl/devtoolset-7-toolchain-rhel7 rhscl/devtoolset-7-perftools-rhel7 rhscl/mongodb-36-rhel7 rhscl/perl-526-rhel7 rhscl/php-70-rhel7 rhscl/postgresql-10-rhel7 rhscl/ruby-25-rhel7 rhscl/varnish-5-rhel7 The following container images are based on Red Hat Software Collections 3.0: rhscl/mariadb-102-rhel7 rhscl/mongodb-34-rhel7 rhscl/nginx-112-rhel7 rhscl/nodejs-8-rhel7 rhscl/php-71-rhel7 rhscl/postgresql-96-rhel7 rhscl/python-36-rhel7 The following container images are based on Red Hat Software Collections 2.4: rhscl/devtoolset-6-toolchain-rhel7 (EOL) rhscl/devtoolset-6-perftools-rhel7 (EOL) rhscl/nginx-110-rhel7 rhscl/nodejs-6-rhel7 (EOL) rhscl/python-27-rhel7 rhscl/ruby-24-rhel7 rhscl/ror-50-rhel7 rhscl/thermostat-16-agent-rhel7 (EOL) rhscl/thermostat-16-storage-rhel7 (EOL) The following container images are based on Red Hat Software Collections 2.3: rhscl/mysql-57-rhel7 rhscl/perl-524-rhel7 rhscl/redis-32-rhel7 rhscl/mongodb-32-rhel7 (EOL) rhscl/php-56-rhel7 (EOL) rhscl/python-35-rhel7 (EOL) rhscl/ruby-23-rhel7 (EOL) The following container images are based on Red Hat Software Collections 2.2: rhscl/devtoolset-4-toolchain-rhel7 (EOL) rhscl/devtoolset-4-perftools-rhel7 (EOL) rhscl/mariadb-101-rhel7 (EOL) rhscl/nginx-18-rhel7 (EOL) rhscl/nodejs-4-rhel7 (EOL) rhscl/postgresql-95-rhel7 (EOL) rhscl/ror-42-rhel7 (EOL) rhscl/thermostat-1-agent-rhel7 (EOL) rhscl/varnish-4-rhel7 (EOL) The following container images are based on Red Hat Software Collections 2.0: rhscl/mariadb-100-rhel7 (EOL) rhscl/mongodb-26-rhel7 (EOL) rhscl/mysql-56-rhel7 (EOL) rhscl/nginx-16-rhel7 (EOL) rhscl/passenger-40-rhel7 (EOL) rhscl/perl-520-rhel7 (EOL) rhscl/postgresql-94-rhel7 (EOL) rhscl/python-34-rhel7 (EOL) rhscl/ror-41-rhel7 (EOL) rhscl/ruby-22-rhel7 (EOL) rhscl/s2i-base-rhel7 Images marked as End of Life (EOL) are no longer supported.
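As a scripted illustration of the first deployment approach from Section 3.3, "Deploying Applications That Use Red Hat Software Collections" (installing the required Software Collections manually), the following wrapper script is a minimal sketch. It assumes the rh-perl526 and rh-postgresql10 collections from the earlier examples are already installed and uses the scl_source helper, shipped with the scl tooling, to enable them for the remainder of the script.
#!/bin/bash
# Enable the collections for this script without spawning a nested shell.
source scl_source enable rh-perl526 rh-postgresql10
# Both commands now resolve to the Software Collection versions first on the PATH.
perl hello.pl
psql --version
Compared with wrapping each command in scl enable software_collection 'command', this keeps the script readable and enables the collections exactly once.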
[ "~]USD scl enable rh-perl526 'perl hello.pl' Hello, World!", "~]USD scl enable python27 rh-postgresql10 bash", "~]USD echo USDX_SCLS python27 rh-postgresql10", "~]# service rh-postgresql96-postgresql start Starting rh-postgresql96-postgresql service: [ OK ] ~]# chkconfig rh-postgresql96-postgresql on", "~]# systemctl start rh-postgresql10-postgresql.service ~]# systemctl enable rh-postgresql10-postgresql.service", "~]USD scl enable rh-mariadb102 \"man rh-mariadb102\"" ]
https://docs.redhat.com/en/documentation/red_hat_software_collections/3/html/3.3_release_notes/chap-usage
Chapter 13. Topology View
Chapter 13. Topology View Use the Topology View to view node type, node health, and specific details about each node if you already have a mesh topology deployed. To access the topology viewer from the automation controller UI, you must have System Administrator permissions. For more information about automation mesh on a VM-based installation, see Automation mesh for VM environments . For more information about automation mesh on an operator-based installation, see Automation mesh for managed cloud or operator environments . 13.1. Accessing the topology viewer Use the following procedure to access the topology viewer from the automation controller UI. Procedure From the navigation panel, select Automation Execution Infrastructure Topology View . The Topology View opens and displays a graphical representation of how each receptor node links together. To adjust the zoom levels or manipulate the graphic views, use the zoom-in, zoom-out, expand, and reset control icons on the toolbar. You can also click and drag to pan around, and scroll using your mouse or trackpad to zoom. The fit-to-screen feature automatically scales the graphic to fit on the screen and repositions it in the center. It is particularly useful when you want to see a large mesh in its entirety. To reset the view to its default view, click the Reset view icon. Refer to the Legend to identify the type of nodes that are represented. For VM-based installations, see Control and execution planes . For operator-based installations, see Control and execution planes for more information about each type of node. The Legend shows the node status by color, which is indicative of the health of the node. An Error status in the Legend includes the Unavailable state (as displayed in the Instances list view) plus any future error conditions encountered in later versions of automation controller. The following link statuses are also shown in the Legend: Established : This is a link state that indicates a peer connection between nodes that are either ready, unavailable, or disabled. Adding : This is a link state indicating a peer connection between nodes that were selected to be added to the mesh topology. Removing : This is a link state indicating a peer connection between nodes that were selected to be removed from the topology. Hover over a node and the connectors highlight to show its immediately connected nodes (peers), or click a node to retrieve details about it, such as its hostname, node type, and status. Click the instance hostname link in the displayed details to be redirected to its Details page, which provides more information about that node, most notably about an Error status. You can use the Details page to remove the instance, run a health check on the instance on an as-needed basis, or unassign jobs from the instance. By default, jobs can be assigned to each node. However, you can disable this to exclude a node from having any jobs run on it. Additional resources For more information about creating new nodes and scaling the mesh, see Managing capacity with Instances .
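The same mesh information that the topology viewer renders is also available from the automation controller REST API, which can be useful for scripting checks against a large mesh. The endpoint paths below follow the AWX/automation controller API conventions and are an assumption; verify them against the API reference for your version, and replace the hostname, credentials, and instance ID with values from your environment.
$ curl -s -u admin:password https://controller.example.com/api/v2/mesh_visualizer/ | python3 -m json.tool
$ curl -s -u admin:password -X POST https://controller.example.com/api/v2/instances/3/health_check/
The first call returns the nodes and links drawn in the Topology View; the second triggers the same on-demand health check that is available from the instance Details page.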
null
https://docs.redhat.com/en/documentation/red_hat_ansible_automation_platform/2.5/html/using_automation_execution/assembly-controller-topology-viewer
Using the Hammer CLI tool
Using the Hammer CLI tool Red Hat Satellite 6.16 Administer Satellite or develop custom scripts by using Hammer, the Satellite command-line tool Red Hat Satellite Documentation Team [email protected]
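Because Hammer can produce machine-readable output (for example, with the --csv and --output options covered in this guide), it lends itself to small administration scripts. The loop below is only a sketch: it assumes your credentials are already configured in the Hammer configuration file, that the --no-headers option is available to suppress the title row, that field values contain no embedded commas, and that the first two CSV columns of hammer organization list are the ID and the name on your version.
$ hammer --csv --no-headers organization list | while IFS=',' read -r id name _; do echo "Organization ${id}: ${name}"; done
For anything more elaborate, hammer --output json combined with a JSON-aware tool is usually easier than parsing CSV by hand.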
[ "hammer --help", "hammer organization --help", "hammer | less", "dnf module enable satellite-utils:el8", "dnf install satellite-cli", ":host: 'https:// satellite.example.com '", ":foreman: :username: ' username ' :password: ' password '", "hammer -u username -p password subcommands", ":foreman: :use_sessions: true", "hammer auth login", "hammer settings set --name idle_timeout --value 30 Setting [idle_timeout] updated to [30]", "hammer auth status", "hammer auth logout", "hammer -d --version", ":log_level: 'warning' :log_size: 5 #in MB", ":per_page: 30", "hammer defaults add --param-name organization --param-value \"Your_Organization\"", "hammer defaults add --param-name location --param-value \"Your_Location\"", "hammer defaults list", ":log_level: 'debug'", "hammer shell", "hammer --csv --csv-separator \";\" organization list", "hammer --output output_format organization list", "hammer compute-profile values create --compute-profile-id 22 --compute-resource-id 1 --compute-attributes= '{ \"cpus\": 2, \"corespersocket\": 2, \"memory_mb\": 4096, \"firmware\": \"efi\", \"resource_pool\": \"Resources\", \"cluster\": \"Example_Cluster\", \"guest_id\": \"rhel8\", \"path\": \"/Datacenters/EXAMPLE/vm/\", \"hardware_version\": \"Default\", \"memoryHotAddEnabled\": 0, \"cpuHotAddEnabled\": 0, \"add_cdrom\": 0, \"boot_order\": [ \"disk\", \"network\" ], \"scsi_controllers\":[ { \"type\": \"ParaVirtualSCSIController\", \"key\":1000 }, { \"type\": \"ParaVirtualSCSIController\", \"key\":1001 } ] }'", "hammer ping database: Status: ok Server Response: Duration: 0ms cache: servers: 1) Status: ok Server Response: Duration: 1ms candlepin: Status: ok Server Response: Duration: 17ms candlepin_auth: Status: ok Server Response: Duration: 14ms candlepin_events: Status: ok message: 4 Processed, 0 Failed Server Response: Duration: 0ms katello_events: Status: ok message: 5 Processed, 0 Failed Server Response: Duration: 0ms pulp3: Status: ok Server Response: Duration: 5083ms pulp3_content: Status: ok Server Response: Duration: 5051ms foreman_tasks: Status: ok Server Response: Duration: 2ms", "hammer defaults add --param-name organization_id --param-value org_ID", "hammer defaults add --param-name location_id --param-value loc_ID", "hammer organization create --name org_name", "hammer organization list", "hammer subscription upload --file path", "hammer repository-set enable --product prod_name --basearch base_arch --releasever rel_v --name repo_name", "hammer repository synchronize --product prod_name --name repo_name", "hammer repository create --product prod_name --content-type cont_type --publish-via-http true --url repo_url --name repo_name", "hammer repository upload-content --product prod_name --id repo_id --path path_to_dir", "hammer lifecycle-environment create --name env_name --description env_desc --prior prior_env_name", "hammer lifecycle-environment list", "hammer content-view create --name cv_n --repository-ids repo_ID1,... --description cv_description", "hammer content-view add-repository --name cv_n --repository-id repo_ID", "hammer content-view puppet-module add --content-view cv_n --name module_name", "hammer content-view publish --id cv_ID", "hammer content-view version promote --content-view cv_n --to-lifecycle-environment env_name", "hammer content-view version incremental-update --content-view-version-id cv_ID --packages pkg_n1,... --lifecycle-environment-ids env_ID1,", "hammer domain create --name domain_name", "hammer subnet create --name subnet_name --organization-ids org_ID1,... 
--location-ids loc_ID1,... --domain-ids dom_ID1,... --boot-mode boot_mode --network network_address --mask netmask --ipam ipam", "hammer compute-resource create --name cr_name --organization-ids org_ID1,... --location-ids loc_ID1,... --provider provider_name", "hammer medium create --name med_name --path path_to_medium", "hammer partition-table create --name tab_name --path path_to_file --os-family os_family", "hammer template create --name tmp_name --file path_to_template", "hammer os create --name os_name --version version_num", "hammer activation-key create --name ak_name --content-view cv_n --lifecycle-environment lc_name", "hammer activation-key add-subscription --id ak_ID --subscription-id sub_ID", "hammer user create --login user_name --mail user_mail --auth-source-id 1 --organization-ids org_ID1,org_ID2,", "hammer user add-role --id user_id --role role_name", "hammer user-group create --name ug_name", "hammer user-group add-role --id ug_id --role role_name", "hammer role create --name role_name", "hammer filter create --role role_name --permission-ids perm_ID1,perm_ID2,", "hammer erratum list", "hammer erratum list --cve CVE", "hammer erratum info --id err_ID", "hammer host errata list --host host_name", "hammer host errata apply --host host_name --errata-ids err_ID1,err_ID2,", "hammer hostgroup create --name hg_name --puppet-environment env_name --architecture arch_name --domain domain_name --subnet subnet_name --puppet-proxy proxy_name --puppet-ca-proxy ca-proxy_name --operatingsystem os_name --partition-table table_name --medium medium_name --organization-ids org_ID1,... --location-ids loc_ID1,", "hammer hostgroup set-parameter --hostgroup \"hg_name\" --name \"kt_activation_keys\" --value key_name", "hammer host create --name host_name --hostgroup hg_name --interface=\"primary=true, mac= mac_addr , ip= ip_addr , provision=true\" --organization-id org_ID --location-id loc_ID --ask-root-password yes", "hammer host update --name host_name --hostgroup NIL", "hammer job-template create --file path --name template_name --provider-type SSH --job-category category_name", "hammer job-invocation create --job-template template_name --inputs key1= value,... 
--search-query query", "hammer job-invocation output --id job_id --host host_name", "hammer task list Monitor progress of a running task: hammer task progress --id task_ID", "hammer [OPTIONS] SUBCOMMAND [ARG]", "hammer activation-key [OPTIONS] SUBCOMMAND [ARG]", "hammer activation-key add-host-collection [OPTIONS]", "hammer activation-key add-subscription [OPTIONS]", "hammer activation-key content-override [OPTIONS]", "hammer activation-key copy [OPTIONS]", "hammer activation-key create [OPTIONS]", "hammer activation-key <delete|destroy> [OPTIONS]", "hammer activation-key host-collections [OPTIONS]", "hammer activation-key <info|show> [OPTIONS]", "hammer activation-key <list|index> [OPTIONS]", "hammer activation-key product-content [OPTIONS]", "hammer activation-key remove-host-collection [OPTIONS]", "hammer activation-key remove-subscription [OPTIONS]", "hammer activation-key subscriptions [OPTIONS]", "hammer activation-key update [OPTIONS]", "hammer admin [OPTIONS] SUBCOMMAND [ARG]", "hammer admin logging [OPTIONS]", "hammer alternate-content-source [OPTIONS] SUBCOMMAND [ARG]", "hammer alternate-content-source bulk [OPTIONS] SUBCOMMAND [ARG]", "hammer alternate-content-source bulk destroy [OPTIONS]", "hammer alternate-content-source bulk refresh [OPTIONS]", "hammer alternate-content-source bulk refresh-all [OPTIONS]", "hammer alternate-content-source create [OPTIONS]", "hammer alternate-content-source <delete|destroy> [OPTIONS]", "hammer alternate-content-source <info|show> [OPTIONS]", "hammer alternate-content-source <list|index> [OPTIONS]", "hammer alternate-content-source refresh [OPTIONS]", "hammer alternate-content-source update [OPTIONS]", "hammer ansible [OPTIONS] SUBCOMMAND [ARG]", "hammer ansible inventory [OPTIONS] SUBCOMMAND [ARG]", "hammer ansible inventory hostgroups [OPTIONS]", "hammer ansible inventory hosts [OPTIONS]", "hammer ansible inventory schedule [OPTIONS]", "hammer ansible roles [OPTIONS] SUBCOMMAND [ARG]", "hammer ansible roles <delete|destroy> [OPTIONS]", "hammer ansible roles fetch [OPTIONS]", "hammer ansible roles import [OPTIONS]", "hammer ansible roles <info|show> [OPTIONS]", "hammer ansible roles <list|index> [OPTIONS]", "hammer ansible roles obsolete [OPTIONS]", "hammer ansible roles play-hostgroups [OPTIONS]", "hammer ansible roles play-hosts [OPTIONS]", "hammer ansible roles sync [OPTIONS]", "hammer ansible variables [OPTIONS] SUBCOMMAND [ARG]", "hammer ansible variables add-matcher [OPTIONS]", "hammer ansible variables create [OPTIONS]", "hammer ansible variables <delete|destroy> [OPTIONS]", "hammer ansible variables import [OPTIONS]", "hammer ansible variables <info|show> [OPTIONS]", "hammer ansible variables <list|index> [OPTIONS]", "hammer ansible variables obsolete [OPTIONS]", "hammer ansible variables remove-matcher [OPTIONS]", "hammer ansible variables update [OPTIONS]", "hammer architecture [OPTIONS] SUBCOMMAND [ARG]", "hammer architecture add-operatingsystem [OPTIONS]", "hammer architecture create [OPTIONS]", "hammer architecture <delete|destroy> [OPTIONS]", "hammer architecture <info|show> [OPTIONS]", "hammer architecture <list|index> [OPTIONS]", "hammer architecture remove-operatingsystem [OPTIONS]", "hammer architecture update [OPTIONS]", "hammer arf-report [OPTIONS] SUBCOMMAND [ARG]", "hammer arf-report <delete|destroy> [OPTIONS]", "hammer arf-report download [OPTIONS]", "hammer arf-report download-html [OPTIONS]", "hammer arf-report <info|show> [OPTIONS]", "hammer arf-report <list|index> [OPTIONS]", "hammer audit [OPTIONS] SUBCOMMAND 
[ARG]", "hammer audit <info|show> [OPTIONS]", "hammer audit <list|index> [OPTIONS]", "hammer auth [OPTIONS] SUBCOMMAND [ARG]", "hammer auth login [OPTIONS] SUBCOMMAND [ARG]", "hammer auth login basic [OPTIONS]", "hammer auth login basic-external [OPTIONS]", "hammer auth login negotiate [OPTIONS]", "hammer auth login oauth [OPTIONS]", "hammer auth logout [OPTIONS]", "hammer auth status [OPTIONS]", "hammer auth-source [OPTIONS] SUBCOMMAND [ARG]", "hammer auth-source external [OPTIONS] SUBCOMMAND [ARG]", "hammer auth-source external <info|show> [OPTIONS]", "hammer auth-source external <list|index> [OPTIONS]", "hammer auth-source external update [OPTIONS]", "hammer auth-source ldap [OPTIONS] SUBCOMMAND [ARG]", "hammer auth-source ldap create [OPTIONS]", "hammer auth-source ldap <delete|destroy> [OPTIONS]", "hammer auth-source ldap <info|show> [OPTIONS]", "hammer auth-source ldap <list|index> [OPTIONS]", "hammer auth-source ldap update [OPTIONS]", "hammer auth-source <list|index> [OPTIONS]", "hammer bookmark [OPTIONS] SUBCOMMAND [ARG]", "hammer bookmark create [OPTIONS]", "hammer bookmark <delete|destroy> [OPTIONS]", "hammer bookmark <info|show> [OPTIONS]", "hammer bookmark <list|index> [OPTIONS]", "hammer bookmark update [OPTIONS]", "hammer bootdisk [OPTIONS] SUBCOMMAND [ARG]", "hammer bootdisk generic [OPTIONS]", "hammer bootdisk host [OPTIONS]", "hammer bootdisk subnet [OPTIONS]", "hammer capsule [OPTIONS] SUBCOMMAND [ARG]", "hammer capsule content [OPTIONS] SUBCOMMAND [ARG]", "hammer capsule content add-lifecycle-environment [OPTIONS]", "hammer capsule content available-lifecycle-environments [OPTIONS]", "hammer capsule content cancel-synchronization [OPTIONS]", "hammer capsule content info [OPTIONS]", "hammer capsule content lifecycle-environments [OPTIONS]", "hammer capsule content reclaim-space [OPTIONS]", "hammer capsule content remove-lifecycle-environment [OPTIONS]", "hammer capsule content synchronization-status [OPTIONS]", "hammer capsule content synchronize [OPTIONS]", "hammer capsule content update-counts [OPTIONS]", "hammer capsule content verify-checksum [OPTIONS]", "hammer capsule create [OPTIONS]", "hammer capsule <delete|destroy> [OPTIONS]", "hammer capsule import-subnets [OPTIONS]", "hammer capsule <info|show> [OPTIONS]", "hammer capsule <list|index> [OPTIONS]", "hammer capsule refresh-features [OPTIONS]", "hammer capsule update [OPTIONS]", "hammer compute-profile [OPTIONS] SUBCOMMAND [ARG]", "hammer compute-profile create [OPTIONS]", "hammer compute-profile <delete|destroy> [OPTIONS]", "hammer compute-profile <info|show> [OPTIONS]", "hammer compute-profile <list|index> [OPTIONS]", "hammer compute-profile update [OPTIONS]", "hammer compute-profile values [OPTIONS] SUBCOMMAND [ARG]", "hammer compute-profile values add-interface [OPTIONS]", "hammer compute-profile values add-volume [OPTIONS]", "hammer compute-profile values create [OPTIONS]", "hammer compute-profile values remove-interface [OPTIONS]", "hammer compute-profile values remove-volume [OPTIONS]", "hammer compute-profile values update [OPTIONS]", "hammer compute-profile values update-interface [OPTIONS]", "hammer compute-profile values update-volume [OPTIONS]", "hammer compute-resource [OPTIONS] SUBCOMMAND [ARG]", "hammer compute-resource associate-vms [OPTIONS]", "hammer compute-resource clusters [OPTIONS]", "hammer compute-resource create [OPTIONS]", "hammer compute-resource <delete|destroy> [OPTIONS]", "hammer compute-resource flavors [OPTIONS]", "hammer compute-resource folders [OPTIONS]", "hammer 
compute-resource image [OPTIONS] SUBCOMMAND [ARG]", "hammer compute-resource image available [OPTIONS]", "hammer compute-resource image create [OPTIONS]", "hammer compute-resource image <delete|destroy> [OPTIONS]", "hammer compute-resource image <info|show> [OPTIONS]", "hammer compute-resource image <list|index> [OPTIONS]", "hammer compute-resource image update [OPTIONS]", "hammer compute-resource images [OPTIONS]", "hammer compute-resource <info|show> [OPTIONS]", "hammer compute-resource <list|index> [OPTIONS]", "hammer compute-resource networks [OPTIONS]", "hammer compute-resource resource-pools [OPTIONS]", "hammer compute-resource security-groups [OPTIONS]", "hammer compute-resource storage-domains [OPTIONS]", "hammer compute-resource storage-pods [OPTIONS]", "hammer compute-resource update [OPTIONS]", "hammer compute-resource virtual-machine [OPTIONS] SUBCOMMAND [ARG]", "hammer compute-resource virtual-machine <delete|destroy> [OPTIONS]", "hammer compute-resource virtual-machine <info|show> [OPTIONS]", "hammer compute-resource virtual-machine power [OPTIONS]", "hammer compute-resource virtual-machines [OPTIONS]", "hammer compute-resource vnic-profiles [OPTIONS]", "hammer compute-resource zones [OPTIONS]", "hammer config-report [OPTIONS] SUBCOMMAND [ARG]", "hammer config-report <delete|destroy> [OPTIONS]", "hammer config-report <info|show> [OPTIONS]", "hammer config-report <list|index> [OPTIONS]", "hammer content-credentials [OPTIONS] SUBCOMMAND [ARG]", "hammer content-credentials create [OPTIONS]", "hammer content-credentials <delete|destroy> [OPTIONS]", "hammer content-credentials <info|show> [OPTIONS]", "hammer content-credentials <list|index> [OPTIONS]", "hammer content-credentials update [OPTIONS]", "hammer content-export [OPTIONS] SUBCOMMAND [ARG]", "hammer content-export complete [OPTIONS] SUBCOMMAND [ARG]", "hammer content-export complete library [OPTIONS]", "hammer content-export complete repository [OPTIONS]", "hammer content-export complete version [OPTIONS]", "hammer content-export generate-listing [OPTIONS]", "hammer content-export generate-metadata [OPTIONS]", "hammer content-export incremental [OPTIONS] SUBCOMMAND [ARG]", "hammer content-export incremental library [OPTIONS]", "hammer content-export incremental repository [OPTIONS]", "hammer content-export incremental version [OPTIONS]", "hammer content-export <list|index> [OPTIONS]", "hammer content-import [OPTIONS] SUBCOMMAND [ARG]", "hammer content-import library [OPTIONS]", "hammer content-import <list|index> [OPTIONS]", "hammer content-import repository [OPTIONS]", "hammer content-import version [OPTIONS]", "hammer content-units [OPTIONS] SUBCOMMAND [ARG]", "hammer content-units <info|show> [OPTIONS]", "hammer content-units <list|index> [OPTIONS]", "hammer content-view [OPTIONS] SUBCOMMAND [ARG]", "hammer content-view add-repository [OPTIONS]", "hammer content-view add-version [OPTIONS]", "hammer content-view component [OPTIONS] SUBCOMMAND [ARG]", "hammer content-view component add [OPTIONS]", "hammer content-view component <list|index> [OPTIONS]", "hammer content-view component remove [OPTIONS]", "hammer content-view component update [OPTIONS]", "hammer content-view copy [OPTIONS]", "hammer content-view create [OPTIONS]", "hammer content-view delete [OPTIONS]", "hammer content-view filter [OPTIONS] SUBCOMMAND [ARG]", "hammer content-view filter add-repository [OPTIONS]", "hammer content-view filter create [OPTIONS]", "hammer content-view filter <delete|destroy> [OPTIONS]", "hammer content-view filter <info|show> 
[OPTIONS]", "hammer content-view filter <list|index> [OPTIONS]", "hammer content-view filter remove-repository [OPTIONS]", "hammer content-view filter rule [OPTIONS] SUBCOMMAND [ARG]", "hammer content-view filter rule create [OPTIONS]", "hammer content-view filter rule <delete|destroy> [OPTIONS]", "hammer content-view filter rule <info|show> [OPTIONS]", "hammer content-view filter rule <list|index> [OPTIONS]", "hammer content-view filter rule update [OPTIONS]", "hammer content-view filter update [OPTIONS]", "hammer content-view <info|show> [OPTIONS]", "hammer content-view <list|index> [OPTIONS]", "hammer content-view publish [OPTIONS]", "hammer content-view purge [OPTIONS]", "hammer content-view remove [OPTIONS]", "hammer content-view remove-from-environment [OPTIONS]", "hammer content-view remove-repository [OPTIONS]", "hammer content-view remove-version [OPTIONS]", "hammer content-view update [OPTIONS]", "hammer content-view version [OPTIONS] SUBCOMMAND [ARG]", "hammer content-view version delete [OPTIONS]", "hammer content-view version incremental-update [OPTIONS]", "hammer content-view version <info|show> [OPTIONS]", "hammer content-view version <list|index> [OPTIONS]", "hammer content-view version promote [OPTIONS]", "hammer content-view version republish-repositories [OPTIONS]", "hammer content-view version update [OPTIONS]", "hammer content-view version verify-checksum [OPTIONS]", "hammer deb-package [OPTIONS] SUBCOMMAND [ARG]", "hammer deb-package <info|show> [OPTIONS]", "hammer deb-package <list|index> [OPTIONS]", "hammer defaults [OPTIONS] SUBCOMMAND [ARG]", "hammer defaults add [OPTIONS]", "hammer defaults delete [OPTIONS]", "hammer defaults list [OPTIONS]", "hammer defaults providers [OPTIONS]", "hammer discovery [OPTIONS] SUBCOMMAND [ARG]", "hammer discovery auto-provision [OPTIONS]", "hammer discovery <delete|destroy> [OPTIONS]", "hammer discovery facts [OPTIONS]", "hammer discovery <info|show> [OPTIONS]", "hammer discovery <list|index> [OPTIONS]", "hammer discovery provision [OPTIONS]", "hammer discovery reboot [OPTIONS]", "hammer discovery refresh-facts [OPTIONS]", "hammer discovery-rule [OPTIONS] SUBCOMMAND [ARG]", "hammer discovery-rule create [OPTIONS]", "hammer discovery-rule <delete|destroy> [OPTIONS]", "hammer discovery-rule <info|show> [OPTIONS]", "hammer discovery-rule <list|index> [OPTIONS]", "hammer discovery-rule update [OPTIONS]", "hammer docker [OPTIONS] SUBCOMMAND [ARG]", "hammer docker manifest [OPTIONS] SUBCOMMAND [ARG]", "hammer docker manifest <info|show> [OPTIONS]", "hammer docker manifest <list|index> [OPTIONS]", "hammer docker tag [OPTIONS] SUBCOMMAND [ARG]", "hammer docker tag <info|show> [OPTIONS]", "hammer docker tag <list|index> [OPTIONS]", "hammer domain [OPTIONS] SUBCOMMAND [ARG]", "hammer domain create [OPTIONS]", "hammer domain <delete|destroy> [OPTIONS]", "hammer domain delete-parameter [OPTIONS]", "hammer domain <info|show> [OPTIONS]", "hammer domain <list|index> [OPTIONS]", "hammer domain set-parameter [OPTIONS]", "hammer domain update [OPTIONS]", "hammer erratum [OPTIONS] SUBCOMMAND [ARG]", "hammer erratum info [OPTIONS]", "hammer erratum <list|index> [OPTIONS]", "hammer export-templates [OPTIONS]", "hammer fact [OPTIONS] SUBCOMMAND [ARG]", "hammer fact <list|index> [OPTIONS]", "hammer file [OPTIONS] SUBCOMMAND [ARG]", "hammer file <info|show> [OPTIONS]", "hammer file <list|index> [OPTIONS]", "hammer filter [OPTIONS] SUBCOMMAND [ARG]", "hammer filter available-permissions [OPTIONS]", "hammer filter available-resources [OPTIONS]", "hammer 
filter create [OPTIONS]", "hammer filter <delete|destroy> [OPTIONS]", "hammer filter <info|show> [OPTIONS]", "hammer filter <list|index> [OPTIONS]", "hammer filter update [OPTIONS]", "hammer foreign-input-set [OPTIONS] SUBCOMMAND [ARG]", "hammer foreign-input-set create [OPTIONS]", "hammer foreign-input-set <delete|destroy> [OPTIONS]", "hammer foreign-input-set <info|show> [OPTIONS]", "hammer foreign-input-set <list|index> [OPTIONS]", "hammer foreign-input-set update [OPTIONS]", "hammer full-help [OPTIONS]", "hammer global-parameter [OPTIONS] SUBCOMMAND [ARG]", "hammer global-parameter <delete|destroy> [OPTIONS]", "hammer global-parameter <list|index> [OPTIONS]", "hammer global-parameter set [OPTIONS]", "hammer host [OPTIONS] SUBCOMMAND [ARG]", "hammer host ansible-roles [OPTIONS] SUBCOMMAND [ARG]", "hammer host ansible-roles add [OPTIONS]", "hammer host ansible-roles assign [OPTIONS]", "hammer host ansible-roles <list|index> [OPTIONS]", "hammer host ansible-roles play [OPTIONS]", "hammer host ansible-roles remove [OPTIONS]", "hammer host boot [OPTIONS]", "hammer host config-reports [OPTIONS]", "hammer host create [OPTIONS]", "hammer host deb-package [OPTIONS] SUBCOMMAND [ARG]", "hammer host deb-package <list|index> [OPTIONS]", "hammer host <delete|destroy> [OPTIONS]", "hammer host delete-parameter [OPTIONS]", "hammer host disassociate [OPTIONS]", "hammer host enc-dump [OPTIONS]", "hammer host errata [OPTIONS] SUBCOMMAND [ARG]", "hammer host errata apply [OPTIONS]", "hammer host errata info [OPTIONS]", "hammer host errata list [OPTIONS]", "hammer host errata recalculate [OPTIONS]", "hammer host facts [OPTIONS]", "hammer host <info|show> [OPTIONS]", "hammer host interface [OPTIONS] SUBCOMMAND [ARG]", "hammer host interface create [OPTIONS]", "hammer host interface <delete|destroy> [OPTIONS]", "hammer host interface <info|show> [OPTIONS]", "hammer host interface <list|index> [OPTIONS]", "hammer host interface update [OPTIONS]", "hammer host <list|index> [OPTIONS]", "hammer host package [OPTIONS] SUBCOMMAND [ARG]", "hammer host package install [OPTIONS]", "hammer host package <list|index> [OPTIONS]", "hammer host package remove [OPTIONS]", "hammer host package upgrade [OPTIONS]", "hammer host package upgrade-all [OPTIONS]", "hammer host package-group [OPTIONS] SUBCOMMAND [ARG]", "hammer host package-group install [OPTIONS]", "hammer host package-group remove [OPTIONS]", "hammer host policies-enc [OPTIONS]", "hammer host reboot [OPTIONS]", "hammer host rebuild-config [OPTIONS]", "hammer host reports [OPTIONS]", "hammer host reset [OPTIONS]", "hammer host set-parameter [OPTIONS]", "hammer host start [OPTIONS]", "hammer host status [OPTIONS]", "hammer host stop [OPTIONS]", "hammer host subscription [OPTIONS] SUBCOMMAND [ARG]", "hammer host subscription attach [OPTIONS]", "hammer host subscription auto-attach [OPTIONS]", "hammer host subscription content-override [OPTIONS]", "hammer host subscription enabled-repositories [OPTIONS]", "hammer host subscription product-content [OPTIONS]", "hammer host subscription register [OPTIONS]", "hammer host subscription remove [OPTIONS]", "hammer host subscription unregister [OPTIONS]", "hammer host traces [OPTIONS] SUBCOMMAND [ARG]", "hammer host traces list [OPTIONS]", "hammer host traces resolve [OPTIONS]", "hammer host update [OPTIONS]", "hammer host-collection [OPTIONS] SUBCOMMAND [ARG]", "hammer host-collection add-host [OPTIONS]", "hammer host-collection copy [OPTIONS]", "hammer host-collection create [OPTIONS]", "hammer host-collection 
<delete|destroy> [OPTIONS]", "hammer host-collection erratum [OPTIONS] SUBCOMMAND [ARG]", "hammer host-collection erratum install [OPTIONS]", "hammer host-collection hosts [OPTIONS]", "hammer host-collection <info|show> [OPTIONS]", "hammer host-collection <list|index> [OPTIONS]", "hammer host-collection package [OPTIONS] SUBCOMMAND [ARG]", "hammer host-collection package install [OPTIONS]", "hammer host-collection package remove [OPTIONS]", "hammer host-collection package update [OPTIONS]", "hammer host-collection package-group [OPTIONS] SUBCOMMAND [ARG]", "hammer host-collection package-group install [OPTIONS]", "hammer host-collection package-group remove [OPTIONS]", "hammer host-collection package-group update [OPTIONS]", "hammer host-collection remove-host [OPTIONS]", "hammer host-collection update [OPTIONS]", "hammer host-registration [OPTIONS] SUBCOMMAND [ARG]", "hammer host-registration generate-command [OPTIONS]", "hammer hostgroup [OPTIONS] SUBCOMMAND [ARG]", "hammer hostgroup ansible-roles [OPTIONS] SUBCOMMAND [ARG]", "hammer hostgroup ansible-roles add [OPTIONS]", "hammer hostgroup ansible-roles assign [OPTIONS]", "hammer hostgroup ansible-roles <list|index> [OPTIONS]", "hammer hostgroup ansible-roles play [OPTIONS]", "hammer hostgroup ansible-roles remove [OPTIONS]", "hammer hostgroup create [OPTIONS]", "hammer hostgroup <delete|destroy> [OPTIONS]", "hammer hostgroup delete-parameter [OPTIONS]", "hammer hostgroup <info|show> [OPTIONS]", "hammer hostgroup <list|index> [OPTIONS]", "hammer hostgroup rebuild-config [OPTIONS]", "hammer hostgroup set-parameter [OPTIONS]", "hammer hostgroup update [OPTIONS]", "hammer http-proxy [OPTIONS] SUBCOMMAND [ARG]", "hammer http-proxy create [OPTIONS]", "hammer http-proxy <delete|destroy> [OPTIONS]", "hammer http-proxy <info|show> [OPTIONS]", "hammer http-proxy <list|index> [OPTIONS]", "hammer http-proxy update [OPTIONS]", "hammer import-templates [OPTIONS]", "hammer job-invocation [OPTIONS] SUBCOMMAND [ARG]", "hammer job-invocation cancel [OPTIONS]", "hammer job-invocation create [OPTIONS]", "hammer job-invocation <info|show> [OPTIONS]", "hammer job-invocation <list|index> [OPTIONS]", "hammer job-invocation output [OPTIONS]", "hammer job-invocation rerun [OPTIONS]", "hammer job-template [OPTIONS] SUBCOMMAND [ARG]", "hammer job-template create [OPTIONS]", "hammer job-template <delete|destroy> [OPTIONS]", "hammer job-template dump [OPTIONS]", "hammer job-template export [OPTIONS]", "hammer job-template import [OPTIONS]", "hammer job-template <info|show> [OPTIONS]", "hammer job-template <list|index> [OPTIONS]", "hammer job-template update [OPTIONS]", "hammer lifecycle-environment [OPTIONS] SUBCOMMAND [ARG]", "hammer lifecycle-environment create [OPTIONS]", "hammer lifecycle-environment <delete|destroy> [OPTIONS]", "hammer lifecycle-environment <info|show> [OPTIONS]", "hammer lifecycle-environment <list|index> [OPTIONS]", "hammer lifecycle-environment paths [OPTIONS]", "hammer lifecycle-environment update [OPTIONS]", "hammer location [OPTIONS] SUBCOMMAND [ARG]", "hammer location add-compute-resource [OPTIONS]", "hammer location add-domain [OPTIONS]", "hammer location add-hostgroup [OPTIONS]", "hammer location add-medium [OPTIONS]", "hammer location add-organization [OPTIONS]", "hammer location add-provisioning-template [OPTIONS]", "hammer location add-smart-proxy [OPTIONS]", "hammer location add-subnet [OPTIONS]", "hammer location add-user [OPTIONS]", "hammer location create [OPTIONS]", "hammer location <delete|destroy> [OPTIONS]", "hammer 
location delete-parameter [OPTIONS]", "hammer location <info|show> [OPTIONS]", "hammer location <list|index> [OPTIONS]", "hammer location remove-compute-resource [OPTIONS]", "hammer location remove-domain [OPTIONS]", "hammer location remove-hostgroup [OPTIONS]", "hammer location remove-medium [OPTIONS]", "hammer location remove-organization [OPTIONS]", "hammer location remove-provisioning-template [OPTIONS]", "hammer location remove-smart-proxy [OPTIONS]", "hammer location remove-subnet [OPTIONS]", "hammer location remove-user [OPTIONS]", "hammer location set-parameter [OPTIONS]", "hammer location update [OPTIONS]", "hammer mail-notification [OPTIONS] SUBCOMMAND [ARG]", "hammer mail-notification <info|show> [OPTIONS]", "hammer mail-notification <list|index> [OPTIONS]", "hammer medium [OPTIONS] SUBCOMMAND [ARG]", "hammer medium add-operatingsystem [OPTIONS]", "hammer medium create [OPTIONS]", "hammer medium <delete|destroy> [OPTIONS]", "hammer medium <info|show> [OPTIONS]", "hammer medium <list|index> [OPTIONS]", "hammer medium remove-operatingsystem [OPTIONS]", "hammer medium update [OPTIONS]", "hammer model [OPTIONS] SUBCOMMAND [ARG]", "hammer model create [OPTIONS]", "hammer model <delete|destroy> [OPTIONS]", "hammer model <info|show> [OPTIONS]", "hammer model <list|index> [OPTIONS]", "hammer model update [OPTIONS]", "hammer module-stream [OPTIONS] SUBCOMMAND [ARG]", "hammer module-stream <info|show> [OPTIONS]", "hammer module-stream <list|index> [OPTIONS]", "hammer organization [OPTIONS] SUBCOMMAND [ARG]", "hammer organization add-compute-resource [OPTIONS]", "hammer organization add-domain [OPTIONS]", "hammer organization add-hostgroup [OPTIONS]", "hammer organization add-location [OPTIONS]", "hammer organization add-medium [OPTIONS]", "hammer organization add-provisioning-template [OPTIONS]", "hammer organization add-smart-proxy [OPTIONS]", "hammer organization add-subnet [OPTIONS]", "hammer organization add-user [OPTIONS]", "hammer organization configure-cdn [OPTIONS]", "hammer organization create [OPTIONS]", "hammer organization <delete|destroy> [OPTIONS]", "hammer organization delete-parameter [OPTIONS]", "hammer organization <info|show> [OPTIONS]", "hammer organization <list|index> [OPTIONS]", "hammer organization remove-compute-resource [OPTIONS]", "hammer organization remove-domain [OPTIONS]", "hammer organization remove-hostgroup [OPTIONS]", "hammer organization remove-location [OPTIONS]", "hammer organization remove-medium [OPTIONS]", "hammer organization remove-provisioning-template [OPTIONS]", "hammer organization remove-smart-proxy [OPTIONS]", "hammer organization remove-subnet [OPTIONS]", "hammer organization remove-user [OPTIONS]", "hammer organization set-parameter [OPTIONS]", "hammer organization update [OPTIONS]", "hammer os [OPTIONS] SUBCOMMAND [ARG]", "hammer os add-architecture [OPTIONS]", "hammer os add-provisioning-template [OPTIONS]", "hammer os add-ptable [OPTIONS]", "hammer os create [OPTIONS]", "hammer os <delete|destroy> [OPTIONS]", "hammer os delete-default-template [OPTIONS]", "hammer os delete-parameter [OPTIONS]", "hammer os <info|show> [OPTIONS]", "hammer os <list|index> [OPTIONS]", "hammer os remove-architecture [OPTIONS]", "hammer os remove-provisioning-template [OPTIONS]", "hammer os remove-ptable [OPTIONS]", "hammer os set-default-template [OPTIONS]", "hammer os set-parameter [OPTIONS]", "hammer os update [OPTIONS]", "hammer package [OPTIONS] SUBCOMMAND [ARG]", "hammer package <info|show> [OPTIONS]", "hammer package <list|index> [OPTIONS]", "hammer 
package-group [OPTIONS] SUBCOMMAND [ARG]", "hammer package-group <info|show> [OPTIONS]", "hammer package-group <list|index> [OPTIONS]", "hammer partition-table [OPTIONS] SUBCOMMAND [ARG]", "hammer partition-table add-operatingsystem [OPTIONS]", "hammer partition-table create [OPTIONS]", "hammer partition-table <delete|destroy> [OPTIONS]", "hammer partition-table dump [OPTIONS]", "hammer partition-table export [OPTIONS]", "hammer partition-table import [OPTIONS]", "hammer partition-table <info|show> [OPTIONS]", "hammer partition-table <list|index> [OPTIONS]", "hammer partition-table remove-operatingsystem [OPTIONS]", "hammer partition-table update [OPTIONS]", "hammer ping [OPTIONS] [SUBCOMMAND] [ARG]", "hammer ping foreman [OPTIONS]", "hammer ping katello [OPTIONS]", "hammer policy [OPTIONS] SUBCOMMAND [ARG]", "hammer policy create [OPTIONS]", "hammer policy <delete|destroy> [OPTIONS]", "hammer policy hosts [OPTIONS]", "hammer policy <info|show> [OPTIONS]", "hammer policy <list|index> [OPTIONS]", "hammer policy update [OPTIONS]", "hammer prebuild-bash-completion [OPTIONS]", "hammer preupgrade-report [OPTIONS] SUBCOMMAND [ARG]", "hammer preupgrade-report <info|show> [OPTIONS]", "hammer preupgrade-report job-invocation [OPTIONS]", "hammer preupgrade-report <list|index> [OPTIONS]", "hammer product [OPTIONS] SUBCOMMAND [ARG]", "hammer product create [OPTIONS]", "hammer product <delete|destroy> [OPTIONS]", "hammer product <info|show> [OPTIONS]", "hammer product <list|index> [OPTIONS]", "hammer product remove-sync-plan [OPTIONS]", "hammer product set-sync-plan [OPTIONS]", "hammer product synchronize [OPTIONS]", "hammer product update [OPTIONS]", "hammer product update-proxy [OPTIONS]", "hammer product verify-checksum [OPTIONS]", "hammer proxy [OPTIONS] SUBCOMMAND [ARG]", "hammer proxy content [OPTIONS] SUBCOMMAND [ARG]", "hammer proxy content add-lifecycle-environment [OPTIONS]", "hammer proxy content available-lifecycle-environments [OPTIONS]", "hammer proxy content cancel-synchronization [OPTIONS]", "hammer proxy content info [OPTIONS]", "hammer proxy content lifecycle-environments [OPTIONS]", "hammer proxy content reclaim-space [OPTIONS]", "hammer proxy content remove-lifecycle-environment [OPTIONS]", "hammer proxy content synchronization-status [OPTIONS]", "hammer proxy content synchronize [OPTIONS]", "hammer proxy content update-counts [OPTIONS]", "hammer proxy content verify-checksum [OPTIONS]", "hammer proxy create [OPTIONS]", "hammer proxy <delete|destroy> [OPTIONS]", "hammer proxy import-subnets [OPTIONS]", "hammer proxy <info|show> [OPTIONS]", "hammer proxy <list|index> [OPTIONS]", "hammer proxy refresh-features [OPTIONS]", "hammer proxy update [OPTIONS]", "hammer realm [OPTIONS] SUBCOMMAND [ARG]", "hammer realm create [OPTIONS]", "hammer realm <delete|destroy> [OPTIONS]", "hammer realm <info|show> [OPTIONS]", "hammer realm <list|index> [OPTIONS]", "hammer realm update [OPTIONS]", "hammer recurring-logic [OPTIONS] SUBCOMMAND [ARG]", "hammer recurring-logic cancel [OPTIONS]", "hammer recurring-logic delete [OPTIONS]", "hammer recurring-logic <info|show> [OPTIONS]", "hammer recurring-logic <list|index> [OPTIONS]", "hammer remote-execution-feature [OPTIONS] SUBCOMMAND [ARG]", "hammer remote-execution-feature <info|show> [OPTIONS]", "hammer remote-execution-feature <list|index> [OPTIONS]", "hammer remote-execution-feature update [OPTIONS]", "hammer report [OPTIONS] SUBCOMMAND [ARG]", "hammer report <delete|destroy> [OPTIONS]", "hammer report <info|show> [OPTIONS]", "hammer report 
<list|index> [OPTIONS]", "hammer report-template [OPTIONS] SUBCOMMAND [ARG]", "hammer report-template clone [OPTIONS]", "hammer report-template create [OPTIONS]", "hammer report-template <delete|destroy> [OPTIONS]", "hammer report-template dump [OPTIONS]", "hammer report-template export [OPTIONS]", "hammer report-template generate [OPTIONS]", "hammer report-template import [OPTIONS]", "hammer report-template <info|show> [OPTIONS]", "hammer report-template <list|index> [OPTIONS]", "hammer report-template report-data [OPTIONS]", "hammer report-template schedule [OPTIONS]", "hammer report-template update [OPTIONS]", "hammer repository [OPTIONS] SUBCOMMAND [ARG]", "hammer repository create [OPTIONS]", "hammer repository <delete|destroy> [OPTIONS]", "hammer repository <info|show> [OPTIONS]", "hammer repository <list|index> [OPTIONS]", "hammer repository reclaim-space [OPTIONS]", "hammer repository remove-content [OPTIONS]", "hammer repository republish [OPTIONS]", "hammer repository synchronize [OPTIONS]", "hammer repository types [OPTIONS]", "hammer repository update [OPTIONS]", "hammer repository upload-content [OPTIONS]", "hammer repository verify-checksum [OPTIONS]", "hammer repository-set [OPTIONS] SUBCOMMAND [ARG]", "hammer repository-set available-repositories [OPTIONS]", "hammer repository-set disable [OPTIONS]", "hammer repository-set enable [OPTIONS]", "hammer repository-set <info|show> [OPTIONS]", "hammer repository-set <list|index> [OPTIONS]", "hammer role [OPTIONS] SUBCOMMAND [ARG]", "hammer role clone [OPTIONS]", "hammer role create [OPTIONS]", "hammer role <delete|destroy> [OPTIONS]", "hammer role filters [OPTIONS]", "hammer role <info|show> [OPTIONS]", "hammer role <list|index> [OPTIONS]", "hammer role update [OPTIONS]", "hammer scap-content [OPTIONS] SUBCOMMAND [ARG]", "hammer scap-content bulk-upload [OPTIONS]", "hammer scap-content create [OPTIONS]", "hammer scap-content <delete|destroy> [OPTIONS]", "hammer scap-content download [OPTIONS]", "hammer scap-content <info|show> [OPTIONS]", "hammer scap-content <list|index> [OPTIONS]", "hammer scap-content update [OPTIONS]", "hammer scap-content-profile [OPTIONS] SUBCOMMAND [ARG]", "hammer scap-content-profile <list|index> [OPTIONS]", "hammer settings [OPTIONS] SUBCOMMAND [ARG]", "hammer settings <info|show> [OPTIONS]", "hammer settings <list|index> [OPTIONS]", "hammer settings set [OPTIONS]", "hammer shell [OPTIONS]", "hammer srpm [OPTIONS] SUBCOMMAND [ARG]", "hammer srpm <info|show> [OPTIONS]", "hammer srpm <list|index> [OPTIONS]", "hammer status [OPTIONS] [SUBCOMMAND] [ARG]", "hammer status foreman [OPTIONS]", "hammer status katello [OPTIONS]", "hammer subnet [OPTIONS] SUBCOMMAND [ARG]", "hammer subnet create [OPTIONS]", "hammer subnet <delete|destroy> [OPTIONS]", "hammer subnet delete-parameter [OPTIONS]", "hammer subnet <info|show> [OPTIONS]", "hammer subnet <list|index> [OPTIONS]", "hammer subnet set-parameter [OPTIONS]", "hammer subnet update [OPTIONS]", "hammer subscription [OPTIONS] SUBCOMMAND [ARG]", "hammer subscription delete-manifest [OPTIONS]", "hammer subscription <list|index> [OPTIONS]", "hammer subscription manifest-history [OPTIONS]", "hammer subscription refresh-manifest [OPTIONS]", "hammer subscription upload [OPTIONS]", "hammer sync-plan [OPTIONS] SUBCOMMAND [ARG]", "hammer sync-plan create [OPTIONS]", "hammer sync-plan <delete|destroy> [OPTIONS]", "hammer sync-plan <info|show> [OPTIONS]", "hammer sync-plan <list|index> [OPTIONS]", "hammer sync-plan update [OPTIONS]", "hammer tailoring-file [OPTIONS] SUBCOMMAND 
[ARG]", "hammer tailoring-file create [OPTIONS]", "hammer tailoring-file <delete|destroy> [OPTIONS]", "hammer tailoring-file download [OPTIONS]", "hammer tailoring-file <info|show> [OPTIONS]", "hammer tailoring-file <list|index> [OPTIONS]", "hammer tailoring-file update [OPTIONS]", "hammer task [OPTIONS] SUBCOMMAND [ARG]", "hammer task <info|show> [OPTIONS]", "hammer task <list|index> [OPTIONS]", "hammer task progress [OPTIONS]", "hammer task resume [OPTIONS]", "hammer template [OPTIONS] SUBCOMMAND [ARG]", "hammer template add-operatingsystem [OPTIONS]", "hammer template build-pxe-default [OPTIONS]", "hammer template clone [OPTIONS]", "hammer template combination [OPTIONS] SUBCOMMAND [ARG]", "hammer template combination create [OPTIONS]", "hammer template combination <delete|destroy> [OPTIONS]", "hammer template combination <info|show> [OPTIONS]", "hammer template combination <list|index> [OPTIONS]", "hammer template combination update [OPTIONS]", "hammer template create [OPTIONS]", "hammer template <delete|destroy> [OPTIONS]", "hammer template dump [OPTIONS]", "hammer template export [OPTIONS]", "hammer template import [OPTIONS]", "hammer template <info|show> [OPTIONS]", "hammer template kinds [OPTIONS]", "hammer template <list|index> [OPTIONS]", "hammer template remove-operatingsystem [OPTIONS]", "hammer template update [OPTIONS]", "hammer template-input [OPTIONS] SUBCOMMAND [ARG]", "hammer template-input create [OPTIONS]", "hammer template-input <delete|destroy> [OPTIONS]", "hammer template-input <info|show> [OPTIONS]", "hammer template-input <list|index> [OPTIONS]", "hammer template-input update [OPTIONS]", "hammer user [OPTIONS] SUBCOMMAND [ARG]", "hammer user access-token [OPTIONS] SUBCOMMAND [ARG]", "hammer user access-token create [OPTIONS]", "hammer user access-token <info|show> [OPTIONS]", "hammer user access-token <list|index> [OPTIONS]", "hammer user access-token revoke [OPTIONS]", "hammer user add-role [OPTIONS]", "hammer user create [OPTIONS]", "hammer user <delete|destroy> [OPTIONS]", "hammer user <info|show> [OPTIONS]", "hammer user <list|index> [OPTIONS]", "hammer user mail-notification [OPTIONS] SUBCOMMAND [ARG]", "hammer user mail-notification add [OPTIONS]", "hammer user mail-notification <list|index> [OPTIONS]", "hammer user mail-notification remove [OPTIONS]", "hammer user mail-notification update [OPTIONS]", "hammer user remove-role [OPTIONS]", "hammer user ssh-keys [OPTIONS] SUBCOMMAND [ARG]", "hammer user ssh-keys add [OPTIONS]", "hammer user ssh-keys <delete|destroy> [OPTIONS]", "hammer user ssh-keys <info|show> [OPTIONS]", "hammer user ssh-keys <list|index> [OPTIONS]", "hammer user table-preference [OPTIONS] SUBCOMMAND [ARG]", "hammer user table-preference create [OPTIONS]", "hammer user table-preference <delete|destroy> [OPTIONS]", "hammer user table-preference <info|show> [OPTIONS]", "hammer user table-preference <list|index> [OPTIONS]", "hammer user table-preference update [OPTIONS]", "hammer user update [OPTIONS]", "hammer user-group [OPTIONS] SUBCOMMAND [ARG]", "hammer user-group add-role [OPTIONS]", "hammer user-group add-user [OPTIONS]", "hammer user-group add-user-group [OPTIONS]", "hammer user-group create [OPTIONS]", "hammer user-group <delete|destroy> [OPTIONS]", "hammer user-group external [OPTIONS] SUBCOMMAND [ARG]", "hammer user-group external create [OPTIONS]", "hammer user-group external <delete|destroy> [OPTIONS]", "hammer user-group external <info|show> [OPTIONS]", "hammer user-group external <list|index> [OPTIONS]", "hammer user-group external 
refresh [OPTIONS]", "hammer user-group external update [OPTIONS]", "hammer user-group <info|show> [OPTIONS]", "hammer user-group <list|index> [OPTIONS]", "hammer user-group remove-role [OPTIONS]", "hammer user-group remove-user [OPTIONS]", "hammer user-group remove-user-group [OPTIONS]", "hammer user-group update [OPTIONS]", "hammer virt-who-config [OPTIONS] SUBCOMMAND [ARG]", "hammer virt-who-config create [OPTIONS]", "hammer virt-who-config <delete|destroy> [OPTIONS]", "hammer virt-who-config deploy [OPTIONS]", "hammer virt-who-config fetch [OPTIONS]", "hammer virt-who-config <info|show> [OPTIONS]", "hammer virt-who-config <list|index> [OPTIONS]", "hammer virt-who-config update [OPTIONS]", "hammer webhook [OPTIONS] SUBCOMMAND [ARG]", "hammer webhook create [OPTIONS]", "hammer webhook <delete|destroy> [OPTIONS]", "hammer webhook <info|show> [OPTIONS]", "hammer webhook <list|index> [OPTIONS]", "hammer webhook update [OPTIONS]", "hammer webhook-template [OPTIONS] SUBCOMMAND [ARG]", "hammer webhook-template clone [OPTIONS]", "hammer webhook-template create [OPTIONS]", "hammer webhook-template <delete|destroy> [OPTIONS]", "hammer webhook-template dump [OPTIONS]", "hammer webhook-template export [OPTIONS]", "hammer webhook-template import [OPTIONS]", "hammer webhook-template <info|show> [OPTIONS]", "hammer webhook-template <list|index> [OPTIONS]", "hammer webhook-template update [OPTIONS]" ]
https://docs.redhat.com/en/documentation/red_hat_satellite/6.16/html-single/using_the_hammer_cli_tool/index
Chapter 2. Release notes
Chapter 2. Release notes 2.1. Red Hat OpenShift support for Windows Containers release notes The release notes for Red Hat OpenShift support for Windows Containers track the development of the Windows Machine Config Operator (WMCO), which provides all Windows container workload capabilities in OpenShift Container Platform. 2.1.1. Windows Machine Config Operator numbering Y-stream releases of the WMCO are in step with OpenShift Container Platform, with only z-stream releases between OpenShift Container Platform releases. The WMCO numbering reflects the associated OpenShift Container Platform version in the y-stream position. For example, the current release of WMCO is associated with OpenShift Container Platform version 4.16. Thus, the numbering is WMCO 10.16.z. 2.1.2. Release notes for Red Hat Windows Machine Config Operator 10.16.1 This release of the WMCO provides new features and bug fixes for running Windows compute nodes in an OpenShift Container Platform cluster. The components of the WMCO 10.16.1 were released in RHSA-2024:5749 . 2.1.2.1. Bug fixes Previously, if a Windows VM had its PowerShell ExecutionPolicy set to Restricted , the Windows Instance Config Daemon (WICD) could not run the commands on that VM that are necessary for creating Windows nodes. With this fix, the WICD now bypasses the execution policy on the VM when running PowerShell commands. As a result, the WICD can create Windows nodes on the VM as expected. ( OCPBUGS-37609 ) 2.2. Release notes for past releases of the Windows Machine Config Operator The following release notes are for previous versions of the Windows Machine Config Operator (WMCO). 2.2.1. Release notes for Red Hat Windows Machine Config Operator 10.16.0 This release of the WMCO provides bug fixes for running Windows compute nodes in an OpenShift Container Platform cluster. The components of the WMCO 10.16.0 were released in RHBA-2024:5014 . 2.2.1.1. New features and improvements 2.2.1.1.1. WMCO is now supported in disconnected networks The WMCO is now supported in environments with disconnected networks, that is, clusters that are intentionally impeded from reaching the internet, also known as restricted or air-gapped clusters. For more information, see Using Windows containers with a mirror registry . 2.2.1.1.2. WMCO can pull images from mirrored registries The WMCO can now use both ImageDigestMirrorSet (IDMS) and ImageTagMirrorSet (ITMS) objects to pull images from mirrored registries. For more information, see Understanding image registry repository mirroring . 2.2.1.1.3. Filesystem metrics now display for Windows nodes The Filesystem metrics are now available for Windows nodes in the Utilization tile of the Node details page in the OpenShift Container Platform web console. You can query the metrics by running Prometheus Query Language (PromQL) queries. The charts previously reported No datapoints found . 2.2.1.1.4. Pod network metrics now display for the pods on Windows nodes The Network in and Network out charts are now available for Windows pods on the Pod details page in the OpenShift Container Platform web console. You can query the metrics by running PromQL queries. The charts previously reported No datapoints found . 2.2.1.1.5. Pod CPU and memory metrics now display for the pods on Windows nodes The CPU and memory usage metrics are now available for Windows pods on the Pods and Pod details pages in the OpenShift Container Platform web console. You can query the metrics by running PromQL queries. The chart previously reported No datapoints found . 2.2.1.1.6.
Kubernetes upgrade The WMCO now uses Kubernetes 1.29. 2.2.1.2. Bug fixes Because the WICD service account was missing a required secret, the WMCO was unable to properly configure Windows nodes in a Nutanix cluster. With this fix, the WMCO creates a long-lived token secret for the WICD service account. As a result, the WMCO is able to configure a Windows node on Nutanix. ( OCPBUGS-22680 ) Previously, the WMCO performed a sanitization step that incorrectly replaced commas with semicolons in a user's cluster-wide proxy configuration. This behavior caused Windows to ignore the values set in the noProxy environment variable. As a consequence, the WMCO incorrectly sent traffic through the proxy for the endpoints specified in the no-proxy parameter. With this fix, the sanitization step that replaced commas with semicolons was removed. As a result, web requests from a Windows node to a cluster-internal endpoint or an endpoint that exists in the no-proxy parameter do not go through the proxy. ( OCPBUGS-24264 ) Previously, because of bad logic in the networking configuration script, the WMCO was incorrectly reading carriage returns in the containerd CNI configuration file as changes, and identified the file as modified. This behavior caused the CNI configuration to be unnecessarily reloaded, potentially resulting in container restarts and brief network outages. With this fix, the WMCO now reloads the CNI configuration only when the CNI configuration is actually modified. ( OCPBUGS-2887 ) Previously, because of routing issues present in Windows Server 2019, under certain conditions and after more than one hour of running time, workloads on Windows Server 2019 could have experienced packet loss when communicating with other containers in the cluster. This fix enables Direct Server Return (DSR) routing within kube-proxy. As a result, DSR now causes request and response traffic to use a different network path, circumventing the bug within Windows Server 2019. ( OCPBUGS-26761 ) Previously, the kubelet on Windows nodes was unable to authenticate with private Amazon Elastic Container Registries (ECR). Because of this error, the kubelet was not able to pull images from these registries. With this fix, the kubelet is able to pull images from these registries as expected. ( OCPBUGS-26602 ) Previously, on Azure clusters the WMCO would check if an external Cloud Controller Manager (CCM) was being used on the cluster. If a CCM was being used, the Operator would adjust configuration logic accordingly. Because the status condition that the WMCO used to check for the CCM was removed, the WMCO proceeded as if a CCM was not in use. This fix removes the check. As a result, the WMCO always configures the required logic on Azure clusters. ( OCPBUGS-31626 ) Previously, the WMCO logged error messages when a command that was run through an SSH connection to a Windows instance failed. This behavior was incorrect because some commands are expected to fail. For example, when the WMCO reboots a node, the Operator runs PowerShell commands on the instance until they fail, which indicates that the instance rebooted and the SSH connection closed as expected. With this fix, only actual errors are now logged. ( OCPBUGS-20255 ) Previously, after rotating the kube-apiserver-to-kubelet-client-ca certificate, the contents of the kubelet-ca.crt file on Windows nodes were not populated correctly. With this fix, after certificate rotation, the kubelet-ca.crt file contains the correct certificates.
( OCPBUGS-22237 ) Previously, because of a missing DNS suffix in the kubelet host name on instances that are part of a Windows Active Directory (AD) domain, the cloud provider failed to find VMs by name. With this fix, the DNS suffix is now included in the host name resolution. As a result, the WMCO is able to configure and join Windows instances that are part of an AD domain. ( OCPBUGS-34758 ) Previously, registry certificates provided to the cluster by a user were not loaded into the Windows trust store on each node. As a consequence, image pulls from a mirror registry failed, because a self-signed CA is required. With this fix, registry certificates are loaded into the Windows trust store on each node. As a result, images can be pulled from mirror registries with self-signed CAs. ( OCPBUGS-36408 ) Previously, if there were multiple service account token secrets in the WMCO namespace, scaling Windows nodes would fail. With this fix, the WMCO uses only the secret it creates, ignoring any other service account token secrets in the WMCO namespace. As a result, Windows nodes scale properly. ( OCPBUGS-37481 ) Previously, if reverse DNS lookup failed due to an error, such as the reverse DNS lookup services being unavailable, the WMCO would not fall back to using the VM hostname to determine if a certificate signing request (CSR) should be approved. As a consequence, Bring-Your-Own-Host (BYOH) Windows nodes configured with an IP address would not become available. With this fix, BYOH nodes are properly added if reverse DNS is not available. ( OCPBUGS-36643 ) 2.3. Windows Machine Config Operator prerequisites The following information details the supported platform versions, Windows Server versions, and networking configurations for the Windows Machine Config Operator. See the vSphere documentation for any information that is relevant to only that platform. 2.3.1. WMCO supported installation method The WMCO fully supports installing Windows nodes into installer-provisioned infrastructure (IPI) clusters. This is the preferred OpenShift Container Platform installation method. For user-provisioned infrastructure (UPI) clusters, the WMCO supports installing Windows nodes only into a UPI cluster installed with the platform: none field set in the install-config.yaml file (bare-metal or provider-agnostic) and only for the BYOH (Bring Your Own Host) use case. UPI is not supported for any other platform. 2.3.2. WMCO 10.16.0 supported platforms and Windows Server versions The following table lists the Windows Server versions that are supported by WMCO 10.16.0, based on the applicable platform. Windows Server versions not listed are not supported and attempting to use them will cause errors. To prevent these errors, use only an appropriate version for your platform. Platform Supported Windows Server version Amazon Web Services (AWS) Windows Server 2022, OS Build 20348.681 or later [1] Windows Server 2019, version 1809 Microsoft Azure Windows Server 2022, OS Build 20348.681 or later Windows Server 2019, version 1809 VMware vSphere Windows Server 2022, OS Build 20348.681 or later Google Cloud Platform (GCP) Windows Server 2022, OS Build 20348.681 or later Nutanix Windows Server 2022, OS Build 20348.681 or later Bare metal or provider agnostic Windows Server 2022, OS Build 20348.681 or later Windows Server 2019, version 1809 For disconnected clusters, the Windows AMI must have the EC2LaunchV2 agent version 2.0.1643 or later installed.
For more information, see the Install the latest version of EC2Launch v2 in the AWS documentation. 2.3.3. Supported networking Hybrid networking with OVN-Kubernetes is the only supported networking configuration. See the additional resources below for more information on this functionality. The following tables outline the type of networking configuration and Windows Server versions to use based on your platform. You must specify the network configuration when you install the cluster. Note The WMCO does not support OVN-Kubernetes without hybrid networking or OpenShift SDN. Dual NIC is not supported on WMCO-managed Windows instances. Table 2.1. Platform networking support Platform Supported networking Amazon Web Services (AWS) Hybrid networking with OVN-Kubernetes Microsoft Azure Hybrid networking with OVN-Kubernetes VMware vSphere Hybrid networking with OVN-Kubernetes with a custom VXLAN port Google Cloud Platform (GCP) Hybrid networking with OVN-Kubernetes Nutanix Hybrid networking with OVN-Kubernetes Bare metal or provider agnostic Hybrid networking with OVN-Kubernetes Table 2.2. Hybrid OVN-Kubernetes Windows Server support Hybrid networking with OVN-Kubernetes Supported Windows Server version Default VXLAN port Windows Server 2022, OS Build 20348.681 or later Windows Server 2019, version 1809 Custom VXLAN port Windows Server 2022, OS Build 20348.681 or later Additional resources Hybrid networking 2.4. Windows Machine Config Operator known limitations Note the following limitations when working with Windows nodes managed by the WMCO (Windows nodes): The following OpenShift Container Platform features are not supported on Windows nodes: Image builds OpenShift Pipelines OpenShift Service Mesh OpenShift monitoring of user-defined projects OpenShift Serverless Horizontal Pod Autoscaling Vertical Pod Autoscaling The following Red Hat features are not supported on Windows nodes: Red Hat Insights cost management Red Hat OpenShift Local Dual NIC is not supported on WMCO-managed Windows instances. Windows nodes do not support workloads created by using deployment configs. You can use a deployment or other method to deploy workloads. Red Hat OpenShift support for Windows Containers does not support adding Windows nodes to a cluster through a trunk port. The only supported networking configuration for adding Windows nodes is through an access port that carries traffic for the VLAN. Red Hat OpenShift support for Windows Containers does not support any Windows operating system language other than English (United States). Due to a limitation within the Windows operating system, clusterNetwork CIDR addresses of class E, such as 240.0.0.0 , are not compatible with Windows nodes. Kubernetes has identified the following node feature limitations : Huge pages are not supported for Windows containers. Privileged containers are not supported for Windows containers. Kubernetes has identified several API compatibility issues .
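As a quick check against the networking and node prerequisites described above, the following commands sketch how you might confirm that hybrid OVN-Kubernetes networking is configured and that Windows nodes have joined the cluster. This is a minimal illustration rather than part of the release notes; the jsonpath field and namespace reflect common defaults and should be verified against your cluster.

# Show the hybrid overlay configuration on the cluster Network operator resource
# (empty output suggests hybrid networking is not configured)
oc get network.operator.openshift.io cluster -o jsonpath='{.spec.defaultNetwork.ovnKubernetesConfig.hybridOverlayConfig}'

# List Windows nodes by the standard kubernetes.io/os node label
oc get nodes -l kubernetes.io/os=windows

# Confirm the WMCO is running in its default namespace
oc get pods -n openshift-windows-machine-config-operator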
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.16/html/windows_container_support_for_openshift/release-notes
2.4. perf
2.4. perf The perf tool uses hardware performance counters and kernel tracepoints to track the impact of other commands and applications on your system. Various perf subcommands display and record statistics for common performance events, and analyze and report on the data recorded. For detailed information about perf and its subcommands, see Section A.6, "perf" . Alternatively, more information is available in the Red Hat Enterprise Linux 7 Developer Guide .
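The following commands illustrate the most common perf subcommands mentioned above; the profiled command, duration, and options are examples only.

# Count common hardware and software events for a single command
perf stat -- dd if=/dev/zero of=/dev/null bs=1M count=1000

# Sample the whole system with call graphs for 10 seconds, then browse the recorded data
perf record -g -a sleep 10
perf report

# List the events that perf can monitor on this system
perf list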
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/performance_tuning_guide/sect-red_hat_enterprise_linux-performance_tuning_guide-performance_monitoring_tools-perf
8.2.6.2. Testing Backups
8.2.6.2. Testing Backups Every type of backup should be tested periodically to make sure that data can be read from it. Backups are sometimes produced that are, for one reason or another, unreadable, and this is often not discovered until data has been lost and must be restored from backup. The causes range from changes in tape drive head alignment to misconfigured backup software to operator error. No matter what the cause, without periodic testing you cannot be sure that you are actually generating backups from which data can be restored at some later time.
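As a minimal illustration of such a periodic read test for a tar-format backup, the commands below are a sketch only: the tape device, archive member, and paths are placeholders, and a thorough test should also restore a representative sample of files and compare them to the originals.

# Verify that the archive on the tape device can be read end to end (the listing is discarded)
tar -tvf /dev/st0 > /dev/null

# Restore a sample file into a scratch directory and compare it against the live copy
mkdir -p /tmp/restore-test
tar -xvf /dev/st0 -C /tmp/restore-test etc/hosts
diff /tmp/restore-test/etc/hosts /etc/hosts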
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/introduction_to_system_administration/s3-disaster-backups-restore-testing
4.6.3. Deleting a Fence Device
4.6.3. Deleting a Fence Device Note Fence devices that are in use cannot be deleted. To delete a fence device that a node is currently using, first update the node fence configuration for any node using the device and then delete the device. To delete a fence device, follow these steps: From the Fence Devices configuration page, check the box to the left of the fence device or devices to select the devices to delete. Click Delete and wait for the configuration to be updated. A message appears indicating which devices are being deleted. When the configuration has been updated, the deleted fence device no longer appears in the display.
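Conga is the graphical path described above; on clusters administered with the ccs command-line tool, a roughly equivalent operation might look like the following sketch. The host name and fence device name are placeholders, and the exact ccs options should be confirmed against the ccs(8) man page for your release.

# List the fence devices currently defined in the cluster configuration
ccs -h node01.example.com --lsfencedev

# Remove the fence device (only after no node fence method references it)
ccs -h node01.example.com --rmfencedev apc-switch-01

# Propagate and activate the updated configuration on all cluster nodes
ccs -h node01.example.com --sync --activate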
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/cluster_administration/s2-delete-fence-devices-conga-ca
8.5. Snapshot Previews
8.5. Snapshot Previews To select which snapshot a virtual disk will be reverted to, the administrator can preview all previously created snapshots. From the available snapshots per guest, the administrator can select a snapshot volume to preview its contents. As depicted in Preview Snapshot , each snapshot is saved as a COW volume, and when it is previewed, a new preview layer is copied from the snapshot being previewed. The guest interacts with the preview instead of the actual snapshot volume. After the administrator previews the selected snapshot, the preview can be committed to restore the guest data to the state captured in the snapshot. If the administrator commits the preview, the guest is attached to the preview layer. After a snapshot is previewed, the administrator can select Undo to discard the preview layer of the viewed snapshot. The layer that contains the snapshot itself is preserved despite the preview layer being discarded. Figure 8.3. Preview Snapshot
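The preview mechanism described above can be illustrated with generic QEMU copy-on-write images. This is a conceptual sketch only, not the commands Red Hat Virtualization runs internally, and the file names are placeholders.

# Create a preview layer backed by the snapshot volume; writes land in the preview, the snapshot stays read-only
qemu-img create -f qcow2 -b snapshot.qcow2 preview.qcow2

# Inspect the resulting chain: preview.qcow2 -> snapshot.qcow2
qemu-img info --backing-chain preview.qcow2

# "Undo" amounts to discarding the preview layer; the underlying snapshot volume is preserved
rm preview.qcow2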
null
https://docs.redhat.com/en/documentation/red_hat_virtualization/4.4/html/technical_reference/snapshot_previews
B.7. Bmap Tracepoints
B.7. Bmap Tracepoints Block mapping is a task central to any file system. GFS2 uses a traditional bitmap-based system with two bits per block. The main purpose of the tracepoints in this subsystem is to allow monitoring of the time taken to allocate and map blocks. The gfs2_bmap tracepoint is called twice for each bmap operation: once at the start to display the bmap request, and once at the end to display the result. This makes it easy to match the requests and results together and measure the time taken to map blocks in different parts of the file system, at different file offsets, or even for different files. It is also possible to compare the average extent sizes being returned with those being requested. The gfs2_rs tracepoint traces block reservations as they are created, used, and destroyed in the block allocator. To keep track of allocated blocks, gfs2_block_alloc is called not only on allocations, but also on freeing of blocks. Since the allocations are all referenced according to the inode for which the block is intended, this can be used to track which physical blocks belong to which files in a live file system. This is particularly useful when combined with blktrace , which will show problematic I/O patterns that may then be referred back to the relevant inodes using the mapping gained by means of this tracepoint.
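The tracepoints discussed above can be enabled through the standard ftrace interface. The following sketch assumes the usual debugfs mount point at /sys/kernel/debug; the gfs2 event names are those given in this section.

# Enable the bmap and block allocation tracepoints
echo 1 > /sys/kernel/debug/tracing/events/gfs2/gfs2_bmap/enable
echo 1 > /sys/kernel/debug/tracing/events/gfs2/gfs2_block_alloc/enable

# Stream the trace output while exercising the file system
cat /sys/kernel/debug/tracing/trace_pipe

# Disable the tracepoints when finished
echo 0 > /sys/kernel/debug/tracing/events/gfs2/gfs2_bmap/enable
echo 0 > /sys/kernel/debug/tracing/events/gfs2/gfs2_block_alloc/enable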
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/global_file_system_2/ap-bmap-tracepoints-gfs2
3.3.3. Physical Host Devices
3.3.3. Physical Host Devices Certain hardware platforms allow virtual machines to directly access various hardware devices and components. In virtualization, this process is known as device assignment , or passthrough . PCI device assignment The KVM hypervisor supports attaching PCI devices on the host system to virtual machines. PCI device assignment provides guests with exclusive access to PCI devices for a range of tasks. It enables PCI devices to appear and behave as if they were physically attached to the guest virtual machine. Device assignment is supported on PCI Express devices, with the exception of graphics cards. Parallel PCI devices may be supported as assigned devices, but they have severe limitations due to security and system configuration conflicts. Note For more information on device assignment, refer to the Red Hat Enterprise Linux 6 Virtualization Host Configuration and Guest Installation Guide . USB passthrough The KVM hypervisor supports attaching USB devices on the host system to virtual machines. USB device assignment makes it possible for guests to have exclusive access to USB devices for a range of tasks. It also enables USB devices to appear and behave as if they were physically attached to the virtual machine. Note For more information on USB passthrough, refer to the Red Hat Enterprise Linux 6 Virtualization Administration Guide . SR-IOV SR-IOV (Single Root I/O Virtualization) is a PCI Express standard that extends a single physical PCI function to share its PCI resources as separate, virtual functions (VFs). Each function is capable of being used by a different virtual machine through PCI device assignment. An SR-IOV-capable PCI-e device provides a Single Root Function (for example, a single Ethernet port) and presents multiple, separate virtual devices as unique PCI device functions. Each virtual device may have its own unique PCI configuration space, memory-mapped registers, and individual MSI-based interrupts. Note For more information on SR-IOV, refer to the Red Hat Enterprise Linux 6 Virtualization Host Configuration and Guest Installation Guide . NPIV N_Port ID Virtualization (NPIV) is a functionality available with some Fibre Channel devices. NPIV shares a single physical N_Port as multiple N_Port IDs. NPIV provides similar functionality for Fibre Channel Host Bus Adapters (HBAs) to what SR-IOV provides for PCIe interfaces. With NPIV, virtual machines can be provided with a virtual Fibre Channel initiator to Storage Area Networks (SANs). NPIV can provide high density virtualized environments with enterprise-level storage solutions. Note For more information on NPIV, refer to the Red Hat Enterprise Linux 6 Virtualization Administration Guide .
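As a concrete illustration of the PCI device assignment concept described above using libvirt tooling, the commands below are a sketch only: the guest name, device name, and XML file are placeholders, and the full procedure for your platform is covered in the guides referenced in the notes.

# List host PCI devices that libvirt knows about
virsh nodedev-list --cap pci

# Show the address details of a candidate device (placeholder device name)
virsh nodedev-dumpxml pci_0000_03_00_0

# Detach the device from its host driver, then attach it to a guest using a prepared hostdev XML snippet
virsh nodedev-dettach pci_0000_03_00_0
virsh attach-device guest1 pci-hostdev.xml --config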
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/virtualization_getting_started_guide/sec-host
Chapter 12. Installing a cluster on GCP in a restricted network with user-provisioned infrastructure
Chapter 12. Installing a cluster on GCP in a restricted network with user-provisioned infrastructure In OpenShift Container Platform version 4.15, you can install a cluster on Google Cloud Platform (GCP) that uses infrastructure that you provide and an internal mirror of the installation release content. Important While you can install an OpenShift Container Platform cluster by using mirrored installation release content, your cluster still requires internet access to use the GCP APIs. The steps for performing a user-provisioned infrastructure installation are outlined here. Several Deployment Manager templates are provided to assist in completing these steps or to help model your own. You are also free to create the required resources through other methods. Important The steps for performing a user-provisioned infrastructure installation are provided as an example only. Installing a cluster with infrastructure you provide requires knowledge of the cloud provider and the installation process of OpenShift Container Platform. Several Deployment Manager templates are provided to assist in completing these steps or to help model your own. You are also free to create the required resources through other methods; the templates are just an example. 12.1. Prerequisites You reviewed details about the OpenShift Container Platform installation and update processes. You read the documentation on selecting a cluster installation method and preparing it for users . You created a registry on your mirror host and obtained the imageContentSources data for your version of OpenShift Container Platform. Important Because the installation media is on the mirror host, you can use that computer to complete all installation steps. If you use a firewall, you configured it to allow the sites that your cluster requires access to. While you might need to grant access to more sites, you must grant access to *.googleapis.com and accounts.google.com . If the cloud identity and access management (IAM) APIs are not accessible in your environment, or if you do not want to store an administrator-level credential secret in the kube-system namespace, you can manually create and maintain long-term credentials . 12.2. About installations in restricted networks In OpenShift Container Platform 4.15, you can perform an installation that does not require an active connection to the internet to obtain software components. Restricted network installations can be completed using installer-provisioned infrastructure or user-provisioned infrastructure, depending on the cloud platform to which you are installing the cluster. If you choose to perform a restricted network installation on a cloud platform, you still require access to its cloud APIs. Some cloud functions, like Amazon Web Services' Route 53 DNS and IAM services, require internet access. Depending on your network, you might require less internet access for an installation on bare metal hardware, Nutanix, or on VMware vSphere. To complete a restricted network installation, you must create a registry that mirrors the contents of the OpenShift image registry and contains the installation media. You can create this registry on a mirror host, which can access both the internet and your closed network, or by using other methods that meet your restrictions.
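The mirror registry prerequisite described above is typically populated with the oc adm release mirror command, which also prints the imageContentSources stanza that is later added to install-config.yaml. The following invocation is a sketch: the registry host, repository, and release version are placeholders and must match your mirror environment.

# Mirror the release payload into the registry on the mirror host
oc adm release mirror \
  --from=quay.io/openshift-release-dev/ocp-release:4.15.0-x86_64 \
  --to=mirror.example.com:5000/ocp4/openshift4 \
  --to-release-image=mirror.example.com:5000/ocp4/openshift4:4.15.0-x86_64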
Important Because of the complexity of the configuration for user-provisioned installations, consider completing a standard user-provisioned infrastructure installation before you attempt a restricted network installation using user-provisioned infrastructure. Completing this test installation might make it easier to isolate and troubleshoot any issues that might arise during your installation in a restricted network. 12.2.1. Additional limits Clusters in restricted networks have the following additional limitations and restrictions: The ClusterVersion status includes an Unable to retrieve available updates error. By default, you cannot use the contents of the Developer Catalog because you cannot access the required image stream tags. 12.3. Internet access for OpenShift Container Platform In OpenShift Container Platform 4.15, you require access to the internet to obtain the images that are necessary to install your cluster. You must have internet access to: Access OpenShift Cluster Manager to download the installation program and perform subscription management. If the cluster has internet access and you do not disable Telemetry, that service automatically entitles your cluster. Access Quay.io to obtain the packages that are required to install your cluster. Obtain the packages that are required to perform cluster updates. 12.4. Configuring your GCP project Before you can install OpenShift Container Platform, you must configure a Google Cloud Platform (GCP) project to host it. 12.4.1. Creating a GCP project To install OpenShift Container Platform, you must create a project in your Google Cloud Platform (GCP) account to host the cluster. Procedure Create a project to host your OpenShift Container Platform cluster. See Creating and Managing Projects in the GCP documentation. Important Your GCP project must use the Premium Network Service Tier if you are using installer-provisioned infrastructure. The Standard Network Service Tier is not supported for clusters installed using the installation program. The installation program configures internal load balancing for the api-int.<cluster_name>.<base_domain> URL; the Premium Tier is required for internal load balancing. 12.4.2. Enabling API services in GCP Your Google Cloud Platform (GCP) project requires access to several API services to complete OpenShift Container Platform installation. Prerequisites You created a project to host your cluster. Procedure Enable the following required API services in the project that hosts your cluster. You may also enable optional API services which are not required for installation. See Enabling services in the GCP documentation. Table 12.1. Required API services API service Console service name Compute Engine API compute.googleapis.com Cloud Resource Manager API cloudresourcemanager.googleapis.com Google DNS API dns.googleapis.com IAM Service Account Credentials API iamcredentials.googleapis.com Identity and Access Management (IAM) API iam.googleapis.com Service Usage API serviceusage.googleapis.com Table 12.2. Optional API services API service Console service name Google Cloud APIs cloudapis.googleapis.com Service Management API servicemanagement.googleapis.com Google Cloud Storage JSON API storage-api.googleapis.com Cloud Storage storage-component.googleapis.com 12.4.3. Configuring DNS for GCP To install OpenShift Container Platform, the Google Cloud Platform (GCP) account you use must have a dedicated public hosted zone in the same project that you host the OpenShift Container Platform cluster. 
This zone must be authoritative for the domain. The DNS service provides cluster DNS resolution and name lookup for external connections to the cluster. Procedure Identify your domain, or subdomain, and registrar. You can transfer an existing domain and registrar or obtain a new one through GCP or another source. Note If you purchase a new domain, it can take time for the relevant DNS changes to propagate. For more information about purchasing domains through Google, see Google Domains . Create a public hosted zone for your domain or subdomain in your GCP project. See Creating public zones in the GCP documentation. Use an appropriate root domain, such as openshiftcorp.com , or subdomain, such as clusters.openshiftcorp.com . Extract the new authoritative name servers from the hosted zone records. See Look up your Cloud DNS name servers in the GCP documentation. You typically have four name servers. Update the registrar records for the name servers that your domain uses. For example, if you registered your domain to Google Domains, see the following topic in the Google Domains Help: How to switch to custom name servers . If you migrated your root domain to Google Cloud DNS, migrate your DNS records. See Migrating to Cloud DNS in the GCP documentation. If you use a subdomain, follow your company's procedures to add its delegation records to the parent domain. This process might include a request to your company's IT department or the division that controls the root domain and DNS services for your company. 12.4.4. GCP account limits The OpenShift Container Platform cluster uses a number of Google Cloud Platform (GCP) components, but the default Quotas do not affect your ability to install a default OpenShift Container Platform cluster. A default cluster, which contains three compute and three control plane machines, uses the following resources. Note that some resources are required only during the bootstrap process and are removed after the cluster deploys. Table 12.3. GCP resources used in a default cluster Service Component Location Total resources required Resources removed after bootstrap Service account IAM Global 6 1 Firewall rules Networking Global 11 1 Forwarding rules Compute Global 2 0 Health checks Compute Global 2 0 Images Compute Global 1 0 Networks Networking Global 1 0 Routers Networking Global 1 0 Routes Networking Global 2 0 Subnetworks Compute Global 2 0 Target pools Networking Global 2 0 Note If any of the quotas are insufficient during installation, the installation program displays an error that states both which quota was exceeded and the region. Be sure to consider your actual cluster size, planned cluster growth, and any usage from other clusters that are associated with your account. The CPU, static IP addresses, and persistent disk SSD (storage) quotas are the ones that are most likely to be insufficient. If you plan to deploy your cluster in one of the following regions, you will exceed the maximum storage quota and are likely to exceed the CPU quota limit: asia-east2 asia-northeast2 asia-south1 australia-southeast1 europe-north1 europe-west2 europe-west3 europe-west6 northamerica-northeast1 southamerica-east1 us-west2 You can increase resource quotas from the GCP console , but you might need to file a support ticket. Be sure to plan your cluster size early so that you can allow time to resolve the support ticket before you install your OpenShift Container Platform cluster. 12.4.5. 
Creating a service account in GCP OpenShift Container Platform requires a Google Cloud Platform (GCP) service account that provides authentication and authorization to access data in the Google APIs. If you do not have an existing IAM service account that contains the required roles in your project, you must create one. Prerequisites You created a project to host your cluster. Procedure Create a service account in the project that you use to host your OpenShift Container Platform cluster. See Creating a service account in the GCP documentation. Grant the service account the appropriate permissions. You can either grant the individual permissions that follow or assign the Owner role to it. See Granting roles to a service account for specific resources . Note While making the service account an owner of the project is the easiest way to gain the required permissions, it means that service account has complete control over the project. You must determine if the risk that comes from offering that power is acceptable. You can create the service account key in JSON format, or attach the service account to a GCP virtual machine. See Creating service account keys and Creating and enabling service accounts for instances in the GCP documentation. Note If you use a virtual machine with an attached service account to create your cluster, you must set credentialsMode: Manual in the install-config.yaml file before installation. 12.4.6. Required GCP roles When you attach the Owner role to the service account that you create, you grant that service account all permissions, including those that are required to install OpenShift Container Platform. If your organization's security policies require a more restrictive set of permissions, you can create a service account with the following permissions. If you deploy your cluster into an existing virtual private cloud (VPC), the service account does not require certain networking permissions, which are noted in the following lists: Required roles for the installation program Compute Admin Role Administrator Security Admin Service Account Admin Service Account Key Admin Service Account User Storage Admin Required roles for creating network resources during installation DNS Administrator Required roles for using the Cloud Credential Operator in passthrough mode Compute Load Balancer Admin Required roles for user-provisioned GCP infrastructure Deployment Manager Editor The following roles are applied to the service accounts that the control plane and compute machines use: Table 12.4. GCP service account roles Account Roles Control Plane roles/compute.instanceAdmin roles/compute.networkAdmin roles/compute.securityAdmin roles/storage.admin roles/iam.serviceAccountUser Compute roles/compute.viewer roles/storage.admin 12.4.7. Required GCP permissions for user-provisioned infrastructure When you attach the Owner role to the service account that you create, you grant that service account all permissions, including those that are required to install OpenShift Container Platform. If your organization's security policies require a more restrictive set of permissions, you can create custom roles with the necessary permissions. The following permissions are required for the user-provisioned infrastructure for creating and deleting the OpenShift Container Platform cluster. Example 12.1. 
Required permissions for creating network resources compute.addresses.create compute.addresses.createInternal compute.addresses.delete compute.addresses.get compute.addresses.list compute.addresses.use compute.addresses.useInternal compute.firewalls.create compute.firewalls.delete compute.firewalls.get compute.firewalls.list compute.forwardingRules.create compute.forwardingRules.get compute.forwardingRules.list compute.forwardingRules.setLabels compute.networks.create compute.networks.get compute.networks.list compute.networks.updatePolicy compute.routers.create compute.routers.get compute.routers.list compute.routers.update compute.routes.list compute.subnetworks.create compute.subnetworks.get compute.subnetworks.list compute.subnetworks.use compute.subnetworks.useExternalIp Example 12.2. Required permissions for creating load balancer resources compute.regionBackendServices.create compute.regionBackendServices.get compute.regionBackendServices.list compute.regionBackendServices.update compute.regionBackendServices.use compute.targetPools.addInstance compute.targetPools.create compute.targetPools.get compute.targetPools.list compute.targetPools.removeInstance compute.targetPools.use Example 12.3. Required permissions for creating DNS resources dns.changes.create dns.changes.get dns.managedZones.create dns.managedZones.get dns.managedZones.list dns.networks.bindPrivateDNSZone dns.resourceRecordSets.create dns.resourceRecordSets.list dns.resourceRecordSets.update Example 12.4. Required permissions for creating Service Account resources iam.serviceAccountKeys.create iam.serviceAccountKeys.delete iam.serviceAccountKeys.get iam.serviceAccountKeys.list iam.serviceAccounts.actAs iam.serviceAccounts.create iam.serviceAccounts.delete iam.serviceAccounts.get iam.serviceAccounts.list resourcemanager.projects.get resourcemanager.projects.getIamPolicy resourcemanager.projects.setIamPolicy Example 12.5. Required permissions for creating compute resources compute.disks.create compute.disks.get compute.disks.list compute.instanceGroups.create compute.instanceGroups.delete compute.instanceGroups.get compute.instanceGroups.list compute.instanceGroups.update compute.instanceGroups.use compute.instances.create compute.instances.delete compute.instances.get compute.instances.list compute.instances.setLabels compute.instances.setMetadata compute.instances.setServiceAccount compute.instances.setTags compute.instances.use compute.machineTypes.get compute.machineTypes.list Example 12.6. Required for creating storage resources storage.buckets.create storage.buckets.delete storage.buckets.get storage.buckets.list storage.objects.create storage.objects.delete storage.objects.get storage.objects.list Example 12.7. Required permissions for creating health check resources compute.healthChecks.create compute.healthChecks.get compute.healthChecks.list compute.healthChecks.useReadOnly compute.httpHealthChecks.create compute.httpHealthChecks.get compute.httpHealthChecks.list compute.httpHealthChecks.useReadOnly Example 12.8. Required permissions to get GCP zone and region related information compute.globalOperations.get compute.regionOperations.get compute.regions.list compute.zoneOperations.get compute.zones.get compute.zones.list Example 12.9. Required permissions for checking services and quotas monitoring.timeSeries.list serviceusage.quotas.get serviceusage.services.list Example 12.10. Required IAM permissions for installation iam.roles.get Example 12.11. 
Required permissions when authenticating without a service account key iam.serviceAccounts.signBlob Example 12.12. Required Images permissions for installation compute.images.create compute.images.delete compute.images.get compute.images.list Example 12.13. Optional permission for running gather bootstrap compute.instances.getSerialPortOutput Example 12.14. Required permissions for deleting network resources compute.addresses.delete compute.addresses.deleteInternal compute.addresses.list compute.firewalls.delete compute.firewalls.list compute.forwardingRules.delete compute.forwardingRules.list compute.networks.delete compute.networks.list compute.networks.updatePolicy compute.routers.delete compute.routers.list compute.routes.list compute.subnetworks.delete compute.subnetworks.list Example 12.15. Required permissions for deleting load balancer resources compute.regionBackendServices.delete compute.regionBackendServices.list compute.targetPools.delete compute.targetPools.list Example 12.16. Required permissions for deleting DNS resources dns.changes.create dns.managedZones.delete dns.managedZones.get dns.managedZones.list dns.resourceRecordSets.delete dns.resourceRecordSets.list Example 12.17. Required permissions for deleting Service Account resources iam.serviceAccounts.delete iam.serviceAccounts.get iam.serviceAccounts.list resourcemanager.projects.getIamPolicy resourcemanager.projects.setIamPolicy Example 12.18. Required permissions for deleting compute resources compute.disks.delete compute.disks.list compute.instanceGroups.delete compute.instanceGroups.list compute.instances.delete compute.instances.list compute.instances.stop compute.machineTypes.list Example 12.19. Required for deleting storage resources storage.buckets.delete storage.buckets.getIamPolicy storage.buckets.list storage.objects.delete storage.objects.list Example 12.20. Required permissions for deleting health check resources compute.healthChecks.delete compute.healthChecks.list compute.httpHealthChecks.delete compute.httpHealthChecks.list Example 12.21. Required Images permissions for deletion compute.images.delete compute.images.list Example 12.22. Required permissions to get Region related information compute.regions.get Example 12.23. Required Deployment Manager permissions deploymentmanager.deployments.create deploymentmanager.deployments.delete deploymentmanager.deployments.get deploymentmanager.deployments.list deploymentmanager.manifests.get deploymentmanager.operations.get deploymentmanager.resources.list Additional resources Optimizing storage 12.4.8. Supported GCP regions You can deploy an OpenShift Container Platform cluster to the following Google Cloud Platform (GCP) regions: asia-east1 (Changhua County, Taiwan) asia-east2 (Hong Kong) asia-northeast1 (Tokyo, Japan) asia-northeast2 (Osaka, Japan) asia-northeast3 (Seoul, South Korea) asia-south1 (Mumbai, India) asia-south2 (Delhi, India) asia-southeast1 (Jurong West, Singapore) asia-southeast2 (Jakarta, Indonesia) australia-southeast1 (Sydney, Australia) australia-southeast2 (Melbourne, Australia) europe-central2 (Warsaw, Poland) europe-north1 (Hamina, Finland) europe-southwest1 (Madrid, Spain) europe-west1 (St. 
Ghislain, Belgium) europe-west2 (London, England, UK) europe-west3 (Frankfurt, Germany) europe-west4 (Eemshaven, Netherlands) europe-west6 (Zurich, Switzerland) europe-west8 (Milan, Italy) europe-west9 (Paris, France) europe-west12 (Turin, Italy) me-central1 (Doha, Qatar, Middle East) me-west1 (Tel Aviv, Israel) northamerica-northeast1 (Montreal, Quebec, Canada) northamerica-northeast2 (Toronto, Ontario, Canada) southamerica-east1 (Sao Paulo, Brazil) southamerica-west1 (Santiago, Chile) us-central1 (Council Bluffs, Iowa, USA) us-east1 (Moncks Corner, South Carolina, USA) us-east4 (Ashburn, Northern Virginia, USA) us-east5 (Columbus, Ohio) us-south1 (Dallas, Texas) us-west1 (The Dalles, Oregon, USA) us-west2 (Los Angeles, California, USA) us-west3 (Salt Lake City, Utah, USA) us-west4 (Las Vegas, Nevada, USA) Note To determine which machine type instances are available by region and zone, see the Google documentation . 12.4.9. Installing and configuring CLI tools for GCP To install OpenShift Container Platform on Google Cloud Platform (GCP) using user-provisioned infrastructure, you must install and configure the CLI tools for GCP. Prerequisites You created a project to host your cluster. You created a service account and granted it the required permissions. Procedure Install the following binaries in USDPATH : gcloud gsutil See Install the latest Cloud SDK version in the GCP documentation. Authenticate using the gcloud tool with your configured service account. See Authorizing with a service account in the GCP documentation. 12.5. Requirements for a cluster with user-provisioned infrastructure For a cluster that contains user-provisioned infrastructure, you must deploy all of the required machines. This section describes the requirements for deploying OpenShift Container Platform on user-provisioned infrastructure. 12.5.1. Required machines for cluster installation The smallest OpenShift Container Platform clusters require the following hosts: Table 12.5. Minimum required hosts Hosts Description One temporary bootstrap machine The cluster requires the bootstrap machine to deploy the OpenShift Container Platform cluster on the three control plane machines. You can remove the bootstrap machine after you install the cluster. Three control plane machines The control plane machines run the Kubernetes and OpenShift Container Platform services that form the control plane. At least two compute machines, which are also known as worker machines. The workloads requested by OpenShift Container Platform users run on the compute machines. Important To maintain high availability of your cluster, use separate physical hosts for these cluster machines. The bootstrap and control plane machines must use Red Hat Enterprise Linux CoreOS (RHCOS) as the operating system. However, the compute machines can choose between Red Hat Enterprise Linux CoreOS (RHCOS), Red Hat Enterprise Linux (RHEL) 8.6 and later. Note that RHCOS is based on Red Hat Enterprise Linux (RHEL) 9.2 and inherits all of its hardware certifications and requirements. See Red Hat Enterprise Linux technology capabilities and limits . 12.5.2. Minimum resource requirements for cluster installation Each cluster machine must meet the following minimum requirements: Table 12.6. 
Minimum resource requirements Machine Operating System vCPU [1] Virtual RAM Storage Input/Output Per Second (IOPS) [2] Bootstrap RHCOS 4 16 GB 100 GB 300 Control plane RHCOS 4 16 GB 100 GB 300 Compute RHCOS, RHEL 8.6 and later [3] 2 8 GB 100 GB 300 One vCPU is equivalent to one physical core when simultaneous multithreading (SMT), or Hyper-Threading, is not enabled. When enabled, use the following formula to calculate the corresponding ratio: (threads per core x cores) x sockets = vCPUs. OpenShift Container Platform and Kubernetes are sensitive to disk performance, and faster storage is recommended, particularly for etcd on the control plane nodes, which require a 10 ms p99 fsync duration. Note that on many cloud platforms, storage size and IOPS scale together, so you might need to over-allocate storage volume to obtain sufficient performance. As with all user-provisioned installations, if you choose to use RHEL compute machines in your cluster, you take responsibility for all operating system life cycle management and maintenance, including performing system updates, applying patches, and completing all other required tasks. Use of RHEL 7 compute machines is deprecated and has been removed in OpenShift Container Platform 4.10 and later. Note As of OpenShift Container Platform version 4.13, RHCOS is based on RHEL version 9.2, which updates the micro-architecture requirements. The following list contains the minimum instruction set architectures (ISA) that each architecture requires: x86-64 architecture requires x86-64-v2 ISA ARM64 architecture requires ARMv8.0-A ISA IBM Power architecture requires Power 9 ISA s390x architecture requires z14 ISA For more information, see Architectures (RHEL documentation). If an instance type for your platform meets the minimum requirements for cluster machines, it is supported for use in OpenShift Container Platform. 12.5.3. Tested instance types for GCP The following Google Cloud Platform instance types have been tested with OpenShift Container Platform. Example 12.24. Machine series C2 C2D C3 E2 M1 N1 N2 N2D Tau T2D 12.5.4. Using custom machine types Using a custom machine type to install an OpenShift Container Platform cluster is supported. Consider the following when using a custom machine type: Similar to predefined instance types, custom machine types must meet the minimum resource requirements for control plane and compute machines. For more information, see "Minimum resource requirements for cluster installation". The name of the custom machine type must adhere to the following syntax: custom-<number_of_cpus>-<amount_of_memory_in_mb> For example, custom-6-20480 . 12.6. Creating the installation files for GCP To install OpenShift Container Platform on Google Cloud Platform (GCP) using user-provisioned infrastructure, you must generate the files that the installation program needs to deploy your cluster and modify them so that the cluster creates only the machines that it will use. You generate and customize the install-config.yaml file, Kubernetes manifests, and Ignition config files. You also have the option to first set up a separate /var partition during the preparation phases of installation. 12.6.1. Optional: Creating a separate /var partition It is recommended that disk partitioning for OpenShift Container Platform be left to the installer. However, there are cases where you might want to create separate partitions in a part of the filesystem that you expect to grow.
OpenShift Container Platform supports the addition of a single partition to attach storage to either the /var partition or a subdirectory of /var . For example: /var/lib/containers : Holds container-related content that can grow as more images and containers are added to a system. /var/lib/etcd : Holds data that you might want to keep separate for purposes such as performance optimization of etcd storage. /var : Holds data that you might want to keep separate for purposes such as auditing. Storing the contents of a /var directory separately makes it easier to grow storage for those areas as needed and reinstall OpenShift Container Platform at a later date and keep that data intact. With this method, you will not have to pull all your containers again, nor will you have to copy massive log files when you update systems. Because /var must be in place before a fresh installation of Red Hat Enterprise Linux CoreOS (RHCOS), the following procedure sets up the separate /var partition by creating a machine config manifest that is inserted during the openshift-install preparation phases of an OpenShift Container Platform installation. Important If you follow the steps to create a separate /var partition in this procedure, it is not necessary to create the Kubernetes manifest and Ignition config files again as described later in this section. Procedure Create a directory to hold the OpenShift Container Platform installation files: USD mkdir USDHOME/clusterconfig Run openshift-install to create a set of files in the manifest and openshift subdirectories. Answer the system questions as you are prompted: USD openshift-install create manifests --dir USDHOME/clusterconfig Example output ? SSH Public Key ... INFO Credentials loaded from the "myprofile" profile in file "/home/myuser/.aws/credentials" INFO Consuming Install Config from target directory INFO Manifests created in: USDHOME/clusterconfig/manifests and USDHOME/clusterconfig/openshift Optional: Confirm that the installation program created manifests in the clusterconfig/openshift directory: USD ls USDHOME/clusterconfig/openshift/ Example output 99_kubeadmin-password-secret.yaml 99_openshift-cluster-api_master-machines-0.yaml 99_openshift-cluster-api_master-machines-1.yaml 99_openshift-cluster-api_master-machines-2.yaml ... Create a Butane config that configures the additional partition. For example, name the file USDHOME/clusterconfig/98-var-partition.bu , change the disk device name to the name of the storage device on the worker systems, and set the storage size as appropriate. This example places the /var directory on a separate partition: variant: openshift version: 4.15.0 metadata: labels: machineconfiguration.openshift.io/role: worker name: 98-var-partition storage: disks: - device: /dev/disk/by-id/<device_name> 1 partitions: - label: var start_mib: <partition_start_offset> 2 size_mib: <partition_size> 3 number: 5 filesystems: - device: /dev/disk/by-partlabel/var path: /var format: xfs mount_options: [defaults, prjquota] 4 with_mount_unit: true 1 The storage device name of the disk that you want to partition. 2 When adding a data partition to the boot disk, a minimum value of 25000 MiB (Mebibytes) is recommended. The root file system is automatically resized to fill all available space up to the specified offset. If no value is specified, or if the specified value is smaller than the recommended minimum, the resulting root file system will be too small, and future reinstalls of RHCOS might overwrite the beginning of the data partition. 
3 The size of the data partition in mebibytes. 4 The prjquota mount option must be enabled for filesystems used for container storage. Note When creating a separate /var partition, you cannot use different instance types for worker nodes, if the different instance types do not have the same device name. Create a manifest from the Butane config and save it to the clusterconfig/openshift directory. For example, run the following command: USD butane USDHOME/clusterconfig/98-var-partition.bu -o USDHOME/clusterconfig/openshift/98-var-partition.yaml Run openshift-install again to create Ignition configs from a set of files in the manifest and openshift subdirectories: USD openshift-install create ignition-configs --dir USDHOME/clusterconfig USD ls USDHOME/clusterconfig/ auth bootstrap.ign master.ign metadata.json worker.ign Now you can use the Ignition config files as input to the installation procedures to install Red Hat Enterprise Linux CoreOS (RHCOS) systems. 12.6.2. Creating the installation configuration file You can customize the OpenShift Container Platform cluster you install on Google Cloud Platform (GCP). Prerequisites You have the OpenShift Container Platform installation program and the pull secret for your cluster. For a restricted network installation, these files are on your mirror host. You have the imageContentSources values that were generated during mirror registry creation. You have obtained the contents of the certificate for your mirror registry. Configure a GCP account. Procedure Create the install-config.yaml file. Change to the directory that contains the installation program and run the following command: USD ./openshift-install create install-config --dir <installation_directory> 1 1 For <installation_directory> , specify the directory name to store the files that the installation program creates. When specifying the directory: Verify that the directory has the execute permission. This permission is required to run Terraform binaries under the installation directory. Use an empty directory. Some installation assets, such as bootstrap X.509 certificates, have short expiration intervals, therefore you must not reuse an installation directory. If you want to reuse individual files from another cluster installation, you can copy them into your directory. However, the file names for the installation assets might change between releases. Use caution when copying installation files from an earlier OpenShift Container Platform version. At the prompts, provide the configuration details for your cloud: Optional: Select an SSH key to use to access your cluster machines. Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. Select gcp as the platform to target. If you have not configured the service account key for your GCP account on your computer, you must obtain it from GCP and paste the contents of the file or enter the absolute path to the file. Select the project ID to provision the cluster in. The default value is specified by the service account that you configured. Select the region to deploy the cluster to. Select the base domain to deploy the cluster to. The base domain corresponds to the public DNS zone that you created for your cluster. Enter a descriptive name for your cluster. Edit the install-config.yaml file to give the additional information that is required for an installation in a restricted network. 
Update the pullSecret value to contain the authentication information for your registry: pullSecret: '{"auths":{"<mirror_host_name>:5000": {"auth": "<credentials>","email": "[email protected]"}}}' For <mirror_host_name> , specify the registry domain name that you specified in the certificate for your mirror registry, and for <credentials> , specify the base64-encoded user name and password for your mirror registry. Add the additionalTrustBundle parameter and value. additionalTrustBundle: | -----BEGIN CERTIFICATE----- ZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZ -----END CERTIFICATE----- The value must be the contents of the certificate file that you used for your mirror registry. The certificate file can be an existing, trusted certificate authority, or the self-signed certificate that you generated for the mirror registry. Define the network and subnets for the VPC to install the cluster in under the parent platform.gcp field: network: <existing_vpc> controlPlaneSubnet: <control_plane_subnet> computeSubnet: <compute_subnet> For platform.gcp.network , specify the name for the existing Google VPC. For platform.gcp.controlPlaneSubnet and platform.gcp.computeSubnet , specify the existing subnets to deploy the control plane machines and compute machines, respectively. Add the image content resources, which resemble the following YAML excerpt: imageContentSources: - mirrors: - <mirror_host_name>:5000/<repo_name>/release source: quay.io/openshift-release-dev/ocp-release - mirrors: - <mirror_host_name>:5000/<repo_name>/release source: registry.redhat.io/ocp/release For these values, use the imageContentSources that you recorded during mirror registry creation. Optional: Set the publishing strategy to Internal : publish: Internal By setting this option, you create an internal Ingress Controller and a private load balancer. Make any other modifications to the install-config.yaml file that you require. For more information about the parameters, see "Installation configuration parameters". Back up the install-config.yaml file so that you can use it to install multiple clusters. Important The install-config.yaml file is consumed during the installation process. If you want to reuse the file, you must back it up now. Additional resources Installation configuration parameters for GCP 12.6.3. Enabling Shielded VMs You can use Shielded VMs when installing your cluster. Shielded VMs have extra security features including secure boot, firmware and integrity monitoring, and rootkit detection. For more information, see Google's documentation on Shielded VMs . Note Shielded VMs are currently not supported on clusters with 64-bit ARM infrastructures. Procedure Use a text editor to edit the install-config.yaml file prior to deploying your cluster and add one of the following stanzas: To use shielded VMs for only control plane machines: controlPlane: platform: gcp: secureBoot: Enabled To use shielded VMs for only compute machines: compute: - platform: gcp: secureBoot: Enabled To use shielded VMs for all machines: platform: gcp: defaultMachinePlatform: secureBoot: Enabled 12.6.4. Enabling Confidential VMs You can use Confidential VMs when installing your cluster. Confidential VMs encrypt data while it is being processed. For more information, see Google's documentation on Confidential Computing . You can enable Confidential VMs and Shielded VMs at the same time, although they are not dependent on each other. Note Confidential VMs are currently not supported on 64-bit ARM architectures. 
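Confidential VMs require machine types from the N2D or C2D series. Before you edit the configuration, you can optionally check which of these machine types are available in your target zone. The following gcloud invocation is a minimal sketch rather than part of the official procedure; the zone value, the name filter, and the output fields are illustrative assumptions that you should adapt to your environment:
$ gcloud compute machine-types list \
    --zones=us-central1-a \
    --filter="name~^n2d OR name~^c2d" \
    --format="table(name,guestCpus,memoryMb)"
Choose a type whose vCPU and memory values also satisfy the minimum resource requirements for cluster machines described earlier in this document.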
Procedure Use a text editor to edit the install-config.yaml file prior to deploying your cluster and add one of the following stanzas: To use confidential VMs for only control plane machines: controlPlane: platform: gcp: confidentialCompute: Enabled 1 type: n2d-standard-8 2 onHostMaintenance: Terminate 3 1 Enable confidential VMs. 2 Specify a machine type that supports Confidential VMs. Confidential VMs require the N2D or C2D series of machine types. For more information on supported machine types, see Supported operating systems and machine types . 3 Specify the behavior of the VM during a host maintenance event, such as a hardware or software update. For a machine that uses Confidential VM, this value must be set to Terminate , which stops the VM. Confidential VMs do not support live VM migration. To use confidential VMs for only compute machines: compute: - platform: gcp: confidentialCompute: Enabled type: n2d-standard-8 onHostMaintenance: Terminate To use confidential VMs for all machines: platform: gcp: defaultMachinePlatform: confidentialCompute: Enabled type: n2d-standard-8 onHostMaintenance: Terminate 12.6.5. Configuring the cluster-wide proxy during installation Production environments can deny direct access to the internet and instead have an HTTP or HTTPS proxy available. You can configure a new OpenShift Container Platform cluster to use a proxy by configuring the proxy settings in the install-config.yaml file. Prerequisites You reviewed the sites that your cluster requires access to and determined whether any of them need to bypass the proxy. By default, all cluster egress traffic is proxied, including calls to hosting cloud provider APIs. You added sites to the Proxy object's spec.noProxy field to bypass the proxy if necessary. Note The Proxy object status.noProxy field is populated with the values of the networking.machineNetwork[].cidr , networking.clusterNetwork[].cidr , and networking.serviceNetwork[] fields from your installation configuration. For installations on Amazon Web Services (AWS), Google Cloud Platform (GCP), Microsoft Azure, and Red Hat OpenStack Platform (RHOSP), the Proxy object status.noProxy field is also populated with the instance metadata endpoint ( 169.254.169.254 ). Procedure Edit your install-config.yaml file and add the proxy settings. For example: apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5 1 A proxy URL to use for creating HTTP connections outside the cluster. The URL scheme must be http . 2 A proxy URL to use for creating HTTPS connections outside the cluster. 3 A comma-separated list of destination domain names, IP addresses, or other network CIDRs to exclude from proxying. Preface a domain with . to match subdomains only. For example, .y.com matches x.y.com , but not y.com . Use * to bypass the proxy for all destinations. 4 If provided, the installation program generates a config map that is named user-ca-bundle in the openshift-config namespace that contains one or more additional CA certificates that are required for proxying HTTPS connections. 
The Cluster Network Operator then creates a trusted-ca-bundle config map that merges these contents with the Red Hat Enterprise Linux CoreOS (RHCOS) trust bundle, and this config map is referenced in the trustedCA field of the Proxy object. The additionalTrustBundle field is required unless the proxy's identity certificate is signed by an authority from the RHCOS trust bundle. 5 Optional: The policy to determine the configuration of the Proxy object to reference the user-ca-bundle config map in the trustedCA field. The allowed values are Proxyonly and Always . Use Proxyonly to reference the user-ca-bundle config map only when http/https proxy is configured. Use Always to always reference the user-ca-bundle config map. The default value is Proxyonly . Note The installation program does not support the proxy readinessEndpoints field. Note If the installer times out, restart and then complete the deployment by using the wait-for command of the installer. For example: USD ./openshift-install wait-for install-complete --log-level debug Save the file and reference it when installing OpenShift Container Platform. The installation program creates a cluster-wide proxy that is named cluster that uses the proxy settings in the provided install-config.yaml file. If no proxy settings are provided, a cluster Proxy object is still created, but it will have a nil spec . Note Only the Proxy object named cluster is supported, and no additional proxies can be created. 12.6.6. Creating the Kubernetes manifest and Ignition config files Because you must modify some cluster definition files and manually start the cluster machines, you must generate the Kubernetes manifest and Ignition config files that the cluster needs to configure the machines. The installation configuration file transforms into the Kubernetes manifests. The manifests wrap into the Ignition configuration files, which are later used to configure the cluster machines. Important The Ignition config files that the OpenShift Container Platform installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information. It is recommended that you use Ignition config files within 12 hours after they are generated because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation. Procedure Change to the directory that contains the OpenShift Container Platform installation program and generate the Kubernetes manifests for the cluster: USD ./openshift-install create manifests --dir <installation_directory> 1 1 For <installation_directory> , specify the installation directory that contains the install-config.yaml file you created. Remove the Kubernetes manifest files that define the control plane machines: USD rm -f <installation_directory>/openshift/99_openshift-cluster-api_master-machines-*.yaml By removing these files, you prevent the cluster from automatically generating control plane machines. 
Remove the Kubernetes manifest files that define the control plane machine set: USD rm -f <installation_directory>/openshift/99_openshift-machine-api_master-control-plane-machine-set.yaml Optional: If you do not want the cluster to provision compute machines, remove the Kubernetes manifest files that define the worker machines: USD rm -f <installation_directory>/openshift/99_openshift-cluster-api_worker-machineset-*.yaml Important If you disabled the MachineAPI capability when installing a cluster on user-provisioned infrastructure, you must remove the Kubernetes manifest files that define the worker machines. Otherwise, your cluster fails to install. Because you create and manage the worker machines yourself, you do not need to initialize these machines. Check that the mastersSchedulable parameter in the <installation_directory>/manifests/cluster-scheduler-02-config.yml Kubernetes manifest file is set to false . This setting prevents pods from being scheduled on the control plane machines: Open the <installation_directory>/manifests/cluster-scheduler-02-config.yml file. Locate the mastersSchedulable parameter and ensure that it is set to false . Save and exit the file. Optional: If you do not want the Ingress Operator to create DNS records on your behalf, remove the privateZone and publicZone sections from the <installation_directory>/manifests/cluster-dns-02-config.yml DNS configuration file: apiVersion: config.openshift.io/v1 kind: DNS metadata: creationTimestamp: null name: cluster spec: baseDomain: example.openshift.com privateZone: 1 id: mycluster-100419-private-zone publicZone: 2 id: example.openshift.com status: {} 1 2 Remove this section completely. If you do so, you must add ingress DNS records manually in a later step. To create the Ignition configuration files, run the following command from the directory that contains the installation program: USD ./openshift-install create ignition-configs --dir <installation_directory> 1 1 For <installation_directory> , specify the same installation directory. Ignition config files are created for the bootstrap, control plane, and compute nodes in the installation directory. The kubeadmin-password and kubeconfig files are created in the ./<installation_directory>/auth directory: Additional resources Optional: Adding the ingress DNS records 12.7. Exporting common variables 12.7.1. Extracting the infrastructure name The Ignition config files contain a unique cluster identifier that you can use to uniquely identify your cluster in Google Cloud Platform (GCP). The infrastructure name is also used to locate the appropriate GCP resources during an OpenShift Container Platform installation. The provided Deployment Manager templates contain references to this infrastructure name, so you must extract it. Prerequisites You installed the jq package. Procedure To extract and view the infrastructure name from the Ignition config file metadata, run the following command: USD jq -r .infraID <installation_directory>/metadata.json 1 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. Example output openshift-vw9j6 1 1 The output of this command is your cluster name and a random string. 12.7.2. Exporting common variables for Deployment Manager templates You must export a common set of variables that are used with the provided Deployment Manager templates used to assist in completing a user-provided infrastructure install on Google Cloud Platform (GCP). 
Note Specific Deployment Manager templates can also require additional exported variables, which are detailed in their related procedures. Procedure Export the following common variables to be used by the provided Deployment Manager templates: USD export BASE_DOMAIN='<base_domain>' USD export BASE_DOMAIN_ZONE_NAME='<base_domain_zone_name>' USD export NETWORK_CIDR='10.0.0.0/16' USD export MASTER_SUBNET_CIDR='10.0.0.0/17' USD export WORKER_SUBNET_CIDR='10.0.128.0/17' USD export KUBECONFIG=<installation_directory>/auth/kubeconfig 1 USD export CLUSTER_NAME=`jq -r .clusterName <installation_directory>/metadata.json` USD export INFRA_ID=`jq -r .infraID <installation_directory>/metadata.json` USD export PROJECT_NAME=`jq -r .gcp.projectID <installation_directory>/metadata.json` USD export REGION=`jq -r .gcp.region <installation_directory>/metadata.json` 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. 12.8. Creating a VPC in GCP You must create a VPC in Google Cloud Platform (GCP) for your OpenShift Container Platform cluster to use. You can customize the VPC to meet your requirements. One way to create the VPC is to modify the provided Deployment Manager template. Note If you do not use the provided Deployment Manager template to create your GCP infrastructure, you must review the provided information and manually create the infrastructure. If your cluster does not initialize correctly, you might have to contact Red Hat support with your installation logs. Prerequisites You have defined the variables in the Exporting common variables section. Procedure Copy the template from the Deployment Manager template for the VPC section of this topic and save it as 01_vpc.py on your computer. This template describes the VPC that your cluster requires. Create a 01_vpc.yaml resource definition file: USD cat <<EOF >01_vpc.yaml imports: - path: 01_vpc.py resources: - name: cluster-vpc type: 01_vpc.py properties: infra_id: 'USD{INFRA_ID}' 1 region: 'USD{REGION}' 2 master_subnet_cidr: 'USD{MASTER_SUBNET_CIDR}' 3 worker_subnet_cidr: 'USD{WORKER_SUBNET_CIDR}' 4 EOF 1 infra_id is the INFRA_ID infrastructure name from the extraction step. 2 region is the region to deploy the cluster into, for example us-central1 . 3 master_subnet_cidr is the CIDR for the master subnet, for example 10.0.0.0/17 . 4 worker_subnet_cidr is the CIDR for the worker subnet, for example 10.0.128.0/17 . Create the deployment by using the gcloud CLI: USD gcloud deployment-manager deployments create USD{INFRA_ID}-vpc --config 01_vpc.yaml 12.8.1. Deployment Manager template for the VPC You can use the following Deployment Manager template to deploy the VPC that you need for your OpenShift Container Platform cluster: Example 12.25. 01_vpc.py Deployment Manager template def GenerateConfig(context): resources = [{ 'name': context.properties['infra_id'] + '-network', 'type': 'compute.v1.network', 'properties': { 'region': context.properties['region'], 'autoCreateSubnetworks': False } }, { 'name': context.properties['infra_id'] + '-master-subnet', 'type': 'compute.v1.subnetwork', 'properties': { 'region': context.properties['region'], 'network': 'USD(ref.' + context.properties['infra_id'] + '-network.selfLink)', 'ipCidrRange': context.properties['master_subnet_cidr'] } }, { 'name': context.properties['infra_id'] + '-worker-subnet', 'type': 'compute.v1.subnetwork', 'properties': { 'region': context.properties['region'], 'network': 'USD(ref.' 
+ context.properties['infra_id'] + '-network.selfLink)', 'ipCidrRange': context.properties['worker_subnet_cidr'] } }, { 'name': context.properties['infra_id'] + '-router', 'type': 'compute.v1.router', 'properties': { 'region': context.properties['region'], 'network': 'USD(ref.' + context.properties['infra_id'] + '-network.selfLink)', 'nats': [{ 'name': context.properties['infra_id'] + '-nat-master', 'natIpAllocateOption': 'AUTO_ONLY', 'minPortsPerVm': 7168, 'sourceSubnetworkIpRangesToNat': 'LIST_OF_SUBNETWORKS', 'subnetworks': [{ 'name': 'USD(ref.' + context.properties['infra_id'] + '-master-subnet.selfLink)', 'sourceIpRangesToNat': ['ALL_IP_RANGES'] }] }, { 'name': context.properties['infra_id'] + '-nat-worker', 'natIpAllocateOption': 'AUTO_ONLY', 'minPortsPerVm': 512, 'sourceSubnetworkIpRangesToNat': 'LIST_OF_SUBNETWORKS', 'subnetworks': [{ 'name': 'USD(ref.' + context.properties['infra_id'] + '-worker-subnet.selfLink)', 'sourceIpRangesToNat': ['ALL_IP_RANGES'] }] }] } }] return {'resources': resources} 12.9. Networking requirements for user-provisioned infrastructure All the Red Hat Enterprise Linux CoreOS (RHCOS) machines require networking to be configured in initramfs during boot to fetch their Ignition config files. 12.9.1. Setting the cluster node hostnames through DHCP On Red Hat Enterprise Linux CoreOS (RHCOS) machines, the hostname is set through NetworkManager. By default, the machines obtain their hostname through DHCP. If the hostname is not provided by DHCP, set statically through kernel arguments, or another method, it is obtained through a reverse DNS lookup. Reverse DNS lookup occurs after the network has been initialized on a node and can take time to resolve. Other system services can start prior to this and detect the hostname as localhost or similar. You can avoid this by using DHCP to provide the hostname for each cluster node. Additionally, setting the hostnames through DHCP can bypass any manual DNS record name configuration errors in environments that have a DNS split-horizon implementation. 12.9.2. Network connectivity requirements You must configure the network connectivity between machines to allow OpenShift Container Platform cluster components to communicate. Each machine must be able to resolve the hostnames of all other machines in the cluster. This section provides details about the ports that are required. Table 12.7. Ports used for all-machine to all-machine communications Protocol Port Description ICMP N/A Network reachability tests TCP 1936 Metrics 9000 - 9999 Host level services, including the node exporter on ports 9100 - 9101 and the Cluster Version Operator on port 9099 . 10250 - 10259 The default ports that Kubernetes reserves UDP 4789 VXLAN 6081 Geneve 9000 - 9999 Host level services, including the node exporter on ports 9100 - 9101 . 500 IPsec IKE packets 4500 IPsec NAT-T packets 123 Network Time Protocol (NTP) on UDP port 123 If an external NTP time server is configured, you must open UDP port 123 . TCP/UDP 30000 - 32767 Kubernetes node port ESP N/A IPsec Encapsulating Security Payload (ESP) Table 12.8. Ports used for all-machine to control plane communications Protocol Port Description TCP 6443 Kubernetes API Table 12.9. Ports used for control plane machine to control plane machine communications Protocol Port Description TCP 2379 - 2380 etcd server and peer ports 12.10. Creating load balancers in GCP You must configure load balancers in Google Cloud Platform (GCP) for your OpenShift Container Platform cluster to use. 
One way to create these components is to modify the provided Deployment Manager template. Note If you do not use the provided Deployment Manager template to create your GCP infrastructure, you must review the provided information and manually create the infrastructure. If your cluster does not initialize correctly, you might have to contact Red Hat support with your installation logs. Prerequisites You have defined the variables in the Exporting common variables section. Procedure Copy the template from the Deployment Manager template for the internal load balancer section of this topic and save it as 02_lb_int.py on your computer. This template describes the internal load balancing objects that your cluster requires. For an external cluster, also copy the template from the Deployment Manager template for the external load balancer section of this topic and save it as 02_lb_ext.py on your computer. This template describes the external load balancing objects that your cluster requires. Export the variables that the deployment template uses: Export the cluster network location: USD export CLUSTER_NETWORK=(`gcloud compute networks describe USD{INFRA_ID}-network --format json | jq -r .selfLink`) Export the control plane subnet location: USD export CONTROL_SUBNET=(`gcloud compute networks subnets describe USD{INFRA_ID}-master-subnet --region=USD{REGION} --format json | jq -r .selfLink`) Export the three zones that the cluster uses: USD export ZONE_0=(`gcloud compute regions describe USD{REGION} --format=json | jq -r .zones[0] | cut -d "/" -f9`) USD export ZONE_1=(`gcloud compute regions describe USD{REGION} --format=json | jq -r .zones[1] | cut -d "/" -f9`) USD export ZONE_2=(`gcloud compute regions describe USD{REGION} --format=json | jq -r .zones[2] | cut -d "/" -f9`) Create a 02_infra.yaml resource definition file: USD cat <<EOF >02_infra.yaml imports: - path: 02_lb_ext.py - path: 02_lb_int.py 1 resources: - name: cluster-lb-ext 2 type: 02_lb_ext.py properties: infra_id: 'USD{INFRA_ID}' 3 region: 'USD{REGION}' 4 - name: cluster-lb-int type: 02_lb_int.py properties: cluster_network: 'USD{CLUSTER_NETWORK}' control_subnet: 'USD{CONTROL_SUBNET}' 5 infra_id: 'USD{INFRA_ID}' region: 'USD{REGION}' zones: 6 - 'USD{ZONE_0}' - 'USD{ZONE_1}' - 'USD{ZONE_2}' EOF 1 2 Required only when deploying an external cluster. 3 infra_id is the INFRA_ID infrastructure name from the extraction step. 4 region is the region to deploy the cluster into, for example us-central1 . 5 control_subnet is the URI to the control subnet. 6 zones are the zones to deploy the control plane instances into, like us-east1-b , us-east1-c , and us-east1-d . Create the deployment by using the gcloud CLI: USD gcloud deployment-manager deployments create USD{INFRA_ID}-infra --config 02_infra.yaml Export the cluster IP address: USD export CLUSTER_IP=(`gcloud compute addresses describe USD{INFRA_ID}-cluster-ip --region=USD{REGION} --format json | jq -r .address`) For an external cluster, also export the cluster public IP address: USD export CLUSTER_PUBLIC_IP=(`gcloud compute addresses describe USD{INFRA_ID}-cluster-public-ip --region=USD{REGION} --format json | jq -r .address`) 12.10.1. Deployment Manager template for the external load balancer You can use the following Deployment Manager template to deploy the external load balancer that you need for your OpenShift Container Platform cluster: Example 12.26. 
02_lb_ext.py Deployment Manager template def GenerateConfig(context): resources = [{ 'name': context.properties['infra_id'] + '-cluster-public-ip', 'type': 'compute.v1.address', 'properties': { 'region': context.properties['region'] } }, { # Refer to docs/dev/kube-apiserver-health-check.md on how to correctly setup health check probe for kube-apiserver 'name': context.properties['infra_id'] + '-api-http-health-check', 'type': 'compute.v1.httpHealthCheck', 'properties': { 'port': 6080, 'requestPath': '/readyz' } }, { 'name': context.properties['infra_id'] + '-api-target-pool', 'type': 'compute.v1.targetPool', 'properties': { 'region': context.properties['region'], 'healthChecks': ['USD(ref.' + context.properties['infra_id'] + '-api-http-health-check.selfLink)'], 'instances': [] } }, { 'name': context.properties['infra_id'] + '-api-forwarding-rule', 'type': 'compute.v1.forwardingRule', 'properties': { 'region': context.properties['region'], 'IPAddress': 'USD(ref.' + context.properties['infra_id'] + '-cluster-public-ip.selfLink)', 'target': 'USD(ref.' + context.properties['infra_id'] + '-api-target-pool.selfLink)', 'portRange': '6443' } }] return {'resources': resources} 12.10.2. Deployment Manager template for the internal load balancer You can use the following Deployment Manager template to deploy the internal load balancer that you need for your OpenShift Container Platform cluster: Example 12.27. 02_lb_int.py Deployment Manager template def GenerateConfig(context): backends = [] for zone in context.properties['zones']: backends.append({ 'group': 'USD(ref.' + context.properties['infra_id'] + '-master-' + zone + '-ig' + '.selfLink)' }) resources = [{ 'name': context.properties['infra_id'] + '-cluster-ip', 'type': 'compute.v1.address', 'properties': { 'addressType': 'INTERNAL', 'region': context.properties['region'], 'subnetwork': context.properties['control_subnet'] } }, { # Refer to docs/dev/kube-apiserver-health-check.md on how to correctly setup health check probe for kube-apiserver 'name': context.properties['infra_id'] + '-api-internal-health-check', 'type': 'compute.v1.healthCheck', 'properties': { 'httpsHealthCheck': { 'port': 6443, 'requestPath': '/readyz' }, 'type': "HTTPS" } }, { 'name': context.properties['infra_id'] + '-api-internal-backend-service', 'type': 'compute.v1.regionBackendService', 'properties': { 'backends': backends, 'healthChecks': ['USD(ref.' + context.properties['infra_id'] + '-api-internal-health-check.selfLink)'], 'loadBalancingScheme': 'INTERNAL', 'region': context.properties['region'], 'protocol': 'TCP', 'timeoutSec': 120 } }, { 'name': context.properties['infra_id'] + '-api-internal-forwarding-rule', 'type': 'compute.v1.forwardingRule', 'properties': { 'backendService': 'USD(ref.' + context.properties['infra_id'] + '-api-internal-backend-service.selfLink)', 'IPAddress': 'USD(ref.' 
+ context.properties['infra_id'] + '-cluster-ip.selfLink)', 'loadBalancingScheme': 'INTERNAL', 'ports': ['6443','22623'], 'region': context.properties['region'], 'subnetwork': context.properties['control_subnet'] } }] for zone in context.properties['zones']: resources.append({ 'name': context.properties['infra_id'] + '-master-' + zone + '-ig', 'type': 'compute.v1.instanceGroup', 'properties': { 'namedPorts': [ { 'name': 'ignition', 'port': 22623 }, { 'name': 'https', 'port': 6443 } ], 'network': context.properties['cluster_network'], 'zone': zone } }) return {'resources': resources} You will need this template in addition to the 02_lb_ext.py template when you create an external cluster. 12.11. Creating a private DNS zone in GCP You must configure a private DNS zone in Google Cloud Platform (GCP) for your OpenShift Container Platform cluster to use. One way to create this component is to modify the provided Deployment Manager template. Note If you do not use the provided Deployment Manager template to create your GCP infrastructure, you must review the provided information and manually create the infrastructure. If your cluster does not initialize correctly, you might have to contact Red Hat support with your installation logs. Prerequisites Ensure you defined the variables in the Exporting common variables and Creating load balancers in GCP sections. Procedure Copy the template from the Deployment Manager template for the private DNS section of this topic and save it as 02_dns.py on your computer. This template describes the private DNS objects that your cluster requires. Create a 02_dns.yaml resource definition file: USD cat <<EOF >02_dns.yaml imports: - path: 02_dns.py resources: - name: cluster-dns type: 02_dns.py properties: infra_id: 'USD{INFRA_ID}' 1 cluster_domain: 'USD{CLUSTER_NAME}.USD{BASE_DOMAIN}' 2 cluster_network: 'USD{CLUSTER_NETWORK}' 3 EOF 1 infra_id is the INFRA_ID infrastructure name from the extraction step. 2 cluster_domain is the domain for the cluster, for example openshift.example.com . 3 cluster_network is the selfLink URL to the cluster network. Create the deployment by using the gcloud CLI: USD gcloud deployment-manager deployments create USD{INFRA_ID}-dns --config 02_dns.yaml The templates do not create DNS entries due to limitations of Deployment Manager, so you must create them manually: Add the internal DNS entries: USD if [ -f transaction.yaml ]; then rm transaction.yaml; fi USD gcloud dns record-sets transaction start --zone USD{INFRA_ID}-private-zone USD gcloud dns record-sets transaction add USD{CLUSTER_IP} --name api.USD{CLUSTER_NAME}.USD{BASE_DOMAIN}. --ttl 60 --type A --zone USD{INFRA_ID}-private-zone USD gcloud dns record-sets transaction add USD{CLUSTER_IP} --name api-int.USD{CLUSTER_NAME}.USD{BASE_DOMAIN}. --ttl 60 --type A --zone USD{INFRA_ID}-private-zone USD gcloud dns record-sets transaction execute --zone USD{INFRA_ID}-private-zone For an external cluster, also add the external DNS entries: USD if [ -f transaction.yaml ]; then rm transaction.yaml; fi USD gcloud dns record-sets transaction start --zone USD{BASE_DOMAIN_ZONE_NAME} USD gcloud dns record-sets transaction add USD{CLUSTER_PUBLIC_IP} --name api.USD{CLUSTER_NAME}.USD{BASE_DOMAIN}. --ttl 60 --type A --zone USD{BASE_DOMAIN_ZONE_NAME} USD gcloud dns record-sets transaction execute --zone USD{BASE_DOMAIN_ZONE_NAME} 12.11.1. 
Deployment Manager template for the private DNS You can use the following Deployment Manager template to deploy the private DNS that you need for your OpenShift Container Platform cluster: Example 12.28. 02_dns.py Deployment Manager template def GenerateConfig(context): resources = [{ 'name': context.properties['infra_id'] + '-private-zone', 'type': 'dns.v1.managedZone', 'properties': { 'description': '', 'dnsName': context.properties['cluster_domain'] + '.', 'visibility': 'private', 'privateVisibilityConfig': { 'networks': [{ 'networkUrl': context.properties['cluster_network'] }] } } }] return {'resources': resources} 12.12. Creating firewall rules in GCP You must create firewall rules in Google Cloud Platform (GCP) for your OpenShift Container Platform cluster to use. One way to create these components is to modify the provided Deployment Manager template. Note If you do not use the provided Deployment Manager template to create your GCP infrastructure, you must review the provided information and manually create the infrastructure. If your cluster does not initialize correctly, you might have to contact Red Hat support with your installation logs. Prerequisites Ensure you defined the variables in the Exporting common variables and Creating load balancers in GCP sections. Procedure Copy the template from the Deployment Manager template for firewall rules section of this topic and save it as 03_firewall.py on your computer. This template describes the firewall rules that your cluster requires. Create a 03_firewall.yaml resource definition file: USD cat <<EOF >03_firewall.yaml imports: - path: 03_firewall.py resources: - name: cluster-firewall type: 03_firewall.py properties: allowed_external_cidr: '0.0.0.0/0' 1 infra_id: 'USD{INFRA_ID}' 2 cluster_network: 'USD{CLUSTER_NETWORK}' 3 network_cidr: 'USD{NETWORK_CIDR}' 4 EOF 1 allowed_external_cidr is the CIDR range that can access the cluster API and SSH to the bootstrap host. For an internal cluster, set this value to USD{NETWORK_CIDR} . 2 infra_id is the INFRA_ID infrastructure name from the extraction step. 3 cluster_network is the selfLink URL to the cluster network. 4 network_cidr is the CIDR of the VPC network, for example 10.0.0.0/16 . Create the deployment by using the gcloud CLI: USD gcloud deployment-manager deployments create USD{INFRA_ID}-firewall --config 03_firewall.yaml 12.12.1. Deployment Manager template for firewall rules You can use the following Deployment Manager template to deploy the firewall rules that you need for your OpenShift Container Platform cluster: Example 12.29.
03_firewall.py Deployment Manager template def GenerateConfig(context): resources = [{ 'name': context.properties['infra_id'] + '-bootstrap-in-ssh', 'type': 'compute.v1.firewall', 'properties': { 'network': context.properties['cluster_network'], 'allowed': [{ 'IPProtocol': 'tcp', 'ports': ['22'] }], 'sourceRanges': [context.properties['allowed_external_cidr']], 'targetTags': [context.properties['infra_id'] + '-bootstrap'] } }, { 'name': context.properties['infra_id'] + '-api', 'type': 'compute.v1.firewall', 'properties': { 'network': context.properties['cluster_network'], 'allowed': [{ 'IPProtocol': 'tcp', 'ports': ['6443'] }], 'sourceRanges': [context.properties['allowed_external_cidr']], 'targetTags': [context.properties['infra_id'] + '-master'] } }, { 'name': context.properties['infra_id'] + '-health-checks', 'type': 'compute.v1.firewall', 'properties': { 'network': context.properties['cluster_network'], 'allowed': [{ 'IPProtocol': 'tcp', 'ports': ['6080', '6443', '22624'] }], 'sourceRanges': ['35.191.0.0/16', '130.211.0.0/22', '209.85.152.0/22', '209.85.204.0/22'], 'targetTags': [context.properties['infra_id'] + '-master'] } }, { 'name': context.properties['infra_id'] + '-etcd', 'type': 'compute.v1.firewall', 'properties': { 'network': context.properties['cluster_network'], 'allowed': [{ 'IPProtocol': 'tcp', 'ports': ['2379-2380'] }], 'sourceTags': [context.properties['infra_id'] + '-master'], 'targetTags': [context.properties['infra_id'] + '-master'] } }, { 'name': context.properties['infra_id'] + '-control-plane', 'type': 'compute.v1.firewall', 'properties': { 'network': context.properties['cluster_network'], 'allowed': [{ 'IPProtocol': 'tcp', 'ports': ['10257'] },{ 'IPProtocol': 'tcp', 'ports': ['10259'] },{ 'IPProtocol': 'tcp', 'ports': ['22623'] }], 'sourceTags': [ context.properties['infra_id'] + '-master', context.properties['infra_id'] + '-worker' ], 'targetTags': [context.properties['infra_id'] + '-master'] } }, { 'name': context.properties['infra_id'] + '-internal-network', 'type': 'compute.v1.firewall', 'properties': { 'network': context.properties['cluster_network'], 'allowed': [{ 'IPProtocol': 'icmp' },{ 'IPProtocol': 'tcp', 'ports': ['22'] }], 'sourceRanges': [context.properties['network_cidr']], 'targetTags': [ context.properties['infra_id'] + '-master', context.properties['infra_id'] + '-worker' ] } }, { 'name': context.properties['infra_id'] + '-internal-cluster', 'type': 'compute.v1.firewall', 'properties': { 'network': context.properties['cluster_network'], 'allowed': [{ 'IPProtocol': 'udp', 'ports': ['4789', '6081'] },{ 'IPProtocol': 'udp', 'ports': ['500', '4500'] },{ 'IPProtocol': 'esp', },{ 'IPProtocol': 'tcp', 'ports': ['9000-9999'] },{ 'IPProtocol': 'udp', 'ports': ['9000-9999'] },{ 'IPProtocol': 'tcp', 'ports': ['10250'] },{ 'IPProtocol': 'tcp', 'ports': ['30000-32767'] },{ 'IPProtocol': 'udp', 'ports': ['30000-32767'] }], 'sourceTags': [ context.properties['infra_id'] + '-master', context.properties['infra_id'] + '-worker' ], 'targetTags': [ context.properties['infra_id'] + '-master', context.properties['infra_id'] + '-worker' ] } }] return {'resources': resources} 12.13. Creating IAM roles in GCP You must create IAM roles in Google Cloud Platform (GCP) for your OpenShift Container Platform cluster to use. One way to create these components is to modify the provided Deployment Manager template. 
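After you complete the procedure in this section, you can optionally confirm that the expected roles are bound to the control plane service account. The following command is a hedged sketch rather than part of the official procedure; it assumes the PROJECT_NAME and MASTER_SERVICE_ACCOUNT variables that are exported in the procedure below, and the same pattern applies to the worker service account:
$ gcloud projects get-iam-policy ${PROJECT_NAME} \
    --flatten="bindings[].members" \
    --filter="bindings.members:${MASTER_SERVICE_ACCOUNT}" \
    --format="table(bindings.role)"
The output lists one row per role binding, which you can compare against the roles that you grant with the gcloud projects add-iam-policy-binding commands in the procedure.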
Note If you do not use the provided Deployment Manager template to create your GCP infrastructure, you must review the provided information and manually create the infrastructure. If your cluster does not initialize correctly, you might have to contact Red Hat support with your installation logs. Prerequisites You have defined the variables in the Exporting common variables section. Procedure Copy the template from the Deployment Manager template for IAM roles section of this topic and save it as 03_iam.py on your computer. This template describes the IAM roles that your cluster requires. Create a 03_iam.yaml resource definition file: USD cat <<EOF >03_iam.yaml imports: - path: 03_iam.py resources: - name: cluster-iam type: 03_iam.py properties: infra_id: 'USD{INFRA_ID}' 1 EOF 1 infra_id is the INFRA_ID infrastructure name from the extraction step. Create the deployment by using the gcloud CLI: USD gcloud deployment-manager deployments create USD{INFRA_ID}-iam --config 03_iam.yaml Export the variable for the master service account: USD export MASTER_SERVICE_ACCOUNT=(`gcloud iam service-accounts list --filter "email~^USD{INFRA_ID}-m@USD{PROJECT_NAME}." --format json | jq -r '.[0].email'`) Export the variable for the worker service account: USD export WORKER_SERVICE_ACCOUNT=(`gcloud iam service-accounts list --filter "email~^USD{INFRA_ID}-w@USD{PROJECT_NAME}." --format json | jq -r '.[0].email'`) Export the variable for the subnet that hosts the compute machines: USD export COMPUTE_SUBNET=(`gcloud compute networks subnets describe USD{INFRA_ID}-worker-subnet --region=USD{REGION} --format json | jq -r .selfLink`) The templates do not create the policy bindings due to limitations of Deployment Manager, so you must create them manually: USD gcloud projects add-iam-policy-binding USD{PROJECT_NAME} --member "serviceAccount:USD{MASTER_SERVICE_ACCOUNT}" --role "roles/compute.instanceAdmin" USD gcloud projects add-iam-policy-binding USD{PROJECT_NAME} --member "serviceAccount:USD{MASTER_SERVICE_ACCOUNT}" --role "roles/compute.networkAdmin" USD gcloud projects add-iam-policy-binding USD{PROJECT_NAME} --member "serviceAccount:USD{MASTER_SERVICE_ACCOUNT}" --role "roles/compute.securityAdmin" USD gcloud projects add-iam-policy-binding USD{PROJECT_NAME} --member "serviceAccount:USD{MASTER_SERVICE_ACCOUNT}" --role "roles/iam.serviceAccountUser" USD gcloud projects add-iam-policy-binding USD{PROJECT_NAME} --member "serviceAccount:USD{MASTER_SERVICE_ACCOUNT}" --role "roles/storage.admin" USD gcloud projects add-iam-policy-binding USD{PROJECT_NAME} --member "serviceAccount:USD{WORKER_SERVICE_ACCOUNT}" --role "roles/compute.viewer" USD gcloud projects add-iam-policy-binding USD{PROJECT_NAME} --member "serviceAccount:USD{WORKER_SERVICE_ACCOUNT}" --role "roles/storage.admin" Create a service account key and store it locally for later use: USD gcloud iam service-accounts keys create service-account-key.json --iam-account=USD{MASTER_SERVICE_ACCOUNT} 12.13.1. Deployment Manager template for IAM roles You can use the following Deployment Manager template to deploy the IAM roles that you need for your OpenShift Container Platform cluster: Example 12.30. 
03_iam.py Deployment Manager template def GenerateConfig(context): resources = [{ 'name': context.properties['infra_id'] + '-master-node-sa', 'type': 'iam.v1.serviceAccount', 'properties': { 'accountId': context.properties['infra_id'] + '-m', 'displayName': context.properties['infra_id'] + '-master-node' } }, { 'name': context.properties['infra_id'] + '-worker-node-sa', 'type': 'iam.v1.serviceAccount', 'properties': { 'accountId': context.properties['infra_id'] + '-w', 'displayName': context.properties['infra_id'] + '-worker-node' } }] return {'resources': resources} 12.14. Creating the RHCOS cluster image for the GCP infrastructure You must use a valid Red Hat Enterprise Linux CoreOS (RHCOS) image for Google Cloud Platform (GCP) for your OpenShift Container Platform nodes. Procedure Obtain the RHCOS image from the RHCOS image mirror page. Important The RHCOS images might not change with every release of OpenShift Container Platform. You must download an image with the highest version that is less than or equal to the OpenShift Container Platform version that you install. Use the image version that matches your OpenShift Container Platform version if it is available. The file name contains the OpenShift Container Platform version number in the format rhcos-<version>-<arch>-gcp.<arch>.tar.gz . Create the Google storage bucket: USD gsutil mb gs://<bucket_name> Upload the RHCOS image to the Google storage bucket: USD gsutil cp <downloaded_image_file_path>/rhcos-<version>-x86_64-gcp.x86_64.tar.gz gs://<bucket_name> Export the uploaded RHCOS image location as a variable: USD export IMAGE_SOURCE=gs://<bucket_name>/rhcos-<version>-x86_64-gcp.x86_64.tar.gz Create the cluster image: USD gcloud compute images create "USD{INFRA_ID}-rhcos-image" \ --source-uri="USD{IMAGE_SOURCE}" 12.15. Creating the bootstrap machine in GCP You must create the bootstrap machine in Google Cloud Platform (GCP) to use during OpenShift Container Platform cluster initialization. One way to create this machine is to modify the provided Deployment Manager template. Note If you do not use the provided Deployment Manager template to create your bootstrap machine, you must review the provided information and manually create the infrastructure. If your cluster does not initialize correctly, you might have to contact Red Hat support with your installation logs. Prerequisites Ensure you defined the variables in the Exporting common variables and Creating load balancers in GCP sections. Ensure you installed pyOpenSSL. Procedure Copy the template from the Deployment Manager template for the bootstrap machine section of this topic and save it as 04_bootstrap.py on your computer. This template describes the bootstrap machine that your cluster requires. Export the location of the Red Hat Enterprise Linux CoreOS (RHCOS) image that the installation program requires: USD export CLUSTER_IMAGE=(`gcloud compute images describe USD{INFRA_ID}-rhcos-image --format json | jq -r .selfLink`) Create a bucket and upload the bootstrap.ign file: USD gsutil mb gs://USD{INFRA_ID}-bootstrap-ignition USD gsutil cp <installation_directory>/bootstrap.ign gs://USD{INFRA_ID}-bootstrap-ignition/ Create a signed URL for the bootstrap instance to use to access the Ignition config. 
Export the URL from the output as a variable: USD export BOOTSTRAP_IGN=`gsutil signurl -d 1h service-account-key.json gs://USD{INFRA_ID}-bootstrap-ignition/bootstrap.ign | grep "^gs:" | awk '{print USD5}'` Create a 04_bootstrap.yaml resource definition file: USD cat <<EOF >04_bootstrap.yaml imports: - path: 04_bootstrap.py resources: - name: cluster-bootstrap type: 04_bootstrap.py properties: infra_id: 'USD{INFRA_ID}' 1 region: 'USD{REGION}' 2 zone: 'USD{ZONE_0}' 3 cluster_network: 'USD{CLUSTER_NETWORK}' 4 control_subnet: 'USD{CONTROL_SUBNET}' 5 image: 'USD{CLUSTER_IMAGE}' 6 machine_type: 'n1-standard-4' 7 root_volume_size: '128' 8 bootstrap_ign: 'USD{BOOTSTRAP_IGN}' 9 EOF 1 infra_id is the INFRA_ID infrastructure name from the extraction step. 2 region is the region to deploy the cluster into, for example us-central1 . 3 zone is the zone to deploy the bootstrap instance into, for example us-central1-b . 4 cluster_network is the selfLink URL to the cluster network. 5 control_subnet is the selfLink URL to the control subnet. 6 image is the selfLink URL to the RHCOS image. 7 machine_type is the machine type of the instance, for example n1-standard-4 . 8 root_volume_size is the boot disk size for the bootstrap machine. 9 bootstrap_ign is the URL output when creating a signed URL. Create the deployment by using the gcloud CLI: USD gcloud deployment-manager deployments create USD{INFRA_ID}-bootstrap --config 04_bootstrap.yaml The templates do not manage load balancer membership due to limitations of Deployment Manager, so you must add the bootstrap machine manually. Add the bootstrap instance to the internal load balancer instance group: USD gcloud compute instance-groups unmanaged add-instances \ USD{INFRA_ID}-bootstrap-ig --zone=USD{ZONE_0} --instances=USD{INFRA_ID}-bootstrap Add the bootstrap instance group to the internal load balancer backend service: USD gcloud compute backend-services add-backend \ USD{INFRA_ID}-api-internal --region=USD{REGION} --instance-group=USD{INFRA_ID}-bootstrap-ig --instance-group-zone=USD{ZONE_0} 12.15.1. Deployment Manager template for the bootstrap machine You can use the following Deployment Manager template to deploy the bootstrap machine that you need for your OpenShift Container Platform cluster: Example 12.31. 04_bootstrap.py Deployment Manager template def GenerateConfig(context): resources = [{ 'name': context.properties['infra_id'] + '-bootstrap-public-ip', 'type': 'compute.v1.address', 'properties': { 'region': context.properties['region'] } }, { 'name': context.properties['infra_id'] + '-bootstrap', 'type': 'compute.v1.instance', 'properties': { 'disks': [{ 'autoDelete': True, 'boot': True, 'initializeParams': { 'diskSizeGb': context.properties['root_volume_size'], 'sourceImage': context.properties['image'] } }], 'machineType': 'zones/' + context.properties['zone'] + '/machineTypes/' + context.properties['machine_type'], 'metadata': { 'items': [{ 'key': 'user-data', 'value': '{"ignition":{"config":{"replace":{"source":"' + context.properties['bootstrap_ign'] + '"}},"version":"3.2.0"}}', }] }, 'networkInterfaces': [{ 'subnetwork': context.properties['control_subnet'], 'accessConfigs': [{ 'natIP': 'USD(ref.' 
+ context.properties['infra_id'] + '-bootstrap-public-ip.address)' }] }], 'tags': { 'items': [ context.properties['infra_id'] + '-master', context.properties['infra_id'] + '-bootstrap' ] }, 'zone': context.properties['zone'] } }, { 'name': context.properties['infra_id'] + '-bootstrap-ig', 'type': 'compute.v1.instanceGroup', 'properties': { 'namedPorts': [ { 'name': 'ignition', 'port': 22623 }, { 'name': 'https', 'port': 6443 } ], 'network': context.properties['cluster_network'], 'zone': context.properties['zone'] } }] return {'resources': resources} 12.16. Creating the control plane machines in GCP You must create the control plane machines in Google Cloud Platform (GCP) for your cluster to use. One way to create these machines is to modify the provided Deployment Manager template. Note If you do not use the provided Deployment Manager template to create your control plane machines, you must review the provided information and manually create the infrastructure. If your cluster does not initialize correctly, you might have to contact Red Hat support with your installation logs. Prerequisites Ensure you defined the variables in the Exporting common variables , Creating load balancers in GCP , Creating IAM roles in GCP , and Creating the bootstrap machine in GCP sections. Create the bootstrap machine. Procedure Copy the template from the Deployment Manager template for control plane machines section of this topic and save it as 05_control_plane.py on your computer. This template describes the control plane machines that your cluster requires. Export the following variable required by the resource definition: USD export MASTER_IGNITION=`cat <installation_directory>/master.ign` Create a 05_control_plane.yaml resource definition file: USD cat <<EOF >05_control_plane.yaml imports: - path: 05_control_plane.py resources: - name: cluster-control-plane type: 05_control_plane.py properties: infra_id: 'USD{INFRA_ID}' 1 zones: 2 - 'USD{ZONE_0}' - 'USD{ZONE_1}' - 'USD{ZONE_2}' control_subnet: 'USD{CONTROL_SUBNET}' 3 image: 'USD{CLUSTER_IMAGE}' 4 machine_type: 'n1-standard-4' 5 root_volume_size: '128' service_account_email: 'USD{MASTER_SERVICE_ACCOUNT}' 6 ignition: 'USD{MASTER_IGNITION}' 7 EOF 1 infra_id is the INFRA_ID infrastructure name from the extraction step. 2 zones are the zones to deploy the control plane instances into, for example us-central1-a , us-central1-b , and us-central1-c . 3 control_subnet is the selfLink URL to the control subnet. 4 image is the selfLink URL to the RHCOS image. 5 machine_type is the machine type of the instance, for example n1-standard-4 . 6 service_account_email is the email address for the master service account that you created. 7 ignition is the contents of the master.ign file. Create the deployment by using the gcloud CLI: USD gcloud deployment-manager deployments create USD{INFRA_ID}-control-plane --config 05_control_plane.yaml The templates do not manage load balancer membership due to limitations of Deployment Manager, so you must add the control plane machines manually. 
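Because the zone-to-instance pairings are easy to mistype when you add the machines by hand, you might find it helpful to generate the commands rather than type them. The following is a minimal sketch, not part of the documented procedure, that prints the per-zone instance-group commands (and the target-pool commands used by external clusters) from the variables you exported earlier; it assumes INFRA_ID and ZONE_0 through ZONE_2 are set in your environment.
#!/usr/bin/env python3
# Minimal sketch: print the per-zone instance-group and target-pool
# membership commands for the three control plane machines.
# Assumes INFRA_ID, ZONE_0, ZONE_1, and ZONE_2 are exported in the
# environment, as in the "Exporting common variables" and
# "Creating load balancers in GCP" steps.
import os

infra_id = os.environ["INFRA_ID"]
zones = [os.environ[f"ZONE_{i}"] for i in range(3)]

for index, zone in enumerate(zones):
    instance = f"{infra_id}-master-{index}"
    group = f"{infra_id}-master-{zone}-ig"
    print(
        "gcloud compute instance-groups unmanaged add-instances "
        f"{group} --zone={zone} --instances={instance}"
    )
    # Only needed for external clusters that use the API target pool.
    print(
        "gcloud compute target-pools add-instances "
        f"{infra_id}-api-target-pool --instances-zone={zone} "
        f"--instances={instance}"
    )
The generated output should match the commands shown in the next two steps; review it before running anything.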
Run the following commands to add the control plane machines to the appropriate instance groups: USD gcloud compute instance-groups unmanaged add-instances USD{INFRA_ID}-master-USD{ZONE_0}-ig --zone=USD{ZONE_0} --instances=USD{INFRA_ID}-master-0 USD gcloud compute instance-groups unmanaged add-instances USD{INFRA_ID}-master-USD{ZONE_1}-ig --zone=USD{ZONE_1} --instances=USD{INFRA_ID}-master-1 USD gcloud compute instance-groups unmanaged add-instances USD{INFRA_ID}-master-USD{ZONE_2}-ig --zone=USD{ZONE_2} --instances=USD{INFRA_ID}-master-2 For an external cluster, you must also run the following commands to add the control plane machines to the target pools: USD gcloud compute target-pools add-instances USD{INFRA_ID}-api-target-pool --instances-zone="USD{ZONE_0}" --instances=USD{INFRA_ID}-master-0 USD gcloud compute target-pools add-instances USD{INFRA_ID}-api-target-pool --instances-zone="USD{ZONE_1}" --instances=USD{INFRA_ID}-master-1 USD gcloud compute target-pools add-instances USD{INFRA_ID}-api-target-pool --instances-zone="USD{ZONE_2}" --instances=USD{INFRA_ID}-master-2 12.16.1. Deployment Manager template for control plane machines You can use the following Deployment Manager template to deploy the control plane machines that you need for your OpenShift Container Platform cluster: Example 12.32. 05_control_plane.py Deployment Manager template def GenerateConfig(context): resources = [{ 'name': context.properties['infra_id'] + '-master-0', 'type': 'compute.v1.instance', 'properties': { 'disks': [{ 'autoDelete': True, 'boot': True, 'initializeParams': { 'diskSizeGb': context.properties['root_volume_size'], 'diskType': 'zones/' + context.properties['zones'][0] + '/diskTypes/pd-ssd', 'sourceImage': context.properties['image'] } }], 'machineType': 'zones/' + context.properties['zones'][0] + '/machineTypes/' + context.properties['machine_type'], 'metadata': { 'items': [{ 'key': 'user-data', 'value': context.properties['ignition'] }] }, 'networkInterfaces': [{ 'subnetwork': context.properties['control_subnet'] }], 'serviceAccounts': [{ 'email': context.properties['service_account_email'], 'scopes': ['https://www.googleapis.com/auth/cloud-platform'] }], 'tags': { 'items': [ context.properties['infra_id'] + '-master', ] }, 'zone': context.properties['zones'][0] } }, { 'name': context.properties['infra_id'] + '-master-1', 'type': 'compute.v1.instance', 'properties': { 'disks': [{ 'autoDelete': True, 'boot': True, 'initializeParams': { 'diskSizeGb': context.properties['root_volume_size'], 'diskType': 'zones/' + context.properties['zones'][1] + '/diskTypes/pd-ssd', 'sourceImage': context.properties['image'] } }], 'machineType': 'zones/' + context.properties['zones'][1] + '/machineTypes/' + context.properties['machine_type'], 'metadata': { 'items': [{ 'key': 'user-data', 'value': context.properties['ignition'] }] }, 'networkInterfaces': [{ 'subnetwork': context.properties['control_subnet'] }], 'serviceAccounts': [{ 'email': context.properties['service_account_email'], 'scopes': ['https://www.googleapis.com/auth/cloud-platform'] }], 'tags': { 'items': [ context.properties['infra_id'] + '-master', ] }, 'zone': context.properties['zones'][1] } }, { 'name': context.properties['infra_id'] + '-master-2', 'type': 'compute.v1.instance', 'properties': { 'disks': [{ 'autoDelete': True, 'boot': True, 'initializeParams': { 'diskSizeGb': context.properties['root_volume_size'], 'diskType': 'zones/' + context.properties['zones'][2] + '/diskTypes/pd-ssd', 'sourceImage': context.properties['image'] } }], 
'machineType': 'zones/' + context.properties['zones'][2] + '/machineTypes/' + context.properties['machine_type'], 'metadata': { 'items': [{ 'key': 'user-data', 'value': context.properties['ignition'] }] }, 'networkInterfaces': [{ 'subnetwork': context.properties['control_subnet'] }], 'serviceAccounts': [{ 'email': context.properties['service_account_email'], 'scopes': ['https://www.googleapis.com/auth/cloud-platform'] }], 'tags': { 'items': [ context.properties['infra_id'] + '-master', ] }, 'zone': context.properties['zones'][2] } }] return {'resources': resources} 12.17. Wait for bootstrap completion and remove bootstrap resources in GCP After you create all of the required infrastructure in Google Cloud Platform (GCP), wait for the bootstrap process to complete on the machines that you provisioned by using the Ignition config files that you generated with the installation program. Prerequisites Ensure you defined the variables in the Exporting common variables and Creating load balancers in GCP sections. Create the bootstrap machine. Create the control plane machines. Procedure Change to the directory that contains the installation program and run the following command: USD ./openshift-install wait-for bootstrap-complete --dir <installation_directory> \ 1 --log-level info 2 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. 2 To view different installation details, specify warn , debug , or error instead of info . If the command exits without a FATAL warning, your production control plane has initialized. Delete the bootstrap resources: USD gcloud compute backend-services remove-backend USD{INFRA_ID}-api-internal --region=USD{REGION} --instance-group=USD{INFRA_ID}-bootstrap-ig --instance-group-zone=USD{ZONE_0} USD gsutil rm gs://USD{INFRA_ID}-bootstrap-ignition/bootstrap.ign USD gsutil rb gs://USD{INFRA_ID}-bootstrap-ignition USD gcloud deployment-manager deployments delete USD{INFRA_ID}-bootstrap 12.18. Creating additional worker machines in GCP You can create worker machines in Google Cloud Platform (GCP) for your cluster to use by launching individual instances discretely or by automated processes outside the cluster, such as auto scaling groups. You can also take advantage of the built-in cluster scaling mechanisms and the machine API in OpenShift Container Platform. In this example, you manually launch one instance by using the Deployment Manager template. Additional instances can be launched by including additional resources of type 06_worker.py in the file. Note If you do not use the provided Deployment Manager template to create your worker machines, you must review the provided information and manually create the infrastructure. If your cluster does not initialize correctly, you might have to contact Red Hat support with your installation logs. Prerequisites Ensure you defined the variables in the Exporting common variables , Creating load balancers in GCP , and Creating the bootstrap machine in GCP sections. Create the bootstrap machine. Create the control plane machines. Procedure Copy the template from the Deployment Manager template for worker machines section of this topic and save it as 06_worker.py on your computer. This template describes the worker machines that your cluster requires. Export the variables that the resource definition uses. 
Export the subnet that hosts the compute machines: USD export COMPUTE_SUBNET=(`gcloud compute networks subnets describe USD{INFRA_ID}-worker-subnet --region=USD{REGION} --format json | jq -r .selfLink`) Export the email address for your service account: USD export WORKER_SERVICE_ACCOUNT=(`gcloud iam service-accounts list --filter "email~^USD{INFRA_ID}-w@USD{PROJECT_NAME}." --format json | jq -r '.[0].email'`) Export the location of the compute machine Ignition config file: USD export WORKER_IGNITION=`cat <installation_directory>/worker.ign` Create a 06_worker.yaml resource definition file: USD cat <<EOF >06_worker.yaml imports: - path: 06_worker.py resources: - name: 'worker-0' 1 type: 06_worker.py properties: infra_id: 'USD{INFRA_ID}' 2 zone: 'USD{ZONE_0}' 3 compute_subnet: 'USD{COMPUTE_SUBNET}' 4 image: 'USD{CLUSTER_IMAGE}' 5 machine_type: 'n1-standard-4' 6 root_volume_size: '128' service_account_email: 'USD{WORKER_SERVICE_ACCOUNT}' 7 ignition: 'USD{WORKER_IGNITION}' 8 - name: 'worker-1' type: 06_worker.py properties: infra_id: 'USD{INFRA_ID}' 9 zone: 'USD{ZONE_1}' 10 compute_subnet: 'USD{COMPUTE_SUBNET}' 11 image: 'USD{CLUSTER_IMAGE}' 12 machine_type: 'n1-standard-4' 13 root_volume_size: '128' service_account_email: 'USD{WORKER_SERVICE_ACCOUNT}' 14 ignition: 'USD{WORKER_IGNITION}' 15 EOF 1 name is the name of the worker machine, for example worker-0 . 2 9 infra_id is the INFRA_ID infrastructure name from the extraction step. 3 10 zone is the zone to deploy the worker machine into, for example us-central1-a . 4 11 compute_subnet is the selfLink URL to the compute subnet. 5 12 image is the selfLink URL to the RHCOS image. 1 6 13 machine_type is the machine type of the instance, for example n1-standard-4 . 7 14 service_account_email is the email address for the worker service account that you created. 8 15 ignition is the contents of the worker.ign file. Optional: If you want to launch additional instances, include additional resources of type 06_worker.py in your 06_worker.yaml resource definition file. Create the deployment by using the gcloud CLI: USD gcloud deployment-manager deployments create USD{INFRA_ID}-worker --config 06_worker.yaml To use a GCP Marketplace image, specify the offer to use: OpenShift Container Platform: https://www.googleapis.com/compute/v1/projects/redhat-marketplace-public/global/images/redhat-coreos-ocp-413-x86-64-202305021736 OpenShift Platform Plus: https://www.googleapis.com/compute/v1/projects/redhat-marketplace-public/global/images/redhat-coreos-opp-413-x86-64-202305021736 OpenShift Kubernetes Engine: https://www.googleapis.com/compute/v1/projects/redhat-marketplace-public/global/images/redhat-coreos-oke-413-x86-64-202305021736 12.18.1. Deployment Manager template for worker machines You can use the following Deployment Manager template to deploy the worker machines that you need for your OpenShift Container Platform cluster: Example 12.33. 
06_worker.py Deployment Manager template def GenerateConfig(context): resources = [{ 'name': context.properties['infra_id'] + '-' + context.env['name'], 'type': 'compute.v1.instance', 'properties': { 'disks': [{ 'autoDelete': True, 'boot': True, 'initializeParams': { 'diskSizeGb': context.properties['root_volume_size'], 'sourceImage': context.properties['image'] } }], 'machineType': 'zones/' + context.properties['zone'] + '/machineTypes/' + context.properties['machine_type'], 'metadata': { 'items': [{ 'key': 'user-data', 'value': context.properties['ignition'] }] }, 'networkInterfaces': [{ 'subnetwork': context.properties['compute_subnet'] }], 'serviceAccounts': [{ 'email': context.properties['service_account_email'], 'scopes': ['https://www.googleapis.com/auth/cloud-platform'] }], 'tags': { 'items': [ context.properties['infra_id'] + '-worker', ] }, 'zone': context.properties['zone'] } }] return {'resources': resources} 12.19. Logging in to the cluster by using the CLI You can log in to your cluster as a default system user by exporting the cluster kubeconfig file. The kubeconfig file contains information about the cluster that is used by the CLI to connect a client to the correct cluster and API server. The file is specific to a cluster and is created during OpenShift Container Platform installation. Prerequisites You installed the oc CLI. Ensure the bootstrap process completed successfully. Procedure Export the kubeadmin credentials: USD export KUBECONFIG=<installation_directory>/auth/kubeconfig 1 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. Verify you can run oc commands successfully using the exported configuration: USD oc whoami Example output system:admin 12.20. Disabling the default OperatorHub catalog sources Operator catalogs that source content provided by Red Hat and community projects are configured for OperatorHub by default during an OpenShift Container Platform installation. In a restricted network environment, you must disable the default catalogs as a cluster administrator. Procedure Disable the sources for the default catalogs by adding disableAllDefaultSources: true to the OperatorHub object: USD oc patch OperatorHub cluster --type json \ -p '[{"op": "add", "path": "/spec/disableAllDefaultSources", "value": true}]' Tip Alternatively, you can use the web console to manage catalog sources. From the Administration Cluster Settings Configuration OperatorHub page, click the Sources tab, where you can create, update, delete, disable, and enable individual sources. 12.21. Approving the certificate signing requests for your machines When you add machines to a cluster, two pending certificate signing requests (CSRs) are generated for each machine that you added. You must confirm that these CSRs are approved or, if necessary, approve them yourself. The client requests must be approved first, followed by the server requests. Prerequisites You added machines to your cluster. Procedure Confirm that the cluster recognizes the machines: USD oc get nodes Example output NAME STATUS ROLES AGE VERSION master-0 Ready master 63m v1.28.5 master-1 Ready master 63m v1.28.5 master-2 Ready master 64m v1.28.5 The output lists all of the machines that you created. Note The preceding output might not include the compute nodes, also known as worker nodes, until some CSRs are approved. 
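If you are adding many compute machines at once, it can be tedious to track pending requests by eye. The following is a minimal sketch, not part of the documented procedure, that summarizes pending CSRs grouped by requestor; it assumes oc is on your PATH and KUBECONFIG is exported, and it uses the same convention as the go-template shown later in this procedure, where a CSR with no status set is still pending.
#!/usr/bin/env python3
# Minimal sketch: summarize pending certificate signing requests.
# Assumes `oc` is on PATH and KUBECONFIG points at the cluster.
import json
import subprocess
from collections import Counter

raw = subprocess.run(
    ["oc", "get", "csr", "-o", "json"],
    check=True, capture_output=True, text=True,
).stdout
items = json.loads(raw).get("items", [])

# A CSR with an empty status has not been approved yet, which matches
# the `{{if not .status}}` go-template used later in this procedure.
pending = [c for c in items if not c.get("status")]
by_requestor = Counter(c["spec"]["username"] for c in pending)

for requestor, count in sorted(by_requestor.items()):
    print(f"{count:3d} pending  {requestor}")
The manual review and approval steps that follow remain the authoritative procedure; this only helps you see at a glance when new requests arrive.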
Review the pending CSRs and ensure that you see the client requests with the Pending or Approved status for each machine that you added to the cluster: USD oc get csr Example output NAME AGE REQUESTOR CONDITION csr-8b2br 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending csr-8vnps 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending ... In this example, two machines are joining the cluster. You might see more approved CSRs in the list. If the CSRs were not approved, after all of the pending CSRs for the machines you added are in Pending status, approve the CSRs for your cluster machines: Note Because the CSRs rotate automatically, approve your CSRs within an hour of adding the machines to the cluster. If you do not approve them within an hour, the certificates will rotate, and more than two certificates will be present for each node. You must approve all of these certificates. After the client CSR is approved, the Kubelet creates a secondary CSR for the serving certificate, which requires manual approval. Then, subsequent serving certificate renewal requests are automatically approved by the machine-approver if the Kubelet requests a new certificate with identical parameters. Note For clusters running on platforms that are not machine API enabled, such as bare metal and other user-provisioned infrastructure, you must implement a method of automatically approving the kubelet serving certificate requests (CSRs). If a request is not approved, then the oc exec , oc rsh , and oc logs commands cannot succeed, because a serving certificate is required when the API server connects to the kubelet. Any operation that contacts the Kubelet endpoint requires this certificate approval to be in place. The method must watch for new CSRs, confirm that the CSR was submitted by the node-bootstrapper service account in the system:node or system:admin groups, and confirm the identity of the node. To approve them individually, run the following command for each valid CSR: USD oc adm certificate approve <csr_name> 1 1 <csr_name> is the name of a CSR from the list of current CSRs. To approve all pending CSRs, run the following command: USD oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve Note Some Operators might not become available until some CSRs are approved. Now that your client requests are approved, you must review the server requests for each machine that you added to the cluster: USD oc get csr Example output NAME AGE REQUESTOR CONDITION csr-bfd72 5m26s system:node:ip-10-0-50-126.us-east-2.compute.internal Pending csr-c57lv 5m26s system:node:ip-10-0-95-157.us-east-2.compute.internal Pending ... If the remaining CSRs are not approved, and are in the Pending status, approve the CSRs for your cluster machines: To approve them individually, run the following command for each valid CSR: USD oc adm certificate approve <csr_name> 1 1 <csr_name> is the name of a CSR from the list of current CSRs. To approve all pending CSRs, run the following command: USD oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs oc adm certificate approve After all client and server CSRs have been approved, the machines have the Ready status. 
Verify this by running the following command: USD oc get nodes Example output NAME STATUS ROLES AGE VERSION master-0 Ready master 73m v1.28.5 master-1 Ready master 73m v1.28.5 master-2 Ready master 74m v1.28.5 worker-0 Ready worker 11m v1.28.5 worker-1 Ready worker 11m v1.28.5 Note It can take a few minutes after approval of the server CSRs for the machines to transition to the Ready status. Additional information For more information on CSRs, see Certificate Signing Requests . 12.22. Optional: Adding the ingress DNS records If you removed the DNS zone configuration when creating Kubernetes manifests and generating Ignition configs, you must manually create DNS records that point at the ingress load balancer. You can create either a wildcard *.apps.{baseDomain}. or specific records. You can use A, CNAME, and other records per your requirements. Prerequisites Ensure you defined the variables in the Exporting common variables section. Remove the DNS Zone configuration when creating Kubernetes manifests and generating Ignition configs. Ensure the bootstrap process completed successfully. Procedure Wait for the Ingress router to create a load balancer and populate the EXTERNAL-IP field: USD oc -n openshift-ingress get service router-default Example output NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE router-default LoadBalancer 172.30.18.154 35.233.157.184 80:32288/TCP,443:31215/TCP 98 Add the A record to your zones: To use A records: Export the variable for the router IP address: USD export ROUTER_IP=`oc -n openshift-ingress get service router-default --no-headers | awk '{print USD4}'` Add the A record to the private zones: USD if [ -f transaction.yaml ]; then rm transaction.yaml; fi USD gcloud dns record-sets transaction start --zone USD{INFRA_ID}-private-zone USD gcloud dns record-sets transaction add USD{ROUTER_IP} --name \*.apps.USD{CLUSTER_NAME}.USD{BASE_DOMAIN}. --ttl 300 --type A --zone USD{INFRA_ID}-private-zone USD gcloud dns record-sets transaction execute --zone USD{INFRA_ID}-private-zone For an external cluster, also add the A record to the public zones: USD if [ -f transaction.yaml ]; then rm transaction.yaml; fi USD gcloud dns record-sets transaction start --zone USD{BASE_DOMAIN_ZONE_NAME} USD gcloud dns record-sets transaction add USD{ROUTER_IP} --name \*.apps.USD{CLUSTER_NAME}.USD{BASE_DOMAIN}. --ttl 300 --type A --zone USD{BASE_DOMAIN_ZONE_NAME} USD gcloud dns record-sets transaction execute --zone USD{BASE_DOMAIN_ZONE_NAME} To add explicit domains instead of using a wildcard, create entries for each of the cluster's current routes: USD oc get --all-namespaces -o jsonpath='{range .items[*]}{range .status.ingress[*]}{.host}{"\n"}{end}{end}' routes Example output oauth-openshift.apps.your.cluster.domain.example.com console-openshift-console.apps.your.cluster.domain.example.com downloads-openshift-console.apps.your.cluster.domain.example.com alertmanager-main-openshift-monitoring.apps.your.cluster.domain.example.com prometheus-k8s-openshift-monitoring.apps.your.cluster.domain.example.com 12.23. Completing a GCP installation on user-provisioned infrastructure After you start the OpenShift Container Platform installation on Google Cloud Platform (GCP) user-provisioned infrastructure, you can monitor the cluster events until the cluster is ready. Prerequisites Ensure the bootstrap process completed successfully. 
Procedure Complete the cluster installation: USD ./openshift-install --dir <installation_directory> wait-for install-complete 1 Example output INFO Waiting up to 30m0s for the cluster to initialize... 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. Important The Ignition config files that the installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information. It is recommended that you use Ignition config files within 12 hours after they are generated because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation. Observe the running state of your cluster. Run the following command to view the current cluster version and status: USD oc get clusterversion Example output NAME VERSION AVAILABLE PROGRESSING SINCE STATUS version False True 24m Working towards 4.5.4: 99% complete Run the following command to view the Operators managed on the control plane by the Cluster Version Operator (CVO): USD oc get clusteroperators Example output NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE authentication 4.5.4 True False False 7m56s cloud-credential 4.5.4 True False False 31m cluster-autoscaler 4.5.4 True False False 16m console 4.5.4 True False False 10m csi-snapshot-controller 4.5.4 True False False 16m dns 4.5.4 True False False 22m etcd 4.5.4 False False False 25s image-registry 4.5.4 True False False 16m ingress 4.5.4 True False False 16m insights 4.5.4 True False False 17m kube-apiserver 4.5.4 True False False 19m kube-controller-manager 4.5.4 True False False 20m kube-scheduler 4.5.4 True False False 20m kube-storage-version-migrator 4.5.4 True False False 16m machine-api 4.5.4 True False False 22m machine-config 4.5.4 True False False 22m marketplace 4.5.4 True False False 16m monitoring 4.5.4 True False False 10m network 4.5.4 True False False 23m node-tuning 4.5.4 True False False 23m openshift-apiserver 4.5.4 True False False 17m openshift-controller-manager 4.5.4 True False False 15m openshift-samples 4.5.4 True False False 16m operator-lifecycle-manager 4.5.4 True False False 22m operator-lifecycle-manager-catalog 4.5.4 True False False 22m operator-lifecycle-manager-packageserver 4.5.4 True False False 18m service-ca 4.5.4 True False False 23m service-catalog-apiserver 4.5.4 True False False 23m service-catalog-controller-manager 4.5.4 True False False 23m storage 4.5.4 True False False 17m Run the following command to view your cluster pods: USD oc get pods --all-namespaces Example output NAMESPACE NAME READY STATUS RESTARTS AGE kube-system etcd-member-ip-10-0-3-111.us-east-2.compute.internal 1/1 Running 0 35m kube-system etcd-member-ip-10-0-3-239.us-east-2.compute.internal 1/1 Running 0 37m kube-system etcd-member-ip-10-0-3-24.us-east-2.compute.internal 1/1 Running 0 35m openshift-apiserver-operator openshift-apiserver-operator-6d6674f4f4-h7t2t 1/1 Running 1 37m openshift-apiserver 
apiserver-fm48r 1/1 Running 0 30m openshift-apiserver apiserver-fxkvv 1/1 Running 0 29m openshift-apiserver apiserver-q85nm 1/1 Running 0 29m ... openshift-service-ca-operator openshift-service-ca-operator-66ff6dc6cd-9r257 1/1 Running 0 37m openshift-service-ca apiservice-cabundle-injector-695b6bcbc-cl5hm 1/1 Running 0 35m openshift-service-ca configmap-cabundle-injector-8498544d7-25qn6 1/1 Running 0 35m openshift-service-ca service-serving-cert-signer-6445fc9c6-wqdqn 1/1 Running 0 35m openshift-service-catalog-apiserver-operator openshift-service-catalog-apiserver-operator-549f44668b-b5q2w 1/1 Running 0 32m openshift-service-catalog-controller-manager-operator openshift-service-catalog-controller-manager-operator-b78cr2lnm 1/1 Running 0 31m When the current cluster version is AVAILABLE , the installation is complete. 12.24. Telemetry access for OpenShift Container Platform In OpenShift Container Platform 4.15, the Telemetry service, which runs by default to provide metrics about cluster health and the success of updates, requires internet access. If your cluster is connected to the internet, Telemetry runs automatically, and your cluster is registered to OpenShift Cluster Manager . After you confirm that your OpenShift Cluster Manager inventory is correct, either maintained automatically by Telemetry or manually by using OpenShift Cluster Manager, use subscription watch to track your OpenShift Container Platform subscriptions at the account or multi-cluster level. Additional resources See About remote health monitoring for more information about the Telemetry service 12.25. Next steps Customize your cluster . Configure image streams for the Cluster Samples Operator and the must-gather tool. Learn how to use Operator Lifecycle Manager (OLM) on restricted networks . If the mirror registry that you used to install your cluster has a trusted CA, add it to the cluster by configuring additional trust stores . If necessary, you can opt out of remote health reporting . If necessary, see Registering your disconnected cluster
[ "mkdir USDHOME/clusterconfig", "openshift-install create manifests --dir USDHOME/clusterconfig", "? SSH Public Key INFO Credentials loaded from the \"myprofile\" profile in file \"/home/myuser/.aws/credentials\" INFO Consuming Install Config from target directory INFO Manifests created in: USDHOME/clusterconfig/manifests and USDHOME/clusterconfig/openshift", "ls USDHOME/clusterconfig/openshift/", "99_kubeadmin-password-secret.yaml 99_openshift-cluster-api_master-machines-0.yaml 99_openshift-cluster-api_master-machines-1.yaml 99_openshift-cluster-api_master-machines-2.yaml", "variant: openshift version: 4.15.0 metadata: labels: machineconfiguration.openshift.io/role: worker name: 98-var-partition storage: disks: - device: /dev/disk/by-id/<device_name> 1 partitions: - label: var start_mib: <partition_start_offset> 2 size_mib: <partition_size> 3 number: 5 filesystems: - device: /dev/disk/by-partlabel/var path: /var format: xfs mount_options: [defaults, prjquota] 4 with_mount_unit: true", "butane USDHOME/clusterconfig/98-var-partition.bu -o USDHOME/clusterconfig/openshift/98-var-partition.yaml", "openshift-install create ignition-configs --dir USDHOME/clusterconfig ls USDHOME/clusterconfig/ auth bootstrap.ign master.ign metadata.json worker.ign", "./openshift-install create install-config --dir <installation_directory> 1", "pullSecret: '{\"auths\":{\"<mirror_host_name>:5000\": {\"auth\": \"<credentials>\",\"email\": \"[email protected]\"}}}'", "additionalTrustBundle: | -----BEGIN CERTIFICATE----- ZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZ -----END CERTIFICATE-----", "network: <existing_vpc> controlPlaneSubnet: <control_plane_subnet> computeSubnet: <compute_subnet>", "imageContentSources: - mirrors: - <mirror_host_name>:5000/<repo_name>/release source: quay.io/openshift-release-dev/ocp-release - mirrors: - <mirror_host_name>:5000/<repo_name>/release source: registry.redhat.io/ocp/release", "publish: Internal", "controlPlane: platform: gcp: secureBoot: Enabled", "compute: - platform: gcp: secureBoot: Enabled", "platform: gcp: defaultMachinePlatform: secureBoot: Enabled", "controlPlane: platform: gcp: confidentialCompute: Enabled 1 type: n2d-standard-8 2 onHostMaintenance: Terminate 3", "compute: - platform: gcp: confidentialCompute: Enabled type: n2d-standard-8 onHostMaintenance: Terminate", "platform: gcp: defaultMachinePlatform: confidentialCompute: Enabled type: n2d-standard-8 onHostMaintenance: Terminate", "apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5", "./openshift-install wait-for install-complete --log-level debug", "./openshift-install create manifests --dir <installation_directory> 1", "rm -f <installation_directory>/openshift/99_openshift-cluster-api_master-machines-*.yaml", "rm -f <installation_directory>/openshift/99_openshift-machine-api_master-control-plane-machine-set.yaml", "rm -f <installation_directory>/openshift/99_openshift-cluster-api_worker-machineset-*.yaml", "apiVersion: config.openshift.io/v1 kind: DNS metadata: creationTimestamp: null name: cluster spec: baseDomain: example.openshift.com privateZone: 1 id: mycluster-100419-private-zone publicZone: 2 id: example.openshift.com status: {}", "./openshift-install create ignition-configs --dir 
<installation_directory> 1", ". ├── auth │ ├── kubeadmin-password │ └── kubeconfig ├── bootstrap.ign ├── master.ign ├── metadata.json └── worker.ign", "jq -r .infraID <installation_directory>/metadata.json 1", "openshift-vw9j6 1", "export BASE_DOMAIN='<base_domain>' export BASE_DOMAIN_ZONE_NAME='<base_domain_zone_name>' export NETWORK_CIDR='10.0.0.0/16' export MASTER_SUBNET_CIDR='10.0.0.0/17' export WORKER_SUBNET_CIDR='10.0.128.0/17' export KUBECONFIG=<installation_directory>/auth/kubeconfig 1 export CLUSTER_NAME=`jq -r .clusterName <installation_directory>/metadata.json` export INFRA_ID=`jq -r .infraID <installation_directory>/metadata.json` export PROJECT_NAME=`jq -r .gcp.projectID <installation_directory>/metadata.json` export REGION=`jq -r .gcp.region <installation_directory>/metadata.json`", "cat <<EOF >01_vpc.yaml imports: - path: 01_vpc.py resources: - name: cluster-vpc type: 01_vpc.py properties: infra_id: 'USD{INFRA_ID}' 1 region: 'USD{REGION}' 2 master_subnet_cidr: 'USD{MASTER_SUBNET_CIDR}' 3 worker_subnet_cidr: 'USD{WORKER_SUBNET_CIDR}' 4 EOF", "gcloud deployment-manager deployments create USD{INFRA_ID}-vpc --config 01_vpc.yaml", "def GenerateConfig(context): resources = [{ 'name': context.properties['infra_id'] + '-network', 'type': 'compute.v1.network', 'properties': { 'region': context.properties['region'], 'autoCreateSubnetworks': False } }, { 'name': context.properties['infra_id'] + '-master-subnet', 'type': 'compute.v1.subnetwork', 'properties': { 'region': context.properties['region'], 'network': 'USD(ref.' + context.properties['infra_id'] + '-network.selfLink)', 'ipCidrRange': context.properties['master_subnet_cidr'] } }, { 'name': context.properties['infra_id'] + '-worker-subnet', 'type': 'compute.v1.subnetwork', 'properties': { 'region': context.properties['region'], 'network': 'USD(ref.' + context.properties['infra_id'] + '-network.selfLink)', 'ipCidrRange': context.properties['worker_subnet_cidr'] } }, { 'name': context.properties['infra_id'] + '-router', 'type': 'compute.v1.router', 'properties': { 'region': context.properties['region'], 'network': 'USD(ref.' + context.properties['infra_id'] + '-network.selfLink)', 'nats': [{ 'name': context.properties['infra_id'] + '-nat-master', 'natIpAllocateOption': 'AUTO_ONLY', 'minPortsPerVm': 7168, 'sourceSubnetworkIpRangesToNat': 'LIST_OF_SUBNETWORKS', 'subnetworks': [{ 'name': 'USD(ref.' + context.properties['infra_id'] + '-master-subnet.selfLink)', 'sourceIpRangesToNat': ['ALL_IP_RANGES'] }] }, { 'name': context.properties['infra_id'] + '-nat-worker', 'natIpAllocateOption': 'AUTO_ONLY', 'minPortsPerVm': 512, 'sourceSubnetworkIpRangesToNat': 'LIST_OF_SUBNETWORKS', 'subnetworks': [{ 'name': 'USD(ref.' 
+ context.properties['infra_id'] + '-worker-subnet.selfLink)', 'sourceIpRangesToNat': ['ALL_IP_RANGES'] }] }] } }] return {'resources': resources}", "export CLUSTER_NETWORK=(`gcloud compute networks describe USD{INFRA_ID}-network --format json | jq -r .selfLink`)", "export CONTROL_SUBNET=(`gcloud compute networks subnets describe USD{INFRA_ID}-master-subnet --region=USD{REGION} --format json | jq -r .selfLink`)", "export ZONE_0=(`gcloud compute regions describe USD{REGION} --format=json | jq -r .zones[0] | cut -d \"/\" -f9`)", "export ZONE_1=(`gcloud compute regions describe USD{REGION} --format=json | jq -r .zones[1] | cut -d \"/\" -f9`)", "export ZONE_2=(`gcloud compute regions describe USD{REGION} --format=json | jq -r .zones[2] | cut -d \"/\" -f9`)", "cat <<EOF >02_infra.yaml imports: - path: 02_lb_ext.py - path: 02_lb_int.py 1 resources: - name: cluster-lb-ext 2 type: 02_lb_ext.py properties: infra_id: 'USD{INFRA_ID}' 3 region: 'USD{REGION}' 4 - name: cluster-lb-int type: 02_lb_int.py properties: cluster_network: 'USD{CLUSTER_NETWORK}' control_subnet: 'USD{CONTROL_SUBNET}' 5 infra_id: 'USD{INFRA_ID}' region: 'USD{REGION}' zones: 6 - 'USD{ZONE_0}' - 'USD{ZONE_1}' - 'USD{ZONE_2}' EOF", "gcloud deployment-manager deployments create USD{INFRA_ID}-infra --config 02_infra.yaml", "export CLUSTER_IP=(`gcloud compute addresses describe USD{INFRA_ID}-cluster-ip --region=USD{REGION} --format json | jq -r .address`)", "export CLUSTER_PUBLIC_IP=(`gcloud compute addresses describe USD{INFRA_ID}-cluster-public-ip --region=USD{REGION} --format json | jq -r .address`)", "def GenerateConfig(context): resources = [{ 'name': context.properties['infra_id'] + '-cluster-public-ip', 'type': 'compute.v1.address', 'properties': { 'region': context.properties['region'] } }, { # Refer to docs/dev/kube-apiserver-health-check.md on how to correctly setup health check probe for kube-apiserver 'name': context.properties['infra_id'] + '-api-http-health-check', 'type': 'compute.v1.httpHealthCheck', 'properties': { 'port': 6080, 'requestPath': '/readyz' } }, { 'name': context.properties['infra_id'] + '-api-target-pool', 'type': 'compute.v1.targetPool', 'properties': { 'region': context.properties['region'], 'healthChecks': ['USD(ref.' + context.properties['infra_id'] + '-api-http-health-check.selfLink)'], 'instances': [] } }, { 'name': context.properties['infra_id'] + '-api-forwarding-rule', 'type': 'compute.v1.forwardingRule', 'properties': { 'region': context.properties['region'], 'IPAddress': 'USD(ref.' + context.properties['infra_id'] + '-cluster-public-ip.selfLink)', 'target': 'USD(ref.' + context.properties['infra_id'] + '-api-target-pool.selfLink)', 'portRange': '6443' } }] return {'resources': resources}", "def GenerateConfig(context): backends = [] for zone in context.properties['zones']: backends.append({ 'group': 'USD(ref.' 
+ context.properties['infra_id'] + '-master-' + zone + '-ig' + '.selfLink)' }) resources = [{ 'name': context.properties['infra_id'] + '-cluster-ip', 'type': 'compute.v1.address', 'properties': { 'addressType': 'INTERNAL', 'region': context.properties['region'], 'subnetwork': context.properties['control_subnet'] } }, { # Refer to docs/dev/kube-apiserver-health-check.md on how to correctly setup health check probe for kube-apiserver 'name': context.properties['infra_id'] + '-api-internal-health-check', 'type': 'compute.v1.healthCheck', 'properties': { 'httpsHealthCheck': { 'port': 6443, 'requestPath': '/readyz' }, 'type': \"HTTPS\" } }, { 'name': context.properties['infra_id'] + '-api-internal-backend-service', 'type': 'compute.v1.regionBackendService', 'properties': { 'backends': backends, 'healthChecks': ['USD(ref.' + context.properties['infra_id'] + '-api-internal-health-check.selfLink)'], 'loadBalancingScheme': 'INTERNAL', 'region': context.properties['region'], 'protocol': 'TCP', 'timeoutSec': 120 } }, { 'name': context.properties['infra_id'] + '-api-internal-forwarding-rule', 'type': 'compute.v1.forwardingRule', 'properties': { 'backendService': 'USD(ref.' + context.properties['infra_id'] + '-api-internal-backend-service.selfLink)', 'IPAddress': 'USD(ref.' + context.properties['infra_id'] + '-cluster-ip.selfLink)', 'loadBalancingScheme': 'INTERNAL', 'ports': ['6443','22623'], 'region': context.properties['region'], 'subnetwork': context.properties['control_subnet'] } }] for zone in context.properties['zones']: resources.append({ 'name': context.properties['infra_id'] + '-master-' + zone + '-ig', 'type': 'compute.v1.instanceGroup', 'properties': { 'namedPorts': [ { 'name': 'ignition', 'port': 22623 }, { 'name': 'https', 'port': 6443 } ], 'network': context.properties['cluster_network'], 'zone': zone } }) return {'resources': resources}", "cat <<EOF >02_dns.yaml imports: - path: 02_dns.py resources: - name: cluster-dns type: 02_dns.py properties: infra_id: 'USD{INFRA_ID}' 1 cluster_domain: 'USD{CLUSTER_NAME}.USD{BASE_DOMAIN}' 2 cluster_network: 'USD{CLUSTER_NETWORK}' 3 EOF", "gcloud deployment-manager deployments create USD{INFRA_ID}-dns --config 02_dns.yaml", "if [ -f transaction.yaml ]; then rm transaction.yaml; fi gcloud dns record-sets transaction start --zone USD{INFRA_ID}-private-zone gcloud dns record-sets transaction add USD{CLUSTER_IP} --name api.USD{CLUSTER_NAME}.USD{BASE_DOMAIN}. --ttl 60 --type A --zone USD{INFRA_ID}-private-zone gcloud dns record-sets transaction add USD{CLUSTER_IP} --name api-int.USD{CLUSTER_NAME}.USD{BASE_DOMAIN}. --ttl 60 --type A --zone USD{INFRA_ID}-private-zone gcloud dns record-sets transaction execute --zone USD{INFRA_ID}-private-zone", "if [ -f transaction.yaml ]; then rm transaction.yaml; fi gcloud dns record-sets transaction start --zone USD{BASE_DOMAIN_ZONE_NAME} gcloud dns record-sets transaction add USD{CLUSTER_PUBLIC_IP} --name api.USD{CLUSTER_NAME}.USD{BASE_DOMAIN}. 
--ttl 60 --type A --zone USD{BASE_DOMAIN_ZONE_NAME} gcloud dns record-sets transaction execute --zone USD{BASE_DOMAIN_ZONE_NAME}", "def GenerateConfig(context): resources = [{ 'name': context.properties['infra_id'] + '-private-zone', 'type': 'dns.v1.managedZone', 'properties': { 'description': '', 'dnsName': context.properties['cluster_domain'] + '.', 'visibility': 'private', 'privateVisibilityConfig': { 'networks': [{ 'networkUrl': context.properties['cluster_network'] }] } } }] return {'resources': resources}", "cat <<EOF >03_firewall.yaml imports: - path: 03_firewall.py resources: - name: cluster-firewall type: 03_firewall.py properties: allowed_external_cidr: '0.0.0.0/0' 1 infra_id: 'USD{INFRA_ID}' 2 cluster_network: 'USD{CLUSTER_NETWORK}' 3 network_cidr: 'USD{NETWORK_CIDR}' 4 EOF", "gcloud deployment-manager deployments create USD{INFRA_ID}-firewall --config 03_firewall.yaml", "def GenerateConfig(context): resources = [{ 'name': context.properties['infra_id'] + '-bootstrap-in-ssh', 'type': 'compute.v1.firewall', 'properties': { 'network': context.properties['cluster_network'], 'allowed': [{ 'IPProtocol': 'tcp', 'ports': ['22'] }], 'sourceRanges': [context.properties['allowed_external_cidr']], 'targetTags': [context.properties['infra_id'] + '-bootstrap'] } }, { 'name': context.properties['infra_id'] + '-api', 'type': 'compute.v1.firewall', 'properties': { 'network': context.properties['cluster_network'], 'allowed': [{ 'IPProtocol': 'tcp', 'ports': ['6443'] }], 'sourceRanges': [context.properties['allowed_external_cidr']], 'targetTags': [context.properties['infra_id'] + '-master'] } }, { 'name': context.properties['infra_id'] + '-health-checks', 'type': 'compute.v1.firewall', 'properties': { 'network': context.properties['cluster_network'], 'allowed': [{ 'IPProtocol': 'tcp', 'ports': ['6080', '6443', '22624'] }], 'sourceRanges': ['35.191.0.0/16', '130.211.0.0/22', '209.85.152.0/22', '209.85.204.0/22'], 'targetTags': [context.properties['infra_id'] + '-master'] } }, { 'name': context.properties['infra_id'] + '-etcd', 'type': 'compute.v1.firewall', 'properties': { 'network': context.properties['cluster_network'], 'allowed': [{ 'IPProtocol': 'tcp', 'ports': ['2379-2380'] }], 'sourceTags': [context.properties['infra_id'] + '-master'], 'targetTags': [context.properties['infra_id'] + '-master'] } }, { 'name': context.properties['infra_id'] + '-control-plane', 'type': 'compute.v1.firewall', 'properties': { 'network': context.properties['cluster_network'], 'allowed': [{ 'IPProtocol': 'tcp', 'ports': ['10257'] },{ 'IPProtocol': 'tcp', 'ports': ['10259'] },{ 'IPProtocol': 'tcp', 'ports': ['22623'] }], 'sourceTags': [ context.properties['infra_id'] + '-master', context.properties['infra_id'] + '-worker' ], 'targetTags': [context.properties['infra_id'] + '-master'] } }, { 'name': context.properties['infra_id'] + '-internal-network', 'type': 'compute.v1.firewall', 'properties': { 'network': context.properties['cluster_network'], 'allowed': [{ 'IPProtocol': 'icmp' },{ 'IPProtocol': 'tcp', 'ports': ['22'] }], 'sourceRanges': [context.properties['network_cidr']], 'targetTags': [ context.properties['infra_id'] + '-master', context.properties['infra_id'] + '-worker' ] } }, { 'name': context.properties['infra_id'] + '-internal-cluster', 'type': 'compute.v1.firewall', 'properties': { 'network': context.properties['cluster_network'], 'allowed': [{ 'IPProtocol': 'udp', 'ports': ['4789', '6081'] },{ 'IPProtocol': 'udp', 'ports': ['500', '4500'] },{ 'IPProtocol': 'esp', },{ 'IPProtocol': 'tcp', 'ports': 
['9000-9999'] },{ 'IPProtocol': 'udp', 'ports': ['9000-9999'] },{ 'IPProtocol': 'tcp', 'ports': ['10250'] },{ 'IPProtocol': 'tcp', 'ports': ['30000-32767'] },{ 'IPProtocol': 'udp', 'ports': ['30000-32767'] }], 'sourceTags': [ context.properties['infra_id'] + '-master', context.properties['infra_id'] + '-worker' ], 'targetTags': [ context.properties['infra_id'] + '-master', context.properties['infra_id'] + '-worker' ] } }] return {'resources': resources}", "cat <<EOF >03_iam.yaml imports: - path: 03_iam.py resources: - name: cluster-iam type: 03_iam.py properties: infra_id: 'USD{INFRA_ID}' 1 EOF", "gcloud deployment-manager deployments create USD{INFRA_ID}-iam --config 03_iam.yaml", "export MASTER_SERVICE_ACCOUNT=(`gcloud iam service-accounts list --filter \"email~^USD{INFRA_ID}-m@USD{PROJECT_NAME}.\" --format json | jq -r '.[0].email'`)", "export WORKER_SERVICE_ACCOUNT=(`gcloud iam service-accounts list --filter \"email~^USD{INFRA_ID}-w@USD{PROJECT_NAME}.\" --format json | jq -r '.[0].email'`)", "export COMPUTE_SUBNET=(`gcloud compute networks subnets describe USD{INFRA_ID}-worker-subnet --region=USD{REGION} --format json | jq -r .selfLink`)", "gcloud projects add-iam-policy-binding USD{PROJECT_NAME} --member \"serviceAccount:USD{MASTER_SERVICE_ACCOUNT}\" --role \"roles/compute.instanceAdmin\" gcloud projects add-iam-policy-binding USD{PROJECT_NAME} --member \"serviceAccount:USD{MASTER_SERVICE_ACCOUNT}\" --role \"roles/compute.networkAdmin\" gcloud projects add-iam-policy-binding USD{PROJECT_NAME} --member \"serviceAccount:USD{MASTER_SERVICE_ACCOUNT}\" --role \"roles/compute.securityAdmin\" gcloud projects add-iam-policy-binding USD{PROJECT_NAME} --member \"serviceAccount:USD{MASTER_SERVICE_ACCOUNT}\" --role \"roles/iam.serviceAccountUser\" gcloud projects add-iam-policy-binding USD{PROJECT_NAME} --member \"serviceAccount:USD{MASTER_SERVICE_ACCOUNT}\" --role \"roles/storage.admin\" gcloud projects add-iam-policy-binding USD{PROJECT_NAME} --member \"serviceAccount:USD{WORKER_SERVICE_ACCOUNT}\" --role \"roles/compute.viewer\" gcloud projects add-iam-policy-binding USD{PROJECT_NAME} --member \"serviceAccount:USD{WORKER_SERVICE_ACCOUNT}\" --role \"roles/storage.admin\"", "gcloud iam service-accounts keys create service-account-key.json --iam-account=USD{MASTER_SERVICE_ACCOUNT}", "def GenerateConfig(context): resources = [{ 'name': context.properties['infra_id'] + '-master-node-sa', 'type': 'iam.v1.serviceAccount', 'properties': { 'accountId': context.properties['infra_id'] + '-m', 'displayName': context.properties['infra_id'] + '-master-node' } }, { 'name': context.properties['infra_id'] + '-worker-node-sa', 'type': 'iam.v1.serviceAccount', 'properties': { 'accountId': context.properties['infra_id'] + '-w', 'displayName': context.properties['infra_id'] + '-worker-node' } }] return {'resources': resources}", "gsutil mb gs://<bucket_name>", "gsutil cp <downloaded_image_file_path>/rhcos-<version>-x86_64-gcp.x86_64.tar.gz gs://<bucket_name>", "export IMAGE_SOURCE=gs://<bucket_name>/rhcos-<version>-x86_64-gcp.x86_64.tar.gz", "gcloud compute images create \"USD{INFRA_ID}-rhcos-image\" --source-uri=\"USD{IMAGE_SOURCE}\"", "export CLUSTER_IMAGE=(`gcloud compute images describe USD{INFRA_ID}-rhcos-image --format json | jq -r .selfLink`)", "gsutil mb gs://USD{INFRA_ID}-bootstrap-ignition", "gsutil cp <installation_directory>/bootstrap.ign gs://USD{INFRA_ID}-bootstrap-ignition/", "export BOOTSTRAP_IGN=`gsutil signurl -d 1h service-account-key.json gs://USD{INFRA_ID}-bootstrap-ignition/bootstrap.ign | 
grep \"^gs:\" | awk '{print USD5}'`", "cat <<EOF >04_bootstrap.yaml imports: - path: 04_bootstrap.py resources: - name: cluster-bootstrap type: 04_bootstrap.py properties: infra_id: 'USD{INFRA_ID}' 1 region: 'USD{REGION}' 2 zone: 'USD{ZONE_0}' 3 cluster_network: 'USD{CLUSTER_NETWORK}' 4 control_subnet: 'USD{CONTROL_SUBNET}' 5 image: 'USD{CLUSTER_IMAGE}' 6 machine_type: 'n1-standard-4' 7 root_volume_size: '128' 8 bootstrap_ign: 'USD{BOOTSTRAP_IGN}' 9 EOF", "gcloud deployment-manager deployments create USD{INFRA_ID}-bootstrap --config 04_bootstrap.yaml", "gcloud compute instance-groups unmanaged add-instances USD{INFRA_ID}-bootstrap-ig --zone=USD{ZONE_0} --instances=USD{INFRA_ID}-bootstrap", "gcloud compute backend-services add-backend USD{INFRA_ID}-api-internal --region=USD{REGION} --instance-group=USD{INFRA_ID}-bootstrap-ig --instance-group-zone=USD{ZONE_0}", "def GenerateConfig(context): resources = [{ 'name': context.properties['infra_id'] + '-bootstrap-public-ip', 'type': 'compute.v1.address', 'properties': { 'region': context.properties['region'] } }, { 'name': context.properties['infra_id'] + '-bootstrap', 'type': 'compute.v1.instance', 'properties': { 'disks': [{ 'autoDelete': True, 'boot': True, 'initializeParams': { 'diskSizeGb': context.properties['root_volume_size'], 'sourceImage': context.properties['image'] } }], 'machineType': 'zones/' + context.properties['zone'] + '/machineTypes/' + context.properties['machine_type'], 'metadata': { 'items': [{ 'key': 'user-data', 'value': '{\"ignition\":{\"config\":{\"replace\":{\"source\":\"' + context.properties['bootstrap_ign'] + '\"}},\"version\":\"3.2.0\"}}', }] }, 'networkInterfaces': [{ 'subnetwork': context.properties['control_subnet'], 'accessConfigs': [{ 'natIP': 'USD(ref.' + context.properties['infra_id'] + '-bootstrap-public-ip.address)' }] }], 'tags': { 'items': [ context.properties['infra_id'] + '-master', context.properties['infra_id'] + '-bootstrap' ] }, 'zone': context.properties['zone'] } }, { 'name': context.properties['infra_id'] + '-bootstrap-ig', 'type': 'compute.v1.instanceGroup', 'properties': { 'namedPorts': [ { 'name': 'ignition', 'port': 22623 }, { 'name': 'https', 'port': 6443 } ], 'network': context.properties['cluster_network'], 'zone': context.properties['zone'] } }] return {'resources': resources}", "export MASTER_IGNITION=`cat <installation_directory>/master.ign`", "cat <<EOF >05_control_plane.yaml imports: - path: 05_control_plane.py resources: - name: cluster-control-plane type: 05_control_plane.py properties: infra_id: 'USD{INFRA_ID}' 1 zones: 2 - 'USD{ZONE_0}' - 'USD{ZONE_1}' - 'USD{ZONE_2}' control_subnet: 'USD{CONTROL_SUBNET}' 3 image: 'USD{CLUSTER_IMAGE}' 4 machine_type: 'n1-standard-4' 5 root_volume_size: '128' service_account_email: 'USD{MASTER_SERVICE_ACCOUNT}' 6 ignition: 'USD{MASTER_IGNITION}' 7 EOF", "gcloud deployment-manager deployments create USD{INFRA_ID}-control-plane --config 05_control_plane.yaml", "gcloud compute instance-groups unmanaged add-instances USD{INFRA_ID}-master-USD{ZONE_0}-ig --zone=USD{ZONE_0} --instances=USD{INFRA_ID}-master-0", "gcloud compute instance-groups unmanaged add-instances USD{INFRA_ID}-master-USD{ZONE_1}-ig --zone=USD{ZONE_1} --instances=USD{INFRA_ID}-master-1", "gcloud compute instance-groups unmanaged add-instances USD{INFRA_ID}-master-USD{ZONE_2}-ig --zone=USD{ZONE_2} --instances=USD{INFRA_ID}-master-2", "gcloud compute target-pools add-instances USD{INFRA_ID}-api-target-pool --instances-zone=\"USD{ZONE_0}\" --instances=USD{INFRA_ID}-master-0", "gcloud compute 
target-pools add-instances USD{INFRA_ID}-api-target-pool --instances-zone=\"USD{ZONE_1}\" --instances=USD{INFRA_ID}-master-1", "gcloud compute target-pools add-instances USD{INFRA_ID}-api-target-pool --instances-zone=\"USD{ZONE_2}\" --instances=USD{INFRA_ID}-master-2", "def GenerateConfig(context): resources = [{ 'name': context.properties['infra_id'] + '-master-0', 'type': 'compute.v1.instance', 'properties': { 'disks': [{ 'autoDelete': True, 'boot': True, 'initializeParams': { 'diskSizeGb': context.properties['root_volume_size'], 'diskType': 'zones/' + context.properties['zones'][0] + '/diskTypes/pd-ssd', 'sourceImage': context.properties['image'] } }], 'machineType': 'zones/' + context.properties['zones'][0] + '/machineTypes/' + context.properties['machine_type'], 'metadata': { 'items': [{ 'key': 'user-data', 'value': context.properties['ignition'] }] }, 'networkInterfaces': [{ 'subnetwork': context.properties['control_subnet'] }], 'serviceAccounts': [{ 'email': context.properties['service_account_email'], 'scopes': ['https://www.googleapis.com/auth/cloud-platform'] }], 'tags': { 'items': [ context.properties['infra_id'] + '-master', ] }, 'zone': context.properties['zones'][0] } }, { 'name': context.properties['infra_id'] + '-master-1', 'type': 'compute.v1.instance', 'properties': { 'disks': [{ 'autoDelete': True, 'boot': True, 'initializeParams': { 'diskSizeGb': context.properties['root_volume_size'], 'diskType': 'zones/' + context.properties['zones'][1] + '/diskTypes/pd-ssd', 'sourceImage': context.properties['image'] } }], 'machineType': 'zones/' + context.properties['zones'][1] + '/machineTypes/' + context.properties['machine_type'], 'metadata': { 'items': [{ 'key': 'user-data', 'value': context.properties['ignition'] }] }, 'networkInterfaces': [{ 'subnetwork': context.properties['control_subnet'] }], 'serviceAccounts': [{ 'email': context.properties['service_account_email'], 'scopes': ['https://www.googleapis.com/auth/cloud-platform'] }], 'tags': { 'items': [ context.properties['infra_id'] + '-master', ] }, 'zone': context.properties['zones'][1] } }, { 'name': context.properties['infra_id'] + '-master-2', 'type': 'compute.v1.instance', 'properties': { 'disks': [{ 'autoDelete': True, 'boot': True, 'initializeParams': { 'diskSizeGb': context.properties['root_volume_size'], 'diskType': 'zones/' + context.properties['zones'][2] + '/diskTypes/pd-ssd', 'sourceImage': context.properties['image'] } }], 'machineType': 'zones/' + context.properties['zones'][2] + '/machineTypes/' + context.properties['machine_type'], 'metadata': { 'items': [{ 'key': 'user-data', 'value': context.properties['ignition'] }] }, 'networkInterfaces': [{ 'subnetwork': context.properties['control_subnet'] }], 'serviceAccounts': [{ 'email': context.properties['service_account_email'], 'scopes': ['https://www.googleapis.com/auth/cloud-platform'] }], 'tags': { 'items': [ context.properties['infra_id'] + '-master', ] }, 'zone': context.properties['zones'][2] } }] return {'resources': resources}", "./openshift-install wait-for bootstrap-complete --dir <installation_directory> \\ 1 --log-level info 2", "gcloud compute backend-services remove-backend USD{INFRA_ID}-api-internal --region=USD{REGION} --instance-group=USD{INFRA_ID}-bootstrap-ig --instance-group-zone=USD{ZONE_0}", "gsutil rm gs://USD{INFRA_ID}-bootstrap-ignition/bootstrap.ign", "gsutil rb gs://USD{INFRA_ID}-bootstrap-ignition", "gcloud deployment-manager deployments delete USD{INFRA_ID}-bootstrap", "export COMPUTE_SUBNET=(`gcloud compute networks subnets 
describe USD{INFRA_ID}-worker-subnet --region=USD{REGION} --format json | jq -r .selfLink`)", "export WORKER_SERVICE_ACCOUNT=(`gcloud iam service-accounts list --filter \"email~^USD{INFRA_ID}-w@USD{PROJECT_NAME}.\" --format json | jq -r '.[0].email'`)", "export WORKER_IGNITION=`cat <installation_directory>/worker.ign`", "cat <<EOF >06_worker.yaml imports: - path: 06_worker.py resources: - name: 'worker-0' 1 type: 06_worker.py properties: infra_id: 'USD{INFRA_ID}' 2 zone: 'USD{ZONE_0}' 3 compute_subnet: 'USD{COMPUTE_SUBNET}' 4 image: 'USD{CLUSTER_IMAGE}' 5 machine_type: 'n1-standard-4' 6 root_volume_size: '128' service_account_email: 'USD{WORKER_SERVICE_ACCOUNT}' 7 ignition: 'USD{WORKER_IGNITION}' 8 - name: 'worker-1' type: 06_worker.py properties: infra_id: 'USD{INFRA_ID}' 9 zone: 'USD{ZONE_1}' 10 compute_subnet: 'USD{COMPUTE_SUBNET}' 11 image: 'USD{CLUSTER_IMAGE}' 12 machine_type: 'n1-standard-4' 13 root_volume_size: '128' service_account_email: 'USD{WORKER_SERVICE_ACCOUNT}' 14 ignition: 'USD{WORKER_IGNITION}' 15 EOF", "gcloud deployment-manager deployments create USD{INFRA_ID}-worker --config 06_worker.yaml", "def GenerateConfig(context): resources = [{ 'name': context.properties['infra_id'] + '-' + context.env['name'], 'type': 'compute.v1.instance', 'properties': { 'disks': [{ 'autoDelete': True, 'boot': True, 'initializeParams': { 'diskSizeGb': context.properties['root_volume_size'], 'sourceImage': context.properties['image'] } }], 'machineType': 'zones/' + context.properties['zone'] + '/machineTypes/' + context.properties['machine_type'], 'metadata': { 'items': [{ 'key': 'user-data', 'value': context.properties['ignition'] }] }, 'networkInterfaces': [{ 'subnetwork': context.properties['compute_subnet'] }], 'serviceAccounts': [{ 'email': context.properties['service_account_email'], 'scopes': ['https://www.googleapis.com/auth/cloud-platform'] }], 'tags': { 'items': [ context.properties['infra_id'] + '-worker', ] }, 'zone': context.properties['zone'] } }] return {'resources': resources}", "export KUBECONFIG=<installation_directory>/auth/kubeconfig 1", "oc whoami", "system:admin", "oc patch OperatorHub cluster --type json -p '[{\"op\": \"add\", \"path\": \"/spec/disableAllDefaultSources\", \"value\": true}]'", "oc get nodes", "NAME STATUS ROLES AGE VERSION master-0 Ready master 63m v1.28.5 master-1 Ready master 63m v1.28.5 master-2 Ready master 64m v1.28.5", "oc get csr", "NAME AGE REQUESTOR CONDITION csr-8b2br 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending csr-8vnps 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending", "oc adm certificate approve <csr_name> 1", "oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve", "oc get csr", "NAME AGE REQUESTOR CONDITION csr-bfd72 5m26s system:node:ip-10-0-50-126.us-east-2.compute.internal Pending csr-c57lv 5m26s system:node:ip-10-0-95-157.us-east-2.compute.internal Pending", "oc adm certificate approve <csr_name> 1", "oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs oc adm certificate approve", "oc get nodes", "NAME STATUS ROLES AGE VERSION master-0 Ready master 73m v1.28.5 master-1 Ready master 73m v1.28.5 master-2 Ready master 74m v1.28.5 worker-0 Ready worker 11m v1.28.5 worker-1 Ready worker 11m v1.28.5", "oc -n openshift-ingress get service router-default", "NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE 
router-default LoadBalancer 172.30.18.154 35.233.157.184 80:32288/TCP,443:31215/TCP 98", "export ROUTER_IP=`oc -n openshift-ingress get service router-default --no-headers | awk '{print USD4}'`", "if [ -f transaction.yaml ]; then rm transaction.yaml; fi gcloud dns record-sets transaction start --zone USD{INFRA_ID}-private-zone gcloud dns record-sets transaction add USD{ROUTER_IP} --name \\*.apps.USD{CLUSTER_NAME}.USD{BASE_DOMAIN}. --ttl 300 --type A --zone USD{INFRA_ID}-private-zone gcloud dns record-sets transaction execute --zone USD{INFRA_ID}-private-zone", "if [ -f transaction.yaml ]; then rm transaction.yaml; fi gcloud dns record-sets transaction start --zone USD{BASE_DOMAIN_ZONE_NAME} gcloud dns record-sets transaction add USD{ROUTER_IP} --name \\*.apps.USD{CLUSTER_NAME}.USD{BASE_DOMAIN}. --ttl 300 --type A --zone USD{BASE_DOMAIN_ZONE_NAME} gcloud dns record-sets transaction execute --zone USD{BASE_DOMAIN_ZONE_NAME}", "oc get --all-namespaces -o jsonpath='{range .items[*]}{range .status.ingress[*]}{.host}{\"\\n\"}{end}{end}' routes", "oauth-openshift.apps.your.cluster.domain.example.com console-openshift-console.apps.your.cluster.domain.example.com downloads-openshift-console.apps.your.cluster.domain.example.com alertmanager-main-openshift-monitoring.apps.your.cluster.domain.example.com prometheus-k8s-openshift-monitoring.apps.your.cluster.domain.example.com", "./openshift-install --dir <installation_directory> wait-for install-complete 1", "INFO Waiting up to 30m0s for the cluster to initialize", "oc get clusterversion", "NAME VERSION AVAILABLE PROGRESSING SINCE STATUS version False True 24m Working towards 4.5.4: 99% complete", "oc get clusteroperators", "NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE authentication 4.5.4 True False False 7m56s cloud-credential 4.5.4 True False False 31m cluster-autoscaler 4.5.4 True False False 16m console 4.5.4 True False False 10m csi-snapshot-controller 4.5.4 True False False 16m dns 4.5.4 True False False 22m etcd 4.5.4 False False False 25s image-registry 4.5.4 True False False 16m ingress 4.5.4 True False False 16m insights 4.5.4 True False False 17m kube-apiserver 4.5.4 True False False 19m kube-controller-manager 4.5.4 True False False 20m kube-scheduler 4.5.4 True False False 20m kube-storage-version-migrator 4.5.4 True False False 16m machine-api 4.5.4 True False False 22m machine-config 4.5.4 True False False 22m marketplace 4.5.4 True False False 16m monitoring 4.5.4 True False False 10m network 4.5.4 True False False 23m node-tuning 4.5.4 True False False 23m openshift-apiserver 4.5.4 True False False 17m openshift-controller-manager 4.5.4 True False False 15m openshift-samples 4.5.4 True False False 16m operator-lifecycle-manager 4.5.4 True False False 22m operator-lifecycle-manager-catalog 4.5.4 True False False 22m operator-lifecycle-manager-packageserver 4.5.4 True False False 18m service-ca 4.5.4 True False False 23m service-catalog-apiserver 4.5.4 True False False 23m service-catalog-controller-manager 4.5.4 True False False 23m storage 4.5.4 True False False 17m", "oc get pods --all-namespaces", "NAMESPACE NAME READY STATUS RESTARTS AGE kube-system etcd-member-ip-10-0-3-111.us-east-2.compute.internal 1/1 Running 0 35m kube-system etcd-member-ip-10-0-3-239.us-east-2.compute.internal 1/1 Running 0 37m kube-system etcd-member-ip-10-0-3-24.us-east-2.compute.internal 1/1 Running 0 35m openshift-apiserver-operator openshift-apiserver-operator-6d6674f4f4-h7t2t 1/1 Running 1 37m openshift-apiserver apiserver-fm48r 1/1 Running 0 30m 
openshift-apiserver apiserver-fxkvv 1/1 Running 0 29m openshift-apiserver apiserver-q85nm 1/1 Running 0 29m openshift-service-ca-operator openshift-service-ca-operator-66ff6dc6cd-9r257 1/1 Running 0 37m openshift-service-ca apiservice-cabundle-injector-695b6bcbc-cl5hm 1/1 Running 0 35m openshift-service-ca configmap-cabundle-injector-8498544d7-25qn6 1/1 Running 0 35m openshift-service-ca service-serving-cert-signer-6445fc9c6-wqdqn 1/1 Running 0 35m openshift-service-catalog-apiserver-operator openshift-service-catalog-apiserver-operator-549f44668b-b5q2w 1/1 Running 0 32m openshift-service-catalog-controller-manager-operator openshift-service-catalog-controller-manager-operator-b78cr2lnm 1/1 Running 0 31m" ]
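The 06_worker.yaml definition above enumerates each compute machine by hand (worker-0 in ZONE_0, worker-1 in ZONE_1). For larger clusters the same file can be generated programmatically. The following Python sketch is illustrative only and is not part of the documented procedure: the render_workers helper, the worker count, and the round-robin zone assignment are assumptions, and it expects the same environment variables exported in the preceding steps (INFRA_ID, CLUSTER_IMAGE, COMPUTE_SUBNET, WORKER_SERVICE_ACCOUNT, WORKER_IGNITION, ZONE_0, ZONE_1, ZONE_2).

#!/usr/bin/env python3
"""Sketch: emit a 06_worker.yaml with N worker resources, cycling through the
available zones. Assumes the environment variables used in the manual steps
above are already exported; names and counts here are illustrative only."""

import os

TEMPLATE = """- name: 'worker-{index}'
  type: 06_worker.py
  properties:
    infra_id: '{infra_id}'
    zone: '{zone}'
    compute_subnet: '{compute_subnet}'
    image: '{image}'
    machine_type: 'n1-standard-4'
    root_volume_size: '128'
    service_account_email: '{service_account}'
    ignition: '{ignition}'
"""

def render_workers(count):
    # Assign workers to zones round-robin, mirroring the manual worker-0/worker-1 layout.
    zones = [os.environ["ZONE_0"], os.environ["ZONE_1"], os.environ["ZONE_2"]]
    resources = []
    for index in range(count):
        resources.append(TEMPLATE.format(
            index=index,
            infra_id=os.environ["INFRA_ID"],
            zone=zones[index % len(zones)],
            compute_subnet=os.environ["COMPUTE_SUBNET"],
            image=os.environ["CLUSTER_IMAGE"],
            service_account=os.environ["WORKER_SERVICE_ACCOUNT"],
            ignition=os.environ["WORKER_IGNITION"],
        ))
    return "imports:\n- path: 06_worker.py\nresources:\n" + "".join(resources)

if __name__ == "__main__":
    with open("06_worker.yaml", "w") as handle:
        handle.write(render_workers(3))  # e.g. three workers, one per zone

The generated file can then be passed to the same gcloud deployment-manager deployments create step shown above.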
https://docs.redhat.com/en/documentation/openshift_container_platform/4.15/html/installing_on_gcp/installing-restricted-networks-gcp
Chapter 18. Header
Chapter 18. Header Overview The header language provides a convenient way of accessing header values in the current message. When you supply a header name, the header language performs a case-insensitive lookup and returns the corresponding header value. The header language is part of camel-core. XML example For example, to resequence incoming exchanges according to the value of a SequenceNumber header (where the sequence number must be a positive integer), you can define a route as follows: Java example The same route can be defined in Java, as follows:
[ "<camelContext xmlns=\"http://camel.apache.org/schema/spring\"> <route> <from uri=\" SourceURL \"/> <resequence> <language language=\"header\">SequenceNumber</language> </resequence> <to uri=\" TargetURL \"/> </route> </camelContext>", "from(\" SourceURL \") .resequence(header(\"SequenceNumber\")) .to(\" TargetURL \");" ]
https://docs.redhat.com/en/documentation/red_hat_fuse/7.13/html/apache_camel_development_guide/header
Chapter 5. PersistentVolume [v1]
Chapter 5. PersistentVolume [v1] Description PersistentVolume (PV) is a storage resource provisioned by an administrator. It is analogous to a node. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes Type object 5.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata spec object PersistentVolumeSpec is the specification of a persistent volume. status object PersistentVolumeStatus is the current status of a persistent volume. 5.1.1. .spec Description PersistentVolumeSpec is the specification of a persistent volume. Type object Property Type Description accessModes array (string) accessModes contains all ways the volume can be mounted. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#access-modes awsElasticBlockStore object Represents a Persistent Disk resource in AWS. An AWS EBS disk must exist before mounting to a container. The disk must also be in the same AWS zone as the kubelet. An AWS EBS disk can only be mounted as read/write once. AWS EBS volumes support ownership management and SELinux relabeling. azureDisk object AzureDisk represents an Azure Data Disk mount on the host and bind mount to the pod. azureFile object AzureFile represents an Azure File Service mount on the host and bind mount to the pod. capacity object (Quantity) capacity is the description of the persistent volume's resources and capacity. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#capacity cephfs object Represents a Ceph Filesystem mount that lasts the lifetime of a pod Cephfs volumes do not support ownership management or SELinux relabeling. cinder object Represents a cinder volume resource in Openstack. A Cinder volume must exist before mounting to a container. The volume must also be in the same region as the kubelet. Cinder volumes support ownership management and SELinux relabeling. claimRef object ObjectReference contains enough information to let you inspect or modify the referred object. csi object Represents storage that is managed by an external CSI volume driver (Beta feature) fc object Represents a Fibre Channel volume. Fibre Channel volumes can only be mounted as read/write once. Fibre Channel volumes support ownership management and SELinux relabeling. flexVolume object FlexPersistentVolumeSource represents a generic persistent volume resource that is provisioned/attached using an exec based plugin. flocker object Represents a Flocker volume mounted by the Flocker agent. One and only one of datasetName and datasetUUID should be set. Flocker volumes do not support ownership management or SELinux relabeling. gcePersistentDisk object Represents a Persistent Disk resource in Google Compute Engine. A GCE PD must exist before mounting to a container. 
The disk must also be in the same GCE project and zone as the kubelet. A GCE PD can only be mounted as read/write once or read-only many times. GCE PDs support ownership management and SELinux relabeling. glusterfs object Represents a Glusterfs mount that lasts the lifetime of a pod. Glusterfs volumes do not support ownership management or SELinux relabeling. hostPath object Represents a host path mapped into a pod. Host path volumes do not support ownership management or SELinux relabeling. iscsi object ISCSIPersistentVolumeSource represents an ISCSI disk. ISCSI volumes can only be mounted as read/write once. ISCSI volumes support ownership management and SELinux relabeling. local object Local represents directly-attached storage with node affinity (Beta feature) mountOptions array (string) mountOptions is the list of mount options, e.g. ["ro", "soft"]. Not validated - mount will simply fail if one is invalid. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes/#mount-options nfs object Represents an NFS mount that lasts the lifetime of a pod. NFS volumes do not support ownership management or SELinux relabeling. nodeAffinity object VolumeNodeAffinity defines constraints that limit what nodes this volume can be accessed from. persistentVolumeReclaimPolicy string persistentVolumeReclaimPolicy defines what happens to a persistent volume when released from its claim. Valid options are Retain (default for manually created PersistentVolumes), Delete (default for dynamically provisioned PersistentVolumes), and Recycle (deprecated). Recycle must be supported by the volume plugin underlying this PersistentVolume. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#reclaiming Possible enum values: - "Delete" means the volume will be deleted from Kubernetes on release from its claim. The volume plugin must support Deletion. - "Recycle" means the volume will be recycled back into the pool of unbound persistent volumes on release from its claim. The volume plugin must support Recycling. - "Retain" means the volume will be left in its current phase (Released) for manual reclamation by the administrator. The default policy is Retain. photonPersistentDisk object Represents a Photon Controller persistent disk resource. portworxVolume object PortworxVolumeSource represents a Portworx volume resource. quobyte object Represents a Quobyte mount that lasts the lifetime of a pod. Quobyte volumes do not support ownership management or SELinux relabeling. rbd object Represents a Rados Block Device mount that lasts the lifetime of a pod. RBD volumes support ownership management and SELinux relabeling. scaleIO object ScaleIOPersistentVolumeSource represents a persistent ScaleIO volume storageClassName string storageClassName is the name of StorageClass to which this persistent volume belongs. Empty value means that this volume does not belong to any StorageClass. storageos object Represents a StorageOS persistent volume resource. volumeMode string volumeMode defines if a volume is intended to be used with a formatted filesystem or to remain in raw block state. Value of Filesystem is implied when not included in spec. Possible enum values: - "Block" means the volume will not be formatted with a filesystem and will remain a raw block device. - "Filesystem" means the volume will be or is formatted with a filesystem. vsphereVolume object Represents a vSphere volume resource. 5.1.2. .spec.awsElasticBlockStore Description Represents a Persistent Disk resource in AWS. 
An AWS EBS disk must exist before mounting to a container. The disk must also be in the same AWS zone as the kubelet. An AWS EBS disk can only be mounted as read/write once. AWS EBS volumes support ownership management and SELinux relabeling. Type object Required volumeID Property Type Description fsType string fsType is the filesystem type of the volume that you want to mount. Tip: Ensure that the filesystem type is supported by the host operating system. Examples: "ext4", "xfs", "ntfs". Implicitly inferred to be "ext4" if unspecified. More info: https://kubernetes.io/docs/concepts/storage/volumes#awselasticblockstore partition integer partition is the partition in the volume that you want to mount. If omitted, the default is to mount by volume name. Examples: For volume /dev/sda1, you specify the partition as "1". Similarly, the volume partition for /dev/sda is "0" (or you can leave the property empty). readOnly boolean readOnly value true will force the readOnly setting in VolumeMounts. More info: https://kubernetes.io/docs/concepts/storage/volumes#awselasticblockstore volumeID string volumeID is unique ID of the persistent disk resource in AWS (Amazon EBS volume). More info: https://kubernetes.io/docs/concepts/storage/volumes#awselasticblockstore 5.1.3. .spec.azureDisk Description AzureDisk represents an Azure Data Disk mount on the host and bind mount to the pod. Type object Required diskName diskURI Property Type Description cachingMode string cachingMode is the Host Caching mode: None, Read Only, Read Write. Possible enum values: - "None" - "ReadOnly" - "ReadWrite" diskName string diskName is the Name of the data disk in the blob storage diskURI string diskURI is the URI of data disk in the blob storage fsType string fsType is Filesystem type to mount. Must be a filesystem type supported by the host operating system. Ex. "ext4", "xfs", "ntfs". Implicitly inferred to be "ext4" if unspecified. kind string kind expected values are Shared: multiple blob disks per storage account Dedicated: single blob disk per storage account Managed: azure managed data disk (only in managed availability set). defaults to shared Possible enum values: - "Dedicated" - "Managed" - "Shared" readOnly boolean readOnly Defaults to false (read/write). ReadOnly here will force the ReadOnly setting in VolumeMounts. 5.1.4. .spec.azureFile Description AzureFile represents an Azure File Service mount on the host and bind mount to the pod. Type object Required secretName shareName Property Type Description readOnly boolean readOnly defaults to false (read/write). ReadOnly here will force the ReadOnly setting in VolumeMounts. secretName string secretName is the name of secret that contains Azure Storage Account Name and Key secretNamespace string secretNamespace is the namespace of the secret that contains Azure Storage Account Name and Key default is the same as the Pod shareName string shareName is the azure Share Name 5.1.5. .spec.cephfs Description Represents a Ceph Filesystem mount that lasts the lifetime of a pod Cephfs volumes do not support ownership management or SELinux relabeling. Type object Required monitors Property Type Description monitors array (string) monitors is Required: Monitors is a collection of Ceph monitors More info: https://examples.k8s.io/volumes/cephfs/README.md#how-to-use-it path string path is Optional: Used as the mounted root, rather than the full Ceph tree, default is / readOnly boolean readOnly is Optional: Defaults to false (read/write). 
ReadOnly here will force the ReadOnly setting in VolumeMounts. More info: https://examples.k8s.io/volumes/cephfs/README.md#how-to-use-it secretFile string secretFile is Optional: SecretFile is the path to key ring for User, default is /etc/ceph/user.secret More info: https://examples.k8s.io/volumes/cephfs/README.md#how-to-use-it secretRef object SecretReference represents a Secret Reference. It has enough information to retrieve secret in any namespace user string user is Optional: User is the rados user name, default is admin More info: https://examples.k8s.io/volumes/cephfs/README.md#how-to-use-it 5.1.6. .spec.cephfs.secretRef Description SecretReference represents a Secret Reference. It has enough information to retrieve secret in any namespace Type object Property Type Description name string name is unique within a namespace to reference a secret resource. namespace string namespace defines the space within which the secret name must be unique. 5.1.7. .spec.cinder Description Represents a cinder volume resource in Openstack. A Cinder volume must exist before mounting to a container. The volume must also be in the same region as the kubelet. Cinder volumes support ownership management and SELinux relabeling. Type object Required volumeID Property Type Description fsType string fsType Filesystem type to mount. Must be a filesystem type supported by the host operating system. Examples: "ext4", "xfs", "ntfs". Implicitly inferred to be "ext4" if unspecified. More info: https://examples.k8s.io/mysql-cinder-pd/README.md readOnly boolean readOnly is Optional: Defaults to false (read/write). ReadOnly here will force the ReadOnly setting in VolumeMounts. More info: https://examples.k8s.io/mysql-cinder-pd/README.md secretRef object SecretReference represents a Secret Reference. It has enough information to retrieve secret in any namespace volumeID string volumeID used to identify the volume in cinder. More info: https://examples.k8s.io/mysql-cinder-pd/README.md 5.1.8. .spec.cinder.secretRef Description SecretReference represents a Secret Reference. It has enough information to retrieve secret in any namespace Type object Property Type Description name string name is unique within a namespace to reference a secret resource. namespace string namespace defines the space within which the secret name must be unique. 5.1.9. .spec.claimRef Description ObjectReference contains enough information to let you inspect or modify the referred object. Type object Property Type Description apiVersion string API version of the referent. fieldPath string If referring to a piece of an object instead of an entire object, this string should contain a valid JSON/Go field access statement, such as desiredState.manifest.containers[2]. For example, if the object reference is to a container within a pod, this would take on a value like: "spec.containers{name}" (where "name" refers to the name of the container that triggered the event) or if no container name is specified "spec.containers[2]" (container with index 2 in this pod). This syntax is chosen only to have some well-defined way of referencing a part of an object. kind string Kind of the referent. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names namespace string Namespace of the referent. 
More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/namespaces/ resourceVersion string Specific resourceVersion to which this reference is made, if any. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#concurrency-control-and-consistency uid string UID of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#uids 5.1.10. .spec.csi Description Represents storage that is managed by an external CSI volume driver (Beta feature) Type object Required driver volumeHandle Property Type Description controllerExpandSecretRef object SecretReference represents a Secret Reference. It has enough information to retrieve secret in any namespace controllerPublishSecretRef object SecretReference represents a Secret Reference. It has enough information to retrieve secret in any namespace driver string driver is the name of the driver to use for this volume. Required. fsType string fsType to mount. Must be a filesystem type supported by the host operating system. Ex. "ext4", "xfs", "ntfs". nodeExpandSecretRef object SecretReference represents a Secret Reference. It has enough information to retrieve secret in any namespace nodePublishSecretRef object SecretReference represents a Secret Reference. It has enough information to retrieve secret in any namespace nodeStageSecretRef object SecretReference represents a Secret Reference. It has enough information to retrieve secret in any namespace readOnly boolean readOnly value to pass to ControllerPublishVolumeRequest. Defaults to false (read/write). volumeAttributes object (string) volumeAttributes of the volume to publish. volumeHandle string volumeHandle is the unique volume name returned by the CSI volume plugin's CreateVolume to refer to the volume on all subsequent calls. Required. 5.1.11. .spec.csi.controllerExpandSecretRef Description SecretReference represents a Secret Reference. It has enough information to retrieve secret in any namespace Type object Property Type Description name string name is unique within a namespace to reference a secret resource. namespace string namespace defines the space within which the secret name must be unique. 5.1.12. .spec.csi.controllerPublishSecretRef Description SecretReference represents a Secret Reference. It has enough information to retrieve secret in any namespace Type object Property Type Description name string name is unique within a namespace to reference a secret resource. namespace string namespace defines the space within which the secret name must be unique. 5.1.13. .spec.csi.nodeExpandSecretRef Description SecretReference represents a Secret Reference. It has enough information to retrieve secret in any namespace Type object Property Type Description name string name is unique within a namespace to reference a secret resource. namespace string namespace defines the space within which the secret name must be unique. 5.1.14. .spec.csi.nodePublishSecretRef Description SecretReference represents a Secret Reference. It has enough information to retrieve secret in any namespace Type object Property Type Description name string name is unique within a namespace to reference a secret resource. namespace string namespace defines the space within which the secret name must be unique. 5.1.15. .spec.csi.nodeStageSecretRef Description SecretReference represents a Secret Reference. 
It has enough information to retrieve secret in any namespace Type object Property Type Description name string name is unique within a namespace to reference a secret resource. namespace string namespace defines the space within which the secret name must be unique. 5.1.16. .spec.fc Description Represents a Fibre Channel volume. Fibre Channel volumes can only be mounted as read/write once. Fibre Channel volumes support ownership management and SELinux relabeling. Type object Property Type Description fsType string fsType is the filesystem type to mount. Must be a filesystem type supported by the host operating system. Ex. "ext4", "xfs", "ntfs". Implicitly inferred to be "ext4" if unspecified. lun integer lun is Optional: FC target lun number readOnly boolean readOnly is Optional: Defaults to false (read/write). ReadOnly here will force the ReadOnly setting in VolumeMounts. targetWWNs array (string) targetWWNs is Optional: FC target worldwide names (WWNs) wwids array (string) wwids Optional: FC volume world wide identifiers (wwids) Either wwids or combination of targetWWNs and lun must be set, but not both simultaneously. 5.1.17. .spec.flexVolume Description FlexPersistentVolumeSource represents a generic persistent volume resource that is provisioned/attached using an exec based plugin. Type object Required driver Property Type Description driver string driver is the name of the driver to use for this volume. fsType string fsType is the Filesystem type to mount. Must be a filesystem type supported by the host operating system. Ex. "ext4", "xfs", "ntfs". The default filesystem depends on FlexVolume script. options object (string) options is Optional: this field holds extra command options if any. readOnly boolean readOnly is Optional: defaults to false (read/write). ReadOnly here will force the ReadOnly setting in VolumeMounts. secretRef object SecretReference represents a Secret Reference. It has enough information to retrieve secret in any namespace 5.1.18. .spec.flexVolume.secretRef Description SecretReference represents a Secret Reference. It has enough information to retrieve secret in any namespace Type object Property Type Description name string name is unique within a namespace to reference a secret resource. namespace string namespace defines the space within which the secret name must be unique. 5.1.19. .spec.flocker Description Represents a Flocker volume mounted by the Flocker agent. One and only one of datasetName and datasetUUID should be set. Flocker volumes do not support ownership management or SELinux relabeling. Type object Property Type Description datasetName string datasetName is Name of the dataset stored as metadata name on the dataset for Flocker should be considered as deprecated datasetUUID string datasetUUID is the UUID of the dataset. This is unique identifier of a Flocker dataset 5.1.20. .spec.gcePersistentDisk Description Represents a Persistent Disk resource in Google Compute Engine. A GCE PD must exist before mounting to a container. The disk must also be in the same GCE project and zone as the kubelet. A GCE PD can only be mounted as read/write once or read-only many times. GCE PDs support ownership management and SELinux relabeling. Type object Required pdName Property Type Description fsType string fsType is filesystem type of the volume that you want to mount. Tip: Ensure that the filesystem type is supported by the host operating system. Examples: "ext4", "xfs", "ntfs". Implicitly inferred to be "ext4" if unspecified. 
More info: https://kubernetes.io/docs/concepts/storage/volumes#gcepersistentdisk partition integer partition is the partition in the volume that you want to mount. If omitted, the default is to mount by volume name. Examples: For volume /dev/sda1, you specify the partition as "1". Similarly, the volume partition for /dev/sda is "0" (or you can leave the property empty). More info: https://kubernetes.io/docs/concepts/storage/volumes#gcepersistentdisk pdName string pdName is unique name of the PD resource in GCE. Used to identify the disk in GCE. More info: https://kubernetes.io/docs/concepts/storage/volumes#gcepersistentdisk readOnly boolean readOnly here will force the ReadOnly setting in VolumeMounts. Defaults to false. More info: https://kubernetes.io/docs/concepts/storage/volumes#gcepersistentdisk 5.1.21. .spec.glusterfs Description Represents a Glusterfs mount that lasts the lifetime of a pod. Glusterfs volumes do not support ownership management or SELinux relabeling. Type object Required endpoints path Property Type Description endpoints string endpoints is the endpoint name that details Glusterfs topology. More info: https://examples.k8s.io/volumes/glusterfs/README.md#create-a-pod endpointsNamespace string endpointsNamespace is the namespace that contains Glusterfs endpoint. If this field is empty, the EndpointNamespace defaults to the same namespace as the bound PVC. More info: https://examples.k8s.io/volumes/glusterfs/README.md#create-a-pod path string path is the Glusterfs volume path. More info: https://examples.k8s.io/volumes/glusterfs/README.md#create-a-pod readOnly boolean readOnly here will force the Glusterfs volume to be mounted with read-only permissions. Defaults to false. More info: https://examples.k8s.io/volumes/glusterfs/README.md#create-a-pod 5.1.22. .spec.hostPath Description Represents a host path mapped into a pod. Host path volumes do not support ownership management or SELinux relabeling. Type object Required path Property Type Description path string path of the directory on the host. If the path is a symlink, it will follow the link to the real path. More info: https://kubernetes.io/docs/concepts/storage/volumes#hostpath type string type for HostPath Volume Defaults to "" More info: https://kubernetes.io/docs/concepts/storage/volumes#hostpath Possible enum values: - "" For backwards compatible, leave it empty if unset - "BlockDevice" A block device must exist at the given path - "CharDevice" A character device must exist at the given path - "Directory" A directory must exist at the given path - "DirectoryOrCreate" If nothing exists at the given path, an empty directory will be created there as needed with file mode 0755, having the same group and ownership with Kubelet. - "File" A file must exist at the given path - "FileOrCreate" If nothing exists at the given path, an empty file will be created there as needed with file mode 0644, having the same group and ownership with Kubelet. - "Socket" A UNIX socket must exist at the given path 5.1.23. .spec.iscsi Description ISCSIPersistentVolumeSource represents an ISCSI disk. ISCSI volumes can only be mounted as read/write once. ISCSI volumes support ownership management and SELinux relabeling. 
Type object Required targetPortal iqn lun Property Type Description chapAuthDiscovery boolean chapAuthDiscovery defines whether support iSCSI Discovery CHAP authentication chapAuthSession boolean chapAuthSession defines whether support iSCSI Session CHAP authentication fsType string fsType is the filesystem type of the volume that you want to mount. Tip: Ensure that the filesystem type is supported by the host operating system. Examples: "ext4", "xfs", "ntfs". Implicitly inferred to be "ext4" if unspecified. More info: https://kubernetes.io/docs/concepts/storage/volumes#iscsi initiatorName string initiatorName is the custom iSCSI Initiator Name. If initiatorName is specified with iscsiInterface simultaneously, new iSCSI interface <target portal>:<volume name> will be created for the connection. iqn string iqn is Target iSCSI Qualified Name. iscsiInterface string iscsiInterface is the interface Name that uses an iSCSI transport. Defaults to 'default' (tcp). lun integer lun is iSCSI Target Lun number. portals array (string) portals is the iSCSI Target Portal List. The Portal is either an IP or ip_addr:port if the port is other than default (typically TCP ports 860 and 3260). readOnly boolean readOnly here will force the ReadOnly setting in VolumeMounts. Defaults to false. secretRef object SecretReference represents a Secret Reference. It has enough information to retrieve secret in any namespace targetPortal string targetPortal is iSCSI Target Portal. The Portal is either an IP or ip_addr:port if the port is other than default (typically TCP ports 860 and 3260). 5.1.24. .spec.iscsi.secretRef Description SecretReference represents a Secret Reference. It has enough information to retrieve secret in any namespace Type object Property Type Description name string name is unique within a namespace to reference a secret resource. namespace string namespace defines the space within which the secret name must be unique. 5.1.25. .spec.local Description Local represents directly-attached storage with node affinity (Beta feature) Type object Required path Property Type Description fsType string fsType is the filesystem type to mount. It applies only when the Path is a block device. Must be a filesystem type supported by the host operating system. Ex. "ext4", "xfs", "ntfs". The default value is to auto-select a filesystem if unspecified. path string path of the full path to the volume on the node. It can be either a directory or block device (disk, partition, ... ). 5.1.26. .spec.nfs Description Represents an NFS mount that lasts the lifetime of a pod. NFS volumes do not support ownership management or SELinux relabeling. Type object Required server path Property Type Description path string path that is exported by the NFS server. More info: https://kubernetes.io/docs/concepts/storage/volumes#nfs readOnly boolean readOnly here will force the NFS export to be mounted with read-only permissions. Defaults to false. More info: https://kubernetes.io/docs/concepts/storage/volumes#nfs server string server is the hostname or IP address of the NFS server. More info: https://kubernetes.io/docs/concepts/storage/volumes#nfs 5.1.27. .spec.nodeAffinity Description VolumeNodeAffinity defines constraints that limit what nodes this volume can be accessed from. Type object Property Type Description required object A node selector represents the union of the results of one or more label queries over a set of nodes; that is, it represents the OR of the selectors represented by the node selector terms. 5.1.28. 
.spec.nodeAffinity.required Description A node selector represents the union of the results of one or more label queries over a set of nodes; that is, it represents the OR of the selectors represented by the node selector terms. Type object Required nodeSelectorTerms Property Type Description nodeSelectorTerms array Required. A list of node selector terms. The terms are ORed. nodeSelectorTerms[] object A null or empty node selector term matches no objects. The requirements of them are ANDed. The TopologySelectorTerm type implements a subset of the NodeSelectorTerm. 5.1.29. .spec.nodeAffinity.required.nodeSelectorTerms Description Required. A list of node selector terms. The terms are ORed. Type array 5.1.30. .spec.nodeAffinity.required.nodeSelectorTerms[] Description A null or empty node selector term matches no objects. The requirements of them are ANDed. The TopologySelectorTerm type implements a subset of the NodeSelectorTerm. Type object Property Type Description matchExpressions array A list of node selector requirements by node's labels. matchExpressions[] object A node selector requirement is a selector that contains values, a key, and an operator that relates the key and values. matchFields array A list of node selector requirements by node's fields. matchFields[] object A node selector requirement is a selector that contains values, a key, and an operator that relates the key and values. 5.1.31. .spec.nodeAffinity.required.nodeSelectorTerms[].matchExpressions Description A list of node selector requirements by node's labels. Type array 5.1.32. .spec.nodeAffinity.required.nodeSelectorTerms[].matchExpressions[] Description A node selector requirement is a selector that contains values, a key, and an operator that relates the key and values. Type object Required key operator Property Type Description key string The label key that the selector applies to. operator string Represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists, DoesNotExist. Gt, and Lt. Possible enum values: - "DoesNotExist" - "Exists" - "Gt" - "In" - "Lt" - "NotIn" values array (string) An array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. If the operator is Gt or Lt, the values array must have a single element, which will be interpreted as an integer. This array is replaced during a strategic merge patch. 5.1.33. .spec.nodeAffinity.required.nodeSelectorTerms[].matchFields Description A list of node selector requirements by node's fields. Type array 5.1.34. .spec.nodeAffinity.required.nodeSelectorTerms[].matchFields[] Description A node selector requirement is a selector that contains values, a key, and an operator that relates the key and values. Type object Required key operator Property Type Description key string The label key that the selector applies to. operator string Represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists, DoesNotExist. Gt, and Lt. Possible enum values: - "DoesNotExist" - "Exists" - "Gt" - "In" - "Lt" - "NotIn" values array (string) An array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. If the operator is Gt or Lt, the values array must have a single element, which will be interpreted as an integer. This array is replaced during a strategic merge patch. 5.1.35. 
.spec.photonPersistentDisk Description Represents a Photon Controller persistent disk resource. Type object Required pdID Property Type Description fsType string fsType is the filesystem type to mount. Must be a filesystem type supported by the host operating system. Ex. "ext4", "xfs", "ntfs". Implicitly inferred to be "ext4" if unspecified. pdID string pdID is the ID that identifies Photon Controller persistent disk 5.1.36. .spec.portworxVolume Description PortworxVolumeSource represents a Portworx volume resource. Type object Required volumeID Property Type Description fsType string fSType represents the filesystem type to mount Must be a filesystem type supported by the host operating system. Ex. "ext4", "xfs". Implicitly inferred to be "ext4" if unspecified. readOnly boolean readOnly defaults to false (read/write). ReadOnly here will force the ReadOnly setting in VolumeMounts. volumeID string volumeID uniquely identifies a Portworx volume 5.1.37. .spec.quobyte Description Represents a Quobyte mount that lasts the lifetime of a pod. Quobyte volumes do not support ownership management or SELinux relabeling. Type object Required registry volume Property Type Description group string group to map volume access to Default is no group readOnly boolean readOnly here will force the Quobyte volume to be mounted with read-only permissions. Defaults to false. registry string registry represents a single or multiple Quobyte Registry services specified as a string as host:port pair (multiple entries are separated with commas) which acts as the central registry for volumes tenant string tenant owning the given Quobyte volume in the Backend Used with dynamically provisioned Quobyte volumes, value is set by the plugin user string user to map volume access to Defaults to serivceaccount user volume string volume is a string that references an already created Quobyte volume by name. 5.1.38. .spec.rbd Description Represents a Rados Block Device mount that lasts the lifetime of a pod. RBD volumes support ownership management and SELinux relabeling. Type object Required monitors image Property Type Description fsType string fsType is the filesystem type of the volume that you want to mount. Tip: Ensure that the filesystem type is supported by the host operating system. Examples: "ext4", "xfs", "ntfs". Implicitly inferred to be "ext4" if unspecified. More info: https://kubernetes.io/docs/concepts/storage/volumes#rbd image string image is the rados image name. More info: https://examples.k8s.io/volumes/rbd/README.md#how-to-use-it keyring string keyring is the path to key ring for RBDUser. Default is /etc/ceph/keyring. More info: https://examples.k8s.io/volumes/rbd/README.md#how-to-use-it monitors array (string) monitors is a collection of Ceph monitors. More info: https://examples.k8s.io/volumes/rbd/README.md#how-to-use-it pool string pool is the rados pool name. Default is rbd. More info: https://examples.k8s.io/volumes/rbd/README.md#how-to-use-it readOnly boolean readOnly here will force the ReadOnly setting in VolumeMounts. Defaults to false. More info: https://examples.k8s.io/volumes/rbd/README.md#how-to-use-it secretRef object SecretReference represents a Secret Reference. It has enough information to retrieve secret in any namespace user string user is the rados user name. Default is admin. More info: https://examples.k8s.io/volumes/rbd/README.md#how-to-use-it 5.1.39. .spec.rbd.secretRef Description SecretReference represents a Secret Reference. 
It has enough information to retrieve secret in any namespace Type object Property Type Description name string name is unique within a namespace to reference a secret resource. namespace string namespace defines the space within which the secret name must be unique. 5.1.40. .spec.scaleIO Description ScaleIOPersistentVolumeSource represents a persistent ScaleIO volume Type object Required gateway system secretRef Property Type Description fsType string fsType is the filesystem type to mount. Must be a filesystem type supported by the host operating system. Ex. "ext4", "xfs", "ntfs". Default is "xfs" gateway string gateway is the host address of the ScaleIO API Gateway. protectionDomain string protectionDomain is the name of the ScaleIO Protection Domain for the configured storage. readOnly boolean readOnly defaults to false (read/write). ReadOnly here will force the ReadOnly setting in VolumeMounts. secretRef object SecretReference represents a Secret Reference. It has enough information to retrieve secret in any namespace sslEnabled boolean sslEnabled is the flag to enable/disable SSL communication with Gateway, default false storageMode string storageMode indicates whether the storage for a volume should be ThickProvisioned or ThinProvisioned. Default is ThinProvisioned. storagePool string storagePool is the ScaleIO Storage Pool associated with the protection domain. system string system is the name of the storage system as configured in ScaleIO. volumeName string volumeName is the name of a volume already created in the ScaleIO system that is associated with this volume source. 5.1.41. .spec.scaleIO.secretRef Description SecretReference represents a Secret Reference. It has enough information to retrieve secret in any namespace Type object Property Type Description name string name is unique within a namespace to reference a secret resource. namespace string namespace defines the space within which the secret name must be unique. 5.1.42. .spec.storageos Description Represents a StorageOS persistent volume resource. Type object Property Type Description fsType string fsType is the filesystem type to mount. Must be a filesystem type supported by the host operating system. Ex. "ext4", "xfs", "ntfs". Implicitly inferred to be "ext4" if unspecified. readOnly boolean readOnly defaults to false (read/write). ReadOnly here will force the ReadOnly setting in VolumeMounts. secretRef object ObjectReference contains enough information to let you inspect or modify the referred object. volumeName string volumeName is the human-readable name of the StorageOS volume. Volume names are only unique within a namespace. volumeNamespace string volumeNamespace specifies the scope of the volume within StorageOS. If no namespace is specified then the Pod's namespace will be used. This allows the Kubernetes name scoping to be mirrored within StorageOS for tighter integration. Set VolumeName to any name to override the default behaviour. Set to "default" if you are not using namespaces within StorageOS. Namespaces that do not pre-exist within StorageOS will be created. 5.1.43. .spec.storageos.secretRef Description ObjectReference contains enough information to let you inspect or modify the referred object. Type object Property Type Description apiVersion string API version of the referent. fieldPath string If referring to a piece of an object instead of an entire object, this string should contain a valid JSON/Go field access statement, such as desiredState.manifest.containers[2]. 
For example, if the object reference is to a container within a pod, this would take on a value like: "spec.containers{name}" (where "name" refers to the name of the container that triggered the event) or if no container name is specified "spec.containers[2]" (container with index 2 in this pod). This syntax is chosen only to have some well-defined way of referencing a part of an object. kind string Kind of the referent. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names namespace string Namespace of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/namespaces/ resourceVersion string Specific resourceVersion to which this reference is made, if any. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#concurrency-control-and-consistency uid string UID of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#uids 5.1.44. .spec.vsphereVolume Description Represents a vSphere volume resource. Type object Required volumePath Property Type Description fsType string fsType is filesystem type to mount. Must be a filesystem type supported by the host operating system. Ex. "ext4", "xfs", "ntfs". Implicitly inferred to be "ext4" if unspecified. storagePolicyID string storagePolicyID is the storage Policy Based Management (SPBM) profile ID associated with the StoragePolicyName. storagePolicyName string storagePolicyName is the storage Policy Based Management (SPBM) profile name. volumePath string volumePath is the path that identifies vSphere volume vmdk 5.1.45. .status Description PersistentVolumeStatus is the current status of a persistent volume. Type object Property Type Description lastPhaseTransitionTime Time lastPhaseTransitionTime is the time the phase transitioned from one to another and automatically resets to current time everytime a volume phase transitions. This is an alpha field and requires enabling PersistentVolumeLastPhaseTransitionTime feature. message string message is a human-readable message indicating details about why the volume is in this state. phase string phase indicates if a volume is available, bound to a claim, or released by a claim. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#phase Possible enum values: - "Available" used for PersistentVolumes that are not yet bound Available volumes are held by the binder and matched to PersistentVolumeClaims - "Bound" used for PersistentVolumes that are bound - "Failed" used for PersistentVolumes that failed to be correctly recycled or deleted after being released from a claim - "Pending" used for PersistentVolumes that are not available - "Released" used for PersistentVolumes where the bound PersistentVolumeClaim was deleted released volumes must be recycled before becoming available again this phase is used by the persistent volume claim binder to signal to another process to reclaim the resource reason string reason is a brief CamelCase string that describes any failure and is meant for machine parsing and tidy display in the CLI. 5.2. 
API endpoints The following API endpoints are available: /api/v1/persistentvolumes DELETE : delete collection of PersistentVolume GET : list or watch objects of kind PersistentVolume POST : create a PersistentVolume /api/v1/watch/persistentvolumes GET : watch individual changes to a list of PersistentVolume. deprecated: use the 'watch' parameter with a list operation instead. /api/v1/persistentvolumes/{name} DELETE : delete a PersistentVolume GET : read the specified PersistentVolume PATCH : partially update the specified PersistentVolume PUT : replace the specified PersistentVolume /api/v1/watch/persistentvolumes/{name} GET : watch changes to an object of kind PersistentVolume. deprecated: use the 'watch' parameter with a list operation instead, filtered to a single item with the 'fieldSelector' parameter. /api/v1/persistentvolumes/{name}/status GET : read status of the specified PersistentVolume PATCH : partially update status of the specified PersistentVolume PUT : replace status of the specified PersistentVolume 5.2.1. /api/v1/persistentvolumes HTTP method DELETE Description delete collection of PersistentVolume Table 5.1. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed Table 5.2. HTTP responses HTTP code Reponse body 200 - OK Status schema 401 - Unauthorized Empty HTTP method GET Description list or watch objects of kind PersistentVolume Table 5.3. HTTP responses HTTP code Reponse body 200 - OK PersistentVolumeList schema 401 - Unauthorized Empty HTTP method POST Description create a PersistentVolume Table 5.4. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 5.5. Body parameters Parameter Type Description body PersistentVolume schema Table 5.6. HTTP responses HTTP code Reponse body 200 - OK PersistentVolume schema 201 - Created PersistentVolume schema 202 - Accepted PersistentVolume schema 401 - Unauthorized Empty 5.2.2. /api/v1/watch/persistentvolumes HTTP method GET Description watch individual changes to a list of PersistentVolume. deprecated: use the 'watch' parameter with a list operation instead. Table 5.7. 
HTTP responses HTTP code Reponse body 200 - OK WatchEvent schema 401 - Unauthorized Empty 5.2.3. /api/v1/persistentvolumes/{name} Table 5.8. Global path parameters Parameter Type Description name string name of the PersistentVolume HTTP method DELETE Description delete a PersistentVolume Table 5.9. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed Table 5.10. HTTP responses HTTP code Reponse body 200 - OK PersistentVolume schema 202 - Accepted PersistentVolume schema 401 - Unauthorized Empty HTTP method GET Description read the specified PersistentVolume Table 5.11. HTTP responses HTTP code Reponse body 200 - OK PersistentVolume schema 401 - Unauthorized Empty HTTP method PATCH Description partially update the specified PersistentVolume Table 5.12. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 5.13. HTTP responses HTTP code Reponse body 200 - OK PersistentVolume schema 201 - Created PersistentVolume schema 401 - Unauthorized Empty HTTP method PUT Description replace the specified PersistentVolume Table 5.14. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. 
This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 5.15. Body parameters Parameter Type Description body PersistentVolume schema Table 5.16. HTTP responses HTTP code Reponse body 200 - OK PersistentVolume schema 201 - Created PersistentVolume schema 401 - Unauthorized Empty 5.2.4. /api/v1/watch/persistentvolumes/{name} Table 5.17. Global path parameters Parameter Type Description name string name of the PersistentVolume HTTP method GET Description watch changes to an object of kind PersistentVolume. deprecated: use the 'watch' parameter with a list operation instead, filtered to a single item with the 'fieldSelector' parameter. Table 5.18. HTTP responses HTTP code Reponse body 200 - OK WatchEvent schema 401 - Unauthorized Empty 5.2.5. /api/v1/persistentvolumes/{name}/status Table 5.19. Global path parameters Parameter Type Description name string name of the PersistentVolume HTTP method GET Description read status of the specified PersistentVolume Table 5.20. HTTP responses HTTP code Reponse body 200 - OK PersistentVolume schema 401 - Unauthorized Empty HTTP method PATCH Description partially update status of the specified PersistentVolume Table 5.21. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 5.22. HTTP responses HTTP code Reponse body 200 - OK PersistentVolume schema 201 - Created PersistentVolume schema 401 - Unauthorized Empty HTTP method PUT Description replace status of the specified PersistentVolume Table 5.23. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. 
Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 5.24. Body parameters Parameter Type Description body PersistentVolume schema Table 5.25. HTTP responses HTTP code Reponse body 200 - OK PersistentVolume schema 201 - Created PersistentVolume schema 401 - Unauthorized Empty
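As an illustration of these endpoints, the following is a minimal sketch of listing and creating a PersistentVolume with curl against the Kubernetes API. The API server URL, the token retrieval, and the hostPath-backed manifest are assumptions chosen for the example, not values taken from this reference; the same operations can also be performed with oc get pv and oc create -f.
TOKEN=$(oc whoami -t)
# List PersistentVolumes (GET /api/v1/persistentvolumes)
curl -k -H "Authorization: Bearer ${TOKEN}" https://api.example.com:6443/api/v1/persistentvolumes
# Create a PersistentVolume (POST /api/v1/persistentvolumes) from an illustrative hostPath manifest
curl -k -X POST -H "Authorization: Bearer ${TOKEN}" -H "Content-Type: application/json" https://api.example.com:6443/api/v1/persistentvolumes -d '{"apiVersion":"v1","kind":"PersistentVolume","metadata":{"name":"pv-example"},"spec":{"capacity":{"storage":"5Gi"},"accessModes":["ReadWriteOnce"],"hostPath":{"path":"/mnt/data"},"persistentVolumeReclaimPolicy":"Retain"}}'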
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.15/html/storage_apis/persistentvolume-v1
Chapter 31. Configuring a system for session recording by using RHEL system roles
Chapter 31. Configuring a system for session recording by using RHEL system roles Use the tlog RHEL system role to automatically record and monitor terminal session activities on your managed nodes. You can configure the recording to take place per user or user group by means of the SSSD service. The session recording solution in the tlog RHEL system role consists of the following components: The tlog utility System Security Services Daemon (SSSD) Optional: The web console interface 31.1. Configuring session recording for individual users by using the tlog RHEL system role Prepare and apply an Ansible playbook to configure a RHEL system to log session recording data to the systemd journal. With that, you can enable recording the terminal output and input of a specific user during their sessions, when the user logs in on the console, or by SSH. The playbook installs tlog-rec-session , a terminal session I/O logging program, that acts as the login shell for a user. The role creates an SSSD configuration drop-in file, and this file defines for which users and groups the login shell should be used. Additionally, if the cockpit package is installed on the system, the playbook also installs the cockpit-session-recording package, which is a Cockpit module that allows you to view and play recordings in the web console interface. Prerequisites You have prepared the control node and the managed nodes . You are logged in to the control node as a user who can run playbooks on the managed nodes. The account you use to connect to the managed nodes has sudo permissions on them. Procedure Create a playbook file, for example ~/playbook.yml , with the following content: --- - name: Deploy session recording hosts: managed-node-01.example.com tasks: - name: Enable session recording for specific users ansible.builtin.include_role: name: rhel-system-roles.tlog vars: tlog_scope_sssd: some tlog_users_sssd: - <recorded_user> tlog_scope_sssd: <value> The some value specifies that you want to record only certain users and groups, not all or none . tlog_users_sssd: <list_of_users> A YAML list of users you want to record a session from. Note that the role does not add users if they do not exist. Validate the playbook syntax: Note that this command only validates the syntax and does not protect against a wrong but valid configuration. Run the playbook: Verification Check the SSSD drop-in file's content: You can see that the file contains the parameters you set in the playbook. Log in as a user whose session will be recorded, perform some actions, and log out. As the root user: Display the list of recorded sessions: You require the value of the rec (recording ID) field in the next step. Note that the value of the _COMM field is shortened due to a 15 character limit. Play back a session: Additional resources /usr/share/ansible/roles/rhel-system-roles.tlog/README.md file /usr/share/doc/rhel-system-roles/tlog/ directory 31.2. Excluding certain users and groups from session recording by using the tlog RHEL system role You can use the tlog_exclude_users_sssd and tlog_exclude_groups_sssd role variables from the tlog RHEL system role to exclude users or groups from having their sessions recorded and logged in the systemd journal. The playbook installs tlog-rec-session , a terminal session I/O logging program, that acts as the login shell for a user. The role creates an SSSD configuration drop-in file, and this file defines for which users and groups the login shell should be used. 
Additionally, if the cockpit package is installed on the system, the playbook also installs the cockpit-session-recording package, which is a Cockpit module that allows you to view and play recordings in the web console interface. Prerequisites You have prepared the control node and the managed nodes . You are logged in to the control node as a user who can run playbooks on the managed nodes. The account you use to connect to the managed nodes has sudo permissions on them. Procedure Create a playbook file, for example ~/playbook.yml , with the following content: --- - name: Deploy session recording excluding users and groups hosts: managed-node-01.example.com tasks: - name: Exclude users and groups ansible.builtin.include_role: name: rhel-system-roles.tlog vars: tlog_scope_sssd: all tlog_exclude_users_sssd: - jeff - james tlog_exclude_groups_sssd: - admins tlog_scope_sssd: <value> The value all specifies that you want to record all users and groups. tlog_exclude_users_sssd: <user_list> A YAML list of user names you want to exclude from the session recording. tlog_exclude_groups_sssd: <group_list> A YAML list of groups you want to exclude from the session recording. Validate the playbook syntax: Note that this command only validates the syntax and does not protect against a wrong but valid configuration. Run the playbook: Verification Check the SSSD drop-in file's content: You can see that the file contains the parameters you set in the playbook. Log in as a user whose session will be recorded, perform some actions, and log out. As the root user: Display the list of recorded sessions: You require the value of the rec (recording ID) field in the next step. Note that the value of the _COMM field is shortened due to a 15 character limit. Play back a session: Additional resources /usr/share/ansible/roles/rhel-system-roles.tlog/README.md file /usr/share/doc/rhel-system-roles/tlog/ directory
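For orientation, the generated SSSD drop-in file typically contains a [session_recording] section. The following sketch shows what the two playbooks in this chapter might produce, assuming the role writes the standard SSSD session-recording options; the exact file content depends on the role and SSSD versions in use.
# /etc/sssd/conf.d/sssd-session-recording.conf (scope: some, recording selected users)
[session_recording]
scope = some
users = <recorded_user>
# /etc/sssd/conf.d/sssd-session-recording.conf (scope: all, with exclusions)
[session_recording]
scope = all
exclude_users = jeff, james
exclude_groups = admins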
[ "--- - name: Deploy session recording hosts: managed-node-01.example.com tasks: - name: Enable session recording for specific users ansible.builtin.include_role: name: rhel-system-roles.tlog vars: tlog_scope_sssd: some tlog_users_sssd: - <recorded_user>", "ansible-playbook --syntax-check ~/playbook.yml", "ansible-playbook ~/playbook.yml", "cd /etc/sssd/conf.d/sssd-session-recording.conf", "journalctl _COMM=tlog-rec-sessio Nov 12 09:17:30 managed-node-01.example.com -tlog-rec-session[1546]: {\"ver\":\"2.3\",\"host\":\"managed-node-01.example.com\",\"rec\":\"07418f2b0f334c1696c10cbe6f6f31a6-60a-e4a2\",\"user\":\"demo-user\",", "tlog-play -r journal -M TLOG_REC= <recording_id>", "--- - name: Deploy session recording excluding users and groups hosts: managed-node-01.example.com tasks: - name: Exclude users and groups ansible.builtin.include_role: name: rhel-system-roles.tlog vars: tlog_scope_sssd: all tlog_exclude_users_sssd: - jeff - james tlog_exclude_groups_sssd: - admins", "ansible-playbook --syntax-check ~/playbook.yml", "ansible-playbook ~/playbook.yml", "cat /etc/sssd/conf.d/sssd-session-recording.conf", "journalctl _COMM=tlog-rec-sessio Nov 12 09:17:30 managed-node-01.example.com -tlog-rec-session[1546]: {\"ver\":\"2.3\",\"host\":\"managed-node-01.example.com\",\"rec\":\"07418f2b0f334c1696c10cbe6f6f31a6-60a-e4a2\",\"user\":\"demo-user\",", "tlog-play -r journal -M TLOG_REC= <recording_id>" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/9/html/automating_system_administration_by_using_rhel_system_roles/configuring-a-system-for-session-recording-using-the-tlog-rhel-system-roles_automating-system-administration-by-using-rhel-system-roles
Chapter 1. Preparing to install on a single node
Chapter 1. Preparing to install on a single node 1.1. Prerequisites You reviewed details about the OpenShift Container Platform installation and update processes. You have read the documentation on selecting a cluster installation method and preparing it for users . 1.2. About OpenShift on a single node You can create a single-node cluster with standard installation methods. OpenShift Container Platform on a single node is a specialized installation that requires the creation of a special ignition configuration ISO. The primary use case is for edge computing workloads, including intermittent connectivity, portable clouds, and 5G radio access networks (RAN) close to a base station. The major tradeoff with an installation on a single node is the lack of high availability. Important The use of OpenShiftSDN with single-node OpenShift is not supported. OVN-Kubernetes is the default network plugin for single-node OpenShift deployments. 1.3. Requirements for installing OpenShift on a single node Installing OpenShift Container Platform on a single node alleviates some of the requirements for high availability and large scale clusters. However, you must address the following requirements: Administration host: You must have a computer to prepare the ISO, to create the USB boot drive, and to monitor the installation. CPU Architecture: Installing OpenShift Container Platform on a single node supports x86_64 and arm64 CPU architectures. Supported platforms: Installing OpenShift Container Platform on a single node is supported on bare metal and Certified third-party hypervisors . In most cases, you must specify the platform.none: {} parameter in the install-config.yaml configuration file. The following list shows the only exceptions and the corresponding parameter to specify in the install-config.yaml configuration file: Amazon Web Services (AWS), where you use platform=aws Google Cloud Platform (GCP), where you use platform=gcp Microsoft Azure, where you use platform=azure Production-grade server: Installing OpenShift Container Platform on a single node requires a server with sufficient resources to run OpenShift Container Platform services and a production workload. Table 1.1. Minimum resource requirements Profile vCPU Memory Storage Minimum 8 vCPUs 16GB of RAM 120GB Note One vCPU equals one physical core. However, if you enable simultaneous multithreading (SMT), or Hyper-Threading, use the following formula to calculate the number of vCPUs that represent one physical core: (threads per core x cores) x sockets = vCPUs Adding Operators during the installation process might increase the minimum resource requirements. The server must have a Baseboard Management Controller (BMC) when booting with virtual media. Networking: The server must have access to the internet or access to a local registry if it is not connected to a routable network. The server must have a DHCP reservation or a static IP address for the Kubernetes API, ingress route, and cluster node domain names. You must configure the DNS to resolve the IP address to each of the following fully qualified domain names (FQDN): Table 1.2. Required DNS records Usage FQDN Description Kubernetes API api.<cluster_name>.<base_domain> Add a DNS A/AAAA or CNAME record. This record must be resolvable by both clients external to the cluster and within the cluster. Internal API api-int.<cluster_name>.<base_domain> Add a DNS A/AAAA or CNAME record when creating the ISO manually. This record must be resolvable by nodes within the cluster. 
Ingress route *.apps.<cluster_name>.<base_domain> Add a wildcard DNS A/AAAA or CNAME record that targets the node. This record must be resolvable by both clients external to the cluster and within the cluster. Important Without persistent IP addresses, communications between the apiserver and etcd might fail.
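To make these requirements concrete, the following is a minimal, illustrative install-config.yaml sketch for a bare-metal single-node cluster using platform.none: {}. The base domain, cluster name, network ranges, installation disk, pull secret, and SSH key are placeholders, not prescribed values, and must be adapted to your environment.
apiVersion: v1
baseDomain: example.com
metadata:
  name: sno
compute:
- name: worker
  replicas: 0
controlPlane:
  name: master
  replicas: 1
networking:
  networkType: OVNKubernetes
  clusterNetwork:
  - cidr: 10.128.0.0/14
    hostPrefix: 23
  machineNetwork:
  - cidr: 192.168.111.0/24
  serviceNetwork:
  - 172.30.0.0/16
platform:
  none: {}
bootstrapInPlace:
  installationDisk: /dev/disk/by-id/<disk_id>
pullSecret: '<pull_secret>'
sshKey: '<ssh_public_key>'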
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.13/html/installing_on_a_single_node/preparing-to-install-sno
Chapter 6. Planning your overcloud
Chapter 6. Planning your overcloud The following section contains some guidelines for planning various aspects of your Red Hat OpenStack Platform (RHOSP) environment. This includes defining node roles, planning your network topology, and storage. Important Do not rename your overcloud nodes after they have been deployed. Renaming a node after deployment creates issues with instance management. 6.1. Node roles Director includes the following default node types to build your overcloud: Controller Provides key services for controlling your environment. This includes the dashboard (horizon), authentication (keystone), image storage (glance), networking (neutron), orchestration (heat), and high availability services. A Red Hat OpenStack Platform (RHOSP) environment requires three Controller nodes for a highly available production-level environment. Note Use environments with one Controller node only for testing purposes, not for production. Environments with two Controller nodes or more than three Controller nodes are not supported. Compute A physical server that acts as a hypervisor and contains the processing capabilities required to run virtual machines in the environment. A basic RHOSP environment requires at least one Compute node. Ceph Storage A host that provides Red Hat Ceph Storage. Additional Ceph Storage hosts scale into a cluster. This deployment role is optional. Swift Storage A host that provides external object storage to the OpenStack Object Storage (swift) service. This deployment role is optional. The following table contains some examples of different overclouds and defines the node types for each scenario. Table 6.1. Node Deployment Roles for Scenarios Controller Compute Ceph Storage Swift Storage Total Small overcloud 3 1 - - 4 Medium overcloud 3 3 - - 6 Medium overcloud with additional object storage 3 3 - 3 9 Medium overcloud with Ceph Storage cluster 3 3 3 - 9 In addition, consider whether to split individual services into custom roles. For more information about the composable roles architecture, see "Composable Services and Custom Roles" in the Advanced Overcloud Customization guide. Table 6.2. Node Deployment Roles for Proof of Concept Deployment Undercloud Controller Compute Ceph Storage Total Proof of concept 1 1 1 1 4 Warning The Red Hat OpenStack Platform maintains an operational Ceph Storage cluster during day-2 operations. Therefore, some day-2 operations, such as upgrades or minor updates of the Ceph Storage cluster, are not possible in deployments with fewer than three MONs or three storage nodes. If you use a single Controller node or a single Ceph Storage node, day-2 operations will fail. 6.2. Overcloud networks It is important to plan the networking topology and subnets in your environment so that you can map roles and services to communicate with each other correctly. Red Hat OpenStack Platform (RHOSP) uses the Openstack Networking (neutron) service, which operates autonomously and manages software-based networks, static and floating IP addresses, and DHCP. By default, director configures nodes to use the Provisioning / Control Plane for connectivity. However, it is possible to isolate network traffic into a series of composable networks, that you can customize and assign services. In a typical RHOSP installation, the number of network types often exceeds the number of physical network links. To connect all the networks to the proper hosts, the overcloud uses VLAN tagging to deliver more than one network on each interface. 
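As a sketch of how composable networks map to tagged VLANs, the following network_data.yaml-style entries define two isolated networks. The network names, VLAN IDs, and subnets are illustrative assumptions and must match your own switch configuration and addressing plan.
- name: InternalApi
  name_lower: internal_api
  vip: true
  vlan: 20
  ip_subnet: '172.16.2.0/24'
  allocation_pools: [{'start': '172.16.2.10', 'end': '172.16.2.200'}]
- name: Tenant
  name_lower: tenant
  vip: false
  vlan: 50
  ip_subnet: '172.16.0.0/24'
  allocation_pools: [{'start': '172.16.0.10', 'end': '172.16.0.200'}]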
Most of the networks are isolated subnets but some networks require a Layer 3 gateway to provide routing for Internet access or infrastructure network connectivity. If you use VLANs to isolate your network traffic types, you must use a switch that supports 802.1Q standards to provide tagged VLANs. Note You create project (tenant) networks using VLANs. You can create Geneve or VXLAN tunnels for special-use networks without consuming project VLANs. Red Hat recommends that you deploy a project network tunneled with Geneve or VXLAN, even if you intend to deploy your overcloud in neutron VLAN mode with tunneling disabled. If you deploy a project network tunneled with Geneve or VXLAN, you can still update your environment to use tunnel networks as utility networks or virtualization networks. It is possible to add Geneve or VXLAN capability to a deployment with a project VLAN, but it is not possible to add a project VLAN to an existing overcloud without causing disruption. Director also includes a set of templates that you can use to configure NICs with isolated composable networks. The following configurations are the default configurations: Single NIC configuration - One NIC for the Provisioning network on the native VLAN and tagged VLANs that use subnets for the different overcloud network types. Bonded NIC configuration - One NIC for the Provisioning network on the native VLAN and two NICs in a bond for tagged VLANs for the different overcloud network types. Multiple NIC configuration - Each NIC uses a subnet for a different overcloud network type. You can also create your own templates to map a specific NIC configuration. The following details are also important when you consider your network configuration: During the overcloud creation, you refer to NICs using a single name across all overcloud machines. Ideally, you should use the same NIC on each overcloud node for each respective network to avoid confusion. For example, use the primary NIC for the Provisioning network and the secondary NIC for the OpenStack services. Set all overcloud systems to PXE boot off the Provisioning NIC, and disable PXE boot on the External NIC and any other NICs on the system. Also ensure that the Provisioning NIC has PXE boot at the top of the boot order, ahead of hard disks and CD/DVD drives. All overcloud bare metal systems require a supported power management interface, such as an Intelligent Platform Management Interface (IPMI), so that director can control the power management of each node. Make a note of the following details for each overcloud system: the MAC address of the Provisioning NIC, the IP address of the IPMI NIC, IPMI username, and IPMI password. This information is useful later when you configure the overcloud nodes. If an instance must be accessible from the external internet, you can allocate a floating IP address from a public network and associate the floating IP with an instance. The instance retains its private IP but network traffic uses NAT to traverse through to the floating IP address. Note that a floating IP address can be assigned only to a single instance rather than multiple private IP addresses. However, the floating IP address is reserved for use only by a single tenant, which means that the tenant can associate or disassociate the floating IP address with a particular instance as required. This configuration exposes your infrastructure to the external internet and you must follow suitable security practices. 
To mitigate the risk of network loops in Open vSwitch, only a single interface or a single bond can be a member of a given bridge. If you require multiple bonds or interfaces, you can configure multiple bridges. Red Hat recommends using DNS hostname resolution so that your overcloud nodes can connect to external services, such as the Red Hat Content Delivery Network and network time servers. Red Hat recommends that the Provisioning interface, External interface, and any floating IP interfaces be left at the default MTU of 1500. Connectivity problems are likely to occur otherwise. This is because routers typically cannot forward jumbo frames across Layer 3 boundaries. Note You can virtualize the overcloud control plane if you are using Red Hat Virtualization (RHV). For more information, see Creating virtualized control planes . 6.3. Overcloud storage Note Using LVM on a guest instance that uses a back end cinder-volume of any driver or back-end type results in issues with performance, volume visibility and availability, and data corruption. Use an LVM filter to mitigate visibility, availability, and data corruption issues. For more information, see section 2 Block Storage and Volumes in the Storage Guide and KCS article 3213311, "Using LVM on a cinder volume exposes the data to the compute host." Director includes different storage options for the overcloud environment: Ceph Storage nodes Director creates a set of scalable storage nodes using Red Hat Ceph Storage. The overcloud uses these nodes for the following storage types: Images - The Image service (glance) manages images for virtual machines. Images are immutable. OpenStack treats images as binary blobs and downloads them accordingly. You can use the Image service (glance) to store images in a Ceph Block Device. Volumes - OpenStack manages volumes with the Block Storage service (cinder). The Block Storage service (cinder) volumes are block devices. OpenStack uses volumes to boot virtual machines, or to attach volumes to running virtual machines. You can use the Block Storage service to boot a virtual machine using a copy-on-write clone of an image. File Systems - Openstack manages shared file systems with the Shared File Systems service (manila). Shares are backed by file systems. You can use manila to manage shares backed by a CephFS file system with data on the Ceph Storage nodes. Guest Disks - Guest disks are guest operating system disks. By default, when you boot a virtual machine with the Compute service (nova), the virtual machine disk appears as a file on the filesystem of the hypervisor (usually under /var/lib/nova/instances/<uuid>/ ). Every virtual machine inside Ceph can be booted without using the Block Storage service (cinder). As a result, you can perform maintenance operations easily with the live-migration process. Additionally, if your hypervisor fails, it is also convenient to trigger nova evacuate and run the virtual machine elsewhere. Important For information about supported image formats, see Image Service in the Creating and Managing Images guide. For more information about Ceph Storage, see the Red Hat Ceph Storage Architecture Guide . Swift Storage nodes Director creates an external object storage node. This is useful in situations where you need to scale or replace Controller nodes in your overcloud environment but need to retain object storage outside of a high availability cluster. 6.4. Overcloud security Your OpenStack Platform implementation is only as secure as your environment. 
Follow good security principles in your networking environment to ensure that you control network access properly: Use network segmentation to mitigate network movement and isolate sensitive data. A flat network is much less secure. Restrict services access and ports to a minimum. Enforce proper firewall rules and password usage. Ensure that SELinux is enabled. For more information about securing your system, see the following Red Hat guides: Security Hardening for Red Hat Enterprise Linux 8 Using SELinux for Red Hat Enterprise Linux 8 6.5. Overcloud high availability To deploy a highly-available overcloud, director configures multiple Controller, Compute and Storage nodes to work together as a single cluster. In case of node failure, an automated fencing and re-spawning process is triggered based on the type of node that failed. For more information about overcloud high availability architecture and services, see High Availability Deployment and Usage . Note Deploying a highly available overcloud without STONITH is not supported. You must configure a STONITH device for each node that is a part of the Pacemaker cluster in a highly available overcloud. For more information on STONITH and Pacemaker, see Fencing in a Red Hat High Availability Cluster and Support Policies for RHEL High Availability Clusters . You can also configure high availability for Compute instances with director (Instance HA). This high availability mechanism automates evacuation and re-spawning of instances on Compute nodes in case of node failure. The requirements for Instance HA are the same as the general overcloud requirements, but you must perform a few additional steps to prepare your environment for the deployment. For more information about Instance HA and installation instructions, see the High Availability for Compute Instances guide. 6.6. Controller node requirements Controller nodes host the core services in a Red Hat OpenStack Platform environment, such as the Dashboard (horizon), the back-end database server, the Identity service (keystone) authentication, and high availability services. Processor 64-bit x86 processor with support for the Intel 64 or AMD64 CPU extensions. Memory The minimum amount of memory is 32 GB. However, the amount of recommended memory depends on the number of vCPUs, which is based on the number of CPU cores multiplied by hyper-threading value. Use the following calculations to determine your RAM requirements: Controller RAM minimum calculation: Use 1.5 GB of memory for each vCPU. For example, a machine with 48 vCPUs should have 72 GB of RAM. Controller RAM recommended calculation: Use 3 GB of memory for each vCPU. For example, a machine with 48 vCPUs should have 144 GB of RAM For more information about measuring memory requirements, see "Red Hat OpenStack Platform Hardware Requirements for Highly Available Controllers" on the Red Hat Customer Portal. Disk Storage and layout A minimum amount of 50 GB storage is required if the Object Storage service (swift) is not running on the Controller nodes. However, the Telemetry and Object Storage services are both installed on the Controllers, with both configured to use the root disk. These defaults are suitable for deploying small overclouds built on commodity hardware. These environments are typical of proof-of-concept and test environments. You can use these defaults to deploy overclouds with minimal planning, but they offer little in terms of workload capacity and performance. 
In an enterprise environment, however, the defaults could cause a significant bottleneck because Telemetry accesses storage constantly. This results in heavy disk I/O usage, which severely impacts the performance of all other Controller services. In this type of environment, you must plan your overcloud and configure it accordingly. Red Hat provides several configuration recommendations for both Telemetry and Object Storage. For more information, see Deployment Recommendations for Specific Red Hat OpenStack Platform Services . Network Interface Cards A minimum of 2 x 1 Gbps Network Interface Cards. Use additional network interface cards for bonded interfaces or to delegate tagged VLAN traffic. Power management Each Controller node requires a supported power management interface, such as an Intelligent Platform Management Interface (IPMI) functionality, on the server motherboard. Virtualization support Red Hat supports virtualized Controller nodes only on Red Hat Virtualization platforms. For more information, see Creating virtualized control planes . 6.7. Compute node requirements Compute nodes are responsible for running virtual machine instances after they are launched. Compute nodes require bare metal systems that support hardware virtualization. Compute nodes must also have enough memory and disk space to support the requirements of the virtual machine instances that they host. Processor 64-bit x86 processor with support for the Intel 64 or AMD64 CPU extensions, and the AMD-V or Intel VT hardware virtualization extensions enabled. It is recommended that this processor has a minimum of 4 cores. IBM POWER 8 processor. Memory A minimum of 6 GB of RAM for the host operating system, plus additional memory to accommodate for the following considerations: Add additional memory that you intend to make available to virtual machine instances. Add additional memory to run special features or additional resources on the host, such as additional kernel modules, virtual switches, monitoring solutions, and other additional background tasks. If you intend to use non-uniform memory access (NUMA), Red Hat recommends 8GB per CPU socket node or 16 GB per socket node if you have more then 256 GB of physical RAM. Configure at least 4 GB of swap space. Disk space A minimum of 50 GB of available disk space. Network Interface Cards A minimum of one 1 Gbps Network Interface Cards, although it is recommended to use at least two NICs in a production environment. Use additional network interface cards for bonded interfaces or to delegate tagged VLAN traffic. Power management Each Compute node requires a supported power management interface, such as an Intelligent Platform Management Interface (IPMI) functionality, on the server motherboard. 6.8. Ceph Storage node requirements If you use Red Hat OpenStack Platform (RHOSP) director to create Red Hat Ceph Storage nodes, there are additional requirements. For information about how to select a processor, memory, network interface cards (NICs), and disk layout for Ceph Storage nodes, see Hardware selection recommendations for Red Hat Ceph Storage in the Red Hat Ceph Storage Hardware Guide . Each Ceph Storage node also requires a supported power management interface, such as Intelligent Platform Management Interface (IPMI) functionality, on the motherboard of the server. Note RHOSP director uses ceph-ansible , which does not support installing the OSD on the root disk of Ceph Storage nodes. This means that you need at least two disks for a supported Ceph Storage node. 
Ceph Storage nodes and RHEL compatibility RHOSP 16.2 is supported on RHEL 8.4. Before upgrading to RHOSP 16.1 and later, review the Red Hat Knowledgebase article Red Hat Ceph Storage: Supported configurations . Red Hat Ceph Storage compatibility RHOSP 16.2 supports Red Hat Ceph Storage 4. Placement Groups (PGs) Ceph Storage uses placement groups (PGs) to facilitate dynamic and efficient object tracking at scale. In the case of OSD failure or cluster rebalancing, Ceph can move or replicate a placement group and its contents, which means a Ceph Storage cluster can rebalance and recover efficiently. The default placement group count that director creates is not always optimal, so it is important to calculate the correct placement group count according to your requirements. You can use the placement group calculator to calculate the correct count. To use the PG calculator, enter the predicted storage usage per service as a percentage, as well as other properties about your Ceph cluster, such as the number OSDs. The calculator returns the optimal number of PGs per pool. For more information, see Placement Groups (PGs) per Pool Calculator . Auto-scaling is an alternative way to manage placement groups. With the auto-scale feature, you set the expected Ceph Storage requirements per service as a percentage instead of a specific number of placement groups. Ceph automatically scales placement groups based on how the cluster is used. For more information, see Auto-scaling placement groups in the Red Hat Ceph Storage Strategies Guide . Processor 64-bit x86 processor with support for the Intel 64 or AMD64 CPU extensions. Network Interface Cards A minimum of one 1 Gbps Network Interface Cards (NICs), although Red Hat recommends that you use at least two NICs in a production environment. Use additional NICs for bonded interfaces or to delegate tagged VLAN traffic. Use a 10 Gbps interface for storage nodes, especially if you want to create a Red Hat OpenStack Platform (RHOSP) environment that serves a high volume of traffic. Power management Each Controller node requires a supported power management interface, such as Intelligent Platform Management Interface (IPMI) functionality on the motherboard of the server. For more information about installing an overcloud with a Ceph Storage cluster, see the Deploying an Overcloud with Containerized Red Hat Ceph guide. 6.9. Object Storage node requirements Object Storage nodes provide an object storage layer for the overcloud. The Object Storage proxy is installed on Controller nodes. The storage layer requires bare metal nodes with multiple disks on each node. Processor 64-bit x86 processor with support for the Intel 64 or AMD64 CPU extensions. Memory Memory requirements depend on the amount of storage space. Use at minimum 1 GB of memory for each 1 TB of hard disk space. For optimal performance, it is recommended to use 2 GB for each 1 TB of hard disk space, especially for workloads with files smaller than 100GB. Disk space Storage requirements depend on the capacity needed for the workload. It is recommended to use SSD drives to store the account and container data. The capacity ratio of account and container data to objects is approximately 1 per cent. For example, for every 100TB of hard drive capacity, provide 1TB of SSD capacity for account and container data. However, this depends on the type of stored data. If you want to store mostly small objects, provide more SSD space. For large objects (videos, backups), use less SSD space. 
Disk layout The recommended node configuration requires a disk layout similar to the following example: /dev/sda - The root disk. Director copies the main overcloud image to the disk. /dev/sdb - Used for account data. /dev/sdc - Used for container data. /dev/sdd and onward - The object server disks. Use as many disks as necessary for your storage requirements. Network Interface Cards A minimum of 2 x 1 Gbps Network Interface Cards. Use additional network interface cards for bonded interfaces or to delegate tagged VLAN traffic. Power management Each Controller node requires a supported power management interface, such as an Intelligent Platform Management Interface (IPMI) functionality, on the server motherboard. 6.10. Overcloud repositories Red Hat OpenStack Platform (RHOSP) 16.2 runs on Red Hat Enterprise Linux (RHEL) 8.4. As a result, you must lock the content from these repositories to the respective RHEL version. Note If you synchronize repositories by using Red Hat Satellite, you can enable specific versions of the RHEL repositories. However, the repository label remains the same despite the version you choose. For example, if you enable the 8.4 version of the BaseOS repository, the repository name includes the specific version that you enabled, but the repository label is still rhel-8-for-x86_64-baseos-eus-rpms . The advanced-virt-for-rhel-8-x86_64-rpms and advanced-virt-for-rhel-8-x86_64-eus-rpms repositories are no longer required. To disable these repositories, see the Red Hat Knowledgebase solution advanced-virt-for-rhel-8-x86_64-rpms are no longer required in OSP 16.2 . Warning Any repositories outside the ones specified here are not supported. Unless recommended, do not enable any other products or repositories outside the ones listed in the following tables or else you might encounter package dependency issues. Do not enable Extra Packages for Enterprise Linux (EPEL). Controller node repositories The following table lists core repositories for Controller nodes in the overcloud. Name Repository Description of requirement Red Hat Enterprise Linux 8 for x86_64 - BaseOS (RPMs) Extended Update Support (EUS) rhel-8-for-x86_64-baseos-eus-rpms Base operating system repository for x86_64 systems. Red Hat Enterprise Linux 8 for x86_64 - AppStream (RPMs) rhel-8-for-x86_64-appstream-eus-rpms Contains RHOSP dependencies. Red Hat Enterprise Linux 8 for x86_64 - High Availability (RPMs) Extended Update Support (EUS) rhel-8-for-x86_64-highavailability-eus-rpms High availability tools for RHEL. Red Hat Ansible Engine 2.9 for RHEL 8 x86_64 (RPMs) ansible-2.9-for-rhel-8-x86_64-rpms Ansible Engine for RHEL. Used to provide the latest version of Ansible. Red Hat OpenStack Platform 16.2 for RHEL 8 (RPMs) openstack-16.2-for-rhel-8-x86_64-rpms Core RHOSP repository. Red Hat Fast Datapath for RHEL 8 (RPMS) fast-datapath-for-rhel-8-x86_64-rpms Provides Open vSwitch (OVS) packages for OpenStack Platform. Red Hat Ceph Storage Tools 4 for RHEL 8 x86_64 (RPMs) rhceph-4-tools-for-rhel-8-x86_64-rpms Tools for Red Hat Ceph Storage 4 for RHEL 8. Compute and ComputeHCI node repositories The following table lists core repositories for Compute and ComputeHCI nodes in the overcloud. Name Repository Description of requirement Red Hat Enterprise Linux 8 for x86_64 - BaseOS (RPMs) Extended Update Support (EUS) rhel-8-for-x86_64-baseos-eus-rpms Base operating system repository for x86_64 systems. Red Hat Enterprise Linux 8 for x86_64 - AppStream (RPMs) rhel-8-for-x86_64-appstream-eus-rpms Contains RHOSP dependencies. 
Red Hat Enterprise Linux 8 for x86_64 - High Availability (RPMs) Extended Update Support (EUS) rhel-8-for-x86_64-highavailability-eus-rpms High availability tools for RHEL. Red Hat Ansible Engine 2.9 for RHEL 8 x86_64 (RPMs) ansible-2.9-for-rhel-8-x86_64-rpms Ansible Engine for RHEL. Used to provide the latest version of Ansible. Red Hat OpenStack Platform 16.2 for RHEL 8 (RPMs) openstack-16.2-for-rhel-8-x86_64-rpms Core RHOSP repository. Red Hat Fast Datapath for RHEL 8 (RPMS) fast-datapath-for-rhel-8-x86_64-rpms Provides Open vSwitch (OVS) packages for OpenStack Platform. Red Hat Ceph Storage Tools 4 for RHEL 8 x86_64 (RPMs) rhceph-4-tools-for-rhel-8-x86_64-rpms Tools for Red Hat Ceph Storage 4 for RHEL 8. Real Time Compute repositories The following table lists repositories for Real Time Compute (RTC) functionality. Name Repository Description of requirement Red Hat Enterprise Linux 8 for x86_64 - Real Time (RPMs) rhel-8-for-x86_64-rt-rpms Repository for Real Time KVM (RT-KVM). Contains packages to enable the real time kernel. Enable this repository for all Compute nodes targeted for RT-KVM. NOTE: You need a separate subscription to a Red Hat OpenStack Platform for Real Time SKU to access this repository. Red Hat Enterprise Linux 8 for x86_64 - Real Time for NFV (RPMs) rhel-8-for-x86_64-nfv-rpms Repository for Real Time KVM (RT-KVM) for NFV. Contains packages to enable the real time kernel. Enable this repository for all NFV Compute nodes targeted for RT-KVM. NOTE: You need a separate subscription to a Red Hat OpenStack Platform for Real Time SKU to access this repository. Ceph Storage node repositories The following table lists Ceph Storage related repositories for the overcloud. Name Repository Description of requirement Red Hat Enterprise Linux 8 for x86_64 - BaseOS (RPMs) rhel-8-for-x86_64-baseos-rpms Base operating system repository for x86_64 systems. Red Hat Enterprise Linux 8 for x86_64 - AppStream (RPMs) rhel-8-for-x86_64-appstream-rpms Contains RHOSP dependencies. Red Hat Enterprise Linux 8 for x86_64 - High Availability (RPMs) Extended Update Support (EUS) rhel-8-for-x86_64-highavailability-eus-rpms High availability tools for RHEL. NOTE: If you used the overcloud-full image for your Ceph Storage role, you must enable this repository. Ceph Storage roles should use the overcloud-minimal image, which does not require this repository. Red Hat Ansible Engine 2.9 for RHEL 8 x86_64 (RPMs) ansible-2.9-for-rhel-8-x86_64-rpms Ansible Engine for RHEL. Used to provide the latest version of Ansible. Red Hat OpenStack Platform 16.2 Director Deployment Tools for RHEL 8 x86_64 (RPMs) openstack-16.2-deployment-tools-for-rhel-8-x86_64-rpms Packages to help director configure Ceph Storage nodes. This repository is included with standalone Ceph Storage subscriptions. If you use a combined OpenStack Platform and Ceph Storage subscription, use the openstack-16.2-for-rhel-8-x86_64-rpms repository. Red Hat OpenStack Platform 16.2 for RHEL 8 (RPMs) openstack-16.2-for-rhel-8-x86_64-rpms Packages to help director configure Ceph Storage nodes. This repository is included with combined OpenStack Platform and Ceph Storage subscriptions. If you use a standalone Ceph Storage subscription, use the openstack-16.2-deployment-tools-for-rhel-8-x86_64-rpms repository. Red Hat Ceph Storage Tools 4 for RHEL 8 x86_64 (RPMs) rhceph-4-tools-for-rhel-8-x86_64-rpms Provides tools for nodes to communicate with the Ceph Storage cluster. 
Red Hat Fast Datapath for RHEL 8 (RPMS) fast-datapath-for-rhel-8-x86_64-rpms Provides Open vSwitch (OVS) packages for OpenStack Platform. If you are using OVS on Ceph Storage nodes, add this repository to the network interface configuration (NIC) templates. IBM POWER repositories The following table lists repositories for RHOSP on POWER PC architecture. Use these repositories in place of equivalents in the Core repositories. Name Repository Description of requirement Red Hat Enterprise Linux for IBM Power, little endian - BaseOS (RPMs) rhel-8-for-ppc64le-baseos-rpms Base operating system repository for ppc64le systems. Red Hat Enterprise Linux 8 for IBM Power, little endian - AppStream (RPMs) rhel-8-for-ppc64le-appstream-rpms Contains RHOSP dependencies. Red Hat Enterprise Linux 8 for IBM Power, little endian - High Availability (RPMs) rhel-8-for-ppc64le-highavailability-rpms High availability tools for RHEL. Used for Controller node high availability. Red Hat Fast Datapath for RHEL 8 IBM Power, little endian (RPMS) fast-datapath-for-rhel-8-ppc64le-rpms Provides Open vSwitch (OVS) packages for OpenStack Platform. Red Hat Ansible Engine 2.9 for RHEL 8 IBM Power, little endian (RPMs) ansible-2.9-for-rhel-8-ppc64le-rpms Ansible Engine for RHEL. Used to provide the latest version of Ansible. Red Hat OpenStack Platform 16.2 for RHEL 8 (RPMs) openstack-16.2-for-rhel-8-ppc64le-rpms Core RHOSP repository for ppc64le systems. 6.11. Provisioning methods There are three main methods that you can use to provision the nodes for your Red Hat OpenStack Platform environment: Provisioning with director Red Hat OpenStack Platform director is the standard provisioning method. In this scenario, the openstack overcloud deploy command performs both the provisioning and the configuration of your deployment. For more information about the standard provisioning and deployment method, see Chapter 7, Configuring a basic overcloud . Provisioning with the OpenStack Bare Metal (ironic) service In this scenario, you can separate the provisioning and configuration stages of the standard director deployment into two distinct processes. This is useful if you want to mitigate some of the risk involved with the standard director deployment and identify points of failure more efficiently. For more information about this scenario, see Chapter 8, Provisioning bare metal nodes before deploying the overcloud . Important This feature is available in this release as a Technology Preview , and therefore is not fully supported by Red Hat. It should only be used for testing, and should not be deployed in a production environment. For more information about Technology Preview features, see Scope of Coverage Details . Provisioning with an external tool In this scenario, director controls the overcloud configuration on nodes that you pre-provision with an external tool. This is useful if you want to create an overcloud without power management control, use networks that have DHCP/PXE boot restrictions, or if you want to use nodes that have a custom partitioning layout that does not rely on the QCOW2 overcloud-full image. This scenario does not use the OpenStack Compute (nova), OpenStack Bare Metal (ironic), or OpenStack Image (glance) services for managing nodes. For more information about this scenario, see Chapter 9, Configuring a basic overcloud with pre-provisioned nodes . Important You cannot combine pre-provisioned nodes with director-provisioned nodes.
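Referring back to the repository tables in Section 6.10, the following is a sketch of locking a node to RHEL 8.4 and enabling the Controller node repositories with subscription-manager. The release lock and the --enable list shown here are only an example based on the Controller table; adjust them for other roles and for your subscriptions.
subscription-manager release --set=8.4
subscription-manager repos --disable='*'
subscription-manager repos \
  --enable=rhel-8-for-x86_64-baseos-eus-rpms \
  --enable=rhel-8-for-x86_64-appstream-eus-rpms \
  --enable=rhel-8-for-x86_64-highavailability-eus-rpms \
  --enable=ansible-2.9-for-rhel-8-x86_64-rpms \
  --enable=openstack-16.2-for-rhel-8-x86_64-rpms \
  --enable=fast-datapath-for-rhel-8-x86_64-rpms \
  --enable=rhceph-4-tools-for-rhel-8-x86_64-rpms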
null
https://docs.redhat.com/en/documentation/red_hat_openstack_platform/16.2/html/director_installation_and_usage/assembly_planning-your-overcloud
Configuring authentication and authorization in RHEL
Configuring authentication and authorization in RHEL Red Hat Enterprise Linux 9 Using SSSD, authselect, and sssctl to configure authentication and authorization Red Hat Customer Content Services
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/9/html/configuring_authentication_and_authorization_in_rhel/index
Chapter 17. Optimizing virtual machine performance
Chapter 17. Optimizing virtual machine performance Virtual machines (VMs) always experience some degree of performance deterioration in comparison to the host. The following sections explain the reasons for this deterioration and provide instructions on how to minimize the performance impact of virtualization in RHEL 8, so that your hardware infrastructure resources can be used as efficiently as possible. 17.1. What influences virtual machine performance VMs are run as user-space processes on the host. The hypervisor therefore needs to convert the host's system resources so that the VMs can use them. As a consequence, a portion of the resources is consumed by the conversion, and the VM therefore cannot achieve the same performance efficiency as the host. The impact of virtualization on system performance More specific reasons for VM performance loss include: Virtual CPUs (vCPUs) are implemented as threads on the host, handled by the Linux scheduler. VMs do not automatically inherit optimization features, such as NUMA or huge pages, from the host kernel. Disk and network I/O settings of the host might have a significant performance impact on the VM. Network traffic typically travels to a VM through a software-based bridge. Depending on the host devices and their models, there might be significant overhead due to emulation of particular hardware. The severity of the virtualization impact on the VM performance is influenced by a variety factors, which include: The number of concurrently running VMs. The amount of virtual devices used by each VM. The device types used by the VMs. Reducing VM performance loss RHEL 8 provides a number of features you can use to reduce the negative performance effects of virtualization. Notably: The TuneD service can automatically optimize the resource distribution and performance of your VMs. Block I/O tuning can improve the performances of the VM's block devices, such as disks. NUMA tuning can increase vCPU performance. Virtual networking can be optimized in various ways. Important Tuning VM performance can have negative effects on other virtualization functions. For example, it can make migrating the modified VM more difficult. 17.2. Optimizing virtual machine performance by using TuneD The TuneD utility is a tuning profile delivery mechanism that adapts RHEL for certain workload characteristics, such as requirements for CPU-intensive tasks or storage-network throughput responsiveness. It provides a number of tuning profiles that are pre-configured to enhance performance and reduce power consumption in a number of specific use cases. You can edit these profiles or create new profiles to create performance solutions tailored to your environment, including virtualized environments. To optimize RHEL 8 for virtualization, use the following profiles: For RHEL 8 virtual machines, use the virtual-guest profile. It is based on the generally applicable throughput-performance profile, but also decreases the swappiness of virtual memory. For RHEL 8 virtualization hosts, use the virtual-host profile. This enables more aggressive writeback of dirty memory pages, which benefits the host performance. Prerequisites The TuneD service is installed and enabled . Procedure To enable a specific TuneD profile: List the available TuneD profiles. Optional: Create a new TuneD profile or edit an existing TuneD profile. For more information, see Customizing TuneD profiles . Activate a TuneD profile. To optimize a virtualization host, use the virtual-host profile. 
On a RHEL guest operating system, use the virtual-guest profile. Verification Display the active profile for TuneD . Ensure that the TuneD profile settings have been applied on your system. Additional resources Monitoring and managing system status and performance 17.3. Virtual machine performance optimization for specific workloads Virtual machines (VMs) are frequently dedicated to perform a specific workload. You can improve the performance of your VMs by optimizing their configuration for the intended workload. Table 17.1. Recommended VM configurations for specific use cases Use case IOThread vCPU pinning vNUMA pinning huge pages multi-queue Database For database disks Yes * Yes * Yes * Yes, see: multi-queue virtio-blk, virtio-scsi Virtualized Network Function (VNF) No Yes Yes Yes Yes, see: multi-queue virtio-net High Performance Computing (HPC) No Yes Yes Yes No Backup Server For backup disks No No No Yes, see: multi-queue virtio-blk, virtio-scsi VM with many CPUs (Usually more than 32) No Yes * Yes * No No VM with large RAM (Usually more than 128 GB) No No Yes * Yes No * If the VM has enough CPUs and RAM to use more than one NUMA node. Note A VM can fit in more than one category of use cases. In this situation, you should apply all of the recommended configurations. 17.4. Configuring virtual machine memory To improve the performance of a virtual machine (VM), you can assign additional host RAM to the VM. Similarly, you can decrease the amount of memory allocated to a VM so the host memory can be allocated to other VMs or tasks. To perform these actions, you can use the web console or the command line . 17.4.1. Memory overcommitment Virtual machines (VMs) running on a KVM hypervisor do not have dedicated blocks of physical RAM assigned to them. Instead, each VM functions as a Linux process where the host's Linux kernel allocates memory only when requested. In addition, the host's memory manager can move the VM's memory between its own physical memory and swap space. If memory overcommitment is enabled, the kernel can decide to allocate less physical memory than is requested by a VM, because often the requested amount of memory is not fully used by the VM's process. By default, memory overcommitment is enabled in the Linux kernel and the kernel estimates the safe amount of memory overcommitment for VM's requests. However, the system can still become unstable with frequent overcommitment for memory-intensive workloads. Memory overcommitment requires you to allocate sufficient swap space on the host physical machine to accommodate all VMs as well as enough memory for the host physical machine's processes. For instructions on the basic recommended swap space size, see: What is the recommended swap size for Red Hat platforms? Recommended methods to deal with memory shortages on the host: Allocate less memory per VM. Add more physical memory to the host. Use larger swap space. Important A VM will run slower if it is swapped frequently. In addition, overcommitting can cause the system to run out of memory (OOM), which may lead to the Linux kernel shutting down important system processes. Memory overcommit is not supported with device assignment. This is because when device assignment is in use, all virtual machine memory must be statically pre-allocated to enable direct memory access (DMA) with the assigned device. Additional resources Virtual memory parameters 17.4.2. 
Adding and removing virtual machine memory by using the web console To improve the performance of a virtual machine (VM) or to free up the host resources it is using, you can use the web console to adjust amount of memory allocated to the VM. Prerequisites You have installed the RHEL 8 web console. You have enabled the cockpit service. Your user account is allowed to log in to the web console. For instructions, see Installing and enabling the web console . The guest OS is running the memory balloon drivers. To verify this is the case: Ensure the VM's configuration includes the memballoon device: If this commands displays any output and the model is not set to none , the memballoon device is present. Ensure the balloon drivers are running in the guest OS. In Windows guests, the drivers are installed as a part of the virtio-win driver package. For instructions, see Installing KVM paravirtualized drivers for Windows virtual machines . In Linux guests, the drivers are generally included by default and activate when the memballoon device is present. The web console VM plug-in is installed on your system . Procedure Optional: Obtain the information about the maximum memory and currently used memory for a VM. This will serve as a baseline for your changes, and also for verification. Log in to the RHEL 8 web console. For details, see Logging in to the web console . In the Virtual Machines interface, click the VM whose information you want to see. A new page opens with an Overview section with basic information about the selected VM and a Console section to access the VM's graphical interface. Click edit to the Memory line in the Overview pane. The Memory Adjustment dialog appears. Configure the virtual memory for the selected VM. Maximum allocation - Sets the maximum amount of host memory that the VM can use for its processes. You can specify the maximum memory when creating the VM or increase it later. You can specify memory as multiples of MiB or GiB. Adjusting maximum memory allocation is only possible on a shut-off VM. Current allocation - Sets the actual amount of memory allocated to the VM. This value can be less than the Maximum allocation but cannot exceed it. You can adjust the value to regulate the memory available to the VM for its processes. You can specify memory as multiples of MiB or GiB. If you do not specify this value, the default allocation is the Maximum allocation value. Click Save . The memory allocation of the VM is adjusted. Additional resources Adding and removing virtual machine memory by using the command line Optimizing virtual machine CPU performance 17.4.3. Adding and removing virtual machine memory by using the command line To improve the performance of a virtual machine (VM) or to free up the host resources it is using, you can use the CLI to adjust amount of memory allocated to the VM. Prerequisites The guest OS is running the memory balloon drivers. To verify this is the case: Ensure the VM's configuration includes the memballoon device: If this commands displays any output and the model is not set to none , the memballoon device is present. Ensure the ballon drivers are running in the guest OS. In Windows guests, the drivers are installed as a part of the virtio-win driver package. For instructions, see Installing KVM paravirtualized drivers for Windows virtual machines . In Linux guests, the drivers are generally included by default and activate when the memballoon device is present. 
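The memballoon check described in the prerequisites can be performed, for example, as follows; testguest is a placeholder VM name and the output shown is only illustrative of a present virtio memballoon device.
# virsh dumpxml testguest | grep memballoon
<memballoon model='virtio'>
</memballoon>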
Procedure Optional: Obtain the information about the maximum memory and currently used memory for a VM. This will serve as a baseline for your changes, and also for verification. Adjust the maximum memory allocated to a VM. Increasing this value improves the performance potential of the VM, and reducing the value lowers the performance footprint the VM has on your host. Note that this change can only be performed on a shut-off VM, so adjusting a running VM requires a reboot to take effect. For example, to change the maximum memory that the testguest VM can use to 4096 MiB: To increase the maximum memory of a running VM, you can attach a memory device to the VM. This is also referred to as memory hot plug . For details, see Attaching devices to virtual machines . Warning Removing memory devices from a running VM (also referred as a memory hot unplug) is not supported, and highly discouraged by Red Hat. Optional: You can also adjust the memory currently used by the VM, up to the maximum allocation. This regulates the memory load that the VM has on the host until the reboot, without changing the maximum VM allocation. Verification Confirm that the memory used by the VM has been updated: Optional: If you adjusted the current VM memory, you can obtain the memory balloon statistics of the VM to evaluate how effectively it regulates its memory use. Additional resources Adding and removing virtual machine memory by using the web console Optimizing virtual machine CPU performance 17.4.4. Configuring virtual machines to use huge pages In certain use cases, you can improve memory allocation for your virtual machines (VMs) by using huge pages instead of the default 4 KiB memory pages. For example, huge pages can improve performance for VMs with high memory utilization, such as database servers. Prerequisites The host is configured to use huge pages in memory allocation. For instructions, see: Configuring HugeTLB at boot time Procedure Shut down the selected VM if it is running. To configure a VM to use 1 GiB huge pages, open the XML definition of a VM for editing. For example, to edit a testguest VM, run the following command: Add the following lines to the <memoryBacking> section in the XML definition: <memoryBacking> <hugepages> <page size='1' unit='GiB'/> </hugepages> </memoryBacking> Verification Start the VM. Confirm that the host has successfully allocated huge pages for the running VM. On the host, run the following command: When you add together the number of free and reserved huge pages ( HugePages_Free + HugePages_Rsvd ), the result should be less than the total number of huge pages ( HugePages_Total ). The difference is the number of huge pages that is used by the running VM. Additional resources Configuring huge pages 17.4.5. Additional resources Attaching devices to virtual machines . 17.5. Optimizing virtual machine I/O performance The input and output (I/O) capabilities of a virtual machine (VM) can significantly limit the VM's overall efficiency. To address this, you can optimize a VM's I/O by configuring block I/O parameters. 17.5.1. Tuning block I/O in virtual machines When multiple block devices are being used by one or more VMs, it might be important to adjust the I/O priority of specific virtual devices by modifying their I/O weights . Increasing the I/O weight of a device increases its priority for I/O bandwidth, and therefore provides it with more host resources. Similarly, reducing a device's weight makes it consume less host resources. 
Note Each device's weight value must be within the 100 to 1000 range. Alternatively, the value can be 0 , which removes that device from per-device listings. Procedure To display and set a VM's block I/O parameters: Display the current <blkio> parameters for a VM: # virsh dumpxml VM-name <domain> [...] <blkiotune> <weight>800</weight> <device> <path>/dev/sda</path> <weight>1000</weight> </device> <device> <path>/dev/sdb</path> <weight>500</weight> </device> </blkiotune> [...] </domain> Edit the I/O weight of a specified device: For example, the following changes the weight of the /dev/sda device in the testguest1 VM to 500. Verification Check that the VM's block I/O parameters have been configured correctly. Important Certain kernels do not support setting I/O weights for specific devices. If the step does not display the weights as expected, it is likely that this feature is not compatible with your host kernel. 17.5.2. Disk I/O throttling in virtual machines When several VMs are running simultaneously, they can interfere with system performance by using excessive disk I/O. Disk I/O throttling in KVM virtualization provides the ability to set a limit on disk I/O requests sent from the VMs to the host machine. This can prevent a VM from over-utilizing shared resources and impacting the performance of other VMs. To enable disk I/O throttling, set a limit on disk I/O requests sent from each block device attached to VMs to the host machine. Procedure Use the virsh domblklist command to list the names of all the disk devices on a specified VM. Find the host block device where the virtual disk that you want to throttle is mounted. For example, if you want to throttle the sdb virtual disk from the step, the following output shows that the disk is mounted on the /dev/nvme0n1p3 partition. Set I/O limits for the block device by using the virsh blkiotune command. The following example throttles the sdb disk on the rollin-coal VM to 1000 read and write I/O operations per second and to 50 MB per second read and write throughput. Additional information Disk I/O throttling can be useful in various situations, for example when VMs belonging to different customers are running on the same host, or when quality of service guarantees are given for different VMs. Disk I/O throttling can also be used to simulate slower disks. I/O throttling can be applied independently to each block device attached to a VM and supports limits on throughput and I/O operations. Red Hat does not support using the virsh blkdeviotune command to configure I/O throttling in VMs. For more information about unsupported features when using RHEL 8 as a VM host, see Unsupported features in RHEL 8 virtualization . 17.5.3. Enabling multi-queue on storage devices When using virtio-blk or virtio-scsi storage devices in your virtual machines (VMs), the multi-queue feature provides improved storage performance and scalability. It enables each virtual CPU (vCPU) to have a separate queue and interrupt to use without affecting other vCPUs. The multi-queue feature is enabled by default for the Q35 machine type, however you must enable it manually on the i440FX machine type. You can tune the number of queues to be optimal for your workload, however the optimal number differs for each type of workload and you must test which number of queues works best in your case. Procedure To enable multi-queue on a storage device, edit the XML configuration of the VM. 
In the XML configuration, find the intended storage device and change the queues parameter to use multiple I/O queues. Replace N with the number of vCPUs in the VM, up to 16. A virtio-blk example: <disk type='block' device='disk'> <driver name='qemu' type='raw' queues='N' /> <source dev='/dev/sda'/> <target dev='vda' bus='virtio'/> <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/> </disk> A virtio-scsi example: <controller type='scsi' index='0' model='virtio-scsi'> <driver queues='N' /> </controller> Restart the VM for the changes to take effect. 17.5.4. Configuring dedicated IOThreads To improve the Input/Output (IO) performance of a disk on your virtual machine (VM), you can configure a dedicated IOThread that is used to manage the IO operations of the VM's disk. Normally, the IO operations of a disk are a part of the main QEMU thread, which can decrease the responsiveness of the VM as a whole during intensive IO workloads. By separating the IO operations into a dedicated IOThread, you can significantly increase the responsiveness and performance of your VM. Procedure Shut down the selected VM if it is running. On the host, add or edit the <iothreads> tag in the XML configuration of the VM. For example, to create a single IOThread for a testguest1 VM: Note For optimal results, use only 1-2 IOThreads per CPU on the host. Assign a dedicated IOThread to a VM disk. For example, to assign an IOThread with an ID of 1 to a disk on the testguest1 VM: Note IOThread IDs start from 1 and you must dedicate only a single IOThread to a disk. Usually, one dedicated IOThread per VM is sufficient for optimal performance. When using virtio-scsi storage devices, assign a dedicated IOThread to the virtio-scsi controller. For example, to assign an IOThread with an ID of 1 to a controller on the testguest1 VM: Verification Evaluate the impact of your changes on your VM performance. For details, see: Virtual machine performance monitoring tools 17.5.5. Configuring virtual disk caching KVM provides several virtual disk caching modes. For intensive Input/Output (IO) workloads, selecting the optimal caching mode can significantly increase the virtual machine (VM) performance. Virtual disk cache modes overview writethrough Host page cache is used for reading only. Writes are reported as completed only when the data has been committed to the storage device. The sustained IO performance is decreased but this mode has good write guarantees. writeback Host page cache is used for both reading and writing. Writes are reported as complete when data reaches the host's memory cache, not physical storage. This mode has faster IO performance than writethrough but it is possible to lose data on host failure. none Host page cache is bypassed entirely. This mode relies directly on the write queue of the physical disk, so it has a predictable sustained IO performance and offers good write guarantees on a stable guest. It is also a safe cache mode for VM live migration. Procedure Shut down the selected VM if it is running. Edit the XML configuration of the selected VM. Find the disk device and edit the cache option in the driver tag. <domain type='kvm'> <name>testguest1</name> ... <devices> <disk type='file' device='disk'> <driver name='qemu' type='raw' cache='none' io='native' iothread='1'/> <source file='/var/lib/libvirt/images/test-disk.raw'/> <target dev='vda' bus='virtio'/> <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/> </disk> ... </devices> ... </domain>
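After you start the VM again, you can check how the IOThreads described in this section are laid out at runtime and, if needed, pin an IOThread to specific host CPUs. This is a minimal sketch that reuses the testguest1 name from the examples above; the CPU list 2-3 is only an illustrative value.
virsh iothreadinfo testguest1        # lists each IOThread ID and its current host CPU affinity
virsh iothreadpin testguest1 1 2-3   # optional: pin IOThread 1 to host CPUs 2 and 3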
17.6. Optimizing virtual machine CPU performance Much like physical CPUs in host machines, vCPUs are critical to virtual machine (VM) performance. As a result, optimizing vCPUs can have a significant impact on the resource efficiency of your VMs. To optimize your vCPU: Adjust how many host CPUs are assigned to the VM. You can do this using the CLI or the web console. Ensure that the vCPU model is aligned with the CPU model of the host. For example, to set the testguest1 VM to use the CPU model of the host: Deactivate kernel same-page merging (KSM). If your host machine uses Non-Uniform Memory Access (NUMA), you can also configure NUMA for its VMs. This maps the host's CPU and memory processes onto the CPU and memory processes of the VM as closely as possible. In effect, NUMA tuning provides the vCPU with a more streamlined access to the system memory allocated to the VM, which can improve the vCPU processing effectiveness. For details, see Configuring NUMA in a virtual machine and Virtual machine performance optimization for specific workloads. 17.6.1. vCPU overcommitment vCPU overcommitment allows you to have a setup where the sum of all vCPUs in virtual machines (VMs) running on a host exceeds the number of physical CPUs on the host. However, you might experience performance deterioration when simultaneously running more cores in your VMs than are physically available on the host. For best performance, assign only as many vCPUs to each VM as are required to run its intended workload. vCPU overcommitment recommendations: Assign the minimum number of vCPUs required by the VM's workloads for best performance. Avoid overcommitting vCPUs in production without extensive testing. If overcommitting vCPUs, the safe ratio is typically 5 vCPUs to 1 physical CPU for loads under 100%. It is not recommended to have more than 10 total allocated vCPUs per physical processor core. Monitor CPU usage to prevent performance degradation under heavy loads. Important Applications that use 100% of memory or processing resources may become unstable in overcommitted environments. Do not overcommit memory or CPUs in a production environment without extensive testing, as the CPU overcommit ratio is workload-dependent. 17.6.2. Adding and removing virtual CPUs by using the command line To increase or optimize the CPU performance of a virtual machine (VM), you can add or remove virtual CPUs (vCPUs) assigned to the VM. When performed on a running VM, this is also referred to as vCPU hot plugging and hot unplugging. However, note that vCPU hot unplug is not supported in RHEL 8, and Red Hat highly discourages its use. Prerequisites Optional: View the current state of the vCPUs in the targeted VM. For example, to display the number of vCPUs on the testguest VM: This output indicates that testguest is currently using 1 vCPU, and 1 more vCPU can be hot plugged to it to increase the VM's performance. However, after reboot, the number of vCPUs testguest uses will change to 2, and it will be possible to hot plug 2 more vCPUs. Procedure Adjust the maximum number of vCPUs that can be attached to a VM, which takes effect on the VM's next boot. For example, to increase the maximum vCPU count for the testguest VM to 8: Note that the maximum may be limited by the CPU topology, host hardware, the hypervisor, and other factors. Adjust the current number of vCPUs attached to a VM, up to the maximum configured in the previous step.
For example: To increase the number of vCPUs attached to the running testguest VM to 4: This increases the VM's performance and host load footprint of testguest until the VM's boot. To permanently decrease the number of vCPUs attached to the testguest VM to 1: This decreases the VM's performance and host load footprint of testguest after the VM's boot. However, if needed, additional vCPUs can be hot plugged to the VM to temporarily increase its performance. Verification Confirm that the current state of vCPU for the VM reflects your changes. Additional resources Managing virtual CPUs by using the web console 17.6.3. Managing virtual CPUs by using the web console By using the RHEL 8 web console, you can review and configure virtual CPUs used by virtual machines (VMs) to which the web console is connected. Prerequisites You have installed the RHEL 8 web console. You have enabled the cockpit service. Your user account is allowed to log in to the web console. For instructions, see Installing and enabling the web console . The web console VM plug-in is installed on your system . Procedure Log in to the RHEL 8 web console. For details, see Logging in to the web console . In the Virtual Machines interface, click the VM whose information you want to see. A new page opens with an Overview section with basic information about the selected VM and a Console section to access the VM's graphical interface. Click edit to the number of vCPUs in the Overview pane. The vCPU details dialog appears. Configure the virtual CPUs for the selected VM. vCPU Count - The number of vCPUs currently in use. Note The vCPU count cannot be greater than the vCPU Maximum. vCPU Maximum - The maximum number of virtual CPUs that can be configured for the VM. If this value is higher than the vCPU Count , additional vCPUs can be attached to the VM. Sockets - The number of sockets to expose to the VM. Cores per socket - The number of cores for each socket to expose to the VM. Threads per core - The number of threads for each core to expose to the VM. Note that the Sockets , Cores per socket , and Threads per core options adjust the CPU topology of the VM. This may be beneficial for vCPU performance and may impact the functionality of certain software in the guest OS. If a different setting is not required by your deployment, keep the default values. Click Apply . The virtual CPUs for the VM are configured. Note Changes to virtual CPU settings only take effect after the VM is restarted. Additional resources Adding and removing virtual CPUs by using the command line 17.6.4. Configuring NUMA in a virtual machine The following methods can be used to configure Non-Uniform Memory Access (NUMA) settings of a virtual machine (VM) on a RHEL 8 host. For ease of use, you can set up a VM's NUMA configuration by using automated utilities and services. However, manual NUMA setup is more likely to yield a significant performance improvement. Prerequisites The host is a NUMA-compatible machine. To detect whether this is the case, use the virsh nodeinfo command and see the NUMA cell(s) line: If the value of the line is 2 or greater, the host is NUMA-compatible. Optional: You have the numactl package installed on the host. Procedure Automatic methods Set the VM's NUMA policy to Preferred . For example, to configure the testguest5 VM: Use the numad service to automatically align the VM CPU with memory resources. Start the numad service to automatically align the VM CPU with memory resources. 
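Before moving on to the manual methods, you can confirm which NUMA policy the VM actually received. This is a minimal check that reuses the testguest5 name from the automatic example; the query form of virsh numatune prints only the current numa_mode and numa_nodeset values.
virsh numatune testguest5    # displays the NUMA mode and node set currently applied to the VM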
Manual methods To manually tune NUMA settings, you can specify which host NUMA nodes will be assigned specifically to a certain VM. This can improve the host memory usage by the VM's vCPU. Optional: Use the numactl command to view the NUMA topology on the host: Edit the XML configuration of a VM to assign CPU and memory resources to specific NUMA nodes. For example, the following configuration sets testguest6 to use vCPUs 0-7 on NUMA node 0 and vCPUS 8-15 on NUMA node 1 . Both nodes are also assigned 16 GiB of VM's memory. If the VM is running, restart it to apply the configuration. Note For best performance results, it is recommended to respect the maximum memory size for each NUMA node on the host. Known issues NUMA tuning currently cannot be performed on IBM Z hosts Additional resources Virtual machine performance optimization for specific workloads Virtual machine performance optimization for specific workloads using the numastat utility 17.6.5. Configuring virtual CPU pinning To improve the CPU performance of a virtual machine (VM), you can pin a virtual CPU (vCPU) to a specific physical CPU thread on the host. This ensures that the vCPU will have its own dedicated physical CPU thread, which can significantly improve the vCPU performance. To further optimize the CPU performance, you can also pin QEMU process threads associated with a specified VM to a specific host CPU. Procedure Check the CPU topology on the host: In this example, the output contains NUMA nodes and the available physical CPU threads on the host. Check the number of vCPU threads inside the VM: In this example, the output contains NUMA nodes and the available vCPU threads inside the VM. Pin specific vCPU threads from a VM to a specific host CPU or range of CPUs. This is recommended as a safe method of vCPU performance improvement. For example, the following commands pin vCPU threads 0 to 3 of the testguest6 VM to host CPUs 1, 3, 5, 7, respectively: Optional: Verify whether the vCPU threads are successfully pinned to CPUs. After pinning vCPU threads, you can also pin QEMU process threads associated with a specified VM to a specific host CPU or range of CPUs. This can further help the QEMU process to run more efficiently on the physical CPU. For example, the following commands pin the QEMU process thread of testguest6 to CPUs 2 and 4, and verify this was successful: 17.6.6. Configuring virtual CPU capping You can use virtual CPU (vCPU) capping to limit the amount of CPU resources a virtual machine (VM) can use. vCPU capping can improve the overall performance by preventing excessive use of host's CPU resources by a single VM and by making it easier for the hypervisor to manage CPU scheduling. Procedure View the current vCPU scheduling configuration on the host. To configure an absolute vCPU cap for a VM, set the vcpu_period and vcpu_quota parameters. Both parameters use a numerical value that represents a time duration in microseconds. Set the vcpu_period parameter by using the virsh schedinfo command. For example: In this example, the vcpu_period is set to 100,000 microseconds, which means the scheduler enforces vCPU capping during this time interval. You can also use the --live --config options to configure a running VM without restarting it. Set the vcpu_quota parameter by using the virsh schedinfo command. For example: In this example, the vcpu_quota is set to 50,000 microseconds, which specifies the maximum amount of CPU time that the VM can use during the vcpu_period time interval. 
In this case, vcpu_quota is set as the half of vcpu_period , so the VM can use up to 50% of the CPU time during that interval. You can also use the --live --config options to configure a running VM without restarting it. Verification Check that the vCPU scheduling parameters have the correct values. 17.6.7. Tuning CPU weights The CPU weight (or CPU shares ) setting controls how much CPU time a virtual machine (VM) receives compared to other running VMs. By increasing the CPU weight of a specific VM, you can ensure that this VM gets more CPU time relative to other VMs. To prioritize CPU time allocation between multiple VMs, set the cpu_shares parameter The possible CPU weight values range from 0 to 262144 and the default value for a new KVM VM is 1024 . Procedure Check the current CPU weight of a VM. Adjust the CPU weight to a preferred value. In this example, cpu_shares is set to 2048. This means that if all other VMs have the value set to 1024, this VM gets approximately twice the amount of CPU time. You can also use the --live --config options to configure a running VM without restarting it. 17.6.8. Disabling kernel same-page merging Kernel Same-Page Merging (KSM) improves memory density by sharing identical memory pages between virtual machines (VMs). However, using KSM increases CPU utilization, and might negatively affect overall performance depending on the workload. In RHEL 8, KSM is enabled by default. Therefore, if the CPU performance in your VM deployment is sub-optimal, you can improve this by disabling KSM. Prerequisites Root access to your host system. Procedure Monitor the performance and resource consumption of VMs on your host to evaluate the benefits of KSM. Specifically, ensure that the additional CPU usage by KSM does not offset the memory improvements and does not cause additional performance issues. In latency-sensitive workloads, also pay attention to cross-NUMA page merges. Optional: If KSM has not improved your VM performance, disable it: To disable KSM for a single session, use the systemctl utility to stop ksm and ksmtuned services. To disable KSM persistently, use the systemctl utility to disable ksm and ksmtuned services. Note Memory pages shared between VMs before deactivating KSM will remain shared. To stop sharing, delete all the PageKSM pages in the system by using the following command: However, this command increases memory usage, and might cause performance problems on your host or your VMs. Verification Monitor the performance and resource consumption of VMs on your host to evaluate the benefits of deactivating KSM. For instructions, see Virtual machine performance monitoring tools . 17.7. Optimizing virtual machine network performance Due to the virtual nature of a VM's network interface controller (NIC), the VM loses a portion of its allocated host network bandwidth, which can reduce the overall workload efficiency of the VM. The following tips can minimize the negative impact of virtualization on the virtual NIC (vNIC) throughput. Procedure Use any of the following methods and observe if it has a beneficial effect on your VM network performance: Enable the vhost_net module On the host, ensure the vhost_net kernel feature is enabled: If the output of this command is blank, enable the vhost_net kernel module: Set up multi-queue virtio-net To set up the multi-queue virtio-net feature for a VM, use the virsh edit command to edit to the XML configuration of the VM. 
In the XML, add the following to the <devices> section, and replace N with the number of vCPUs in the VM, up to 16: If the VM is running, restart it for the changes to take effect. Batching network packets In Linux VM configurations with a long transmission path, batching packets before submitting them to the kernel may improve cache utilization. To set up packet batching, use the following command on the host, and replace tap0 with the name of the network interface that the VMs use: SR-IOV If your host NIC supports SR-IOV, use SR-IOV device assignment for your vNICs. For more information, see Managing SR-IOV devices . Additional resources Understanding virtual networking 17.8. Virtual machine performance monitoring tools To identify what consumes the most VM resources and which aspect of VM performance needs optimization, performance diagnostic tools, both general and VM-specific, can be used. Default OS performance monitoring tools For standard performance evaluation, you can use the utilities provided by default by your host and guest operating systems: On your RHEL 8 host, as root, use the top utility or the system monitor application, and look for qemu and virt in the output. This shows how much host system resources your VMs are consuming. If the monitoring tool displays that any of the qemu or virt processes consume a large portion of the host CPU or memory capacity, use the perf utility to investigate. For details, see below. In addition, if a vhost_net thread process, named for example vhost_net-1234 , is displayed as consuming an excessive amount of host CPU capacity, consider using virtual network optimization features , such as multi-queue virtio-net . On the guest operating system, use performance utilities and applications available on the system to evaluate which processes consume the most system resources. On Linux systems, you can use the top utility. On Windows systems, you can use the Task Manager application. perf kvm You can use the perf utility to collect and analyze virtualization-specific statistics about the performance of your RHEL 8 host. To do so: On the host, install the perf package: Use one of the perf kvm stat commands to display perf statistics for your virtualization host: For real-time monitoring of your hypervisor, use the perf kvm stat live command. To log the perf data of your hypervisor over a period of time, activate the logging by using the perf kvm stat record command. After the command is canceled or interrupted, the data is saved in the perf.data.guest file, which can be analyzed by using the perf kvm stat report command. Analyze the perf output for types of VM-EXIT events and their distribution. For example, the PAUSE_INSTRUCTION events should be infrequent, but in the following output, the high occurrence of this event suggests that the host CPUs are not handling the running vCPUs well. In such a scenario, consider shutting down some of your active VMs, removing vCPUs from these VMs, or tuning the performance of the vCPUs . Other event types that can signal problems in the output of perf kvm stat include: INSN_EMULATION - suggests suboptimal VM I/O configuration . For more information about using perf to monitor virtualization performance, see the perf-kvm man page on your system. numastat To see the current NUMA configuration of your system, you can use the numastat utility, which is provided by installing the numactl package. The following shows a host with 4 running VMs, each obtaining memory from multiple NUMA nodes. 
This is not optimal for vCPU performance, and warrants adjusting: In contrast, the following shows memory being provided to each VM by a single node, which is significantly more efficient. 17.9. Additional resources Optimizing Windows virtual machines
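As a small practical complement to the monitoring tools in the previous section, the following checks can confirm whether the network multi-queue and KSM settings discussed earlier are in effect. This is a sketch only: eth0 is an assumed guest interface name, so substitute the interface used by your VM.
ethtool -l eth0                        # inside the guest: the Combined channel count should match the number of configured virtio-net queues
cat /sys/kernel/mm/ksm/pages_sharing   # on the host: a non-zero value means KSM is actively sharing pages
systemctl is-active ksm ksmtuned       # on the host: shows whether the KSM services are running, or inactive after you disable them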
[ "tuned-adm list Available profiles: - balanced - General non-specialized TuneD profile - desktop - Optimize for the desktop use-case [...] - virtual-guest - Optimize for running inside a virtual guest - virtual-host - Optimize for running KVM guests Current active profile: balanced", "tuned-adm profile selected-profile", "tuned-adm profile virtual-host", "tuned-adm profile virtual-guest", "tuned-adm active Current active profile: virtual-host", "tuned-adm verify Verification succeeded, current system settings match the preset profile. See tuned log file ('/var/log/tuned/tuned.log') for details.", "virsh dumpxml testguest | grep memballoon <memballoon model='virtio'> </memballoon>", "virsh dominfo testguest Max memory: 2097152 KiB Used memory: 2097152 KiB", "virsh dumpxml testguest | grep memballoon <memballoon model='virtio'> </memballoon>", "virsh dominfo testguest Max memory: 2097152 KiB Used memory: 2097152 KiB", "virt-xml testguest --edit --memory memory=4096,currentMemory=4096 Domain 'testguest' defined successfully. Changes will take effect after the domain is fully powered off.", "virsh setmem testguest --current 2048", "virsh dominfo testguest Max memory: 4194304 KiB Used memory: 2097152 KiB", "virsh domstats --balloon testguest Domain: 'testguest' balloon.current=365624 balloon.maximum=4194304 balloon.swap_in=0 balloon.swap_out=0 balloon.major_fault=306 balloon.minor_fault=156117 balloon.unused=3834448 balloon.available=4035008 balloon.usable=3746340 balloon.last-update=1587971682 balloon.disk_caches=75444 balloon.hugetlb_pgalloc=0 balloon.hugetlb_pgfail=0 balloon.rss=1005456", "virsh edit testguest", "<memoryBacking> <hugepages> <page size='1' unit='GiB'/> </hugepages> </memoryBacking>", "cat /proc/meminfo | grep Huge HugePages_Total: 4 HugePages_Free: 2 HugePages_Rsvd: 1 Hugepagesize: 1024000 kB", "<domain> [...] <blkiotune> <weight>800</weight> <device> <path>/dev/sda</path> <weight>1000</weight> </device> <device> <path>/dev/sdb</path> <weight>500</weight> </device> </blkiotune> [...] 
</domain>", "virsh blkiotune VM-name --device-weights device , I/O-weight", "virsh blkiotune testguest1 --device-weights /dev/sda, 500", "virsh blkiotune testguest1 Block I/O tuning parameters for domain testguest1: weight : 800 device_weight : [ {\"sda\": 500}, ]", "virsh domblklist rollin-coal Target Source ------------------------------------------------ vda /var/lib/libvirt/images/rollin-coal.qcow2 sda - sdb /home/horridly-demanding-processes.iso", "lsblk NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT zram0 252:0 0 4G 0 disk [SWAP] nvme0n1 259:0 0 238.5G 0 disk ├─nvme0n1p1 259:1 0 600M 0 part /boot/efi ├─nvme0n1p2 259:2 0 1G 0 part /boot └─nvme0n1p3 259:3 0 236.9G 0 part └─luks-a1123911-6f37-463c-b4eb-fxzy1ac12fea 253:0 0 236.9G 0 crypt /home", "virsh blkiotune VM-name --parameter device , limit", "virsh blkiotune rollin-coal --device-read-iops-sec /dev/nvme0n1p3,1000 --device-write-iops-sec /dev/nvme0n1p3,1000 --device-write-bytes-sec /dev/nvme0n1p3,52428800 --device-read-bytes-sec /dev/nvme0n1p3,52428800", "virsh edit <example_vm>", "<disk type='block' device='disk'> <driver name='qemu' type='raw' queues='N' /> <source dev='/dev/sda'/> <target dev='vda' bus='virtio'/> <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/> </disk>", "<controller type='scsi' index='0' model='virtio-scsi'> <driver queues='N' /> </controller>", "virsh edit <testguest1> <domain type='kvm'> <name>testguest1</name> <vcpu placement='static'>8</vcpu> <iothreads>1</iothreads> </domain>", "virsh edit <testguest1> <domain type='kvm'> <name>testguest1</name> <devices> <disk type='file' device='disk'> <driver name='qemu' type='raw' cache='none' io='native' iothread='1' /> <source file='/var/lib/libvirt/images/test-disk.raw'/> <target dev='vda' bus='virtio'/> <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/> </disk> </devices> </domain>", "virsh edit <testguest1> <domain type='kvm'> <name>testguest1</name> <devices> <controller type='scsi' index='0' model='virtio-scsi'> <driver iothread='1' /> <address type='pci' domain='0x0000' bus='0x00' slot='0x0b' function='0x0'/> </controller> </devices> </domain>", "virsh edit <vm_name>", "<domain type='kvm'> <name>testguest1</name> <devices> <disk type='file' device='disk'> <driver name='qemu' type='raw' cache='none' io='native' iothread='1'/> <source file='/var/lib/libvirt/images/test-disk.raw'/> <target dev='vda' bus='virtio'/> <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/> </disk> </devices> </domain>", "virt-xml testguest1 --edit --cpu host-model", "virsh vcpucount testguest maximum config 4 maximum live 2 current config 2 current live 1", "virsh setvcpus testguest 8 --maximum --config", "virsh setvcpus testguest 4 --live", "virsh setvcpus testguest 1 --config", "virsh vcpucount testguest maximum config 8 maximum live 4 current config 1 current live 4", "virsh nodeinfo CPU model: x86_64 CPU(s): 48 CPU frequency: 1200 MHz CPU socket(s): 1 Core(s) per socket: 12 Thread(s) per core: 2 NUMA cell(s): 2 Memory size: 67012964 KiB", "yum install numactl", "virt-xml testguest5 --edit --vcpus placement=auto virt-xml testguest5 --edit --numatune mode=preferred", "echo 1 > /proc/sys/kernel/numa_balancing", "systemctl start numad", "numactl --hardware available: 2 nodes (0-1) node 0 size: 18156 MB node 0 free: 9053 MB node 1 size: 18180 MB node 1 free: 6853 MB node distances: node 0 1 0: 10 20 1: 20 10", "virsh edit <testguest6> <domain type='kvm'> <name>testguest6</name> <vcpu placement='static'>16</vcpu> <cpu ...> 
<numa> <cell id='0' cpus='0-7' memory='16' unit='GiB'/> <cell id='1' cpus='8-15' memory='16' unit='GiB'/> </numa> </domain>", "lscpu -p=node,cpu Node,CPU 0,0 0,1 0,2 0,3 0,4 0,5 0,6 0,7 1,0 1,1 1,2 1,3 1,4 1,5 1,6 1,7", "lscpu -p=node,cpu Node,CPU 0,0 0,1 0,2 0,3", "virsh vcpupin testguest6 0 1 virsh vcpupin testguest6 1 3 virsh vcpupin testguest6 2 5 virsh vcpupin testguest6 3 7", "virsh vcpupin testguest6 VCPU CPU Affinity ---------------------- 0 1 1 3 2 5 3 7", "virsh emulatorpin testguest6 2,4 virsh emulatorpin testguest6 emulator: CPU Affinity ---------------------------------- *: 2,4", "virsh schedinfo <vm_name> Scheduler : posix cpu_shares : 0 vcpu_period : 0 vcpu_quota : 0 emulator_period: 0 emulator_quota : 0 global_period : 0 global_quota : 0 iothread_period: 0 iothread_quota : 0", "virsh schedinfo <vm_name> --set vcpu_period=100000", "virsh schedinfo <vm_name> --set vcpu_quota=50000", "virsh schedinfo <vm_name> Scheduler : posix cpu_shares : 2048 vcpu_period : 100000 vcpu_quota : 50000", "virsh schedinfo <vm_name> Scheduler : posix cpu_shares : 1024 vcpu_period : 0 vcpu_quota : 0 emulator_period: 0 emulator_quota : 0 global_period : 0 global_quota : 0 iothread_period: 0 iothread_quota : 0", "virsh schedinfo <vm_name> --set cpu_shares=2048 Scheduler : posix cpu_shares : 2048 vcpu_period : 0 vcpu_quota : 0 emulator_period: 0 emulator_quota : 0 global_period : 0 global_quota : 0 iothread_period: 0 iothread_quota : 0", "systemctl stop ksm systemctl stop ksmtuned", "systemctl disable ksm Removed /etc/systemd/system/multi-user.target.wants/ksm.service. systemctl disable ksmtuned Removed /etc/systemd/system/multi-user.target.wants/ksmtuned.service.", "echo 2 > /sys/kernel/mm/ksm/run", "lsmod | grep vhost vhost_net 32768 1 vhost 53248 1 vhost_net tap 24576 1 vhost_net tun 57344 6 vhost_net", "modprobe vhost_net", "<interface type='network'> <source network='default'/> <model type='virtio'/> <driver name='vhost' queues='N'/> </interface>", "ethtool -C tap0 rx-frames 64", "yum install perf", "perf kvm stat report Analyze events for all VMs, all VCPUs: VM-EXIT Samples Samples% Time% Min Time Max Time Avg time EXTERNAL_INTERRUPT 365634 31.59% 18.04% 0.42us 58780.59us 204.08us ( +- 0.99% ) MSR_WRITE 293428 25.35% 0.13% 0.59us 17873.02us 1.80us ( +- 4.63% ) PREEMPTION_TIMER 276162 23.86% 0.23% 0.51us 21396.03us 3.38us ( +- 5.19% ) PAUSE_INSTRUCTION 189375 16.36% 11.75% 0.72us 29655.25us 256.77us ( +- 0.70% ) HLT 20440 1.77% 69.83% 0.62us 79319.41us 14134.56us ( +- 0.79% ) VMCALL 12426 1.07% 0.03% 1.02us 5416.25us 8.77us ( +- 7.36% ) EXCEPTION_NMI 27 0.00% 0.00% 0.69us 1.34us 0.98us ( +- 3.50% ) EPT_MISCONFIG 5 0.00% 0.00% 5.15us 10.85us 7.88us ( +- 11.67% ) Total Samples:1157497, Total events handled time:413728274.66us.", "numastat -c qemu-kvm Per-node process memory usage (in MBs) PID Node 0 Node 1 Node 2 Node 3 Node 4 Node 5 Node 6 Node 7 Total --------------- ------ ------ ------ ------ ------ ------ ------ ------ ----- 51722 (qemu-kvm) 68 16 357 6936 2 3 147 598 8128 51747 (qemu-kvm) 245 11 5 18 5172 2532 1 92 8076 53736 (qemu-kvm) 62 432 1661 506 4851 136 22 445 8116 53773 (qemu-kvm) 1393 3 1 2 12 0 0 6702 8114 --------------- ------ ------ ------ ------ ------ ------ ------ ------ ----- Total 1769 463 2024 7462 10037 2672 169 7837 32434", "numastat -c qemu-kvm Per-node process memory usage (in MBs) PID Node 0 Node 1 Node 2 Node 3 Node 4 Node 5 Node 6 Node 7 Total --------------- ------ ------ ------ ------ ------ ------ ------ ------ ----- 51747 (qemu-kvm) 0 0 7 0 8072 0 1 0 8080 
53736 (qemu-kvm) 0 0 7 0 0 0 8113 0 8120 53773 (qemu-kvm) 0 0 7 0 0 0 1 8110 8118 59065 (qemu-kvm) 0 0 8050 0 0 0 0 0 8051 --------------- ------ ------ ------ ------ ------ ------ ------ ------ ----- Total 0 0 8072 0 8072 0 8114 8110 32368" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/configuring_and_managing_virtualization/optimizing-virtual-machine-performance-in-rhel_configuring-and-managing-virtualization
8.3.3. Manual Pages for Services
8.3.3. Manual Pages for Services Manual pages for services contain valuable information, such as what file type to use for a given situation, and Booleans to change the access a service has (such as httpd accessing NFS volumes). This information may be in the standard manual page, or a manual page with selinux prepended or appended. For example, the httpd_selinux (8) manual page has information about what file type to use for a given situation, as well as Booleans to allow scripts, sharing files, accessing directories inside user home directories, and so on. Other manual pages with SELinux information for services include: Samba: the samba_selinux (8) manual page describes that files and directories to be exported via Samba must be labeled with the samba_share_t type, as well as Booleans to allow files labeled with types other than samba_share_t to be exported via Samba. Berkeley Internet Name Domain (BIND): the named (8) manual page describes what file type to use for a given situation (see the Red Hat SELinux BIND Security Profile section). The named_selinux (8) manual page describes that, by default, named cannot write to master zone files, and to allow such access, the named_write_master_zones Boolean must be enabled. The information in manual pages helps you configure the correct file types and Booleans, helping to prevent SELinux from denying access.
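As a brief illustration of how the information in these manual pages is typically applied, the following commands show the kind of Boolean and file-type changes they describe. This is a hedged sketch: /srv/samba/share is an example path, and you should confirm the exact Boolean and type names in the relevant *_selinux manual page for your release.
man -k _selinux                                                   # list the SELinux manual pages available for services
setsebool -P named_write_master_zones on                          # allow named to write to master zone files, as described in named_selinux(8)
semanage fcontext -a -t samba_share_t "/srv/samba/share(/.*)?"    # label a directory tree for export via Samba
restorecon -Rv /srv/samba/share                                   # apply the new file context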
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/security-enhanced_linux/sect-security-enhanced_linux-fixing_problems-manual_pages_for_services
Chapter 1. About OpenShift sandboxed containers
Chapter 1. About OpenShift sandboxed containers OpenShift sandboxed containers for OpenShift Container Platform integrates Kata Containers as an optional runtime, providing enhanced security and isolation by running containerized applications in lightweight virtual machines. This integration provides a more secure runtime environment for sensitive workloads without significant changes to existing OpenShift workflows. This runtime supports containers in dedicated virtual machines (VMs), providing improved workload isolation. 1.1. Features OpenShift sandboxed containers provides the following features: Run privileged or untrusted workloads You can safely run workloads that require specific privileges, without the risk of compromising cluster nodes by running privileged containers. Workloads that require special privileges include the following: Workloads that require special capabilities from the kernel, beyond the default ones granted by standard container runtimes such as CRI-O, for example to access low-level networking features. Workloads that need elevated root privileges, for example to access a specific physical device. With OpenShift sandboxed containers, it is possible to pass only a specific device through to the virtual machines (VM), ensuring that the workload cannot access or misconfigure the rest of the system. Workloads for installing or using set-uid root binaries. These binaries grant special privileges and, as such, can present a security risk. With OpenShift sandboxed containers, additional privileges are restricted to the virtual machines, and grant no special access to the cluster nodes. Some workloads require privileges specifically for configuring the cluster nodes. Such workloads should still use privileged containers, because running on a virtual machine would prevent them from functioning. Ensure isolation for sensitive workloads The OpenShift sandboxed containers for Red Hat OpenShift Container Platform integrates Kata Containers as an optional runtime, providing enhanced security and isolation by running containerized applications in lightweight virtual machines. This integration provides a more secure runtime environment for sensitive workloads without significant changes to existing OpenShift workflows. This runtime supports containers in dedicated virtual machines (VMs), providing improved workload isolation. Ensure kernel isolation for each workload You can run workloads that require custom kernel tuning (such as sysctl , scheduler changes, or cache tuning) and the creation of custom kernel modules (such as out of tree or special arguments). Share the same workload across tenants You can run workloads that support many users (tenants) from different organizations sharing the same OpenShift Container Platform cluster. The system also supports running third-party workloads from multiple vendors, such as container network functions (CNFs) and enterprise applications. Third-party CNFs, for example, may not want their custom settings interfering with packet tuning or with sysctl variables set by other applications. Running inside a completely isolated kernel is helpful in preventing "noisy neighbor" configuration problems. Ensure proper isolation and sandboxing for testing software You can run containerized workloads with known vulnerabilities or handle issues in an existing application. 
This isolation enables administrators to give developers administrative control over pods, which is useful when the developer wants to test or validate configurations beyond those an administrator would typically grant. Administrators can, for example, safely and securely delegate kernel packet filtering (eBPF) to developers. eBPF requires CAP_ADMIN or CAP_BPF privileges, and is therefore not allowed under a standard CRI-O configuration, as this would grant access to every process on the Container Host worker node. Similarly, administrators can grant access to intrusive tools such as SystemTap , or support the loading of custom kernel modules during their development. Ensure default resource containment through VM boundaries By default, OpenShift sandboxed containers manages resources such as CPU, memory, storage, and networking in a robust and secure way. Since OpenShift sandboxed containers deploys on VMs, additional layers of isolation and security give a finer-grained access control to the resource. For example, an errant container will not be able to assign more memory than is available to the VM. Conversely, a container that needs dedicated access to a network card or to a disk can take complete control over that device without getting any access to other devices. 1.2. Compatibility with OpenShift Container Platform The required functionality for the OpenShift Container Platform platform is supported by two main components: Kata runtime: This includes Red Hat Enterprise Linux CoreOS (RHCOS) and updates with every OpenShift Container Platform release. OpenShift sandboxed containers Operator: Install the Operator using either the web console or OpenShift CLI ( oc ). The OpenShift sandboxed containers Operator is a Rolling Stream Operator , which means the latest version is the only supported version. It works with all currently supported versions of OpenShift Container Platform. For more information, see OpenShift Container Platform Life Cycle Policy for additional details. The Operator depends on the features that come with the RHCOS host and the environment it runs in. Note You must install Red Hat Enterprise Linux CoreOS (RHCOS) on the worker nodes. RHEL nodes are not supported. The following compatibility matrix for OpenShift sandboxed containers and OpenShift Container Platform releases identifies compatible features and environments. Table 1.1. Supported architectures Architecture OpenShift Container Platform version x86_64 4.8 or later s390x 4.14 or later There are two ways to deploy Kata containers runtime: Bare metal Peer pods Peer pods technology for the deployment of OpenShift sandboxed containers in public clouds was available as Developer Preview in OpenShift sandboxed containers 1.5 and OpenShift Container Platform 4.14. With the release of OpenShift sandboxed containers 1.7, the Operator requires OpenShift Container Platform version 4.15 or later. Table 1.2. Feature availability by OpenShift version Feature Deployment method OpenShift Container Platform 4.15 OpenShift Container Platform 4.16 Confidential Containers Bare metal Peer pods Technology Preview Technology Preview [1] GPU support [2] Bare metal Peer pods Developer Preview Developer Preview Technology Preview of Confidential Containers has been available since OpenShift sandboxed containers 1.7.0. GPU functionality is not available on IBM Z. Table 1.3. 
Supported cloud platforms for OpenShift sandboxed containers Platform GPU Confidential Containers AWS Cloud Computing Services Developer Preview Microsoft Azure Cloud Computing Services Developer Preview Technology Preview [1] Technology Preview of Confidential Containers has been available since OpenShift sandboxed containers 1.7.0. Additional resources Developer Preview Support Scope Technology Preview Features - Scope of Support 1.3. Node eligibility checks You can verify that your bare-metal cluster nodes support OpenShift sandboxed containers by running a node eligibility check. The most common reason for node ineligibility is lack of virtualization support. If you run sandboxed workloads on ineligible nodes, you will experience errors. High-level workflow Install the Node Feature Discovery Operator. Create the NodeFeatureDiscovery custom resource (CR). Enable node eligibility checks when you create the Kataconfig CR. You can run node eligibility checks on all worker nodes or on selected nodes. Additional resources Installing the Node Feature Discovery Operator 1.4. Common terms The following terms are used throughout the documentation. Sandbox A sandbox is an isolated environment where programs can run. In a sandbox, you can run untested or untrusted programs without risking harm to the host machine or the operating system. In the context of OpenShift sandboxed containers, sandboxing is achieved by running workloads in a different kernel using virtualization, providing enhanced control over the interactions between multiple workloads that run on the same host. Pod A pod is a construct that is inherited from Kubernetes and OpenShift Container Platform. It represents resources where containers can be deployed. Containers run inside of pods, and pods are used to specify resources that can be shared between multiple containers. In the context of OpenShift sandboxed containers, a pod is implemented as a virtual machine. Several containers can run in the same pod on the same virtual machine. OpenShift sandboxed containers Operator The OpenShift sandboxed containers Operator manages the lifecycle of sandboxed containers on a cluster. You can use the OpenShift sandboxed containers Operator to perform tasks such as the installation and removal of sandboxed containers, software updates, and status monitoring. Kata Containers Kata Containers is a core upstream project that is used to build OpenShift sandboxed containers. OpenShift sandboxed containers integrate Kata Containers with OpenShift Container Platform. KataConfig KataConfig objects represent configurations of sandboxed containers. They store information about the state of the cluster, such as the nodes on which the software is deployed. Runtime class A RuntimeClass object describes which runtime can be used to run a given workload. A runtime class that is named kata is installed and deployed by the OpenShift sandboxed containers Operator. The runtime class contains information about the runtime that describes resources that the runtime needs to operate, such as the pod overhead . Peer pod A peer pod in OpenShift sandboxed containers extends the concept of a standard pod. Unlike a standard sandboxed container, where the virtual machine is created on the worker node itself, in a peer pod, the virtual machine is created through a remote hypervisor using any supported hypervisor or cloud provider API. The peer pod acts as a regular pod on the worker node, with its corresponding VM running elsewhere. 
The remote location of the VM is transparent to the user and is specified by the runtime class in the pod specification. The peer pod design circumvents the need for nested virtualization. IBM Secure Execution IBM Secure Execution for Linux is an advanced security feature introduced with IBM z15(R) and LinuxONE III. This feature extends the protection provided by pervasive encryption. IBM Secure Execution safeguards data at rest, in transit, and in use. It enables secure deployment of workloads and ensures data protection throughout its lifecycle. For more information, see Introducing IBM Secure Execution for Linux . Confidential Containers Confidential Containers protects containers and data by verifying that your workload is running in a Trusted Execution Environment (TEE). You can deploy this feature to safeguard the privacy of big data analytics and machine learning inferences. Trustee is a component of Confidential Containers. Trustee is an attestation service that verifies the trustworthiness of the location where you plan to run your workload or where you plan to send confidential information. Trustee includes components deployed on a trusted side and used to verify whether the remote workload is running in a Trusted Execution Environment (TEE). Trustee is flexible and can be deployed in several different configurations to support a wide variety of applications and hardware platforms. Confidential compute attestation Operator The Confidential compute attestation Operator manages the installation, lifecycle, and configuration of Confidential Containers. 1.5. OpenShift sandboxed containers Operator The OpenShift sandboxed containers Operator encapsulates all of the components from Kata containers. It manages installation, lifecycle, and configuration tasks. The OpenShift sandboxed containers Operator is packaged in the Operator bundle format as two container images: The bundle image contains metadata and is required to make the operator OLM-ready. The second container image contains the actual controller that monitors and manages the KataConfig resource. The OpenShift sandboxed containers Operator is based on the Red Hat Enterprise Linux CoreOS (RHCOS) extensions concept. RHCOS extensions are a mechanism to install optional OpenShift Container Platform software. The OpenShift sandboxed containers Operator uses this mechanism to deploy sandboxed containers on a cluster. The sandboxed containers RHCOS extension contains RPMs for Kata, QEMU, and its dependencies. You can enable them by using the MachineConfig resources that the Machine Config Operator provides. Additional resources Adding extensions to RHCOS 1.6. About Confidential Containers Confidential Containers provides a confidential computing environment to protect containers and data by leveraging Trusted Execution Environments . Important Confidential Containers on Microsoft Azure Cloud Computing Services, IBM Z(R), and IBM(R) LinuxONE is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . 
You can sign container images by using a tool such as Red Hat Trusted Artifact Signer . Then, you create a container image signature verification policy. The Trustee Operator verifies the signatures, ensuring that only trusted and authenticated container images are deployed in your environment. For more information, see Exploring the OpenShift Confidential Containers solution . 1.7. OpenShift Virtualization You can deploy OpenShift sandboxed containers on clusters with OpenShift Virtualization. To run OpenShift Virtualization and OpenShift sandboxed containers at the same time, your virtual machines must be live migratable so that they do not block node reboots. See About live migration in the OpenShift Virtualization documentation for details. 1.8. Block volume support OpenShift Container Platform can statically provision raw block volumes. These volumes do not have a file system, and can provide performance benefits for applications that either write to the disk directly or implement their own storage service. You can use a local block device as persistent volume (PV) storage with OpenShift sandboxed containers. This block device can be provisioned by using the Local Storage Operator (LSO). The Local Storage Operator is not installed in OpenShift Container Platform by default. See Installing the Local Storage Operator for installation instructions. You can provision raw block volumes for OpenShift sandboxed containers by specifying volumeMode: Block in the PV specification. Block volume example apiVersion: "local.storage.openshift.io/v1" kind: "LocalVolume" metadata: name: "local-disks" namespace: "openshift-local-storage" spec: nodeSelector: nodeSelectorTerms: - matchExpressions: - key: kubernetes.io/hostname operator: In values: - worker-0 storageClassDevices: - storageClassName: "local-sc" forceWipeDevicesAndDestroyAllData: false volumeMode: Block 1 devicePaths: - /path/to/device 2 1 Set volumeMode to Block to indicate that this PV is a raw block volume. 2 Replace this value with the filepath to your LocalVolume resource by-id . PVs are created for these local disks when the provisioner is deployed successfully. You must also use this path to label the node that uses the block device when deploying OpenShift sandboxed containers. 1.9. FIPS compliance OpenShift Container Platform is designed for Federal Information Processing Standards (FIPS) 140-2 and 140-3. When running Red Hat Enterprise Linux (RHEL) or Red Hat Enterprise Linux CoreOS (RHCOS) booted in FIPS mode, OpenShift Container Platform core components use the RHEL cryptographic libraries that have been submitted to NIST for FIPS 140-2/140-3 Validation on only the x86_64 , ppc64le , and s390x architectures. For more information about the NIST validation program, see Cryptographic Module Validation Program . For the latest NIST status for the individual versions of RHEL cryptographic libraries that have been submitted for validation, see Compliance Activities and Government Standards . OpenShift sandboxed containers can be used on FIPS enabled clusters. When running in FIPS mode, OpenShift sandboxed containers components, VMs, and VM images are adapted to comply with FIPS. Note FIPS compliance for OpenShift sandboxed containers only applies to the kata runtime class. The peer pod runtime class, kata-remote , is not yet fully supported and has not been tested for FIPS compliance. 
FIPS compliance is one of the most critical components required in highly secure environments, to ensure that only supported cryptographic technologies are allowed on nodes. Important The use of FIPS Validated / Modules in Process cryptographic libraries is only supported on OpenShift Container Platform deployments on the x86_64 architecture. To understand Red Hat's view of OpenShift Container Platform compliance frameworks, refer to the Risk Management and Regulatory Readiness chapter of the OpenShift Security Guide Book .
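To connect the block volume support section above with the kata runtime class, the following sketch shows how a claim and a pod might consume a raw block persistent volume in a sandboxed container. This is an illustrative example only: the claim name, pod name, container image, device path, and storage size are assumptions, while the local-sc storage class and the kata runtime class come from the examples in this chapter.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: block-claim
spec:
  accessModes:
    - ReadWriteOnce
  volumeMode: Block            # request the PV as a raw block device
  storageClassName: local-sc
  resources:
    requests:
      storage: 10Gi
---
apiVersion: v1
kind: Pod
metadata:
  name: sandboxed-block-pod
spec:
  runtimeClassName: kata       # run the pod as an OpenShift sandboxed container
  containers:
  - name: app
    image: registry.access.redhat.com/ubi9/ubi
    command: ["sleep", "infinity"]
    volumeDevices:
    - name: data
      devicePath: /dev/xvda    # raw device exposed inside the container
  volumes:
  - name: data
    persistentVolumeClaim:
      claimName: block-claim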
[ "apiVersion: \"local.storage.openshift.io/v1\" kind: \"LocalVolume\" metadata: name: \"local-disks\" namespace: \"openshift-local-storage\" spec: nodeSelector: nodeSelectorTerms: - matchExpressions: - key: kubernetes.io/hostname operator: In values: - worker-0 storageClassDevices: - storageClassName: \"local-sc\" forceWipeDevicesAndDestroyAllData: false volumeMode: Block 1 devicePaths: - /path/to/device 2" ]
https://docs.redhat.com/en/documentation/openshift_sandboxed_containers/1.8/html/user_guide/about-osc
Chapter 6. Management of OSDs using the Ceph Orchestrator
Chapter 6. Management of OSDs using the Ceph Orchestrator As a storage administrator, you can use the Ceph Orchestrators to manage OSDs of a Red Hat Ceph Storage cluster. 6.1. Ceph OSDs When a Red Hat Ceph Storage cluster is up and running, you can add OSDs to the storage cluster at runtime. A Ceph OSD generally consists of one ceph-osd daemon for one storage drive and its associated journal within a node. If a node has multiple storage drives, then map one ceph-osd daemon for each drive. Red Hat recommends checking the capacity of a cluster regularly to see if it is reaching the upper end of its storage capacity. As a storage cluster reaches its near full ratio, add one or more OSDs to expand the storage cluster's capacity. When you want to reduce the size of a Red Hat Ceph Storage cluster or replace the hardware, you can also remove an OSD at runtime. If the node has multiple storage drives, you might also need to remove one of the ceph-osd daemon for that drive. Generally, it's a good idea to check the capacity of the storage cluster to see if you are reaching the upper end of its capacity. Ensure that when you remove an OSD that the storage cluster is not at its near full ratio. Important Do not let a storage cluster reach the full ratio before adding an OSD. OSD failures that occur after the storage cluster reaches the near full ratio can cause the storage cluster to exceed the full ratio. Ceph blocks write access to protect the data until you resolve the storage capacity issues. Do not remove OSDs without considering the impact on the full ratio first. 6.2. Ceph OSD node configuration Configure Ceph OSDs and their supporting hardware similarly as a storage strategy for the pool(s) that will use the OSDs. Ceph prefers uniform hardware across pools for a consistent performance profile. For best performance, consider a CRUSH hierarchy with drives of the same type or size. If you add drives of dissimilar size, adjust their weights accordingly. When you add the OSD to the CRUSH map, consider the weight for the new OSD. Hard drive capacity grows approximately 40% per year, so newer OSD nodes might have larger hard drives than older nodes in the storage cluster, that is, they might have a greater weight. Before doing a new installation, review the Requirements for Installing Red Hat Ceph Storage chapter in the Installation Guide . 6.3. Automatically tuning OSD memory The OSD daemons adjust the memory consumption based on the osd_memory_target configuration option. The option osd_memory_target sets OSD memory based upon the available RAM in the system. If Red Hat Ceph Storage is deployed on dedicated nodes that do not share memory with other services, cephadm automatically adjusts the per-OSD consumption based on the total amount of RAM and the number of deployed OSDs. Important By default, the osd_memory_target_autotune parameter is set to true in the Red Hat Ceph Storage cluster. Syntax Cephadm starts with a fraction mgr/cephadm/autotune_memory_target_ratio , which defaults to 0.7 of the total RAM in the system, subtract off any memory consumed by non-autotuned daemons such as non-OSDS and for OSDs for which osd_memory_target_autotune is false, and then divide by the remaining OSDs. 
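Before relying on the auto-tuned values, you can confirm the ratio currently in effect and inspect the per-OSD limits that cephadm computed. This is a hedged sketch built from commands used elsewhere in this chapter; the service name might differ in your cluster. The exact formula and a worked example follow below.

ceph config get mgr mgr/cephadm/autotune_memory_target_ratio   # the fraction of RAM set aside for auto-sizing
ceph orch ps --service_name=osd                                # the MEM LIMIT column reflects the computed osd_memory_target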
The osd_memory_target parameter is calculated as follows: Syntax SPACE_ALLOCATED_FOR_OTHER_DAEMONS may optionally include the following daemon space allocations: Alertmanager: 1 GB Grafana: 1 GB Ceph Manager: 4 GB Ceph Monitor: 2 GB Node-exporter: 1 GB Prometheus: 1 GB For example, if a node has 24 OSDs and has 251 GB RAM space, then osd_memory_target is 7860684936 . The final targets are reflected in the configuration database with options. You can view the limits and the current memory consumed by each daemon from the ceph orch ps output under MEM LIMIT column. Note The default setting of osd_memory_target_autotune true is unsuitable for hyperconverged infrastructures where compute and Ceph storage services are colocated. In a hyperconverged infrastructure, the autotune_memory_target_ratio can be set to 0.2 to reduce the memory consumption of Ceph. Example You can manually set a specific memory target for an OSD in the storage cluster. Example You can manually set a specific memory target for an OSD host in the storage cluster. Syntax Example Note Enabling osd_memory_target_autotune overwrites existing manual OSD memory target settings. To prevent daemon memory from being tuned even when the osd_memory_target_autotune option or other similar options are enabled, set the _no_autotune_memory label on the host. Syntax You can exclude an OSD from memory autotuning by disabling the autotune option and setting a specific memory target. Example 6.4. Listing devices for Ceph OSD deployment You can check the list of available devices before deploying OSDs using the Ceph Orchestrator. The commands are used to print a list of devices discoverable by Cephadm. A storage device is considered available if all of the following conditions are met: The device must have no partitions. The device must not have any LVM state. The device must not be mounted. The device must not contain a file system. The device must not contain a Ceph BlueStore OSD. The device must be larger than 5 GB. Note Ceph will not provision an OSD on a device that is not available. Prerequisites A running Red Hat Ceph Storage cluster. Hosts are added to the cluster. All manager and monitor daemons are deployed. Procedure Log into the Cephadm shell: Example List the available devices to deploy OSDs: Syntax Example Using the --wide option provides all details relating to the device, including any reasons that the device might not be eligible for use as an OSD. This option does not support NVMe devices. Optional: To enable Health , Ident , and Fault fields in the output of ceph orch device ls , run the following commands: Note These fields are supported by libstoragemgmt library and currently supports SCSI, SAS, and SATA devices. As root user outside the Cephadm shell, check your hardware's compatibility with libstoragemgmt library to avoid unplanned interruption to services: Example In the output, you see the Health Status as Good with the respective SCSI VPD 0x83 ID. Note If you do not get this information, then enabling the fields might cause erratic behavior of devices. Log back into the Cephadm shell and enable libstoragemgmt support: Example Once this is enabled, ceph orch device ls gives the output of Health field as Good . Verification List the devices: Example 6.5. Zapping devices for Ceph OSD deployment You need to check the list of available devices before deploying OSDs. If there is no space available on the devices, you can clear the data on the devices by zapping them. Prerequisites A running Red Hat Ceph Storage cluster. 
Hosts are added to the cluster. All manager and monitor daemons are deployed. Procedure Log into the Cephadm shell: Example List the available devices to deploy OSDs: Syntax Example Clear the data of a device: Syntax Example Verification Verify the space is available on the device: Example You will see that the field under Available is Yes . Additional Resources See the Listing devices for Ceph OSD deployment section in the Red Hat Ceph Storage Operations Guide for more information. 6.6. Deploying Ceph OSDs on all available devices You can deploy all OSDs on all the available devices. Cephadm allows the Ceph Orchestrator to discover and deploy the OSDs on any available and unused storage device. To deploy OSDs on all available devices, run the command without the unmanaged parameter and then re-run the command with the parameter to prevent the creation of future OSDs. Note The deployment of OSDs with --all-available-devices is generally used for smaller clusters. For larger clusters, use the OSD specification file. Prerequisites A running Red Hat Ceph Storage cluster. Hosts are added to the cluster. All manager and monitor daemons are deployed. Procedure Log into the Cephadm shell: Example List the available devices to deploy OSDs: Syntax Example Deploy OSDs on all available devices: Example The effect of ceph orch apply is persistent, which means that the Orchestrator automatically finds the device, adds it to the cluster, and creates new OSDs. This occurs under the following conditions: New disks or drives are added to the system. Existing disks or drives are zapped. An OSD is removed and the devices are zapped. You can disable automatic creation of OSDs on all the available devices by using the --unmanaged parameter. Example Setting the parameter --unmanaged to true disables the creation of OSDs, and applying a new OSD service results in no change. Note The command ceph orch daemon add creates new OSDs, but does not add an OSD service. Verification List the service: Example View the details of the node and devices: Example Additional Resources See the Listing devices for Ceph OSD deployment section in the Red Hat Ceph Storage Operations Guide . 6.7. Deploying Ceph OSDs on specific devices and hosts You can deploy all the Ceph OSDs on specific devices and hosts using the Ceph Orchestrator. Prerequisites A running Red Hat Ceph Storage cluster. Hosts are added to the cluster. All manager and monitor daemons are deployed. Procedure Log into the Cephadm shell: Example List the available devices to deploy OSDs: Syntax Example Deploy OSDs on specific devices and hosts: Syntax Example To deploy OSDs on a raw physical device, without an LVM layer, use the --method raw option. Syntax Example Note If you have separate DB or WAL devices, the ratio of block to DB or WAL devices MUST be 1:1. Verification List the service: Example View the details of the node and devices: Example List the hosts, daemons, and processes: Syntax Example Additional Resources See the Listing devices for Ceph OSD deployment section in the Red Hat Ceph Storage Operations Guide . 6.8. Advanced service specifications and filters for deploying OSDs Service Specification of type OSD is a way to describe a cluster layout using the properties of disks. It gives the user an abstract way to tell Ceph which disks should turn into an OSD with the required configuration without knowing the specifics of device names and paths. For each device and each host, define a yaml file or a json file.
General settings for OSD specifications service_type : 'osd': This is mandatory to create OSDs. service_id : Use the service name or identification you prefer. A set of OSDs is created using the specification file. This name is used to manage all the OSDs together and represents an Orchestrator service. placement : This is used to define the hosts on which the OSDs need to be deployed. You can use one of the following options: host_pattern : '*' - A host name pattern used to select hosts. label : 'osd_host' - A label used in the hosts where OSDs need to be deployed. hosts : 'host01', 'host02' - An explicit list of host names where OSDs need to be deployed. selection of devices : The devices where OSDs are created. This allows you to split an OSD across different devices. You can create only BlueStore OSDs, which have three components: OSD data: contains all the OSD data WAL: BlueStore internal journal or write-ahead log DB: BlueStore internal metadata data_devices : Define the devices to deploy OSDs. In this case, OSDs are created in a collocated schema. You can use filters to select devices and folders. wal_devices : Define the devices used for WAL OSDs. You can use filters to select devices and folders. db_devices : Define the devices for DB OSDs. You can use the filters to select devices and folders. encrypted : An optional parameter to encrypt information on the OSD, which can be set to either True or False unmanaged : An optional parameter, set to False by default. You can set it to True if you do not want the Orchestrator to manage the OSD service. block_wal_size : User-defined value, in bytes. block_db_size : User-defined value, in bytes. osds_per_device : User-defined value for deploying more than one OSD per device. method : An optional parameter to specify if an OSD is created with an LVM layer or not. Set to raw if you want to create OSDs on raw physical devices that do not include an LVM layer. If you have separate DB or WAL devices, the ratio of block to DB or WAL devices MUST be 1:1.
Filters for specifying devices Filters are used in conjunction with the data_devices , wal_devices and db_devices parameters. Each filter is listed below with its description, syntax, and an example.
Model - Targets specific disks. You can get details of the model by running the lsblk -o NAME,FSTYPE,LABEL,MOUNTPOINT,SIZE,MODEL command or smartctl -i / DEVICE_PATH . Syntax: Model: DISK_MODEL_NAME Example: Model: MC-55-44-XZ
Vendor - Targets specific disks. Syntax: Vendor: DISK_VENDOR_NAME Example: Vendor: Vendor Cs
Size Specification - Includes disks of an exact size. Syntax: size: EXACT Example: size: '10G'
Size Specification - Includes disks whose size is within the range. Syntax: size: LOW:HIGH Example: size: '10G:40G'
Size Specification - Includes disks less than or equal to the given size. Syntax: size: :HIGH Example: size: ':10G'
Size Specification - Includes disks equal to or greater than the given size. Syntax: size: LOW: Example: size: '40G:'
Rotational - Rotational attribute of the disk. 1 matches all disks that are rotational and 0 matches all the disks that are non-rotational. If rotational=0, the OSD is configured with SSD or NVMe. If rotational=1, the OSD is configured with HDD. Syntax: rotational: 0 or 1 Example: rotational: 0
All - Considers all the available disks. Syntax: all: true Example: all: true
Limiter - When you have specified valid filters but want to limit the number of matching disks, you can use the 'limit' directive. It should be used only as a last resort. Syntax: limit: NUMBER Example: limit: 2
Note To create an OSD with non-collocated components in the same host, you have to specify the different types of devices used and the devices should be on the same host.
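Putting the settings and filters above together, a minimal specification might look like the following sketch. The service_id and the host label are illustrative only; the rotational filters mirror the collocated HDD-data, SSD-DB layout shown in the advanced scenarios later in this chapter.

cat > osd_spec.yaml << 'EOF'
service_type: osd
service_id: example_hdd_osds        # illustrative name
placement:
  label: 'osd_host'                 # deploy only on hosts carrying this label
data_devices:
  rotational: 1                     # HDDs hold the OSD data
db_devices:
  rotational: 0                     # SSDs or NVMe devices hold the BlueStore DB
EOF
ceph orch apply -i osd_spec.yaml --dry-run   # preview before applying, as described in the next section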
Note The devices used for deploying OSDs must be supported by libstoragemgmt . Additional Resources See the Deploying Ceph OSDs using the advanced specifications section in the Red Hat Ceph Storage Operations Guide . For more information on libstoragemgmt , see the Listing devices for Ceph OSD deployment section in the Red Hat Ceph Storage Operations Guide . 6.9. Deploying Ceph OSDs using advanced service specifications The service specification of type OSD is a way to describe a cluster layout using the properties of disks. It gives the user an abstract way to tell Ceph which disks should turn into an OSD with the required configuration without knowing the specifics of device names and paths. You can deploy the OSD for each device and each host by defining a yaml file or a json file. Prerequisites A running Red Hat Ceph Storage cluster. Hosts are added to the cluster. All manager and monitor daemons are deployed. Procedure On the monitor node, create the osd_spec.yaml file: Example Edit the osd_spec.yaml file to include the following details: Syntax Simple scenarios: In these cases, all the nodes have the same set-up. Example Example Simple scenario: In this case, all the nodes have the same setup with OSD devices created in raw mode, without an LVM layer. Example Advanced scenario: This would create the desired layout by using all HDDs as data_devices with two SSDs assigned as dedicated DB or WAL devices. The remaining SSDs are data_devices that have NVMe vendors assigned as dedicated DB or WAL devices. Example Advanced scenario with non-uniform nodes: This applies different OSD specs to different hosts depending on the host_pattern key. Example Advanced scenario with dedicated WAL and DB devices: Example Advanced scenario with multiple OSDs per device: Example For pre-created volumes, edit the osd_spec.yaml file to include the following details: Syntax Example For OSDs by ID, edit the osd_spec.yaml file to include the following details: Note This configuration is applicable for Red Hat Ceph Storage 5.3z1 and later releases. For earlier releases, use pre-created lvm. Syntax Example For OSDs by path, edit the osd_spec.yaml file to include the following details: Note This configuration is applicable for Red Hat Ceph Storage 5.3z1 and later releases. For earlier releases, use pre-created lvm. Syntax Example Mount the YAML file under a directory in the container: Example Navigate to the directory: Example Before deploying OSDs, do a dry run: Note This step gives a preview of the deployment, without deploying the daemons. Example Deploy OSDs using the service specification: Syntax Example Verification List the service: Example View the details of the node and devices: Example Additional Resources See the Advanced service specifications and filters for deploying OSDs section in the Red Hat Ceph Storage Operations Guide . 6.10. Removing the OSD daemons using the Ceph Orchestrator You can remove the OSD from a cluster by using Cephadm. Removing an OSD from a cluster involves two steps: Evacuating all placement groups (PGs) from the cluster. Removing the PG-free OSDs from the cluster. The --zap option removes the volume groups, logical volumes, and the LVM metadata. Note After removing OSDs, if the drives the OSDs were deployed on once again become available, cephadm might automatically try to deploy more OSDs on these drives if they match an existing drivegroup specification.
If you deployed the OSDs you are removing with a spec and do not want any new OSDs deployed on the drives after removal, modify the drivegroup specification before removal. While deploying OSDs, if you have used --all-available-devices option, set unmanaged: true to stop it from picking up new drives at all. For other deployments, modify the specification. See the Deploying Ceph OSDs using advanced service specifications for more details. Prerequisites A running Red Hat Ceph Storage cluster. Hosts are added to the cluster. Ceph Monitor, Ceph Manager and Ceph OSD daemons are deployed on the storage cluster. Procedure Log into the Cephadm shell: Example Check the device and the node from which the OSD has to be removed: Example Remove the OSD: Syntax Example Note If you remove the OSD from the storage cluster without an option, such as --replace , the device is removed from the storage cluster completely. If you want to use the same device for deploying OSDs, you have to first zap the device before adding it to the storage cluster. Optional: To remove multiple OSDs from a specific node, run the following command: Syntax Example Check the status of the OSD removal: Example When no PGs are left on the OSD, it is decommissioned and removed from the cluster. Verification Verify the details of the devices and the nodes from which the Ceph OSDs are removed: Example Additional Resources See the Deploying Ceph OSDs on all available devices section in the Red Hat Ceph Storage Operations Guide for more information. See the Deploying Ceph OSDs on specific devices and hosts section in the Red Hat Ceph Storage Operations Guide for more information. See the Zapping devices for Ceph OSD deployment section in the Red Hat Ceph Storage Operations Guide for more information on clearing space on devices. 6.11. Replacing the OSDs using the Ceph Orchestrator When disks fail, you can replace the physical storage device and reuse the same OSD ID to avoid having to reconfigure the CRUSH map. You can replace the OSDs from the cluster using the --replace option. Note If you want to replace a single OSD, see Deploying Ceph OSDs on specific devices and hosts . If you want to deploy OSDs on all available devices, see Deploying Ceph OSDs on all available devices . This option preserves the OSD ID using the ceph orch rm command. The OSD is not permanently removed from the CRUSH hierarchy, but is assigned the destroyed flag. This flag is used to determine the OSD IDs that can be reused in the OSD deployment. The destroyed flag is used to determine which OSD id is reused in the OSD deployment. Similar to rm command, replacing an OSD from a cluster involves two steps: Evacuating all placement groups (PGs) from the cluster. Removing the PG-free OSD from the cluster. If you use OSD specification for deployment, your newly added disk is assigned the OSD ID of their replaced counterparts. Note After removing OSDs, if the drives the OSDs were deployed on once again become available, cephadm might automatically try to deploy more OSDs on these drives if they match an existing drivegroup specification. If you deployed the OSDs you are removing with a spec and do not want any new OSDs deployed on the drives after removal, modify the drivegroup specification before removal. While deploying OSDs, if you have used --all-available-devices option, set unmanaged: true to stop it from picking up new drives at all. For other deployments, modify the specification. 
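For deployments that used the --all-available-devices option, the unmanaged flag is set with the same command shown earlier in this chapter:

ceph orch apply osd --all-available-devices --unmanaged=true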
See the Deploying Ceph OSDs using advanced service specifications for more details. Prerequisites A running Red Hat Ceph Storage cluster. Hosts are added to the cluster. Monitor, Manager, and OSD daemons are deployed on the storage cluster. A new OSD that replaces the removed OSD must be created on the same host from which the OSD was removed. Procedure Log into the Cephadm shell: Example Ensure that you dump and save a mapping of your OSD configurations for future reference: Example Check the device and the node from which the OSD has to be replaced: Example Replace the OSD: Important If the storage cluster has health_warn or other errors associated with it, check and try to fix any errors before replacing the OSD to avoid data loss. Syntax The --force option can be used when there are ongoing operations on the storage cluster. Example Check the status of the OSD replacement: Example Stop the orchestrator to apply any existing OSD specification: Example Zap the OSD devices that have been removed: Example Resume the Orchestrator from pause mode: Example Check the status of the OSD replacement: Example Verification Verify the details of the devices and the nodes from which the Ceph OSDs are replaced: Example You can see an OSD with the same id as the one you replaced running on the same host. Verify that the db_device for the newly deployed OSDs is the replaced db_device : Example Additional Resources See the Deploying Ceph OSDs on all available devices section in the Red Hat Ceph Storage Operations Guide for more information. See the Deploying Ceph OSDs on specific devices and hosts section in the Red Hat Ceph Storage Operations Guide for more information. 6.12. Replacing the OSDs with pre-created LVM After purging the OSD with the ceph-volume lvm zap command, if the directory is not present, then you can replace the OSDs with the OSD service specification file with the pre-created LVM. Prerequisites A running Red Hat Ceph Storage cluster. Failed OSD Procedure Log into the Cephadm shell: Example Remove the OSD: Syntax Example Verify the OSD is destroyed: Example Zap and remove the OSD using the ceph-volume command: Syntax Example Check the OSD topology: Example Recreate the OSD with a specification file corresponding to that specific OSD topology: Example Apply the updated specification file: Example Verify the OSD is back: Example 6.13. Replacing the OSDs in a non-colocated scenario When an OSD fails in a non-colocated scenario, you can replace the WAL/DB devices. The procedure is the same for DB and WAL devices. You need to edit the paths under db_devices for DB devices and paths under wal_devices for WAL devices. Prerequisites A running Red Hat Ceph Storage cluster. Daemons are non-colocated. Failed OSD Procedure Identify the devices in the cluster: Example Log into the Cephadm shell: Example Identify the OSDs and their DB device: Example In the osds.yaml file, set the unmanaged parameter to true , otherwise cephadm redeploys the OSDs: Example Apply the updated specification file: Example Check the status: Example Remove the OSDs. Ensure that you use the --zap option to remove the backend services and the --replace option to retain the OSD IDs: Example Check the status: Example Edit the osds.yaml specification file to change the unmanaged parameter to false and replace the path to the DB device if it has changed after the device got physically replaced: Example In the above example, /dev/sdh is replaced with /dev/sde .
Important If you use the same host specification file to replace the faulty DB device on a single OSD node, modify the host_pattern option to specify only the OSD node, else the deployment fails and you cannot find the new DB device on other hosts. Reapply the specification file with the --dry-run option to ensure the OSDs shall be deployed with the new DB device: Example Apply the specification file: Example Check the OSDs are redeployed: Example Verification From the OSD host where the OSDS are redeployed, verify if they are on the new DB device: Example 6.14. Stopping the removal of the OSDs using the Ceph Orchestrator You can stop the removal of only the OSDs that are queued for removal. This resets the initial state of the OSD and takes it off the removal queue. If the OSD is in the process of removal, then you cannot stop the process. Prerequisites A running Red Hat Ceph Storage cluster. Hosts are added to the cluster. Monitor, Manager and OSD daemons are deployed on the cluster. Remove OSD process initiated. Procedure Log into the Cephadm shell: Example Check the device and the node from which the OSD was initiated to be removed: Example Stop the removal of the queued OSD: Syntax Example Check the status of the OSD removal: Example Verification Verify the details of the devices and the nodes from which the Ceph OSDs were queued for removal: Example Additional Resources See Removing the OSD daemons using the Ceph Orchestrator section in the Red Hat Ceph Storage Operations Guide for more information. 6.15. Activating the OSDs using the Ceph Orchestrator You can activate the OSDs in the cluster in cases where the operating system of the host was reinstalled. Prerequisites A running Red Hat Ceph Storage cluster. Hosts are added to the cluster. Monitor, Manager and OSD daemons are deployed on the storage cluster. Procedure Log into the Cephadm shell: Example After the operating system of the host is reinstalled, activate the OSDs: Syntax Example Verification List the service: Example List the hosts, daemons, and processes: Syntax Example 6.16. Observing the data migration When you add or remove an OSD to the CRUSH map, Ceph begins rebalancing the data by migrating placement groups to the new or existing OSD(s). You can observe the data migration using ceph-w command. Prerequisites A running Red Hat Ceph Storage cluster. Recently added or removed an OSD. Procedure To observe the data migration: Example Watch as the placement group states change from active+clean to active, some degraded objects , and finally active+clean when migration completes. To exit the utility, press Ctrl + C . 6.17. Recalculating the placement groups Placement groups (PGs) define the spread of any pool data across the available OSDs. A placement group is built upon the given redundancy algorithm to be used. For a 3-way replication, the redundancy is defined to use three different OSDs. For erasure-coded pools, the number of OSDs to use is defined by the number of chunks. When defining a pool the number of placement groups defines the grade of granularity the data is spread with across all available OSDs. The higher the number the better the equalization of capacity load can be. However, since handling the placement groups is also important in case of reconstruction of data, the number is significant to be carefully chosen upfront. To support calculation a tool is available to produce agile environments. During the lifetime of a storage cluster a pool may grow above the initially anticipated limits. 
With the growing number of drives a recalculation is recommended. The number of placement groups per OSD should be around 100. When adding more OSDs to the storage cluster the number of PGs per OSD will lower over time. Starting with 120 drives initially in the storage cluster and setting the pg_num of the pool to 4000 will end up in 100 PGs per OSD, given with the replication factor of three. Over time, when growing to ten times the number of OSDs, the number of PGs per OSD will go down to ten only. Because a small number of PGs per OSD will tend to an unevenly distributed capacity, consider adjusting the PGs per pool. Adjusting the number of placement groups can be done online. Recalculating is not only a recalculation of the PG numbers, but will involve data relocation, which will be a lengthy process. However, the data availability will be maintained at any time. Very high numbers of PGs per OSD should be avoided, because reconstruction of all PGs on a failed OSD will start at once. A high number of IOPS is required to perform reconstruction in a timely manner, which might not be available. This would lead to deep I/O queues and high latency rendering the storage cluster unusable or will result in long healing times. Additional Resources See the PG calculator for calculating the values by a given use case. See the Erasure Code Pools chapter in the Red Hat Ceph Storage Strategies Guide for more information.
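As a quick sanity check of the figures quoted in this section, the PGs-per-OSD ratio can be worked out directly from the pool's pg_num, the replica count, and the OSD count. This is a sketch only; substitute your own values.

# PGs per OSD = pg_num * replicas / number of OSDs
echo "4000 * 3 / 120" | bc     # 100 PGs per OSD with 120 drives
echo "4000 * 3 / 1200" | bc    # drops to 10 when the cluster grows tenfold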
[ "ceph config set osd osd_memory_target_autotune true", "osd_memory_target = TOTAL_RAM_OF_THE_OSD * (1048576) * (autotune_memory_target_ratio) / NUMBER_OF_OSDS_IN_THE_OSD_NODE - ( SPACE_ALLOCATED_FOR_OTHER_DAEMONS )", "ceph config set mgr mgr/cephadm/autotune_memory_target_ratio 0.2", "ceph config set osd.123 osd_memory_target 7860684936", "ceph config set osd/host: HOSTNAME osd_memory_target TARGET_BYTES", "ceph config set osd/host:host01 osd_memory_target 1000000000", "ceph orch host label add HOSTNAME _no_autotune_memory", "ceph config set osd.123 osd_memory_target_autotune false ceph config set osd.123 osd_memory_target 16G", "cephadm shell", "ceph orch device ls [--hostname= HOSTNAME_1 HOSTNAME_2 ] [--wide] [--refresh]", "ceph orch device ls --wide --refresh", "cephadm shell lsmcli ldl", "cephadm shell ceph config set mgr mgr/cephadm/device_enhanced_scan true", "ceph orch device ls", "cephadm shell", "ceph orch device ls [--hostname= HOSTNAME_1 HOSTNAME_2 ] [--wide] [--refresh]", "ceph orch device ls --wide --refresh", "ceph orch device zap HOSTNAME FILE_PATH --force", "ceph orch device zap host02 /dev/sdb --force", "ceph orch device ls", "cephadm shell", "ceph orch device ls [--hostname= HOSTNAME_1 HOSTNAME_2 ] [--wide] [--refresh]", "ceph orch device ls --wide --refresh", "ceph orch apply osd --all-available-devices", "ceph orch apply osd --all-available-devices --unmanaged=true", "ceph orch ls", "ceph osd tree", "cephadm shell", "ceph orch device ls [--hostname= HOSTNAME_1 HOSTNAME_2 ] [--wide] [--refresh]", "ceph orch device ls --wide --refresh", "ceph orch daemon add osd HOSTNAME : DEVICE_PATH", "ceph orch daemon add osd host02:/dev/sdb", "ceph orch daemon add osd --method raw HOSTNAME : DEVICE_PATH", "ceph orch daemon add osd --method raw host02:/dev/sdb", "ceph orch ls osd", "ceph osd tree", "ceph orch ps --service_name= SERVICE_NAME", "ceph orch ps --service_name=osd", "touch osd_spec.yaml", "service_type: osd service_id: SERVICE_ID placement: host_pattern: '*' # optional data_devices: # optional model: DISK_MODEL_NAME # optional paths: - / DEVICE_PATH osds_per_device: NUMBER_OF_DEVICES # optional db_devices: # optional size: # optional all: true # optional paths: - / DEVICE_PATH encrypted: true", "service_type: osd service_id: osd_spec_default placement: host_pattern: '*' data_devices: all: true paths: - /dev/sdb encrypted: true", "service_type: osd service_id: osd_spec_default placement: host_pattern: '*' data_devices: size: '80G' db_devices: size: '40G:' paths: - /dev/sdc", "service_type: osd service_id: all-available-devices encrypted: \"true\" method: raw placement: host_pattern: \"*\" data_devices: all: \"true\"", "service_type: osd service_id: osd_spec_hdd placement: host_pattern: '*' data_devices: rotational: 0 db_devices: model: Model-name limit: 2 --- service_type: osd service_id: osd_spec_ssd placement: host_pattern: '*' data_devices: model: Model-name db_devices: vendor: Vendor-name", "service_type: osd service_id: osd_spec_node_one_to_five placement: host_pattern: 'node[1-5]' data_devices: rotational: 1 db_devices: rotational: 0 --- service_type: osd service_id: osd_spec_six_to_ten placement: host_pattern: 'node[6-10]' data_devices: model: Model-name db_devices: model: Model-name", "service_type: osd service_id: osd_using_paths placement: hosts: - host01 - host02 data_devices: paths: - /dev/sdb db_devices: paths: - /dev/sdc wal_devices: paths: - /dev/sdd", "service_type: osd service_id: multiple_osds placement: hosts: - host01 - host02 osds_per_device: 4 
data_devices: paths: - /dev/sdb", "service_type: osd service_id: SERVICE_ID placement: hosts: - HOSTNAME data_devices: # optional model: DISK_MODEL_NAME # optional paths: - / DEVICE_PATH db_devices: # optional size: # optional all: true # optional paths: - / DEVICE_PATH", "service_type: osd service_id: osd_spec placement: hosts: - machine1 data_devices: paths: - /dev/vg_hdd/lv_hdd db_devices: paths: - /dev/vg_nvme/lv_nvme", "service_type: osd service_id: OSD_BY_ID_HOSTNAME placement: hosts: - HOSTNAME data_devices: # optional model: DISK_MODEL_NAME # optional paths: - / DEVICE_PATH db_devices: # optional size: # optional all: true # optional paths: - / DEVICE_PATH", "service_type: osd service_id: osd_by_id_host01 placement: hosts: - host01 data_devices: paths: - /dev/disk/by-id/scsi-0QEMU_QEMU_HARDDISK_drive-scsi0-0-0-5 db_devices: paths: - /dev/disk/by-id/nvme-nvme.1b36-31323334-51454d55204e564d65204374726c-00000001", "service_type: osd service_id: OSD_BY_PATH_HOSTNAME placement: hosts: - HOSTNAME data_devices: # optional model: DISK_MODEL_NAME # optional paths: - / DEVICE_PATH db_devices: # optional size: # optional all: true # optional paths: - / DEVICE_PATH", "service_type: osd service_id: osd_by_path_host01 placement: hosts: - host01 data_devices: paths: - /dev/disk/by-path/pci-0000:0d:00.0-scsi-0:0:0:4 db_devices: paths: - /dev/disk/by-path/pci-0000:00:02.0-nvme-1", "cephadm shell --mount osd_spec.yaml:/var/lib/ceph/osd/osd_spec.yaml", "cd /var/lib/ceph/osd/", "ceph orch apply -i osd_spec.yaml --dry-run", "ceph orch apply -i FILE_NAME .yml", "ceph orch apply -i osd_spec.yaml", "ceph orch ls osd", "ceph osd tree", "cephadm shell", "ceph osd tree", "ceph orch osd rm OSD_ID [--replace] [--force] --zap", "ceph orch osd rm 0 --zap", "ceph orch osd rm OSD_ID OSD_ID --zap", "ceph orch osd rm 2 5 --zap", "ceph orch osd rm status OSD HOST STATE PGS REPLACE FORCE ZAP DRAIN STARTED AT 9 host01 done, waiting for purge 0 False False True 2023-06-06 17:50:50.525690 10 host03 done, waiting for purge 0 False False True 2023-06-06 17:49:38.731533 11 host02 done, waiting for purge 0 False False True 2023-06-06 17:48:36.641105", "ceph osd tree", "cephadm shell", "ceph osd metadata -f plain | grep device_paths \"device_paths\": \"sde=/dev/disk/by-path/pci-0000:03:00.0-scsi-0:0:0:1,sdi=/dev/disk/by-path/pci-0000:03:00.0-scsi-0:1:0:1\", \"device_paths\": \"sde=/dev/disk/by-path/pci-0000:03:00.0-scsi-0:0:0:1,sdf=/dev/disk/by-path/pci-0000:03:00.0-scsi-0:1:0:1\", \"device_paths\": \"sdd=/dev/disk/by-path/pci-0000:03:00.0-scsi-0:0:0:2,sdg=/dev/disk/by-path/pci-0000:03:00.0-scsi-0:1:0:2\", \"device_paths\": \"sdd=/dev/disk/by-path/pci-0000:03:00.0-scsi-0:0:0:2,sdh=/dev/disk/by-path/pci-0000:03:00.0-scsi-0:1:0:2\", \"device_paths\": \"sdd=/dev/disk/by-path/pci-0000:03:00.0-scsi-0:0:0:2,sdk=/dev/disk/by-path/pci-0000:03:00.0-scsi-0:1:0:2\", \"device_paths\": \"sdc=/dev/disk/by-path/pci-0000:03:00.0-scsi-0:0:0:3,sdl=/dev/disk/by-path/pci-0000:03:00.0-scsi-0:1:0:3\", \"device_paths\": \"sdc=/dev/disk/by-path/pci-0000:03:00.0-scsi-0:0:0:3,sdj=/dev/disk/by-path/pci-0000:03:00.0-scsi-0:1:0:3\", \"device_paths\": \"sdc=/dev/disk/by-path/pci-0000:03:00.0-scsi-0:0:0:3,sdm=/dev/disk/by-path/pci-0000:03:00.0-scsi-0:1:0:3\", [.. 
output omitted ..]", "ceph osd tree", "ceph orch osd rm OSD_ID --replace [--force]", "ceph orch osd rm 0 --replace", "ceph orch osd rm status", "ceph orch pause ceph orch status Backend: cephadm Available: Yes Paused: Yes", "ceph orch device zap node.example.com /dev/sdi --force zap successful for /dev/sdi on node.example.com ceph orch device zap node.example.com /dev/sdf --force zap successful for /dev/sdf on node.example.com", "ceph orch resume", "ceph osd tree ID CLASS WEIGHT TYPE NAME STATUS REWEIGHT PRI-AFF -1 0.77112 root default -3 0.77112 host node 0 hdd 0.09639 osd.0 up 1.00000 1.00000 1 hdd 0.09639 osd.1 up 1.00000 1.00000 2 hdd 0.09639 osd.2 up 1.00000 1.00000 3 hdd 0.09639 osd.3 up 1.00000 1.00000 4 hdd 0.09639 osd.4 up 1.00000 1.00000 5 hdd 0.09639 osd.5 up 1.00000 1.00000 6 hdd 0.09639 osd.6 up 1.00000 1.00000 7 hdd 0.09639 osd.7 up 1.00000 1.00000 [.. output omitted ..]", "ceph osd tree", "ceph osd metadata 0 | grep bluefs_db_devices \"bluefs_db_devices\": \"nvme0n1\", ceph osd metadata 1 | grep bluefs_db_devices \"bluefs_db_devices\": \"nvme0n1\",", "cephadm shell", "ceph orch osd rm OSD_ID [--replace]", "ceph orch osd rm 8 --replace Scheduled OSD(s) for removal", "ceph osd tree ID CLASS WEIGHT TYPE NAME STATUS REWEIGHT PRI-AFF -1 0.32297 root default -9 0.05177 host host10 3 hdd 0.01520 osd.3 up 1.00000 1.00000 13 hdd 0.02489 osd.13 up 1.00000 1.00000 17 hdd 0.01169 osd.17 up 1.00000 1.00000 -13 0.05177 host host11 2 hdd 0.01520 osd.2 up 1.00000 1.00000 15 hdd 0.02489 osd.15 up 1.00000 1.00000 19 hdd 0.01169 osd.19 up 1.00000 1.00000 -7 0.05835 host host12 20 hdd 0.01459 osd.20 up 1.00000 1.00000 21 hdd 0.01459 osd.21 up 1.00000 1.00000 22 hdd 0.01459 osd.22 up 1.00000 1.00000 23 hdd 0.01459 osd.23 up 1.00000 1.00000 -5 0.03827 host host04 1 hdd 0.01169 osd.1 up 1.00000 1.00000 6 hdd 0.01129 osd.6 up 1.00000 1.00000 7 hdd 0.00749 osd.7 up 1.00000 1.00000 9 hdd 0.00780 osd.9 up 1.00000 1.00000 -3 0.03816 host host05 0 hdd 0.01169 osd.0 up 1.00000 1.00000 8 hdd 0.01129 osd.8 destroyed 0 1.00000 12 hdd 0.00749 osd.12 up 1.00000 1.00000 16 hdd 0.00769 osd.16 up 1.00000 1.00000 -15 0.04237 host host06 5 hdd 0.01239 osd.5 up 1.00000 1.00000 10 hdd 0.01540 osd.10 up 1.00000 1.00000 11 hdd 0.01459 osd.11 up 1.00000 1.00000 -11 0.04227 host host07 4 hdd 0.01239 osd.4 up 1.00000 1.00000 14 hdd 0.01529 osd.14 up 1.00000 1.00000 18 hdd 0.01459 osd.18 up 1.00000 1.00000", "ceph-volume lvm zap --osd-id OSD_ID", "ceph-volume lvm zap --osd-id 8 Zapping: /dev/vg1/data-lv2 Closing encrypted path /dev/mapper/l4D6ql-Prji-IzH4-dfhF-xzuf-5ETl-jNRcXC Running command: /usr/sbin/cryptsetup remove /dev/mapper/l4D6ql-Prji-IzH4-dfhF-xzuf-5ETl-jNRcXC Running command: /usr/bin/dd if=/dev/zero of=/dev/vg1/data-lv2 bs=1M count=10 conv=fsync stderr: 10+0 records in 10+0 records out stderr: 10485760 bytes (10 MB, 10 MiB) copied, 0.034742 s, 302 MB/s Zapping successful for OSD: 8", "ceph-volume lvm list", "cat osd.yml service_type: osd service_id: osd_service placement: hosts: - host03 data_devices: paths: - /dev/vg1/data-lv2 db_devices: paths: - /dev/vg1/db-lv1", "ceph orch apply -i osd.yml Scheduled osd.osd_service update", "ceph -s ceph osd tree", "lsblk NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT sda 8:0 0 20G 0 disk ├─sda1 8:1 0 1G 0 part /boot └─sda2 8:2 0 19G 0 part ├─rhel-root 253:0 0 17G 0 lvm / └─rhel-swap 253:1 0 2G 0 lvm [SWAP] sdb 8:16 0 10G 0 disk └─ceph--5726d3e9--4fdb--4eda--b56a--3e0df88d663f-osd--block--3ceb89ec--87ef--46b4--99c6--2a56bac09ff0 253:2 0 10G 0 lvm sdc 8:32 0 10G 0 disk 
└─ceph--d7c9ab50--f5c0--4be0--a8fd--e0313115f65c-osd--block--37c370df--1263--487f--a476--08e28bdbcd3c 253:4 0 10G 0 lvm sdd 8:48 0 10G 0 disk ├─ceph--1774f992--44f9--4e78--be7b--b403057cf5c3-osd--db--31b20150--4cbc--4c2c--9c8f--6f624f3bfd89 253:7 0 2.5G 0 lvm └─ceph--1774f992--44f9--4e78--be7b--b403057cf5c3-osd--db--1bee5101--dbab--4155--a02c--e5a747d38a56 253:9 0 2.5G 0 lvm sde 8:64 0 10G 0 disk sdf 8:80 0 10G 0 disk └─ceph--412ee99b--4303--4199--930a--0d976e1599a2-osd--block--3a99af02--7c73--4236--9879--1fad1fe6203d 253:6 0 10G 0 lvm sdg 8:96 0 10G 0 disk └─ceph--316ca066--aeb6--46e1--8c57--f12f279467b4-osd--block--58475365--51e7--42f2--9681--e0c921947ae6 253:8 0 10G 0 lvm sdh 8:112 0 10G 0 disk ├─ceph--d7064874--66cb--4a77--a7c2--8aa0b0125c3c-osd--db--0dfe6eca--ba58--438a--9510--d96e6814d853 253:3 0 5G 0 lvm └─ceph--d7064874--66cb--4a77--a7c2--8aa0b0125c3c-osd--db--26b70c30--8817--45de--8843--4c0932ad2429 253:5 0 5G 0 lvm sr0", "cephadm shell", "ceph-volume lvm list /dev/sdh ====== osd.2 ======= [db] /dev/ceph-d7064874-66cb-4a77-a7c2-8aa0b0125c3c/osd-db-0dfe6eca-ba58-438a-9510-d96e6814d853 block device /dev/ceph-5726d3e9-4fdb-4eda-b56a-3e0df88d663f/osd-block-3ceb89ec-87ef-46b4-99c6-2a56bac09ff0 block uuid GkWLoo-f0jd-Apj2-Zmwj-ce0h-OY6J-UuW8aD cephx lockbox secret cluster fsid fa0bd9dc-e4c4-11ed-8db4-001a4a00046e cluster name ceph crush device class db device /dev/ceph-d7064874-66cb-4a77-a7c2-8aa0b0125c3c/osd-db-0dfe6eca-ba58-438a-9510-d96e6814d853 db uuid 6gSPoc-L39h-afN3-rDl6-kozT-AX9S-XR20xM encrypted 0 osd fsid 3ceb89ec-87ef-46b4-99c6-2a56bac09ff0 osd id 2 osdspec affinity non-colocated type db vdo 0 devices /dev/sdh ====== osd.5 ======= [db] /dev/ceph-d7064874-66cb-4a77-a7c2-8aa0b0125c3c/osd-db-26b70c30-8817-45de-8843-4c0932ad2429 block device /dev/ceph-d7c9ab50-f5c0-4be0-a8fd-e0313115f65c/osd-block-37c370df-1263-487f-a476-08e28bdbcd3c block uuid Eay3I7-fcz5-AWvp-kRcI-mJaH-n03V-Zr0wmJ cephx lockbox secret cluster fsid fa0bd9dc-e4c4-11ed-8db4-001a4a00046e cluster name ceph crush device class db device /dev/ceph-d7064874-66cb-4a77-a7c2-8aa0b0125c3c/osd-db-26b70c30-8817-45de-8843-4c0932ad2429 db uuid mwSohP-u72r-DHcT-BPka-piwA-lSwx-w24N0M encrypted 0 osd fsid 37c370df-1263-487f-a476-08e28bdbcd3c osd id 5 osdspec affinity non-colocated type db vdo 0 devices /dev/sdh", "cat osds.yml service_type: osd service_id: non-colocated unmanaged: true placement: host_pattern: 'ceph*' data_devices: paths: - /dev/sdb - /dev/sdc - /dev/sdf - /dev/sdg db_devices: paths: - /dev/sdd - /dev/sdh", "ceph orch apply -i osds.yml Scheduled osd.non-colocated update", "ceph orch ls NAME PORTS RUNNING REFRESHED AGE PLACEMENT alertmanager ?:9093,9094 1/1 9m ago 4d count:1 crash 3/4 4d ago 4d * grafana ?:3000 1/1 9m ago 4d count:1 mgr 1/2 4d ago 4d count:2 mon 3/5 4d ago 4d count:5 node-exporter ?:9100 3/4 4d ago 4d * osd.non-colocated 8 4d ago 5s <unmanaged> prometheus ?:9095 1/1 9m ago 4d count:1", "ceph orch osd rm 2 5 --zap --replace Scheduled OSD(s) for removal", "ceph osd df tree | egrep -i \"ID|host02|osd.2|osd.5\" ID CLASS WEIGHT REWEIGHT SIZE RAW USE DATA OMAP META AVAIL %USE VAR PGS STATUS TYPE NAME -5 0.04877 - 55 GiB 15 GiB 4.1 MiB 0 B 60 MiB 40 GiB 27.27 1.17 - host02 2 hdd 0.01219 1.00000 15 GiB 5.0 GiB 996 KiB 0 B 15 MiB 10 GiB 33.33 1.43 0 destroyed osd.2 5 hdd 0.01219 1.00000 15 GiB 5.0 GiB 1.0 MiB 0 B 15 MiB 10 GiB 33.33 1.43 0 destroyed osd.5", "cat osds.yml service_type: osd service_id: non-colocated unmanaged: false placement: host_pattern: 'ceph01*' data_devices: paths: - /dev/sdb - /dev/sdc 
- /dev/sdf - /dev/sdg db_devices: paths: - /dev/sdd - /dev/sde", "ceph orch apply -i osds.yml --dry-run WARNING! Dry-Runs are snapshots of a certain point in time and are bound to the current inventory setup. If any of these conditions change, the preview will be invalid. Please make sure to have a minimal timeframe between planning and applying the specs. #################### SERVICESPEC PREVIEWS #################### +---------+------+--------+-------------+ |SERVICE |NAME |ADD_TO |REMOVE_FROM | +---------+------+--------+-------------+ +---------+------+--------+-------------+ ################ OSDSPEC PREVIEWS ################ +---------+-------+-------+----------+----------+-----+ |SERVICE |NAME |HOST |DATA |DB |WAL | +---------+-------+-------+----------+----------+-----+ |osd |non-colocated |host02 |/dev/sdb |/dev/sde |- | |osd |non-colocated |host02 |/dev/sdc |/dev/sde |- | +---------+-------+-------+----------+----------+-----+", "ceph orch apply -i osds.yml Scheduled osd.non-colocated update", "ceph osd df tree | egrep -i \"ID|host02|osd.2|osd.5\" ID CLASS WEIGHT REWEIGHT SIZE RAW USE DATA OMAP META AVAIL %USE VAR PGS STATUS TYPE NAME -5 0.04877 - 55 GiB 15 GiB 4.5 MiB 0 B 60 MiB 40 GiB 27.27 1.17 - host host02 2 hdd 0.01219 1.00000 15 GiB 5.0 GiB 1.1 MiB 0 B 15 MiB 10 GiB 33.33 1.43 0 up osd.2 5 hdd 0.01219 1.00000 15 GiB 5.0 GiB 1.1 MiB 0 B 15 MiB 10 GiB 33.33 1.43 0 up osd.5", "ceph-volume lvm list /dev/sde ====== osd.2 ======= [db] /dev/ceph-15ce813a-8a4c-46d9-ad99-7e0845baf15e/osd-db-1998a02e-5e67-42a9-b057-e02c22bbf461 block device /dev/ceph-a4afcb78-c804-4daf-b78f-3c7ad1ed0379/osd-block-564b3d2f-0f85-4289-899a-9f98a2641979 block uuid ITPVPa-CCQ5-BbFa-FZCn-FeYt-c5N4-ssdU41 cephx lockbox secret cluster fsid fa0bd9dc-e4c4-11ed-8db4-001a4a00046e cluster name ceph crush device class db device /dev/ceph-15ce813a-8a4c-46d9-ad99-7e0845baf15e/osd-db-1998a02e-5e67-42a9-b057-e02c22bbf461 db uuid HF1bYb-fTK7-0dcB-CHzW-xvNn-dCym-KKdU5e encrypted 0 osd fsid 564b3d2f-0f85-4289-899a-9f98a2641979 osd id 2 osdspec affinity non-colocated type db vdo 0 devices /dev/sde ====== osd.5 ======= [db] /dev/ceph-15ce813a-8a4c-46d9-ad99-7e0845baf15e/osd-db-6c154191-846d-4e63-8c57-fc4b99e182bd block device /dev/ceph-b37c8310-77f9-4163-964b-f17b4c29c537/osd-block-b42a4f1f-8e19-4416-a874-6ff5d305d97f block uuid 0LuPoz-ao7S-UL2t-BDIs-C9pl-ct8J-xh5ep4 cephx lockbox secret cluster fsid fa0bd9dc-e4c4-11ed-8db4-001a4a00046e cluster name ceph crush device class db device /dev/ceph-15ce813a-8a4c-46d9-ad99-7e0845baf15e/osd-db-6c154191-846d-4e63-8c57-fc4b99e182bd db uuid SvmXms-iWkj-MTG7-VnJj-r5Mo-Moiw-MsbqVD encrypted 0 osd fsid b42a4f1f-8e19-4416-a874-6ff5d305d97f osd id 5 osdspec affinity non-colocated type db vdo 0 devices /dev/sde", "cephadm shell", "ceph osd tree", "ceph orch osd rm stop OSD_ID", "ceph orch osd rm stop 0", "ceph orch osd rm status", "ceph osd tree", "cephadm shell", "ceph cephadm osd activate HOSTNAME", "ceph cephadm osd activate host03", "ceph orch ls", "ceph orch ps --service_name= SERVICE_NAME", "ceph orch ps --service_name=osd", "ceph -w" ]
https://docs.redhat.com/en/documentation/red_hat_ceph_storage/8/html/operations_guide/management-of-osds-using-the-ceph-orchestrator
Chapter 8. Updating and Migrating Identity Management
Chapter 8. Updating and Migrating Identity Management 8.1. Updating Identity Management You can use the yum utility to update the Identity Management packages on the system. Warning Before installing an update, make sure you have applied all previously released errata relevant to the RHEL system. For more information, see the How do I apply package updates to my RHEL system? KCS article. Additionally, if a new minor Red Hat Enterprise Linux version is available, such as 7.3, yum upgrades the Identity Management server or client to this version. Note This section does not describe migrating Identity Management from Red Hat Enterprise Linux 6 to Red Hat Enterprise Linux 7. If you want to migrate, see Section 8.2, "Migrating Identity Management from Red Hat Enterprise Linux 6 to Version 7" . 8.1.1. Considerations for Updating Identity Management After you update the Identity Management packages on at least one server, all other servers in the topology receive the updated schema, even if you do not update their packages. This ensures that any new entries which use the new schema can be replicated among the other servers. Downgrading Identity Management packages is not supported. Important Do not run the yum downgrade command on any of the ipa-* packages. Red Hat recommends upgrading to the next version only. For example, if you want to upgrade to Identity Management for Red Hat Enterprise Linux 7.4, we recommend upgrading from Identity Management for Red Hat Enterprise Linux 7.3. Upgrading from earlier versions can cause problems. 8.1.2. Using yum to Update the Identity Management Packages To update all Identity Management packages on a server or client: Warning When upgrading multiple Identity Management servers, wait at least 10 minutes between each upgrade. When two or more servers are upgraded simultaneously or with only short intervals between the upgrades, there is not enough time to replicate the post-upgrade data changes throughout the topology, which can result in conflicting replication events. Related Information For details on using the yum utility, see Yum in the System Administrator's Guide . Important Due to CVE-2014-3566 , the Secure Socket Layer version 3 (SSLv3) protocol needs to be disabled in the mod_nss module. You can ensure that by following these steps: Edit the /etc/httpd/conf.d/nss.conf file and set the NSSProtocol parameter to TLSv1.0 (for backward compatibility), TLSv1.1 , and TLSv1.2 . Restart the httpd service. Note that Identity Management in Red Hat Enterprise Linux 7 automatically performs the above steps when the yum update ipa-* command is launched to upgrade the main packages.
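A hedged sketch of the update itself, run as root on each server with the recommended 10-minute gap between servers. The package query simply records the versions before and after; on a client only the ipa-client package may be installed.

rpm -q ipa-server ipa-client    # note the installed versions before the update
yum update ipa-*
rpm -q ipa-server ipa-client    # confirm the new versions after the update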
[ "yum update ipa-*", "NSSProtocol TLSv1.0,TLSv1.1,TLSv1.2", "systemctl restart httpd.service" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/linux_domain_identity_authentication_and_policy_guide/updating-migrating
Chapter 7. Adjusting IdM Directory Server performance
Chapter 7. Adjusting IdM Directory Server performance You can tune the performance of Identity Management's databases by adjusting LDAP attributes controlling the Directory Server's resources and behavior. To adjust how the Directory Server caches data , see the following procedures: Adjusting the entry cache size Adjusting the database index cache size Re-enabling entry and database cache auto-sizing Adjusting the DN cache size Adjusting the normalized DN cache size To adjust the Directory Server's resource limits , see the following procedures: Adjusting the maximum message size Adjusting the maximum number of file descriptors Adjusting the connection backlog size Adjusting the maximum number of database locks Disabling the Transparent Huge Pages feature To adjust timeouts that have the most influence on performance, see the following procedures: Adjusting the input/output block timeout Adjusting the idle connection timeout Adjusting the replication release timeout To install an IdM server or replica with custom Directory Server settings from an LDIF file, see the following procedure: Installing an IdM server or replica with custom database-settings from an LDIF file 7.1. Adjusting the entry cache size Important Red Hat recommends using the built-in cache auto-sizing feature for optimized performance. Only change this value if you need to purposely deviate from the auto-tuned values. The nsslapd-cachememsize attribute specifies the size, in bytes, for the available memory space for the entry cache. This attribute is one of the most important values for controlling how much physical RAM the directory server uses. If the entry cache size is too small, you might see the following error in the Directory Server error logs in the /var/log/dirsrv/slapd- INSTANCE-NAME /errors log file: Red Hat recommends fitting the entry cache and the database index entry cache in memory. Default value 209715200 (200 MiB) Valid range 500000 - 18446744073709551615 (500 kB - (2 64 -1)) Entry DN location cn= database-name ,cn=ldbm database,cn=plugins,cn=config Prerequisites The LDAP Directory Manager password Procedure Disable automatic cache tuning. Display the database suffixes and their corresponding back ends. This command displays the name of the back end database to each suffix. Use the suffix's database name in the step. Set the entry cache size for the database. This example sets the entry cache for the userroot database to 2 gigabytes. Restart the Directory Server. Monitor the IdM directory server's performance. If it does not change in a desirable way, repeat this procedure and adjust cache-memsize to a different value, or re-enable cache auto-sizing. Verification Display the value of the nsslapd-cachememsize attribute and verify it has been set to your desired value. Additional resources nsslapd-cachememsize in Directory Server 11 documentation Re-enabling entry and database cache auto-sizing . 7.2. Adjusting the database index cache size Important Red Hat recommends using the built-in cache auto-sizing feature for optimized performance. Only change this value if you need to purposely deviate from the auto-tuned values. The nsslapd-dbcachesize attribute controls the amount of memory the database indexes use. This cache size has less of an impact on Directory Server performance than the entry cache size does, but if there is available RAM after the entry cache size is set, Red Hat recommends increasing the amount of memory allocated to the database cache. 
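Before changing either the entry cache or the database cache, you can record the current values with a plain LDAP search against the entry DNs listed in these sections. This is a hedged sketch, run on the IdM server itself; userRoot is the typical back-end name, but confirm it with the suffix listing step described above.

ldapsearch -D "cn=Directory Manager" -W -x -b "cn=userRoot,cn=ldbm database,cn=plugins,cn=config" -s base nsslapd-cachememsize
ldapsearch -D "cn=Directory Manager" -W -x -b "cn=config,cn=ldbm database,cn=plugins,cn=config" -s base nsslapd-dbcachesize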
The database cache is limited to 1.5 GB RAM because higher values do not improve performance. Default value 10000000 (10 MB) Valid range 500000 - 1610611911 (500 kB - 1.5GB) Entry DN location cn=config,cn=ldbm database,cn=plugins,cn=config Prerequisites The LDAP Directory Manager password Procedure Disable automatic cache tuning, and set the database cache size. This example sets the database cache to 256 megabytes. Restart the Directory Server. Monitor the IdM directory server's performance. If it does not change in a desirable way, repeat this procedure and adjust dbcachesize to a different value, or re-enable cache auto-sizing. Verification Display the value of the nsslapd-dbcachesize attribute and verify it has been set to your desired value. Additional resources nsslapd-dbcachesize in Directory Server 11 documentation Re-enabling entry and database cache auto-sizing . 7.3. Re-enabling database and entry cache auto-sizing Important Use the built-in cache auto-sizing feature for optimized performance. Do not set cache sizes manually. By default, the IdM Directory Server automatically determines the optimal size for the database cache and entry cache. Auto-sizing sets aside a portion of free RAM and optimizes the size of both caches based on the hardware resources of the server when the instance starts. Use this procedure to undo custom database cache and entry cache values and restore the cache auto-sizing feature to its default values. nsslapd-cache-autosize This settings controls how much free RAM is allocated for auto-sizing the database and entry caches. A value of 0 disables auto-sizing. Default value 10 (10% of free RAM) Valid range 0 - 100 Entry DN location cn=config,cn=ldbm database,cn=plugins,cn=config nsslapd-cache-autosize-split This value sets the percentage of free memory determined by nsslapd-cache-autosize that is used for the database cache. The remaining percentage is used for the entry cache. Default value 25 (25% for the database cache, 60% for the entry cache) Valid range 0 - 100 Entry DN location cn=config,cn=ldbm database,cn=plugins,cn=config Prerequisites You have previously disabled database and entry cache auto-tuning. Procedure Stop the Directory Server. Backup the /etc/dirsrv/ slapd-instance_name /dse.ldif file before making any further modifications. Edit the /etc/dirsrv/ slapd-instance_name /dse.ldif file: Set the percentage of free system RAM to use for the database and entry caches back to the default of 10% of free RAM. Set the percentage used from the free system RAM for the database cache to the default of 25%: Save your changes to the /etc/dirsrv/ slapd-instance_name /dse.ldif file. Start the Directory Server. Verification Display the values of the nsslapd-cache-autosize and nsslapd-cache-autosize-split attributes and verify they have been set to your desired values. Additional resources nsslapd-cache-autosize in Directory Server 11 documentation 7.4. Adjusting the DN cache size Important Red Hat recommends using the built-in cache auto-sizing feature for optimized performance. Only change this value if you need to purposely deviate from the auto-tuned values. The nsslapd-dncachememsize attribute specifies the size, in bytes, for the available memory space for the Distinguished Names (DN) cache. The DN cache is similar to the entry cache for a database, but its table stores only the entry ID and the entry DN, which allows faster lookups for rename and moddn operations. 
Default value 10485760 (10 MB) Valid range 500000 - 18446744073709551615 (500 kB - (2 64 -1)) Entry DN location cn= database-name ,cn=ldbm database,cn=plugins,cn=config Prerequisites The LDAP Directory Manager password Procedure Optional: Display the database suffixes and their corresponding database names. This command displays the name of the back end database to each suffix. Use the suffix's database name in the step. Set the DN cache size for the database. This example sets the DN cache to 20 megabytes. Restart the Directory Server. Monitor the IdM directory server's performance. If it does not change in a desirable way, repeat this procedure and adjust dncache-memsize to a different value, or back to the default of 10 MB. Verification Display the new value of the nsslapd-dncachememsize attribute and verify it has been set to your desired value. Additional resources nsslapd-dncachememsize in Directory Server 11 documentation 7.5. Adjusting the normalized DN cache size Important Red Hat recommends using the built-in cache auto-sizing feature for optimized performance. Only change this value if you need to purposely deviate from the auto-tuned values. The nsslapd-ndn-cache-max-size attribute controls the size, in bytes, of the cache that stores normalized distinguished names (NDNs). Increasing this value will retain more frequently used DNs in memory. Default value 20971520 (20 MB) Valid range 0 - 2147483647 Entry DN location cn=config Prerequisites The LDAP Directory Manager password Procedure Ensure the NDN cache is enabled. If the cache is off , enable it with the following command. Retrieve the current value of the nsslapd-ndn-cache-max-size parameter and make a note of it before making any adjustments, in case it needs to be restored. Enter the Directory Manager password when prompted. Modify the value of the nsslapd-ndn-cache-max-size attribute. This example increases the value to 41943040 (40 MB). Monitor the IdM directory server's performance. If it does not change in a desirable way, repeat this procedure and adjust nsslapd-ndn-cache-max-size to a different value, or re-enable cache auto-sizing. Verification Display the new value of the nsslapd-ndn-cache-max-size attribute and verify it has been set to your desired value. Additional resources nsslapd-ndn-cache-max-size in Directory Server 11 documentation 7.6. Adjusting the maximum message size The nsslapd-maxbersize attribute sets the maximum size in bytes allowed for an incoming message or LDAP request. Limiting the size of requests prevents some kinds of denial of service attacks. If the maximum message size is too small, you might see the following error in the Directory Server error logs at /var/log/dirsrv/slapd- INSTANCE-NAME /errors : The limit applies to the total size of the LDAP request. For example, if the request is to add an entry and if the entry in the request is larger than the configured value or the default, then the add request is denied. However, the limit is not applied to replication processes. Be cautious before changing this attribute. Default value 2097152 (2 MB) Valid range 0 - 2147483647 (0 to 2 GB) Entry DN location cn=config Prerequisites The LDAP Directory Manager password Procedure Retrieve the current value of the nsslapd-maxbersize parameter and make a note of it before making any adjustments, in case it needs to be restored. Enter the Directory Manager password when prompted. Modify the value of the nsslapd-maxbersize attribute. This example increases the value to 4194304 , 4 MB. 
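The elided example at this step would look roughly like the following ldapmodify call. This is a sketch: the cn=config entry and the attribute name come from the table above, and the -W option triggers the Directory Manager authentication mentioned in the next step.

ldapmodify -D "cn=Directory Manager" -W -x << EOF
dn: cn=config
changetype: modify
replace: nsslapd-maxbersize
nsslapd-maxbersize: 4194304
EOF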
Authenticate as the Directory Manager to make the configuration change. Monitor the IdM directory server's performance. If it does not change in a desirable way, repeat this procedure and adjust nsslapd-maxbersize to a different value, or back to the default of 2097152 . Verification Display the value of the nsslapd-maxbersize attribute and verify it has been set to your desired value. Additional resources nsslapd-maxbersize (Maximum Message Size) in Directory Server 11 documentation 7.7. Adjusting the maximum number of file descriptors A value can be defined for the DefaultLimitNOFILE parameter in the /etc/systemd/system.conf file. An administrator with root privileges can set the DefaultLimitNOFILE parameter for the ns-slapd process to a lower value by using the setrlimit command. This value then takes precedence over what is in /etc/systemd/system.conf and is accepted by the Identity Management (IdM) Directory Server (DS) as the value for the nsslapd-maxdescriptors attribute. The nsslapd-maxdescriptors attribute sets the maximum, platform-dependent number of file descriptors that the IdM LDAP uses. File descriptors are used for client connections, log files, sockets, and other resources. If no value is defined in either /etc/systemd/system.conf or by setrlimit , then IdM DS sets the nsslapd-maxdescriptors attribute to 1048576. If an IdM DS administrator later decides to set a new value for nsslapd-maxdescriptors manually, then IdM DS compares the new value with what is defined locally, by setrlimit or in /etc/systemd/system.conf , with the following result: If the new value for nsslapd-maxdescriptors is higher than what is defined locally, then the server rejects the new value setting and continues to enforce the local limit value as the high watermark value. If the new value is lower than what is defined locally, then the new value will be used. This procedure describes how to set a new value for nsslapd-maxdescriptors . Prerequisites The LDAP Directory Manager password Procedure Retrieve the current value of the nsslapd-maxdescriptors parameter and make a note of it before making any adjustments, in case it needs to be restored. Enter the Directory Manager password when prompted. Modify the value of the nsslapd-maxdescriptors attribute. This example increases the value to 8192 . Authenticate as the Directory Manager to make the configuration change. Monitor the IdM directory server's performance. If it does not change in a desirable way, repeat this procedure and adjust nsslapd-maxdescriptors to a different value, or back to the default of 4096 . Verification Display the value of the nsslapd-maxdescriptors attribute and verify it has been set to your desired value. Additional resources nsslapd-maxdescriptors (Maximum File Descriptors) in Directory Server 12 documentation 7.8. Adjusting the connection backlog size The listen service sets the number of sockets available to receive incoming connections. The nsslapd-listen-backlog-size value sets the maximum length of the queue for the sockfd socket before refusing connections. If your IdM environment handles a large amount of connections, consider increasing the value of nsslapd-listen-backlog-size . Default value 128 queue slots Valid range 0 - 9223372036854775807 Entry DN location cn=config Prerequisites The LDAP Directory Manager password Procedure Retrieve the current value of the nsslapd-listen-backlog-size parameter and make a note of it before making any adjustments, in case it needs to be restored. 
Enter the Directory Manager password when prompted. Modify the value of the nsslapd-listen-backlog-size attribute. This example increases the value to 192 . Authenticate as the Directory Manager to make the configuration change. Verification Display the value of the nsslapd-listen-backlog-size attribute and verify it has been set to your desired value. Additional resources nsslapd-listen-backlog-size) in Directory Server 11 documentation 7.9. Adjusting the maximum number of database locks Lock mechanisms control how many copies of Directory Server processes can run at the same time, and the nsslapd-db-locks parameter sets the maximum number of locks. Increase the maximum number of locks if if you see the following error messages in the /var/log/dirsrv/slapd- instance_name /errors log file: Default value 50000 locks Valid range 0 - 2147483647 Entry DN location cn=bdb,cn=config,cn=ldbm database,cn=plugins,cn=config Prerequisites The LDAP Directory Manager password Procedure Retrieve the current value of the nsslapd-db-locks parameter and make a note of it before making any adjustments, in case it needs to be restored. Modify the value of the locks attribute. This example doubles the value to 100000 locks. Authenticate as the Directory Manager to make the configuration change. Restart the Directory Server. Verification Display the value of the nsslapd-db-locks attribute and verify it has been set to your desired value. Additional resources nsslapd-db-locks in Directory Server 11 documentation 7.10. Disabling the Transparent Huge Pages feature Transparent Huge Pages (THP) Linux memory management feature is enabled by default on RHEL. The THP feature can decrease the IdM Directory Server (DS) performance because DS has sparse memory access patterns. How to disable the feature, see Disabling the Transparent Huge Pages feature in Red Hat Directory Server documentation. Additional resources The negative effects of Transparent Huge Pages (THP) on RHDS 7.11. Adjusting the input/output block timeout The nsslapd-ioblocktimeout attribute sets the amount of time in milliseconds after which the connection to a stalled LDAP client is closed. An LDAP client is considered to be stalled when it has not made any I/O progress for read or write operations. Lower the value of the nsslapd-ioblocktimeout attribute to free up connections sooner. Default value 10000 milliseconds Valid range 0 - 2147483647 Entry DN location cn=config Prerequisites The LDAP Directory Manager password Procedure Retrieve the current value of the nsslapd-ioblocktimeout parameter and make a note of it before making any adjustments, in case it needs to be restored. Enter the Directory Manager password when prompted. Modify the value of the nsslapd-ioblocktimeout attribute. This example lowers the value to 8000 . Authenticate as the Directory Manager to make the configuration change. Monitor the IdM directory server's performance. If it does not change in a desirable way, repeat this procedure and adjust nsslapd-ioblocktimeout to a different value, or back to the default of 10000 . Verification Display the value of the nsslapd-ioblocktimeout attribute and verify it has been set to your desired value. Additional resources nsslapd-ioblocktimeout (IO Block Time Out) in Directory Server 11 documentation 7.12. Adjusting the idle connection timeout The nsslapd-idletimeout attribute sets the amount of time in seconds after which an idle LDAP client connection is closed by the IdM server. 
A value of 0 means that the server never closes idle connections. Red Hat recommends adjusting this value so stale connections are closed, but active connections are not closed prematurely. Default value 3600 seconds (1 hour) Valid range 0 - 2147483647 Entry DN location cn=config Prerequisites The LDAP Directory Manager password Procedure Retrieve the current value of the nsslapd-idletimeout parameter and make a note of it before making any adjustments, in case it needs to be restored. Enter the Directory Manager password when prompted. Modify the value of the nsslapd-idletimeout attribute. This example lowers the value to 1800 (30 minutes). Authenticate as the Directory Manager to make the configuration change. Monitor the IdM directory server's performance. If it does not change in a desirable way, repeat this procedure and adjust nsslapd-idletimeout to a different value, or back to the default of 3600 . Verification Display the value of the nsslapd-idletimeout attribute and verify it has been set to your desired value. Additional resources nsslapd-idletimeout (Default Idle Timeout) in Directory Server 11 documentation 7.13. Adjusting the replication release timeout An IdM replica is exclusively locked during a replication session with another replica. In some environments, a replica is locked for a long time due to large updates or network congestion, which increases replication latency. You can release a replica after a fixed amount of time by adjusting the repl-release-timeout parameter. Red Hat recommends setting this value between 30 and 120 : If the value is set too low, replicas are constantly reacquiring one another and replicas are not able to send larger updates. A longer timeout can improve high-traffic situations where it is best if a server exclusively accesses a replica for longer amounts of time, but a value higher than 120 seconds slows down replication. Default value 60 seconds Valid range 0 - 2147483647 Recommended range 30 - 120 Prerequisites The LDAP Directory Manager password Procedure Display the database suffixes and their corresponding back ends. This command displays the names of the back end databases to their suffix. Use the suffix name in the step. Modify the value of the repl-release-timeout attribute for the main userroot database. This example increases the value to 90 seconds. Authenticate as the Directory Manager to make the configuration change. Optional: If your IdM environment uses the IdM Certificate Authority (CA), you can modify the value of the repl-release-timeout attribute for the CA database. This example increases the value to 90 seconds. Restart the Directory Server. Monitor the IdM directory server's performance. If it does not change in a desirable way, repeat this procedure and adjust repl-release-timeout to a different value, or back to the default of 60 seconds. Verification Display the value of the nsds5ReplicaReleaseTimeout attribute and verify it has been set to your desired value. Note The Distinguished Name of the suffix in this example is dc=example,dc=com , but the equals sign ( = ) and comma ( , ) must be escaped in the ldapsearch command. Convert the suffix DN to cn=dc\3Dexample\2Cdc\3Dcom with the following escape characters: \3D replacing = \2C replacing , Additional resources nsDS5ReplicaReleaseTimeout in Directory Server 11 documentation 7.14. 
Installing an IdM server or replica with custom database settings from an LDIF file You can install an IdM server and IdM replicas with custom settings for the Directory Server database. The following procedure shows you how to create an LDAP Data Interchange Format (LDIF) file with database settings, and how to pass those settings to the IdM server and replica installation commands. Prerequisites You have determined custom Directory Server settings that improve the performance of your IdM environment. See Adjusting IdM Directory Server performance . Procedure Create a text file in LDIF format with your custom database settings. Separate LDAP attribute modifications with a dash (-). This example sets non-default values for the idle timeout and maximum file descriptors. Use the --dirsrv-config-file parameter to pass the LDIF file to the installation script. To install an IdM server: To install an IdM replica: Additional resources Options for the ipa-server-install and ipa-replica-install commands 7.15. Additional resources Directory Server 11 Performance Tuning Guide
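Before applying any of the tuning changes described above, it can help to capture the current settings so that you can restore them if performance does not improve. The following bash sketch is only an illustration: ldap://server.example.com and cn=Directory Manager are the placeholder server URL and bind DN used throughout this chapter, and dsconf prompts for the Directory Manager password on each call.
# Minimal sketch: record the current cn=config tuning attributes before changing them.
# ldap://server.example.com and cn=Directory Manager are placeholders; substitute your own values.
backup_file="idm-ds-tuning-$(date +%F_%H-%M-%S).txt"
for attr in nsslapd-idletimeout nsslapd-ioblocktimeout nsslapd-maxbersize nsslapd-maxdescriptors nsslapd-listen-backlog-size; do
    dsconf -D "cn=Directory Manager" ldap://server.example.com config get "$attr" >> "$backup_file"
done
cat "$backup_file"
If a later change needs to be reverted, reapply the recorded value with the corresponding dsconf ... config replace command shown in the procedures above.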
[ "REASON: entry too large ( 83886080 bytes) for the import buffer size ( 67108864 bytes). Try increasing nsslapd-cachememsize.", "dsconf -D \"cn=Directory Manager\" ldap://server.example.com backend config set --cache-autosize=0", "dsconf -D \"cn=Directory Manager\" ldap://server.example.com backend suffix list cn=changelog (changelog) dc=example,dc=com ( userroot ) o=ipaca (ipaca)", "dsconf -D \"cn=Directory Manager\" ldap://server.example.com backend suffix set --cache-memsize= 2147483648 userroot", "systemctl restart dirsrv.target", "ldapsearch -D \"cn=directory manager\" -w DirectoryManagerPassword -b \"cn= userroot ,cn=ldbm database,cn=plugins,cn=config\" | grep nsslapd-cachememsize nsslapd-cachememsize: 2147483648", "dsconf -D \"cn=Directory Manager\" ldap://server.example.com backend config set --cache-autosize=0 --dbcachesize=268435456", "systemctl restart dirsrv.target", "ldapsearch -D \"cn=directory manager\" -w DirectoryManagerPassword -b \"cn=config,cn=ldbm database,cn=plugins,cn=config\" | grep nsslapd-dbcachesize nsslapd-dbcachesize: 2147483648", "systemctl stop dirsrv.target", "*cp /etc/dirsrv/ slapd-instance_name /dse.ldif /etc/dirsrv/ slapd-instance_name /dse.ldif.bak.USD(date \"+%F_%H-%M-%S\")", "nsslapd-cache-autosize: 10", "nsslapd-cache-autosize-split: 25", "systemctl start dirsrv.target", "ldapsearch -D \"cn=directory manager\" -w DirectoryManagerPassword -b \"cn=config,cn=ldbm database,cn=plugins,cn=config\" | grep nsslapd-cache-autosize nsslapd-cache-autosize: *10 nsslapd-cache-autosize-split: 25", "dsconf -D \"cn=Directory Manager\" ldap://server.example.com backend suffix list dc=example,dc=com ( userroot )", "dsconf -D \"cn=Directory Manager\" ldap://server.example.com backend suffix set --dncache-memsize= 20971520 userroot", "systemctl restart dirsrv.target", "ldapsearch -D \"cn=directory manager\" -w DirectoryManagerPassword -b \"cn= userroot ,cn=ldbm database,cn=plugins,cn=config\" | grep nsslapd-dncachememsize nsslapd-dncachememsize: 20971520", "dsconf -D \"cn=Directory Manager\" ldap://server.example.com config get nsslapd-ndn-cache-enabled Enter password for cn=Directory Manager on ldap://server.example.com: nsslapd-ndn-cache-enabled: on", "dsconf -D \"cn=Directory Manager\" ldap://server.example.com config replace nsslapd-ndn-cache-enabled=on Enter password for cn=Directory Manager on ldap://server.example.com: Successfully replaced \"nsslapd-ndn-cache-enabled\"", "dsconf -D \"cn=Directory Manager\" ldap://server.example.com config get nsslapd-ndn-cache-max-size Enter password for cn=Directory Manager on ldap://server.example.com: nsslapd-ndn-cache-max-size: 20971520", "dsconf -D \"cn=Directory Manager\" ldap://server.example.com config replace nsslapd-ndn-cache-max-size= 41943040", "dsconf -D \"cn=Directory Manager\" ldap://server.example.com config get nsslapd-ndn-cache-max-size Enter password for cn=Directory Manager on ldap://server.example.com: nsslapd-ndn-cache-max-size: 41943040", "Incoming BER Element was too long, max allowable is 2097152 bytes. 
Change the nsslapd-maxbersize attribute in cn=config to increase.", "dsconf -D \"cn=Directory Manager\" ldap://server.example.com config get nsslapd-maxbersize Enter password for cn=Directory Manager on ldap://server.example.com: nsslapd-maxbersize: 2097152", "dsconf -D \"cn=Directory Manager\" ldap://server.example.com config replace nsslapd-maxbersize= 4194304", "Enter password for cn=Directory Manager on ldap://server.example.com : Successfully replaced \"nsslapd-maxbersize\"", "dsconf -D \"cn=Directory Manager\" ldap://server.example.com config get nsslapd-maxbersize Enter password for cn=Directory Manager on ldap://server.example.com: nsslapd-maxbersize: 4194304", "dsconf -D \"cn=Directory Manager\" ldap://server.example.com config get nsslapd-maxdescriptors Enter password for cn=Directory Manager on ldap://server.example.com: nsslapd-maxdescriptors: 4096", "dsconf -D \"cn=Directory Manager\" ldap://server.example.com config replace nsslapd-maxdescriptors= 8192", "Enter password for cn=Directory Manager on ldap://server.example.com : Successfully replaced \"nsslapd-maxdescriptors\"", "dsconf -D \"cn=Directory Manager\" ldap://server.example.com config get nsslapd-maxdescriptors Enter password for cn=Directory Manager on ldap://server.example.com: nsslapd-maxdescriptors: 8192", "dsconf -D \"cn=Directory Manager\" ldap://server.example.com config get nsslapd-listen-backlog-size Enter password for cn=Directory Manager on ldap://server.example.com: nsslapd-listen-backlog-size: 128", "dsconf -D \"cn=Directory Manager\" ldap://server.example.com config replace nsslapd-listen-backlog-size= 192", "Enter password for cn=Directory Manager on ldap://server.example.com: Successfully replaced \"nsslapd-listen-backlog-size\"", "dsconf -D \"cn=Directory Manager\" ldap://server.example.com config get nsslapd-listen-backlog-size Enter password for cn=Directory Manager on ldap://server.example.com: nsslapd-listen-backlog-size: 192", "libdb: Lock table is out of available locks", "ldapsearch -D \"cn=directory manager\" -w DirectoryManagerPassword -b \"cn=bdb,cn=config,cn=ldbm database,cn=plugins,cn=config\" | grep nsslapd-db-locks nsslapd-db-locks: 50000", "dsconf -D \"cn=Directory Manager\" ldap://server.example.com backend config set --locks= 100000", "Enter password for cn=Directory Manager on ldap://server.example.com : Successfully updated database configuration", "systemctl restart dirsrv.target", "ldapsearch -D \"cn=directory manager\" -w DirectoryManagerPassword -b \"cn=bdb,cn=config,cn=ldbm database,cn=plugins,cn=config\" | grep nsslapd-db-locks nsslapd-db-locks: 100000", "dsconf -D \"cn=Directory Manager\" ldap://server.example.com config get nsslapd-ioblocktimeout Enter password for cn=Directory Manager on ldap://server.example.com: nsslapd-ioblocktimeout: 10000", "dsconf -D \"cn=Directory Manager\" ldap://server.example.com config replace nsslapd-ioblocktimeout= 8000", "Enter password for cn=Directory Manager on ldap://server.example.com : Successfully replaced \"nsslapd-ioblocktimeout\"", "dsconf -D \"cn=Directory Manager\" ldap://server.example.com config get nsslapd-ioblocktimeout Enter password for cn=Directory Manager on ldap://server.example.com: nsslapd-idletimeout: 8000", "dsconf -D \"cn=Directory Manager\" ldap://server.example.com config get nsslapd-idletimeout Enter password for cn=Directory Manager on ldap://server.example.com: nsslapd-idletimeout: 3600", "dsconf -D \"cn=Directory Manager\" ldap://server.example.com config replace nsslapd-idletimeout= 1800", "Enter password for 
cn=Directory Manager on ldap://server.example.com : Successfully replaced \"nsslapd-idletimeout\"", "dsconf -D \"cn=Directory Manager\" ldap://server.example.com config get nsslapd-idletimeout Enter password for cn=Directory Manager on ldap://server.example.com: nsslapd-idletimeout: 3600", "dsconf -D \"cn=Directory Manager\" ldap://server.example.com backend suffix list cn=changelog (changelog) dc=example,dc=com (userroot) o=ipaca (ipaca)", "dsconf -D \"cn=Directory Manager\" ldap://server.example.com replication set --suffix=\" dc=example,dc=com \" --repl-release-timeout= 90", "Enter password for cn=Directory Manager on ldap://server.example.com : Successfully replaced \"repl-release-timeout\"", "dsconf -D \"cn=Directory Manager\" ldap://server.example.com replication set --suffix=\"o=ipaca\" --repl-release-timeout= 90 Enter password for cn=Directory Manager on ldap://server.example.com : Successfully replaced \"repl-release-timeout\"", "systemctl restart dirsrv.target", "ldapsearch -D \"cn=directory manager\" -w DirectoryManagerPassword -b \"cn=replica,cn= dc\\3Dexample\\2Cdc\\3Dcom ,cn=mapping tree,cn=config\" | grep nsds5ReplicaReleaseTimeout nsds5ReplicaReleaseTimeout: 90", "dn: cn=config changetype: modify replace: nsslapd-idletimeout nsslapd-idletimeout: 1800 - replace: nsslapd-maxdescriptors nsslapd-maxdescriptors: 8192", "ipa-server-install --dirsrv-config-file filename.ldif", "ipa-replica-install --dirsrv-config-file filename.ldif" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/tuning_performance_in_identity_management/adjusting-idm-directory-server-performance_tuning-performance-in-idm
Chapter 2. Considerations for implementing the Load-balancing service
Chapter 2. Considerations for implementing the Load-balancing service You must make several decisions when you plan to deploy the Red Hat OpenStack Platform (RHOSP) Load-balancing service (octavia) such as choosing which provider to use or whether to implement a highly available environment: Section 2.1, "Load-balancing service provider drivers" Section 2.2, "Load-balancing service (octavia) feature support matrix" Section 2.3, "Load-balancing service software requirements" Section 2.4, "Load-balancing service prerequisites for the undercloud" Section 2.5, "Basics of active-standby topology for Load-balancing service instances" Section 2.6, "Post-deployment steps for the Load-balancing service" 2.1. Load-balancing service provider drivers The Red Hat OpenStack Platform (RHOSP) Load-balancing service (octavia) supports enabling multiple provider drivers by using the Octavia v2 API. You can choose to use one provider driver, or multiple provider drivers simultaneously. RHOSP provides two load-balancing providers, amphora and Open Virtual Network (OVN). Amphora, the default, is a highly available load balancer with a feature set that scales with your compute environment. Because of this, amphora is suited for large-scale deployments. The OVN load-balancing provider is a lightweight load balancer with a basic feature set. OVN is typical for east-west, layer 4 network traffic. OVN provisions quickly and consumes fewer resources than a full-featured load-balancing provider such as amphora. On RHOSP deployments that use the neutron Modular Layer 2 plug-in with the OVN mechanism driver (ML2/OVN), RHOSP director automatically enables the OVN provider driver in the Load-balancing service without the need for additional installation or configuration. Important The information in this section applies only to the amphora load-balancing provider, unless indicated otherwise. Additional resources Section 2.2, "Load-balancing service (octavia) feature support matrix" 2.2. Load-balancing service (octavia) feature support matrix The Red Hat OpenStack Platform (RHOSP) Load-balancing service (octavia) provides two load-balancing providers, amphora and Open Virtual Network (OVN). Amphora is a full-featured load-balancing provider that requires a separate haproxy VM and an extra latency hop. OVN runs on every node and does not require a separate VM nor an extra hop. However, OVN has far fewer load-balancing features than amphora. The following table lists features in the Load-balancing service that Red Hat OpenStack Platform (RHOSP) 17.1 supports and in which maintenance release support for the feature began. Note If the feature is not listed, then RHOSP 17.1 does not support the feature. Table 2.1. 
Load-balancing service (octavia) feature support matrix Feature Support level in RHOSP 17.1 Amphora Provider OVN Provider ML2/OVS L3 HA Full support No support ML2/OVS DVR Full support No support ML2/OVS L3 HA + composable network node [1] Full support No support ML2/OVS DVR + composable network node [1] Full support No support ML2/OVN L3 HA Full support Full support ML2/OVN DVR Full support Full support DPDK No support No support SR-IOV No support No support Health monitors Full support No support Amphora active-standby Full support No support Terminated HTTPS load balancers (with barbican) Full support No support Amphora spare pool Technology Preview only No support UDP Full support Full support Backup members Technology Preview only No support Provider framework Technology Preview only No support TLS client authentication Technology Preview only No support TLS back end encryption Technology Preview only No support Octavia flavors Full support No support Object tags Full support No support Listener API timeouts Full support No support Log offloading Full support No support VIP access control list Full support No support Availabilty zones Full support No support Volume-based amphora No support No support [1] Network node with OVS, metadata, DHCP, L3, and Octavia (worker, health monitor, and housekeeping). Additional resources Section 2.1, "Load-balancing service provider drivers" 2.3. Load-balancing service software requirements The Red Hat OpenStack Platform (RHOSP) Load-balancing service (octavia) requires that you configure the following core OpenStack components: Compute (nova) OpenStack Networking (neutron) Image (glance) Identity (keystone) RabbitMQ MySQL 2.4. Load-balancing service prerequisites for the undercloud The Red Hat OpenStack Platform (RHOSP) Load-balancing service (octavia) has the following requirements for the RHOSP undercloud: A successful undercloud installation. The Load-balancing service present on the undercloud. A container-based overcloud deployment plan. Load-balancing service components configured on your Controller nodes. Important If you want to enable the Load-balancing service on an existing overcloud deployment, you must prepare the undercloud. Failure to do so results in the overcloud installation being reported as successful yet without the Load-balancing service running. 2.5. Basics of active-standby topology for Load-balancing service instances When you deploy the Red Hat OpenStack Platform (RHOSP) Load-balancing service (octavia), you can decide whether, by default, load balancers are highly available when users create them. If you want to give users a choice, then after RHOSP deployment, create a Load-balancing service flavor for creating highly available load balancers and a flavor for creating standalone load balancers. By default, the amphora provider driver is configured for a single Load-balancing service (amphora) instance topology with limited support for high availability (HA). However, you can make Load-balancing service instances highly available when you implement an active-standby topology. In this topology, the Load-balancing service boots an active and standby instance for each load balancer, and maintains session persistence between each. If the active instance becomes unhealthy, the instance automatically fails over to the standby instance, making it active. The Load-balancing service health manager automatically rebuilds an instance that fails. 
Additional resources Section 4.2, "Enabling active-standby topology for Load-balancing service instances" 2.6. Post-deployment steps for the Load-balancing service Red Hat OpenStack Platform (RHOSP) provides a workflow task to simplify the post-deployment steps for the Load-balancing service (octavia). This workflow runs a set of Ansible playbooks to provide the following post-deployment steps as the last phase of the overcloud deployment: Configure certificates and keys. Configure the load-balancing management network between the amphorae and the Load-balancing service Controller worker and health manager. Amphora image On pre-provisioned servers, you must install the amphora image on the undercloud before you deploy the Load-balancing service: On servers that are not pre-provisioned, RHOSP director automatically downloads the default amphora image, uploads it to the overcloud Image service (glance), and then configures the Load-balancing service to use this amphora image. During a stack update or upgrade, director updates this image to the latest amphora image. Note Custom amphora images are not supported. Additional resources Section 4.1, "Deploying the Load-balancing service"
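As a hedged illustration of choosing a provider driver, the following openstack CLI sketch lists the provider drivers enabled in the Load-balancing service and creates a load balancer with an explicit provider. The names lb1 and private-subnet are placeholders, and the sketch assumes you have sourced credentials for the overcloud project.
# List the provider drivers that the Load-balancing service has enabled.
openstack loadbalancer provider list
# Create a load balancer with the OVN provider instead of the default amphora provider.
openstack loadbalancer create --name lb1 --vip-subnet-id private-subnet --provider ovn
# Verify the provider and the provisioning status of the new load balancer.
openstack loadbalancer show lb1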
[ "sudo dnf install octavia-amphora-image-x86_64.noarch" ]
https://docs.redhat.com/en/documentation/red_hat_openstack_platform/17.1/html/configuring_load_balancing_as_a_service/plan-lb-service_rhosp-lbaas
Chapter 13. What huge pages do and how they are consumed by applications
Chapter 13. What huge pages do and how they are consumed by applications 13.1. What huge pages do Memory is managed in blocks known as pages. On most systems, a page is 4Ki. 1Mi of memory is equal to 256 pages; 1Gi of memory is 256,000 pages, and so on. CPUs have a built-in memory management unit that manages a list of these pages in hardware. The Translation Lookaside Buffer (TLB) is a small hardware cache of virtual-to-physical page mappings. If the virtual address passed in a hardware instruction can be found in the TLB, the mapping can be determined quickly. If not, a TLB miss occurs, and the system falls back to slower, software-based address translation, resulting in performance issues. Since the size of the TLB is fixed, the only way to reduce the chance of a TLB miss is to increase the page size. A huge page is a memory page that is larger than 4Ki. On x86_64 architectures, there are two common huge page sizes: 2Mi and 1Gi. Sizes vary on other architectures. To use huge pages, code must be written so that applications are aware of them. Transparent Huge Pages (THP) attempt to automate the management of huge pages without application knowledge, but they have limitations. In particular, they are limited to 2Mi page sizes. THP can lead to performance degradation on nodes with high memory utilization or fragmentation due to defragmenting efforts of THP, which can lock memory pages. For this reason, some applications may be designed to (or recommend) usage of pre-allocated huge pages instead of THP. In OpenShift Container Platform, applications in a pod can allocate and consume pre-allocated huge pages. 13.2. How huge pages are consumed by apps Nodes must pre-allocate huge pages in order for the node to report its huge page capacity. A node can only pre-allocate huge pages for a single size. Huge pages can be consumed through container-level resource requirements using the resource name hugepages-<size> , where size is the most compact binary notation using integer values supported on a particular node. For example, if a node supports 2048KiB page sizes, it exposes a schedulable resource hugepages-2Mi . Unlike CPU or memory, huge pages do not support over-commitment. apiVersion: v1 kind: Pod metadata: generateName: hugepages-volume- spec: containers: - securityContext: privileged: true image: rhel7:latest command: - sleep - inf name: example volumeMounts: - mountPath: /dev/hugepages name: hugepage resources: limits: hugepages-2Mi: 100Mi 1 memory: "1Gi" cpu: "1" volumes: - name: hugepage emptyDir: medium: HugePages 1 Specify the amount of memory for hugepages as the exact amount to be allocated. Do not specify this value as the amount of memory for hugepages multiplied by the size of the page. For example, given a huge page size of 2MB, if you want to use 100MB of huge-page-backed RAM for your application, then you would allocate 50 huge pages. OpenShift Container Platform handles the math for you. As in the above example, you can specify 100MB directly. Allocating huge pages of a specific size Some platforms support multiple huge page sizes. To allocate huge pages of a specific size, precede the huge pages boot command parameters with a huge page size selection parameter hugepagesz=<size> . The <size> value must be specified in bytes with an optional scale suffix [ kKmMgG ]. The default huge page size can be defined with the default_hugepagesz=<size> boot parameter. Huge page requirements Huge page requests must equal the limits. 
This is the default if limits are specified, but requests are not. Huge pages are isolated at a pod scope. Container isolation is planned in a future iteration. EmptyDir volumes backed by huge pages must not consume more huge page memory than the pod request. Applications that consume huge pages via shmget() with SHM_HUGETLB must run with a supplemental group that matches proc/sys/vm/hugetlb_shm_group . 13.3. Consuming huge pages resources using the Downward API You can use the Downward API to inject information about the huge pages resources that are consumed by a container. You can inject the resource allocation as environment variables, a volume plugin, or both. Applications that you develop and run in the container can determine the resources that are available by reading the environment variables or files in the specified volumes. Procedure Create a hugepages-volume-pod.yaml file that is similar to the following example: apiVersion: v1 kind: Pod metadata: generateName: hugepages-volume- labels: app: hugepages-example spec: containers: - securityContext: capabilities: add: [ "IPC_LOCK" ] image: rhel7:latest command: - sleep - inf name: example volumeMounts: - mountPath: /dev/hugepages name: hugepage - mountPath: /etc/podinfo name: podinfo resources: limits: hugepages-1Gi: 2Gi memory: "1Gi" cpu: "1" requests: hugepages-1Gi: 2Gi env: - name: REQUESTS_HUGEPAGES_1GI <.> valueFrom: resourceFieldRef: containerName: example resource: requests.hugepages-1Gi volumes: - name: hugepage emptyDir: medium: HugePages - name: podinfo downwardAPI: items: - path: "hugepages_1G_request" <.> resourceFieldRef: containerName: example resource: requests.hugepages-1Gi divisor: 1Gi <.> Specifies to read the resource use from requests.hugepages-1Gi and expose the value as the REQUESTS_HUGEPAGES_1GI environment variable. <.> Specifies to read the resource use from requests.hugepages-1Gi and expose the value as the file /etc/podinfo/hugepages_1G_request . Create the pod from the hugepages-volume-pod.yaml file: USD oc create -f hugepages-volume-pod.yaml Verification Check the value of the REQUESTS_HUGEPAGES_1GI environment variable: USD oc exec -it USD(oc get pods -l app=hugepages-example -o jsonpath='{.items[0].metadata.name}') \ -- env | grep REQUESTS_HUGEPAGES_1GI Example output REQUESTS_HUGEPAGES_1GI=2147483648 Check the value of the /etc/podinfo/hugepages_1G_request file: USD oc exec -it USD(oc get pods -l app=hugepages-example -o jsonpath='{.items[0].metadata.name}') \ -- cat /etc/podinfo/hugepages_1G_request Example output 2 Additional resources Allowing containers to consume Downward API objects 13.4. Configuring huge pages Nodes must pre-allocate huge pages used in an OpenShift Container Platform cluster. There are two ways of reserving huge pages: at boot time and at run time. Reserving at boot time increases the possibility of success because the memory has not yet been significantly fragmented. The Node Tuning Operator currently supports boot time allocation of huge pages on specific nodes. 13.4.1. At boot time Procedure To minimize node reboots, the order of the steps below needs to be followed: Label all nodes that need the same huge pages setting by a label. 
USD oc label node <node_using_hugepages> node-role.kubernetes.io/worker-hp= Create a file with the following content and name it hugepages-tuned-boottime.yaml : apiVersion: tuned.openshift.io/v1 kind: Tuned metadata: name: hugepages 1 namespace: openshift-cluster-node-tuning-operator spec: profile: 2 - data: | [main] summary=Boot time configuration for hugepages include=openshift-node [bootloader] cmdline_openshift_node_hugepages=hugepagesz=2M hugepages=50 3 name: openshift-node-hugepages recommend: - machineConfigLabels: 4 machineconfiguration.openshift.io/role: "worker-hp" priority: 30 profile: openshift-node-hugepages 1 Set the name of the Tuned resource to hugepages . 2 Set the profile section to allocate huge pages. 3 Note the order of parameters is important as some platforms support huge pages of various sizes. 4 Enable machine config pool based matching. Create the Tuned hugepages object USD oc create -f hugepages-tuned-boottime.yaml Create a file with the following content and name it hugepages-mcp.yaml : apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfigPool metadata: name: worker-hp labels: worker-hp: "" spec: machineConfigSelector: matchExpressions: - {key: machineconfiguration.openshift.io/role, operator: In, values: [worker,worker-hp]} nodeSelector: matchLabels: node-role.kubernetes.io/worker-hp: "" Create the machine config pool: USD oc create -f hugepages-mcp.yaml Given enough non-fragmented memory, all the nodes in the worker-hp machine config pool should now have 50 2Mi huge pages allocated. USD oc get node <node_using_hugepages> -o jsonpath="{.status.allocatable.hugepages-2Mi}" 100Mi Note The TuneD bootloader plugin only supports Red Hat Enterprise Linux CoreOS (RHCOS) worker nodes. 13.5. Disabling Transparent Huge Pages Transparent Huge Pages (THP) attempt to automate most aspects of creating, managing, and using huge pages. Since THP automatically manages the huge pages, this is not always handled optimally for all types of workloads. THP can lead to performance regressions, since many applications handle huge pages on their own. Therefore, consider disabling THP. The following steps describe how to disable THP using the Node Tuning Operator (NTO). Procedure Create a file with the following content and name it thp-disable-tuned.yaml : apiVersion: tuned.openshift.io/v1 kind: Tuned metadata: name: thp-workers-profile namespace: openshift-cluster-node-tuning-operator spec: profile: - data: | [main] summary=Custom tuned profile for OpenShift to turn off THP on worker nodes include=openshift-node [vm] transparent_hugepages=never name: openshift-thp-never-worker recommend: - match: - label: node-role.kubernetes.io/worker priority: 25 profile: openshift-thp-never-worker Create the Tuned object: USD oc create -f thp-disable-tuned.yaml Check the list of active profiles: USD oc get profile -n openshift-cluster-node-tuning-operator Verification Log in to one of the nodes and do a regular THP check to verify if the nodes applied the profile successfully: USD cat /sys/kernel/mm/transparent_hugepage/enabled Example output always madvise [never]
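To confirm that the boot-time huge pages allocation and the THP setting took effect on a node, a short verification sketch such as the following can be used; <node_using_hugepages> is a placeholder node name, and oc debug requires access to the node.
# Check the huge pages the node reports as allocatable.
oc get node <node_using_hugepages> -o jsonpath='{.status.allocatable.hugepages-2Mi}{"\n"}'
# Inspect the kernel's view of huge pages and THP directly on the node.
oc debug node/<node_using_hugepages> -- chroot /host grep -i hugepages /proc/meminfo
oc debug node/<node_using_hugepages> -- chroot /host cat /sys/kernel/mm/transparent_hugepage/enabled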
[ "apiVersion: v1 kind: Pod metadata: generateName: hugepages-volume- spec: containers: - securityContext: privileged: true image: rhel7:latest command: - sleep - inf name: example volumeMounts: - mountPath: /dev/hugepages name: hugepage resources: limits: hugepages-2Mi: 100Mi 1 memory: \"1Gi\" cpu: \"1\" volumes: - name: hugepage emptyDir: medium: HugePages", "apiVersion: v1 kind: Pod metadata: generateName: hugepages-volume- labels: app: hugepages-example spec: containers: - securityContext: capabilities: add: [ \"IPC_LOCK\" ] image: rhel7:latest command: - sleep - inf name: example volumeMounts: - mountPath: /dev/hugepages name: hugepage - mountPath: /etc/podinfo name: podinfo resources: limits: hugepages-1Gi: 2Gi memory: \"1Gi\" cpu: \"1\" requests: hugepages-1Gi: 2Gi env: - name: REQUESTS_HUGEPAGES_1GI <.> valueFrom: resourceFieldRef: containerName: example resource: requests.hugepages-1Gi volumes: - name: hugepage emptyDir: medium: HugePages - name: podinfo downwardAPI: items: - path: \"hugepages_1G_request\" <.> resourceFieldRef: containerName: example resource: requests.hugepages-1Gi divisor: 1Gi", "oc create -f hugepages-volume-pod.yaml", "oc exec -it USD(oc get pods -l app=hugepages-example -o jsonpath='{.items[0].metadata.name}') -- env | grep REQUESTS_HUGEPAGES_1GI", "REQUESTS_HUGEPAGES_1GI=2147483648", "oc exec -it USD(oc get pods -l app=hugepages-example -o jsonpath='{.items[0].metadata.name}') -- cat /etc/podinfo/hugepages_1G_request", "2", "oc label node <node_using_hugepages> node-role.kubernetes.io/worker-hp=", "apiVersion: tuned.openshift.io/v1 kind: Tuned metadata: name: hugepages 1 namespace: openshift-cluster-node-tuning-operator spec: profile: 2 - data: | [main] summary=Boot time configuration for hugepages include=openshift-node [bootloader] cmdline_openshift_node_hugepages=hugepagesz=2M hugepages=50 3 name: openshift-node-hugepages recommend: - machineConfigLabels: 4 machineconfiguration.openshift.io/role: \"worker-hp\" priority: 30 profile: openshift-node-hugepages", "oc create -f hugepages-tuned-boottime.yaml", "apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfigPool metadata: name: worker-hp labels: worker-hp: \"\" spec: machineConfigSelector: matchExpressions: - {key: machineconfiguration.openshift.io/role, operator: In, values: [worker,worker-hp]} nodeSelector: matchLabels: node-role.kubernetes.io/worker-hp: \"\"", "oc create -f hugepages-mcp.yaml", "oc get node <node_using_hugepages> -o jsonpath=\"{.status.allocatable.hugepages-2Mi}\" 100Mi", "apiVersion: tuned.openshift.io/v1 kind: Tuned metadata: name: thp-workers-profile namespace: openshift-cluster-node-tuning-operator spec: profile: - data: | [main] summary=Custom tuned profile for OpenShift to turn off THP on worker nodes include=openshift-node [vm] transparent_hugepages=never name: openshift-thp-never-worker recommend: - match: - label: node-role.kubernetes.io/worker priority: 25 profile: openshift-thp-never-worker", "oc create -f thp-disable-tuned.yaml", "oc get profile -n openshift-cluster-node-tuning-operator", "cat /sys/kernel/mm/transparent_hugepage/enabled", "always madvise [never]" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.10/html/scalability_and_performance/what-huge-pages-do-and-how-they-are-consumed
31.4. Unloading a Module
31.4. Unloading a Module You can unload a kernel module by running modprobe -r <module_name> as root. For example, assuming that the wacom module is already loaded into the kernel, you can unload it by running: However, this command will fail if a process is using: the wacom module, a module that wacom directly depends on, or any module that wacom, through the dependency tree, depends on indirectly. See Section 31.1, "Listing Currently-Loaded Modules" for more information about using lsmod to obtain the names of the modules that are preventing you from unloading a certain module. For example, if you want to unload the firewire_ohci module (because you suspect a bug in it is affecting system stability), your terminal session might look similar to this: You have figured out the dependency tree (which does not branch in this example) for the loaded Firewire modules: firewire_ohci depends on firewire_core, which itself depends on crc-itu-t. You can unload firewire_ohci using the modprobe -v -r <module_name> command, where -r is short for --remove and -v for --verbose: The output shows that modules are unloaded in the reverse of the order in which they were loaded, given that no processes depend on any of the modules being unloaded. Important Although the rmmod command can be used to unload kernel modules, it is recommended to use modprobe -r instead.
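The commands from this section can be combined into a small sketch that checks a module before removing it; firewire_ohci is only an example module name, and the commands must run as root.
# Check the use count and dependencies of a module before unloading it.
module=firewire_ohci
lsmod | grep "^${module}"        # the third column is the use count; 0 means nothing holds the module
modinfo -F depends "${module}"   # modules that this module depends on
modprobe -r -v "${module}"       # unload the module and any dependencies that are no longer used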
[ "~]# modprobe -r wacom", "~]# modinfo -F depends firewire_ohci depends: firewire-core ~]# modinfo -F depends firewire_core depends: crc-itu-t ~]# modinfo -F depends crc-itu-t depends:", "~]# modprobe -r -v firewire_ohci rmmod /lib/modules/2.6.32-71.el6.x86_64/kernel/drivers/firewire/firewire-ohci.ko rmmod /lib/modules/2.6.32-71.el6.x86_64/kernel/drivers/firewire/firewire-core.ko rmmod /lib/modules/2.6.32-71.el6.x86_64/kernel/lib/crc-itu-t.ko" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/deployment_guide/sec-Unloading_a_Module
Chapter 19. Searching and bookmarking
Chapter 19. Searching and bookmarking Satellite features powerful search functionality on most pages of the Satellite web UI. It enables you to search all kinds of resources that Satellite manages. Searches accept both free text and syntax-based queries, which can be built using extensive input prediction. Search queries can be saved as bookmarks for future reuse. 19.1. Building search queries As you start typing a search query, a list of valid options to complete the current part of the query appears. You can either select an option from the list and keep building the query using the prediction, or continue typing. To learn how free text is interpreted by the search engine, see Section 19.2, "Using free text search" . 19.1.1. Query syntax Available fields, resources to search, and the way the query is interpreted all depend on context, that is, the page where you perform the search. For example, the field "hostgroup" on the Hosts page is equivalent to the field "name" on the Host Groups page. The field type also determines available operators and accepted values. For a list of all operators, see Operators . For descriptions of value formats, see Values . 19.1.2. Query operators All operators that can be used between parameter and value are listed in the following table. Other symbols and special characters that might appear in a prediction-built query, such as colons, do not have special meaning and are treated as free text. Table 19.1. Comparison operators accepted by search Operator Short Name Description Example = EQUALS Accepts numerical, temporal, or text values. For text, exact case sensitive matches are returned. hostgroup = RHEL7 != NOT EQUALS ~ LIKE Accepts text or temporal values. Returns case insensitive matches. Accepts the following wildcards: _ for a single character, % or * for any number of characters including zero. If no wildcard is specified, the string is treated as if surrounded by wildcards: %rhel7% hostgroup ~ rhel% !~ NOT LIKE > GREATER THAN Accepts numerical or temporal values. For temporal values, the operator > is interpreted as "later than", and < as "earlier than". Both operators can be combined with EQUALS: >= <= registered_at > 10-January-2017 The search will return hosts that have been registered after the given date, that is, between 10th January 2017 and now. registered_at <= Yesterday The search will return hosts that have been registered yesterday or earlier. < LESS THAN ^ IN Compares an expression against a list of values, as in SQL. Returns matches that contain or not contain the values, respectively. release_version !^ 7 !^ NOT IN HAS or set? Returns values that are present or not present, respectively. has hostgroup or set? hostgroup On the Puppet Classes page, the search will return classes that are assigned to at least one host group. not has hostgroup or null? hostgroup On the Dashboard with an overview of hosts, the search will return all hosts that have no assigned host group. NOT HAS or null? Simple queries that follow the described syntax can be combined into more complex ones using logical operators AND, OR, and NOT. Alternative notations of the operators are also accepted: Table 19.2. Logical operators accepted by search Operator Alternative Notations Example and & && <whitespace> class = motd AND environment ~ production or | || errata_status = errata_needed || errata_status = security_needed not - ! hostgroup ~ rhel7 not status.failed 19.1.3. Query values Text Values Text containing whitespaces must be enclosed in quotes. 
A whitespace is otherwise interpreted as the AND operator. Examples: hostgroup = "Web servers" The search will return hosts with assigned host group named "Web servers". hostgroup = Web servers The search will return hosts in the host group Web with any field matching %servers%. Temporal Values Many date and time formats are accepted, including the following: "10 January 2017" "10 Jan 2017" 10-January-2017 10/January/2017 "January 10, 2017" Today, Yesterday, and the like. Warning Avoid ambiguous date formats, such as 02/10/2017 or 10-02-2017. 19.2. Using free text search When you enter free text, it will be searched for across multiple fields. For example, if you type "64", the search will return all hosts that have that number in their name, IP address, MAC address, and architecture. Note Multi-word queries must be enclosed in quotes, otherwise the whitespace is interpreted as the AND operator. Because of searching across all fields, free text search results are not very accurate and searching can be slow, especially on a large number of hosts. For this reason, we recommend that you avoid free text and use more specific, syntax-based queries whenever possible. 19.3. Managing bookmarks You can save search queries as bookmarks for reuse. You can also delete or modify a bookmark. Bookmarks appear only on the page on which they were created. On some pages, there are default bookmarks available for the common searches, for example, all active or disabled hosts. 19.3.1. Creating bookmarks This section details how to save a search query as a bookmark. You must save the search query on the relevant page to create a bookmark for that page, for example, saving a host related search query on the Hosts page. Procedure In the Satellite web UI, navigate to the page where you want to create a bookmark. In the Search field, enter the search query you want to save. Select the arrow to the right of the Search button and then select Bookmark this search . In the Name field, enter a name for the new bookmark. In the Search query field, ensure your search query is correct. Ensure the Public checkbox is set correctly: Select the Public checkbox to set the bookmark as public and visible to all users. Clear the Public checkbox to set the bookmark as private and only visible to the user who created it. Click Submit . To confirm the creation, either select the arrow to the right of the Search button to display the list of bookmarks, or navigate to Administer > Bookmarks and then check the Bookmarks list for the name of the bookmark. 19.3.2. Deleting bookmarks You can delete bookmarks on the Bookmarks page. Procedure In the Satellite web UI, navigate to Administer > Bookmarks . On the Bookmarks page, click Delete for the Bookmark you want to delete. When the confirmation window opens, click OK to confirm the deletion. To confirm the deletion, check the Bookmarks list for the name of the bookmark. 19.4. Using keyboard shortcuts You can use keyboard shortcuts to quickly focus search bars. To focus the vertical navigation search bar, press Ctrl + Shift + F . To focus the page search bar, press / .
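The same scoped-search syntax described above is accepted by the hammer CLI through its --search option, so a query refined in the web UI can be reused on the command line. The following is only a sketch: the host group name and date are examples, and the exact set of searchable fields depends on the resource you query and your Satellite version.
# Hosts in a specific host group (quotes protect the whitespace in the name).
hammer host list --search 'hostgroup = "Web servers"'
# Hosts registered after a given date that need security errata.
hammer host list --search 'registered_at > "10-January-2017" and errata_status = security_needed'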
[ "parameter operator value" ]
https://docs.redhat.com/en/documentation/red_hat_satellite/6.15/html/administering_red_hat_satellite/searching_and_bookmarking_admin
Chapter 9. TokenReview [authentication.k8s.io/v1]
Chapter 9. TokenReview [authentication.k8s.io/v1] Description TokenReview attempts to authenticate a token to a known user. Note: TokenReview requests may be cached by the webhook token authenticator plugin in the kube-apiserver. Type object Required spec 9.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata spec object TokenReviewSpec is a description of the token authentication request. status object TokenReviewStatus is the result of the token authentication request. 9.1.1. .spec Description TokenReviewSpec is a description of the token authentication request. Type object Property Type Description audiences array (string) Audiences is a list of the identifiers that the resource server presented with the token identifies as. Audience-aware token authenticators will verify that the token was intended for at least one of the audiences in this list. If no audiences are provided, the audience will default to the audience of the Kubernetes apiserver. token string Token is the opaque bearer token. 9.1.2. .status Description TokenReviewStatus is the result of the token authentication request. Type object Property Type Description audiences array (string) Audiences are audience identifiers chosen by the authenticator that are compatible with both the TokenReview and token. An identifier is any identifier in the intersection of the TokenReviewSpec audiences and the token's audiences. A client of the TokenReview API that sets the spec.audiences field should validate that a compatible audience identifier is returned in the status.audiences field to ensure that the TokenReview server is audience aware. If a TokenReview returns an empty status.audience field where status.authenticated is "true", the token is valid against the audience of the Kubernetes API server. authenticated boolean Authenticated indicates that the token was associated with a known user. error string Error indicates that the token couldn't be checked user object UserInfo holds the information about the user needed to implement the user.Info interface. 9.1.3. .status.user Description UserInfo holds the information about the user needed to implement the user.Info interface. Type object Property Type Description extra object Any additional information provided by the authenticator. extra{} array (string) groups array (string) The names of groups this user is a part of. uid string A unique value that identifies this user across time. If this user is deleted and another user by the same name is added, they will have different UIDs. username string The name that uniquely identifies this user among all active users. 9.1.4. .status.user.extra Description Any additional information provided by the authenticator. Type object 9.2. 
API endpoints The following API endpoints are available: /apis/oauth.openshift.io/v1/tokenreviews POST : create a TokenReview /apis/authentication.k8s.io/v1/tokenreviews POST : create a TokenReview 9.2.1. /apis/oauth.openshift.io/v1/tokenreviews Table 9.1. Global query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. pretty string If 'true', then the output is pretty printed. HTTP method POST Description create a TokenReview Table 9.2. Body parameters Parameter Type Description body TokenReview schema Table 9.3. HTTP responses HTTP code Reponse body 200 - OK TokenReview schema 201 - Created TokenReview schema 202 - Accepted TokenReview schema 401 - Unauthorized Empty 9.2.2. /apis/authentication.k8s.io/v1/tokenreviews Table 9.4. Global query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. 
- Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. pretty string If 'true', then the output is pretty printed. HTTP method POST Description create a TokenReview Table 9.5. Body parameters Parameter Type Description body TokenReview schema Table 9.6. HTTP responses HTTP code Response body 200 - OK TokenReview schema 201 - Created TokenReview schema 202 - Accepted TokenReview schema 401 - Unauthorized Empty
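A minimal way to exercise this endpoint from the command line is to post a TokenReview object and read back its status. The sketch below is an assumption-laden illustration: $TOKEN is a placeholder environment variable holding the bearer token you want to check, and because TokenReview objects are not persisted, oc create simply returns the reviewed object with its status.authenticated field populated.
# Submit a TokenReview and print the full response, including status.authenticated.
oc create -f - -o yaml <<EOF
apiVersion: authentication.k8s.io/v1
kind: TokenReview
spec:
  token: $TOKEN
EOF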
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.13/html/authorization_apis/tokenreview-authentication-k8s-io-v1
Chapter 2. Understanding build configurations
Chapter 2. Understanding build configurations The following sections define the concept of a build, build configuration, and outline the primary build strategies available. 2.1. BuildConfigs A build configuration describes a single build definition and a set of triggers for when a new build is created. Build configurations are defined by a BuildConfig , which is a REST object that can be used in a POST to the API server to create a new instance. A build configuration, or BuildConfig , is characterized by a build strategy and one or more sources. The strategy determines the process, while the sources provide its input. Depending on how you choose to create your application using Red Hat OpenShift Service on AWS, a BuildConfig is typically generated automatically for you if you use the web console or CLI, and it can be edited at any time. Understanding the parts that make up a BuildConfig and their available options can help if you choose to manually change your configuration later. The following example BuildConfig results in a new build every time a container image tag or the source code changes: BuildConfig object definition kind: BuildConfig apiVersion: build.openshift.io/v1 metadata: name: "ruby-sample-build" 1 spec: runPolicy: "Serial" 2 triggers: 3 - type: "GitHub" github: secret: "secret101" - type: "Generic" generic: secret: "secret101" - type: "ImageChange" source: 4 git: uri: "https://github.com/openshift/ruby-hello-world" strategy: 5 sourceStrategy: from: kind: "ImageStreamTag" name: "ruby-20-centos7:latest" output: 6 to: kind: "ImageStreamTag" name: "origin-ruby-sample:latest" postCommit: 7 script: "bundle exec rake test" 1 This specification creates a new BuildConfig named ruby-sample-build . 2 The runPolicy field controls whether builds created from this build configuration can be run simultaneously. The default value is Serial , which means new builds run sequentially, not simultaneously. 3 You can specify a list of triggers, which cause a new build to be created. 4 The source section defines the source of the build. The source type determines the primary source of input, and can be either Git , to point to a code repository location, Dockerfile , to build from an inline Dockerfile, or Binary , to accept binary payloads. It is possible to have multiple sources at once. See the documentation for each source type for details. 5 The strategy section describes the build strategy used to execute the build. You can specify a Source , Docker , or Custom strategy here. This example uses the ruby-20-centos7 container image that Source-to-image (S2I) uses for the application build. 6 After the container image is successfully built, it is pushed into the repository described in the output section. 7 The postCommit section defines an optional build hook.
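As a rough sketch of how this BuildConfig is typically used (the manifest file name, project name, and API server host are placeholders, and the webhook path follows the standard build.openshift.io layout), you might create the object and trigger builds as follows:

```
# Create the BuildConfig from a saved manifest and start a build manually
oc create -f ruby-sample-build.yaml
oc start-build ruby-sample-build --follow

# Trigger the same BuildConfig through its generic webhook trigger
curl -k -X POST \
  "https://<api-server>:6443/apis/build.openshift.io/v1/namespaces/<project>/buildconfigs/ruby-sample-build/webhooks/secret101/generic"
```

Because runPolicy is Serial, builds triggered while another build is running are queued and executed one at a time.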
[ "kind: BuildConfig apiVersion: build.openshift.io/v1 metadata: name: \"ruby-sample-build\" 1 spec: runPolicy: \"Serial\" 2 triggers: 3 - type: \"GitHub\" github: secret: \"secret101\" - type: \"Generic\" generic: secret: \"secret101\" - type: \"ImageChange\" source: 4 git: uri: \"https://github.com/openshift/ruby-hello-world\" strategy: 5 sourceStrategy: from: kind: \"ImageStreamTag\" name: \"ruby-20-centos7:latest\" output: 6 to: kind: \"ImageStreamTag\" name: \"origin-ruby-sample:latest\" postCommit: 7 script: \"bundle exec rake test\"" ]
https://docs.redhat.com/en/documentation/red_hat_openshift_service_on_aws/4/html/builds_using_buildconfig/understanding-buildconfigs
Chapter 1. Red Hat build of OpenJDK applications in containers
Chapter 1. Red Hat build of OpenJDK applications in containers Red Hat build of OpenJDK images have default startup scripts that automatically detect application JAR files and launch Java. The script's behavior can be customized using environment variables. For more information, see /help.md in the container. The Java applications in the /deployments directory of the OpenJDK image are run when the image loads. Note Containers that contain Red Hat build of OpenJDK applications are not automatically updated with security updates. Ensure that you update these images at least once every three months. Application JAR files can be fat JARs or thin JARs. Fat JARs contain all of the application's dependencies. Thin JARs reference other JARs that contain some, or all, of the application's dependencies. Thin JARs are only supported if: They have a flat classpath. All dependencies are JARs that are in the /deployments directory.
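The following is a minimal sketch of packaging a fat JAR into one of these images; the image name (ubi9/openjdk-21-runtime), the JAR path, and the published port are assumptions that you should adjust to your own build and to the image you actually pull from the Red Hat registry:

```
# Write a hypothetical Containerfile that copies the application JAR into /deployments
cat > Containerfile <<'EOF'
FROM registry.access.redhat.com/ubi9/openjdk-21-runtime
# The default startup script detects JAR files in /deployments and launches Java.
COPY target/my-app.jar /deployments/
EOF

podman build -t my-openjdk-app .
# Publish port 8080 only if the application actually serves HTTP on it.
podman run --rm -p 8080:8080 my-openjdk-app
```

Rebuilding the image on a regular schedule is one way to pick up the security updates mentioned in the note above.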
null
https://docs.redhat.com/en/documentation/red_hat_build_of_openjdk/21/html/packaging_red_hat_build_of_openjdk_21_applications_in_containers/openjdk-apps-in-containers
8.166. pango
8.166. pango 8.166.1. RHBA-2014:0585 - pango bug fix update Updated pango packages that fix two bugs are now available for Red Hat Enterprise Linux 6. Pango is a library for laying out and rendering of text, with an emphasis on internationalization. Pango forms the core of text and font handling for the GTK+ widget toolkit. Bug Fixes BZ# 885846 Prior to this update, the Pango library used an incorrect macro for specifying the location of its man pages. Consequently, after installing the pango packages, the man pages were placed in the wrong directory. This update fixes the relevant macro in the Pango spec file, and the man pages are now located in the correct directory. BZ# 1086690 Previously, the pango RPM scriptlet did not mask harmless error messages. As a consequence, although the migration was successful, the scriptlet printed error messages related to missing directories after an upgrade from Red Hat Enterprise Linux 6 to Red Hat Enterprise Linux 7. With this update, the scriptlet correctly determines the location of the directory that contains the cache file, and these harmless error messages no longer appear. Users of pango are advised to upgrade to these updated packages, which fix these bugs.
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.6_technical_notes/pango
Chapter 60. Storage
Chapter 60. Storage LVM does not support event-based autoactivation of incomplete volume groups If a volume group is not complete and physical volumes are missing, LVM does not support automatic LVM event-based activation of that volume group. This implies a setting of --activationmode complete whenever autoactivation takes place. For information on the --activationmode complete option and automatic activation, see the vgchange(8) and pvscan(8) man pages. Note that the event-driven autoactivation hooks are enabled when lvmetad is enabled with the global/use_lvmetad=1 setting in the /etc/lvm/lvm.conf configuration file. Also note that without autoactivation, there is a direct activation hook at the exact time during boot at which the volume groups are activated with only the physical volumes that are available at that time. Any physical volumes that appear later are not taken into account. This issue does not affect early boot in initramfs ( dracut ) nor does this affect direct activation from the command line using vgchange and lvchange calls, which default to degraded activation mode. (BZ# 1337220 ) The vdo service is disabled after upgrading to Red Hat Enterprise Linux 7.6 Upgrading from Red Hat Enterprise Linux 7.5 to 7.6 disables the vdo service if it was previously enabled. This is because of missing systemd macros in the vdo RPM package. The problem has been fixed in the 7.6 release, and upgrading from Red Hat Enterprise Linux 7.6 to a later release will no longer disable vdo . (BZ#1617896) Data corruption occurs on RAID 10 reshape on top of VDO. RAID 10 reshape (with both LVM and mdadm ) on top of VDO corrupts data. Stacking RAID 10 (or other RAID types) on top of VDO does not take advantage of the deduplication and compression capabilities of VDO and is not recommended. (BZ# 1528466 , BZ#1530776) System boot is sometimes delayed by ndctl A udev rule installed by the ndctl package sometimes delays the system boot process for several minutes on systems with Non-Volatile Dual In-line Memory Module (NVDIMM) devices. In such cases, systemd displays a message similar to the following: To work around the issue, disable the udev rule using the following command: After disabling the udev rule, the described problem no longer occurs. (BZ#1635441) LVM might cause data corruption in the first 128kB of allocatable space of a physical volume A bug in the I/O layer of LVM causes LVM to read and write back the first 128kB of data that immediately follows the LVM metadata on the disk. If another program or the file system is modifying these blocks when you use an LVM command, changes might be lost. As a consequence, this might lead to data corruption in rare cases. To work around this problem, avoid using LVM commands that change volume group (VG) metadata, such as lvcreate or lvextend , while logical volumes (LVs) in the VG are in use. (BZ# 1643651 )
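As an illustration of the activation modes discussed in the first issue above, the following hedged example shows direct command-line activation of a volume group named myvg (a placeholder); the --activationmode values are documented in the vgchange(8) man page:

```
# Activate only if every physical volume in the volume group is present
vgchange --activationmode complete -ay myvg

# Activate even with missing physical volumes (the default for direct activation)
vgchange --activationmode degraded -ay myvg
```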
[ "INFO: task systemd-udevd:1554 blocked for more than 120 seconds. nvdimm_bus_check_dimm_count+0x31/0xa0 [libnvdimm]", "rm /usr/lib/udev/rules.d/80-ndctl.rules" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/7.6_release_notes/known_issues_storage
Chapter 1. Ceph RESTful API
Chapter 1. Ceph RESTful API As a storage administrator, you can use the Ceph RESTful API, or simply the Ceph API, provided by the Red Hat Ceph Storage Dashboard to interact with the Red Hat Ceph Storage cluster. You can display information about the Ceph Monitors and OSDs, along with their respective configuration options. You can even create or edit Ceph pools. The Ceph API uses the following standards: HTTP 1.1 JSON MIME and HTTP Content Negotiation JWT These standards are OpenAPI 3.0 compliant, regulating the API syntax, semantics, content encoding, versioning, authentication, and authorization. Prerequisites A healthy running Red Hat Ceph Storage cluster. Access to the node running the Ceph Manager. 1.1. Versioning for the Ceph API A main goal for the Ceph RESTful API is to provide a stable interface. To achieve a stable interface, the Ceph API is built on the following principles: A mandatory explicit default version for all endpoints to avoid implicit defaults. Fine-grained change control per endpoint. The expected version from a specific endpoint is stated in the HTTP header. Syntax Example If the current Ceph API server is not able to address that specific version, a 415 - Unsupported Media Type response will be returned. Using semantic versioning. Major changes are backwards incompatible. Changes might result in non-additive changes to the request, and to the response formats for a specific endpoint. Minor changes are backwards and forwards compatible. Changes consist of additive changes to the request or response formats for a specific endpoint. 1.2. Authentication and authorization for the Ceph API Access to the Ceph RESTful API goes through two checkpoints. The first is authenticating that the request is made on behalf of a valid, existing user. The second is authorizing that the previously authenticated user can perform a specific action, such as creating, reading, updating, or deleting, on the target endpoint. Before users start using the Ceph API, they need a valid JSON Web Token (JWT). The /api/auth endpoint allows you to retrieve this token. Example This token must be used together with every API request by placing it within the Authorization HTTP header. Syntax Additional Resources See the Ceph user management chapter in the Red Hat Ceph Storage Administration Guide for more details. 1.3. Enabling and Securing the Ceph API module The Red Hat Ceph Storage Dashboard module offers RESTful API access to the storage cluster over an SSL-secured connection. Important If you disable SSL, user names and passwords are sent unencrypted to the Red Hat Ceph Storage Dashboard. Prerequisites Root-level access to a Ceph Monitor node. Ensure that you have at least one ceph-mgr daemon active. If you use a firewall, ensure that TCP port 8443 , for SSL, and TCP port 8080 , without SSL, are open on the node with the active ceph-mgr daemon. Procedure Log into the Cephadm shell: Example Enable the RESTful plug-in: Configure an SSL certificate. If your organization's certificate authority (CA) provides a certificate, then set it by using the certificate files: Syntax Example If you want to set unique node-based certificates, then add a HOST_NAME to the commands: Example Alternatively, you can generate a self-signed certificate. However, using a self-signed certificate does not provide the full security benefits of the HTTPS protocol: Warning Most modern web browsers will complain about self-signed certificates, which require you to confirm before establishing a secure connection. 
Create a user, set the password, and set the role: Syntax Example This example creates a user named user1 with the administrator role. Connect to the RESTful plug-in web page. Open a web browser and enter the following URL: Syntax Example If you used a self-signed certificate, confirm a security exception. Additional Resources The ceph dashboard --help command. The https:// HOST_NAME :8443/doc page, where HOST_NAME is the IP address or name of the node with the running ceph-mgr instance. For more information, see the Security Hardening guide within the Product Documentation for Red Hat Enterprise Linux for your OS version, on the Red Hat Customer Portal. 1.4. Questions and Answers 1.4.1. Getting information This section describes how to use the Ceph API to view information about the storage cluster, Ceph Monitors, OSDs, pools, and hosts. 1.4.1.1. How Can I View All Cluster Configuration Options? This section describes how to use the RESTful plug-in to view cluster configuration options and their values. The curl Command On the command line, use: Replace: USER with the user name CEPH_MANAGER with the IP address or short host name of the node with the active ceph-mgr instance CEPH_MANAGER_PORT with the TCP port number. The default TCP port number is 8443. Enter the user's password when prompted. If you used a self-signed certificate, use the --insecure option: Python In the Python interpreter, enter: Replace: CEPH_MANAGER with the IP address or short host name of the node with the active ceph-mgr instance USER with the user name PASSWORD with the user's password If you used a self-signed certificate, use the verify=False option: Web Browser In the web browser, enter: Replace: CEPH_MANAGER with the IP address or short host name of the node with the active ceph-mgr instance Enter the user name and password when prompted. Additional Resources The Configuration Guide for Red Hat Ceph Storage 7 1.4.1.2. How Can I View a Particular Cluster Configuration Option? This section describes how to view a particular cluster option and its value. The curl Command On the command line, use: Replace: USER with the user name CEPH_MANAGER with the IP address or short host name of the node with the active ceph-mgr instance ARGUMENT with the configuration option you want to view Enter the user's password when prompted. If you used a self-signed certificate, use the --insecure option: Python In the Python interpreter, enter: Replace: CEPH_MANAGER with the IP address or short host name of the node with the active ceph-mgr instance ARGUMENT with the configuration option you want to view USER with the user name PASSWORD with the user's password If you used a self-signed certificate, use the verify=False option: Web Browser In the web browser, enter: Replace: CEPH_MANAGER with the IP address or short host name of the node with the active ceph-mgr instance ARGUMENT with the configuration option you want to view Enter the user name and password when prompted. Additional Resources The Configuration Guide for Red Hat Ceph Storage 7 1.4.1.3. How Can I View All Configuration Options for OSDs? This section describes how to view all configuration options and their values for OSDs. The curl Command On the command line, use: Replace: USER with the user name CEPH_MANAGER with the IP address or short host name of the node with the active ceph-mgr instance Enter the user's password when prompted. 
If you used a self-signed certificate, use the --insecure option: Python In the Python interpreter, enter: Replace: CEPH_MANAGER with the IP address or short host name of the node with the active ceph-mgr instance USER with the user name PASSWORD with the user's password If you used a self-signed certificate, use the verify=False option: Web Browser In the web browser, enter: Replace: CEPH_MANAGER with the IP address or short host name of the node with the active ceph-mgr instance Enter the user name and password when prompted. Additional Resources The Configuration Guide for Red Hat Ceph Storage 7 1.4.1.4. How Can I View CRUSH Rules? This section describes how to view CRUSH rules. The curl Command On the command line, use: Replace: USER with the user name CEPH_MANAGER with the IP address or short host name of the node with the active ceph-mgr instance Enter the user's password when prompted. If you used a self-signed certificate, use the --insecure option: Python In the Python interpreter, enter: Replace: CEPH_MANAGER with the IP address or short host name of the node with the active ceph-mgr instance USER with the user name PASSWORD with the user's password If you used a self-signed certificate, use the verify=False option: Web Browser In the web browser, enter: Replace: CEPH_MANAGER with the IP address or short host name of the node with the active ceph-mgr instance Enter the user name and password when prompted. Additional Resources The CRUSH Rules section in the Administration Guide for Red Hat Ceph Storage 7. 1.4.1.5. How Can I View Information about Monitors? This section describes how to view information about a particular Monitor, such as: IP address Name Quorum status The curl Command On the command line, use: Replace: USER with the user name CEPH_MANAGER with the IP address or short host name of the node with the active ceph-mgr instance Enter the user's password when prompted. If you used a self-signed certificate, use the --insecure option: Python In the Python interpreter, enter: Replace: CEPH_MANAGER with the IP address or short host name of the node with the active ceph-mgr instance USER with the user name PASSWORD with the user's password If you used a self-signed certificate, use the verify=False option: Web Browser In the web browser, enter: Replace: CEPH_MANAGER with the IP address or short host name of the node with the active ceph-mgr instance Enter the user name and password when prompted. 1.4.1.6. How Can I View Information About a Particular Monitor? This section describes how to view information about a particular Monitor, such as: IP address Name Quorum status The curl Command On the command line, use: Replace: USER with the user name CEPH_MANAGER with the IP address or short host name of the node with the active ceph-mgr instance NAME with the short host name of the Monitor Enter the user's password when prompted. If you used a self-signed certificate, use the --insecure option: Python In the Python interpreter, enter: Replace: CEPH_MANAGER with the IP address or short host name of the node with the active ceph-mgr instance NAME with the short host name of the Monitor USER with the user name PASSWORD with the user's password If you used a self-signed certificate, use the verify=False option: Web Browser In the web browser, enter: Replace: CEPH_MANAGER with the IP address or short host name of the node with the active ceph-mgr instance NAME with the short host name of the Monitor Enter the user name and password when prompted. 1.4.1.7. 
How Can I View Information about OSDs? This section describes how to view information about OSDs, such as: IP address Its pools Affinity Weight The curl Command On the command line, use: Replace: USER with the user name CEPH_MANAGER with the IP address or short host name of the node with the active ceph-mgr instance Enter the user's password when prompted. If you used a self-signed certificate, use the --insecure option: Python In the Python interpreter, enter: Replace: CEPH_MANAGER with the IP address or short host name of the node with the active ceph-mgr instance USER with the user name PASSWORD with the user's password If you used a self-signed certificate, use the verify=False option: Web Browser In the web browser, enter: Replace: CEPH_MANAGER with the IP address or short host name of the node with the active ceph-mgr instance Enter the user name and password when prompted. 1.4.1.8. How Can I View Information about a Particular OSD? This section describes how to view information about a particular OSD, such as: IP address Its pools Affinity Weight The curl Command On the command line, use: Replace: USER with the user name CEPH_MANAGER with the IP address or short host name of the node with the active ceph-mgr instance ID with the ID of the OSD listed in the osd field Enter the user's password when prompted. If you used a self-signed certificate, use the --insecure option: Python In the Python interpreter, enter: Replace: CEPH_MANAGER with the IP address or short host name of the node with the active ceph-mgr instance ID with the ID of the OSD listed in the osd field USER with the user name PASSWORD with the user's password If you used a self-signed certificate, use the verify=False option: Web Browser In the web browser, enter: Replace: CEPH_MANAGER with the IP address or short host name of the node with the active ceph-mgr instance ID with the ID of the OSD listed in the osd field Enter the user name and password when prompted. 1.4.1.9. How Can I Determine What Processes Can Be Scheduled on an OSD? This section describes how to use the RESTful plug-in to view what processes, such as scrubbing or deep scrubbing, can be scheduled on an OSD. The curl Command On the command line, use: Replace: USER with the user name CEPH_MANAGER with the IP address or short host name of the node with the active ceph-mgr instance ID with the ID of the OSD listed in the osd field Enter the user's password when prompted. If you used a self-signed certificate, use the --insecure option: Python In the Python interpreter, enter: Replace: CEPH_MANAGER with the IP address or short host name of the node with the active ceph-mgr instance ID with the ID of the OSD listed in the osd field USER with the user name PASSWORD with the user's password If you used a self-signed certificate, use the verify=False option: Web Browser In the web browser, enter: Replace: CEPH_MANAGER with the IP address or short host name of the node with the active ceph-mgr instance ID with the ID of the OSD listed in the osd field Enter the user name and password when prompted. 1.4.1.10. How Can I View Information About Pools? This section describes how to view information about pools, such as: Flags Size Number of placement groups The curl Command On the command line, use: Replace: USER with the user name CEPH_MANAGER with the IP address or short host name of the node with the active ceph-mgr instance Enter the user's password when prompted. 
If you used a self-signed certificate, use the --insecure option: Python In the Python interpreter, enter: Replace: CEPH_MANAGER with the IP address or short host name of the node with the active ceph-mgr instance USER with the user name PASSWORD with the user's password If you used a self-signed certificate, use the verify=False option: Web Browser In the web browser, enter: Replace: CEPH_MANAGER with the IP address or short host name of the node with the active ceph-mgr instance Enter the user name and password when prompted. 1.4.1.11. How Can I View Information About a Particular Pool? This section describes how to view information about a particular pool, such as: Flags Size Number of placement groups The curl Command On the command line, use: Replace: USER with the user name CEPH_MANAGER with the IP address or short host name of the node with the active ceph-mgr instance ID with the ID of the pool listed in the pool field Enter the user's password when prompted. If you used a self-signed certificate, use the --insecure option: Python In the Python interpreter, enter: Replace: CEPH_MANAGER with the IP address or short host name of the node with the active ceph-mgr instance ID with the ID of the pool listed in the pool field USER with the user name PASSWORD with the user's password If you used a self-signed certificate, use the verify=False option: Web Browser In the web browser, enter: Replace: CEPH_MANAGER with the IP address or short host name of the node with the active ceph-mgr instance ID with the ID of the pool listed in the pool field Enter the user name and password when prompted. 1.4.1.12. How Can I View Information About Hosts? This section describes how to view information about hosts, such as: Host names Ceph daemons and their IDs Ceph version The curl Command On the command line, use: Replace: USER with the user name CEPH_MANAGER with the IP address or short host name of the node with the active ceph-mgr instance Enter the user's password when prompted. If you used a self-signed certificate, use the --insecure option: Python In the Python interpreter, enter: Replace: CEPH_MANAGER with the IP address or short host name of the node with the active ceph-mgr instance USER with the user name PASSWORD with the user's password If you used a self-signed certificate, use the verify=False option: Web Browser In the web browser, enter: Replace: CEPH_MANAGER with the IP address or short host name of the node with the active ceph-mgr instance Enter the user name and password when prompted. 1.4.1.13. How Can I View Information About a Particular Host? This section describes how to view information about a particular host, such as: Host names Ceph daemons and their IDs Ceph version The curl Command On the command line, use: Replace: USER with the user name CEPH_MANAGER with the IP address or short host name of the node with the active ceph-mgr instance HOST_NAME with the host name of the host listed in the hostname field Enter the user's password when prompted. 
If you used a self-signed certificate, use the --insecure option: Python In the Python interpreter, enter: Replace: CEPH_MANAGER with the IP address or short host name of the node with the active ceph-mgr instance HOST_NAME with the host name of the host listed in the hostname field USER with the user name PASSWORD with the user's password If you used a self-signed certificate, use the verify=False option: Web Browser In the web browser, enter: Replace: CEPH_MANAGER with the IP address or short host name of the node with the active ceph-mgr instance HOST_NAME with the host name of the host listed in the hostname field Enter the user name and password when prompted. 1.4.2. Changing Configuration This section describes how to use the Ceph API to change OSD configuration options, the state of an OSD, and information about pools. 1.4.2.1. How Can I Change OSD Configuration Options? This section describes how to use the RESTful plug-in to change OSD configuration options. The curl Command On the command line, use: Replace: OPTION with the option to modify; pause , noup , nodown , noout , noin , nobackfill , norecover , noscrub , nodeep-scrub VALUE with true or false USER with the user name CEPH_MANAGER with the IP address or short host name of the node with the active ceph-mgr instance Enter the user's password when prompted. If you used a self-signed certificate, use the --insecure option: Python In the Python interpreter, enter: Replace: CEPH_MANAGER with the IP address or short host name of the node with the active ceph-mgr instance OPTION with the option to modify; pause , noup , nodown , noout , noin , nobackfill , norecover , noscrub , nodeep-scrub VALUE with True or False USER with the user name PASSWORD with the user's password If you used a self-signed certificate, use the verify=False option: 1.4.2.2. How Can I Change the OSD State? This section describes how to use the RESTful plug-in to change the state of an OSD. The curl Command On the command line, use: Replace: STATE with the state to change ( in or up ) VALUE with true or false USER with the user name CEPH_MANAGER with the IP address or short host name of the node with the active ceph-mgr instance ID with the ID of the OSD listed in the osd field Enter the user's password when prompted. If you used a self-signed certificate, use the --insecure option: Python In the Python interpreter, enter: Replace: CEPH_MANAGER with the IP address or short host name of the node with the active ceph-mgr instance ID with the ID of the OSD listed in the osd field STATE with the state to change ( in or up ) VALUE with True or False USER with the user name PASSWORD with the user's password If you used a self-signed certificate, use the verify=False option: 1.4.2.3. How Can I Reweight an OSD? This section describes how to change the weight of an OSD. The curl Command On the command line, use: Replace: VALUE with the new weight USER with the user name CEPH_MANAGER with the IP address or short host name of the node with the active ceph-mgr instance ID with the ID of the OSD listed in the osd field Enter the user's password when prompted. 
If you used a self-signed certificate, use the --insecure option: Python In the Python interpreter, enter: Replace: CEPH_MANAGER with the IP address or short host name of the node with the active ceph-mgr instance ID with the ID of the OSD listed in the osd field VALUE with the new weight USER with the user name PASSWORD with the user's password If you used a self-signed certificate, use the verify=False option: 1.4.2.4. How Can I Change Information for a Pool? This section describes how to use the RESTful plug-in to change information for a particular pool. The curl Command On the command line, use: Replace: OPTION with the option to modify VALUE with the new value of the option USER with the user name CEPH_MANAGER with the IP address or short host name of the node with the active ceph-mgr instance ID with the ID of the pool listed in the pool field Enter the user's password when prompted. If you used a self-signed certificate, use the --insecure option: Python In the Python interpreter, enter: Replace: CEPH_MANAGER with the IP address or short host name of the node with the active ceph-mgr instance ID with the ID of the pool listed in the pool field OPTION with the option to modify VALUE with the new value of the option USER with the user name PASSWORD with the user's password If you used a self-signed certificate, use the verify=False option: 1.4.3. Administering the Cluster This section describes how to use the Ceph API to initialize scrubbing or deep scrubbing on an OSD, create a pool or remove data from a pool, remove requests, or create a request. 1.4.3.1. How Can I Run a Scheduled Process on an OSD? This section describes how to use the RESTful API to run scheduled processes, such as scrubbing or deep scrubbing, on an OSD. The curl Command On the command line, use: Replace: COMMAND with the process ( scrub , deep-scrub , or repair ) you want to start. Verify that the process is supported on the OSD. See Section 1.4.1.9, "How Can I Determine What Processes Can Be Scheduled on an OSD?" for details. USER with the user name CEPH_MANAGER with the IP address or short host name of the node with the active ceph-mgr instance ID with the ID of the OSD listed in the osd field Enter the user's password when prompted. If you used a self-signed certificate, use the --insecure option: Python In the Python interpreter, enter: Replace: CEPH_MANAGER with the IP address or short host name of the node with the active ceph-mgr instance ID with the ID of the OSD listed in the osd field COMMAND with the process ( scrub , deep-scrub , or repair ) you want to start. Verify that the process is supported on the OSD. See Section 1.4.1.9, "How Can I Determine What Processes Can Be Scheduled on an OSD?" for details. USER with the user name PASSWORD with the user's password If you used a self-signed certificate, use the verify=False option: 1.4.3.2. How Can I Create a New Pool? This section describes how to use the RESTful plug-in to create a new pool. The curl Command On the command line, use: Replace: NAME with the name of the new pool NUMBER with the number of the placement groups USER with the user name CEPH_MANAGER with the IP address or short host name of the node with the active ceph-mgr instance Enter the user's password when prompted. 
If you used a self-signed certificate, use the --insecure option: Python In the Python interpreter, enter: Replace: CEPH_MANAGER with the IP address or short host name of the node with the active ceph-mgr instance NAME with the name of the new pool NUMBER with the number of the placement groups USER with the user name PASSWORD with the user's password If you used a self-signed certificate, use the verify=False option: 1.4.3.3. How Can I Remove Pools? This section describes how to use the RESTful plug-in to remove a pool. This request is by default forbidden. To allow it, add the following parameter to the Ceph configuration file. The curl Command On the command line, use: Replace: USER with the user name CEPH_MANAGER with the IP address or short host name of the node with the active ceph-mgr instance ID with the ID of the pool listed in the pool field Enter the user's password when prompted. If you used a self-signed certificate, use the --insecure option: Python In the Python interpreter, enter: Replace: CEPH_MANAGER with the IP address or short host name of the node with the active ceph-mgr instance ID with the ID of the pool listed in the pool field USER with the user name PASSWORD with the user's password If you used a self-signed certificate, use the verify=False option:
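Putting the authentication and query patterns above together, the following is a minimal end-to-end sketch; the host name, user name, and password are placeholders, the -k option is only needed for self-signed certificates, and the python3 one-liner simply extracts the token field that the /api/auth endpoint returns:

```
# 1. Obtain a JWT from the /api/auth endpoint
TOKEN=$(curl -s -k -X POST "https://host01:8443/api/auth" \
  -H "Accept: application/vnd.ceph.api.v1.0+json" \
  -H "Content-Type: application/json" \
  -d '{"username": "user1", "password": "p@ssw0rd"}' \
  | python3 -c 'import sys, json; print(json.load(sys.stdin)["token"])')

# 2. Use the token in the Authorization header for subsequent requests
curl -s -k "https://host01:8443/api/osd" \
  -H "Accept: application/vnd.ceph.api.v1.0+json" \
  -H "Authorization: Bearer ${TOKEN}"
```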
[ "Accept: application/vnd.ceph.api.v MAJOR . MINOR +json", "Accept: application/vnd.ceph.api.v1.0+json", "curl -X POST \"https://example.com:8443/api/auth\" -H \"Accept: application/vnd.ceph.api.v1.0+json\" -H \"Content-Type: application/json\" -d '{\"username\": user1, \"password\": password1}'", "curl -H \"Authorization: Bearer TOKEN \"", "root@host01 ~]# cephadm shell", "ceph mgr module enable dashboard", "ceph dashboard set-ssl-certificate HOST_NAME -i CERT_FILE ceph dashboard set-ssl-certificate-key HOST_NAME -i KEY_FILE", "ceph dashboard set-ssl-certificate -i dashboard.crt ceph dashboard set-ssl-certificate-key -i dashboard.key", "ceph dashboard set-ssl-certificate host01 -i dashboard.crt ceph dashboard set-ssl-certificate-key host01 -i dashboard.key", "ceph dashboard create-self-signed-cert", "echo -n \" PASSWORD \" > PATH_TO_FILE / PASSWORD_FILE ceph dashboard ac-user-create USER_NAME -i PASSWORD_FILE ROLE", "echo -n \"p@ssw0rd\" > /root/dash-password.txt ceph dashboard ac-user-create user1 -i /root/dash-password.txt administrator", "https:// HOST_NAME :8443", "https://host01:8443", "curl --silent --user USER 'https:// CEPH_MANAGER : CEPH_MANAGER_PORT /api/cluster_conf'", "curl --silent --insecure --user USER 'https:// CEPH_MANAGER :8080/api/cluster_conf'", "python >> import requests >> result = requests.get('https:// CEPH_MANAGER :8080/api/cluster_conf', auth=(\" USER \", \" PASSWORD \")) >> print result.json()", "python >> import requests >> result = requests.get('https:// CEPH_MANAGER :8080/api/cluster_conf', auth=(\" USER \", \" PASSWORD \"), verify=False) >> print result.json()", "https:// CEPH_MANAGER :8080/api/cluster_conf", "curl --silent --user USER 'https:// CEPH_MANAGER :8080/api/cluster_conf/ ARGUMENT '", "curl --silent --insecure --user USER 'https:// CEPH_MANAGER :8080/api/cluster_conf/ ARGUMENT '", "python >> import requests >> result = requests.get('https:// CEPH_MANAGER :8080/api/cluster_conf/ ARGUMENT ', auth=(\" USER \", \" PASSWORD \")) >> print result.json()", "python >> import requests >> result = requests.get('https:// CEPH_MANAGER :8080/api/cluster_conf/ ARGUMENT ', auth=(\" USER \", \" PASSWORD \"), verify=False) >> print result.json()", "https:// CEPH_MANAGER :8080/api/cluster_conf/ ARGUMENT", "curl --silent --user USER 'https:// CEPH_MANAGER :8080/api/osd/flags'", "curl --silent --insecure --user USER 'https:// CEPH_MANAGER :8080/api/osd/flags'", "python >> import requests >> result = requests.get('https:// CEPH_MANAGER :8080/api/osd/flags', auth=(\" USER \", \" PASSWORD \")) >> print result.json()", "python >> import requests >> result = requests.get('https:// CEPH_MANAGER :8080/api/osd/flags', auth=(\" USER \", \" PASSWORD \"), verify=False) >> print result.json()", "https:// CEPH_MANAGER :8080/api/osd/flags", "curl --silent --user USER 'https:// CEPH_MANAGER :8080/api/crush_rule'", "curl --silent --insecure --user USER 'https:// CEPH_MANAGER :8080/api/crush_rule'", "python >> import requests >> result = requests.get('https:// CEPH_MANAGER :8080/api/crush_rule', auth=(\" USER \", \" PASSWORD \")) >> print result.json()", "python >> import requests >> result = requests.get('https:// CEPH_MANAGER :8080/api/crush_rule', auth=(\" USER \", \" PASSWORD \"), verify=False) >> print result.json()", "https:// CEPH_MANAGER :8080/api/crush_rule", "curl --silent --user USER 'https:// CEPH_MANAGER :8080/api/monitor'", "curl --silent --insecure --user USER 'https:// CEPH_MANAGER :8080/api/monitor'", "python >> import requests >> result = requests.get('https:// 
CEPH_MANAGER :8080/api/monitor', auth=(\" USER \", \" PASSWORD \")) >> print result.json()", "python >> import requests >> result = requests.get('https:// CEPH_MANAGER :8080/api/monitor', auth=(\" USER \", \" PASSWORD \"), verify=False) >> print result.json()", "https:// CEPH_MANAGER :8080/api/monitor", "curl --silent --user USER 'https:// CEPH_MANAGER :8080/api/monitor/ NAME '", "curl --silent --insecure --user USER 'https:// CEPH_MANAGER :8080/api/monitor/ NAME '", "python >> import requests >> result = requests.get('https:// CEPH_MANAGER :8080/api/monitor/ NAME ', auth=(\" USER \", \" PASSWORD \")) >> print result.json()", "python >> import requests >> result = requests.get('https:// CEPH_MANAGER :8080/api/monitor/ NAME ', auth=(\" USER \", \" PASSWORD \"), verify=False) >> print result.json()", "https:// CEPH_MANAGER :8080/api/monitor/ NAME", "curl --silent --user USER 'https:// CEPH_MANAGER :8080/api/osd'", "curl --silent --insecure --user USER 'https:// CEPH_MANAGER :8080/api/osd'", "python >> import requests >> result = requests.get('https:// CEPH_MANAGER :8080/api/osd/', auth=(\" USER \", \" PASSWORD \")) >> print result.json()", "python >> import requests >> result = requests.get('https:// CEPH_MANAGER :8080/api/osd/', auth=(\" USER \", \" PASSWORD \"), verify=False) >> print result.json()", "https:// CEPH_MANAGER :8080/api/osd", "curl --silent --user USER 'https:// CEPH_MANAGER :8080/api/osd/ ID '", "curl --silent --insecure --user USER 'https:// CEPH_MANAGER :8080/api/osd/ ID '", "python >> import requests >> result = requests.get('https:// CEPH_MANAGER :8080/api/osd/ ID ', auth=(\" USER \", \" PASSWORD \")) >> print result.json()", "python >> import requests >> result = requests.get('https:// CEPH_MANAGER :8080/api/osd/ ID ', auth=(\" USER \", \" PASSWORD \"), verify=False) >> print result.json()", "https:// CEPH_MANAGER :8080/api/osd/ ID", "curl --silent --user USER 'https:// CEPH_MANAGER :8080/api/osd/ ID /command'", "curl --silent --insecure --user USER 'https:// CEPH_MANAGER :8080/api/osd/ ID /command'", "python >> import requests >> result = requests.get('https:// CEPH_MANAGER :8080/api/osd/ ID /command', auth=(\" USER \", \" PASSWORD \")) >> print result.json()", "python >> import requests >> result = requests.get('https:// CEPH_MANAGER :8080/api/osd/ ID /command', auth=(\" USER \", \" PASSWORD \"), verify=False) >> print result.json()", "https:// CEPH_MANAGER :8080/api/osd/ ID /command", "curl --silent --user USER 'https:// CEPH_MANAGER :8080/api/pool'", "curl --silent --insecure --user USER 'https:// CEPH_MANAGER :8080/api/pool'", "python >> import requests >> result = requests.get('https:// CEPH_MANAGER :8080/api/pool', auth=(\" USER \", \" PASSWORD \")) >> print result.json()", "python >> import requests >> result = requests.get('https:// CEPH_MANAGER :8080/api/pool', auth=(\" USER \", \" PASSWORD \"), verify=False) >> print result.json()", "https:// CEPH_MANAGER :8080/api/pool", "curl --silent --user USER 'https:// CEPH_MANAGER :8080/api/pool/ ID '", "curl --silent --insecure --user USER 'https:// CEPH_MANAGER :8080/api/pool/ ID '", "python >> import requests >> result = requests.get('https:// CEPH_MANAGER :8080/api/pool/ ID ', auth=(\" USER \", \" PASSWORD \")) >> print result.json()", "python >> import requests >> result = requests.get('https:// CEPH_MANAGER :8080/api/pool/ ID ', auth=(\" USER \", \" PASSWORD \"), verify=False) >> print result.json()", "https:// CEPH_MANAGER :8080/api/pool/ ID", "curl --silent --user USER 'https:// CEPH_MANAGER :8080/api/host'", 
"curl --silent --insecure --user USER 'https:// CEPH_MANAGER :8080/api/host'", "python >> import requests >> result = requests.get('https:// CEPH_MANAGER :8080/api/host', auth=(\" USER \", \" PASSWORD \")) >> print result.json()", "python >> import requests >> result = requests.get('https:// CEPH_MANAGER :8080/api/host', auth=(\" USER \", \" PASSWORD \"), verify=False) >> print result.json()", "https:// CEPH_MANAGER :8080/api/host", "curl --silent --user USER 'https:// CEPH_MANAGER :8080/api/host/ HOST_NAME '", "curl --silent --insecure --user USER 'https:// CEPH_MANAGER :8080/api/host/ HOST_NAME '", "python >> import requests >> result = requests.get('https:// CEPH_MANAGER :8080/api/host/ HOST_NAME ', auth=(\" USER \", \" PASSWORD \")) >> print result.json()", "python >> import requests >> result = requests.get('https:// CEPH_MANAGER :8080/api/host/ HOST_NAME ', auth=(\" USER \", \" PASSWORD \"), verify=False) >> print result.json()", "https:// CEPH_MANAGER :8080/api/host/ HOST_NAME", "echo -En '{\" OPTION \": VALUE }' | curl --request PATCH --data @- --silent --user USER 'https:// CEPH_MANAGER :8080/api/osd/flags'", "echo -En '{\" OPTION \": VALUE }' | curl --request PATCH --data @- --silent --insecure --user USER 'https:// CEPH_MANAGER :8080/api/osd/flags'", "python >> import requests >> result = requests.patch('https:// CEPH_MANAGER :8080/api/osd/flags', json={\" OPTION \": VALUE }, auth=(\" USER \", \" PASSWORD \")) >> print result.json()", "python >> import requests >> result = requests.patch('https:// CEPH_MANAGER :8080/api/osd/flags', json={\" OPTION \": VALUE }, auth=(\" USER \", \" PASSWORD \"), verify=False) >> print result.json()", "echo -En '{\" STATE \": VALUE }' | curl --request PATCH --data @- --silent --user USER 'https:// CEPH_MANAGER :8080/api/osd/ ID '", "echo -En '{\" STATE \": VALUE }' | curl --request PATCH --data @- --silent --insecure --user USER 'https:// CEPH_MANAGER :8080/api/osd/ ID '", "python >> import requests >> result = requests.patch('https:// CEPH_MANAGER :8080/api/osd/ ID ', json={\" STATE \": VALUE }, auth=(\" USER \", \" PASSWORD \")) >> print result.json()", "python >> import requests >> result = requests.patch('https:// CEPH_MANAGER :8080/api/osd/ ID ', json={\" STATE \": VALUE }, auth=(\" USER \", \" PASSWORD \"), verify=False) >> print result.json()", "echo -En '{\"reweight\": VALUE }' | curl --request PATCH --data @- --silent --user USER 'https:// CEPH_MANAGER :8080/api/osd/ ID '", "echo -En '{\"reweight\": VALUE }' | curl --request PATCH --data @- --silent --insecure --user USER 'https:// CEPH_MANAGER :8080/api/osd/ ID '", "python >> import requests >> result = requests.patch('https:// CEPH_MANAGER :8080/osd/ ID ', json={\"reweight\": VALUE }, auth=(\" USER \", \" PASSWORD \")) >> print result.json()", "python >> import requests >> result = requests.patch('https:// CEPH_MANAGER :8080/api/osd/ ID ', json={\"reweight\": VALUE }, auth=(\" USER \", \" PASSWORD \"), verify=False) >> print result.json()", "echo -En '{\" OPTION \": VALUE }' | curl --request PATCH --data @- --silent --user USER 'https:// CEPH_MANAGER :8080/api/pool/ ID '", "echo -En '{\" OPTION \": VALUE }' | curl --request PATCH --data @- --silent --insecure --user USER 'https:// CEPH_MANAGER :8080/api/pool/ ID '", "python >> import requests >> result = requests.patch('https:// CEPH_MANAGER :8080/api/pool/ ID ', json={\" OPTION \": VALUE }, auth=(\" USER , \" PASSWORD \")) >> print result.json()", "python >> import requests >> result = requests.patch('https:// CEPH_MANAGER 
:8080/api/pool/ ID ', json={\" OPTION \": VALUE }, auth=(\" USER , \" PASSWORD \"), verify=False) >> print result.json()", "echo -En '{\"command\": \" COMMAND \"}' | curl --request POST --data @- --silent --user USER 'https:// CEPH_MANAGER :8080/api/osd/ ID /command'", "echo -En '{\"command\": \" COMMAND \"}' | curl --request POST --data @- --silent --insecure --user USER 'https:// CEPH_MANAGER :8080/api/osd/ ID /command'", "python >> import requests >> result = requests.post('https:// CEPH_MANAGER :8080/api/osd/ ID /command', json={\"command\": \" COMMAND \"}, auth=(\" USER \", \" PASSWORD \")) >> print result.json()", "python >> import requests >> result = requests.post('https:// CEPH_MANAGER :8080/api/osd/ ID /command', json={\"command\": \" COMMAND \"}, auth=(\" USER \", \" PASSWORD \"), verify=False) >> print result.json()", "echo -En '{\"name\": \" NAME \", \"pg_num\": NUMBER }' | curl --request POST --data @- --silent --user USER 'https:// CEPH_MANAGER :8080/api/pool'", "echo -En '{\"name\": \" NAME \", \"pg_num\": NUMBER }' | curl --request POST --data @- --silent --insecure --user USER 'https:// CEPH_MANAGER :8080/api/pool'", "python >> import requests >> result = requests.post('https:// CEPH_MANAGER :8080/api/pool', json={\"name\": \" NAME \", \"pg_num\": NUMBER }, auth=(\" USER \", \" PASSWORD \")) >> print result.json()", "python >> import requests >> result = requests.post('https:// CEPH_MANAGER :8080/api/pool', json={\"name\": \" NAME \", \"pg_num\": NUMBER }, auth=(\" USER \", \" PASSWORD \"), verify=False) >> print result.json()", "mon_allow_pool_delete = true", "curl --request DELETE --silent --user USER 'https:// CEPH_MANAGER :8080/api/pool/ ID '", "curl --request DELETE --silent --insecure --user USER 'https:// CEPH_MANAGER :8080/api/pool/ ID '", "python >> import requests >> result = requests.delete('https:// CEPH_MANAGER :8080/api/pool/ ID ', auth=(\" USER \", \" PASSWORD \")) >> print result.json()", "python >> import requests >> result = requests.delete('https:// CEPH_MANAGER :8080/api/pool/ ID ', auth=(\" USER \", \" PASSWORD \"), verify=False) >> print result.json()" ]
https://docs.redhat.com/en/documentation/red_hat_ceph_storage/7/html/developer_guide/ceph-restful-api
Chapter 16. Deploying and using the Red Hat build of OptaPlanner vehicle route planning starter application
Chapter 16. Deploying and using the Red Hat build of OptaPlanner vehicle route planning starter application As a developer, you can use the OptaWeb Vehicle Routing starter application to optimize your vehicle fleet deliveries. Prerequisites OpenJDK (JDK) 11 is installed. Red Hat build of Open JDK is available from the Software Downloads page in the Red Hat Customer Portal (login required). Apache Maven 3.6 or higher is installed. Maven is available from the Apache Maven Project website. 16.1. What is OptaWeb Vehicle Routing? The main purpose of many businesses is to transport various types of cargo. The goal of these businesses is to deliver a piece of cargo from the loading point to a destination and use its vehicle fleet in the most efficient way. One of the main objectives is to minimize travel costs which are measured in either time or distance. This type of optimization problem is referred to as the vehicle routing problem (VRP) and has many variations. Red Hat build of OptaPlanner can solve many of these vehicle routing variations and provides solution examples. OptaPlanner enables developers to focus on modeling business rules and requirements instead of learning constraint programming theory. OptaWeb Vehicle Routing expands the vehicle routing capabilities of OptaPlanner by providing a starter application that answers questions such as these: Where do I get the distances and travel times? How do I visualize the solution on a map? How do I build an application that runs in the cloud? OptaWeb Vehicle Routing uses OpenStreetMap (OSM) data files. For information about OpenStreetMap, see the OpenStreetMap web site. Use the following definitions when working with OptaWeb Vehicle Routing: Region : An arbitrary area on the map of Earth, represented by an OSM file. A region can be a country, a city, a continent, or a group of countries that are frequently used together. For example, the DACH region includes Germany (DE), Austria (AT), and Switzerland (CH). Country code : A two-letter code assigned to a country by the ISO-3166 standard. You can use a country code to filter geosearch results. Because you can work with a region that spans multiple countries (for example, the DACH region), OptaWeb Vehicle Routing accepts a list of country codes so that geosearch filtering can be used with such regions. For a list of country codes, see ISO 3166 Country Codes Geosearch : A type of query where you provide an address or a place name of a region as the search keyword and receive a number of GPS locations as a result. The number of locations returned depends on how unique the search keyword is. Because most place names are not unique, filter out nonrelevant results by including only places in the country or countries that are in your working region. 16.2. Download and build the OptaWeb Vehicle Routing deployment files You must download and prepare the deployment files before building and deploying OptaWeb Vehicle Routing. Procedure Navigate to the Software Downloads page in the Red Hat Customer Portal (login required), and select the product and version from the drop-down options: Product: Process Automation Manager Version: 7.13.5 Download Red Hat Process Automation Manager 7.13.5 Kogito and OptaPlanner 8 Decision Services Quickstarts ( rhpam-7.13.5-kogito-and-optaplanner-quickstarts.zip ). Extract the rhpam-7.13.5-kogito-and-optaplanner-quickstarts.zip file. 
Download Red Hat Process Automation Manager 7.13 Maven Repository Kogito and OptaPlanner 8 Maven Repository ( rhpam-7.13.5-kogito-maven-repository.zip ). Extract the rhpam-7.13.5-kogito-maven-repository.zip file. Copy the contents of the rhpam-7.13.5-kogito-maven-repository/maven-repository subdirectory into the ~/.m2/repository directory. Navigate to the optaweb-8.13.0.Final-redhat-00013/optaweb-vehicle-routing directory. Enter the following command to build OptaWeb Vehicle Routing: 16.3. Run OptaWeb Vehicle Routing locally using the runLocally.sh script Linux users can use the runLocally.sh Bash script to run OptaWeb Vehicle Routing. Note The runLocally.sh script does not run on macOS. If you cannot use the runLocally.sh script, see Section 16.4, "Configure and run OptaWeb Vehicle Routing manually" . The runLocally.sh script automates the following setup steps that otherwise must be carried out manually: Create the data directory. Download selected OpenStreetMap (OSM) files from Geofabrik. Try to associate a country code with each downloaded OSM file automatically. Build the project if the standalone JAR file does not exist. Launch OptaWeb Vehicle Routing by taking a single region argument or by selecting the region interactively. See the following sections for instructions about executing the runLocally.sh script: Section 16.3.1, "Run the OptaWeb Vehicle Routing runLocally.sh script in quick start mode" Section 16.3.2, "Run the OptaWeb Vehicle Routing runLocally.sh script in interactive mode" Section 16.3.3, "Run the OptaWeb Vehicle Routing runLocally.sh script in non-interactive mode" 16.3.1. Run the OptaWeb Vehicle Routing runLocally.sh script in quick start mode The easiest way to get started with OptaWeb Vehicle Routing is to run the runLocally.sh script without any arguments. Prerequisites OptaWeb Vehicle Routing has been successfully built with Maven as described in Section 16.2, "Download and build the OptaWeb Vehicle Routing deployment files" . Internet access is available. Procedure Enter the following command in the rhpam-7.13.5-kogito-and-optaplanner-quickstarts/optaweb-8.13.0.Final-redhat-00013/optaweb-vehicle-routing directory. If prompted to create the .optaweb-vehicle-routing directory, enter y . You are prompted to create this directory the first time you run the script. If prompted to download an OSM file, enter y . The first time that you run the script, OptaWeb Vehicle Routing downloads the Belgium OSM file. The application starts after the OSM file is downloaded. To open the OptaWeb Vehicle Routing user interface, enter the following URL in a web browser: Note The first time that you run the script, it will take a few minutes to start because the OSM file must be imported by GraphHopper and stored as a road network graph. The next time you run the runLocally.sh script, load times will be significantly faster. steps Section 16.6, "Using OptaWeb Vehicle Routing" 16.3.2. Run the OptaWeb Vehicle Routing runLocally.sh script in interactive mode Use interactive mode to see the list of downloaded OSM files and country codes assigned to each region. You can use the interactive mode to download additional OSM files from Geofabrik without visiting the website and choosing a destination for the download. Prerequisites OptaWeb Vehicle Routing has been successfully built with Maven as described in Section 16.2, "Download and build the OptaWeb Vehicle Routing deployment files" . Internet access is available. 
Procedure Change directory to rhpam-7.13.5-kogito-and-optaplanner-quickstarts/optaweb-8.13.0.Final-redhat-00013/optaweb-vehicle-routing . Enter the following command to run the script in interactive mode: At the Your choice prompt, enter d to display the download menu. A list of previously downloaded regions appears followed by a list of regions that you can download. Optional: Select a region from the list of previously downloaded regions: Enter the number associated with a region in the list of downloaded regions. Press the Enter key. Optional: Download a region: Enter the number associated with the region that you want to download. For example, to select the map of Europe, enter 5 . To download the map, enter d then press the Enter key. To download a specific region within the map, enter e then enter the number associated with the region that you want to download, and press the Enter key. Using large OSM files For the best user experience, use smaller regions such as individual European or US states. Using OSM files larger than 1 GB will require significant RAM size and take a lot of time (up to several hours) for the initial processing. The application starts after the OSM file is downloaded. To open the OptaWeb Vehicle Routing user interface, enter the following URL in a web browser: steps Section 16.6, "Using OptaWeb Vehicle Routing" 16.3.3. Run the OptaWeb Vehicle Routing runLocally.sh script in non-interactive mode Use OptaWeb Vehicle Routing in non-interactive mode to start OptaWeb Vehicle Routing with a single command that includes an OSM file that you downloaded previously. This is useful when you want to switch between regions quickly or when doing a demo. Prerequisites OptaWeb Vehicle Routing has been successfully built with Maven as described in Section 16.2, "Download and build the OptaWeb Vehicle Routing deployment files" . The OSM file for the region that you want to use has been downloaded. For information about downloading OSM files, see Section 16.3.2, "Run the OptaWeb Vehicle Routing runLocally.sh script in interactive mode" . Internet access is available. Procedure Change directory to rhpam-7.13.5-kogito-and-optaplanner-quickstarts/optaweb-8.13.0.Final-redhat-00013/optaweb-vehicle-routing . Execute the following command where <OSM_FILE_NAME> is an OSM file that you downloaded previously: steps Section 16.6, "Using OptaWeb Vehicle Routing" 16.3.4. Update the data directory You can update the data directory that OptaWeb Vehicle Routing uses if you want to use a different data directory. The default data directory is USDHOME/.optaweb-vehicle-routing . Prerequisites OptaWeb Vehicle Routing has been successfully built with Maven as described in Section 16.2, "Download and build the OptaWeb Vehicle Routing deployment files" . Procedure To use a different data directory, add the directory's absolute path to the .DATA_DIR_LAST file in the current data directory. To change country codes associated with a region, edit the corresponding file in the country_codes directory, in the current data directory. For example, if you downloaded an OSM file for Scotland and the script fails to guess the country code, set the content of country_codes/scotland-latest to GB. To remove a region, delete the corresponding OSM file from openstreetmap directory in the data directory and delete the region's directory in the graphhopper directory. 16.4. Configure and run OptaWeb Vehicle Routing manually The easiest way to run OptaWeb Vehicle Routing is to use the runlocally.sh script. 
However, if Bash is not available on your system you can manually complete the steps that the runlocally.sh script performs. Prerequisites OptaWeb Vehicle Routing has been successfully built with Maven as described in Section 16.2, "Download and build the OptaWeb Vehicle Routing deployment files" . Internet access is available. Procedure Download routing data. The routing engine requires geographical data to calculate the time it takes vehicles to travel between locations. You must download and store OpenStreetMap (OSM) data files on the local file system before you run OptaWeb Vehicle Routing. Note The OSM data files are typically between 100 MB to 1 GB and take time to download so it is a good idea to download the files before building or starting the OptaWeb Vehicle Routing application. Open http://download.geofabrik.de/ in a web browser. Click a region in the Sub Region list, for example Europe . The subregion page opens. In the Sub Regions table, download the OSM file ( .osm.pbf ) for a country, for example Belgium. Create the data directory structure. OptaWeb Vehicle Routing reads and writes several types of data on the file system. It reads OSM (OpenStreetMap) files from the openstreetmap directory, writes a road network graph to the graphhopper directory, and persists user data in a directory called db . Create a new directory dedicated to storing all of these data to make it easier to upgrade to a newer version of OptaWeb Vehicle Routing in the future and continue working with the data you created previously. Create the USDHOME/.optaweb-vehicle-routing directory. Create the openstreetmap directory in the USDHOME/.optaweb-vehicle-routing directory: Move all of your downloaded OSM files (files with the extension .osm.pbf ) to the openstreetmap directory. The rest of the directory structure is created by the OptaWeb Vehicle Routing application when it runs for the first time. After that, your directory structure is similar to the following example: Change directory to rhpam-7.13.5-kogito-and-optaplanner-quickstarts/optaweb-8.13.0.Final-redhat-00013/optaweb-vehicle-routing/optaweb-vehicle-routing-standalone/target . To run OptaWeb Vehicle Routing, enter the following command: In this command, replace the following variables: <OSM_FILE_NAME> : The OSM file for the region that you want to use and that you downloaded previously <COUNTRY_CODE_LIST> : A comma-separated list of country codes used to filter geosearch queries. For a list of country codes, see ISO 3166 Country Codes . The application starts after the OSM file is downloaded. In the following example, OptaWeb Vehicle Routing downloads the OSM map of Central America ( central-america-latest.osm.pbf ) and searches in the countries Belize (BZ) and Guatemala (GT). To open the OptaWeb Vehicle Routing user interface, enter the following URL in a web browser: steps Section 16.6, "Using OptaWeb Vehicle Routing" 16.5. Run OptaWeb Vehicle Routing on Red Hat OpenShift Container Platform Linux users can use the runOnOpenShift.sh Bash script to install OptaWeb Vehicle Routing on Red Hat OpenShift Container Platform. Note The runOnOpenShift.sh script does not run on macOS. Prerequisites You have access to an OpenShift cluster and the OpenShift command-line interface ( oc ) has been installed. For information about Red Hat OpenShift Container Platform, see Installing OpenShift Container Platform . 
OptaWeb Vehicle Routing has been successfully built with Maven as described in Section 16.2, "Download and build the OptaWeb Vehicle Routing deployment files" . Internet access is available. Procedure Log in to or start a Red Hat OpenShift Container Platform cluster. Enter the following command where <PROJECT_NAME> is the name of your new project: If necessary, change directory to rhpam-7.13.5-kogito-and-optaplanner-quickstarts/optaweb-8.13.0.Final-redhat-00013/optaweb-vehicle-routing . Enter the following command to execute the runOnOpenShift.sh script and download an OpenStreetMap (OSM) file: In this command, replace the following variables: <OSM_FILE_NAME> : The name of a file downloaded from <OSM_FILE_DOWNLOAD_URL> . <COUNTRY_CODE_LIST> : A comma-separated list of country codes used to filter geosearch queries. For a list of country codes, see ISO 3166 Country Codes . <OSM_FILE_DOWNLOAD_URL> : The URL of an OSM data file in PBF format accessible from OpenShift. The file will be downloaded during backend startup and saved as /deployments/local/<OSM_FILE_NAME> . In the following example, OptaWeb Vehicle Routing downloads the OSM map of Central America ( central-america-latest.osm.pbf ) and searches in the countries Belize (BZ) and Guatemala (GT). Note For help with the runOnOpenShift.sh script, enter ./runOnOpenShift.sh --help . 16.5.1. Updating the deployed OptaWeb Vehicle Routing application with local changes After you deploy your OptaWeb Vehicle Routing application on Red Hat OpenShift Container Platform, you can update the back end and front end. Prerequisites OptaWeb Vehicle Routing has been successfully built with Maven and deployed on OpenShift. Procedure To update the back end, perform the following steps: Change the source code and build the back-end module with Maven. Change directory to rhpam-7.13.5-kogito-and-optaplanner-quickstarts/optaweb-8.13.0.Final-redhat-00013/optaweb-vehicle-routing . Enter the following command to start the OpenShift build: oc start-build backend --from-dir=. --follow To update the front end, perform the following steps: Change the source code and build the front-end module with the npm utility. Change directory to sources/optaweb-vehicle-routing-frontend . Enter the following command to start the OpenShift build: oc start-build frontend --from-dir=docker --follow steps Section 16.6, "Using OptaWeb Vehicle Routing" 16.6. Using OptaWeb Vehicle Routing In the OptaWeb Vehicle Routing application, you can mark a number of locations on the map. The first location is assumed to be the depot. Vehicles must deliver goods from this depot to every other location that you marked. You can set the number of vehicles and the carrying capacity of every vehicle. However, the route is not guaranteed to use all vehicles. The application uses as many vehicles as required for an optimal route. The current version has certain limitations: Every delivery to a location is supposed to take one point of vehicle capacity. For example, a vehicle with a capacity of 10 can visit up to 10 locations before returning to the depot. Setting custom names of vehicles and locations is not supported. 16.6.1. Creating a route To create an optimal route, use the Demo tab of the OptaWeb Vehicle Routing user interface. Prerequisites OptaWeb Vehicle Routing is running and you have access to the user interface. Procedure In OptaWeb Vehicle Routing, click Demo to open the Demo tab. Use the blue minus and plus buttons above the map to set the number of vehicles. 
Each vehicle has a default capacity of 10. Use the plus button in a square on the map to zoom in as required. Note Do not double-click to zoom in. A double click also creates a location. Click a location for the depot. Click other locations on the map for delivery points. If you want to delete a location: Hover the mouse cursor over the location to see the location name. Find the location name in the list in the left part of the screen. Click the X icon next to the name. Every time you add or remove a location or change the number of vehicles, the application creates and displays a new optimal route. If the solution uses several vehicles, the application shows the route for every vehicle in a different color. 16.6.2. Viewing and setting other details You can use other tabs in the OptaWeb Vehicle Routing user interface to view and set additional details. Prerequisites OptaWeb Vehicle Routing is running and you have access to the user interface. Procedure Click the Vehicles tab to view, add, and remove vehicles, and also set the capacity for every vehicle. Click the Visits tab to view and remove locations. Click the Route tab to select each vehicle and view the route for the selected vehicle. 16.6.3. Creating custom data sets with OptaWeb Vehicle Routing There is a built-in demo data set consisting of several large Belgian cities. If you want to have more demos available in the Load demo menu, you can prepare your own data sets. Procedure In OptaWeb Vehicle Routing, add a depot and one or more visits by clicking on the map or using geosearch. Click Export and save the file in the data set directory. Note The data set directory is the directory specified in the app.demo.data-set-dir property. If the application is running through the runLocally.sh script, the data set directory is set to USDHOME/.optaweb-vehicle-routing/dataset . Otherwise, the property is taken from the application.properties file and defaults to rhpam-7.13.5-kogito-and-optaplanner-quickstarts/optaweb-8.13.0.Final-redhat-00013/optaweb-vehicle-routing/optaweb-vehicle-routing-standalone/target/local/dataset . You can edit the app.demo.data-set-dir property to specify a different data directory. Edit the YAML file and choose a unique name for the data set. Restart the back end. After you restart the back end, files in the data set directory appear in the Load demo menu. 16.6.4. Troubleshooting OptaWeb Vehicle Routing If OptaWeb Vehicle Routing behaves unexpectedly, follow this procedure to troubleshoot the problem. Prerequisites OptaWeb Vehicle Routing is running and behaving unexpectedly. Procedure To identify issues, review the back-end terminal output log. To resolve issues, remove the back-end database: Stop the back end by pressing Ctrl+C in the back-end terminal window. Remove the optaweb-vehicle-routing/optaweb-vehicle-routing-backend/local/db directory. Restart OptaWeb Vehicle Routing. 16.7. OptaWeb Vehicle Routing development guide This section describes how to configure and run the back-end and front-end modules in development mode. 16.7.1. OptaWeb Vehicle Routing project structure The OptaWeb Vehicle Routing project is a multi-module Maven project. Figure 16.1. Module dependency tree diagram The back-end and front-end modules are at the bottom of the module tree. These modules contain the application source code. The standalone module is an assembly module that combines the back end and front end into a single executable JAR file. The distribution module represents the final assembly step. 
It takes the standalone application and the documentation and wraps them in an archive that is easy to distribute. The back end and front end are separate projects that you can build and deploy separately. In fact, they are written in completely different languages and built with different tools. Both projects have tools that provide a modern developer experience with fast turn-around between code changes and the running application. The following sections describe how to run both back-end and front-end projects in development mode. 16.7.2. The OptaWeb Vehicle Routing back-end module The back-end module contains a server-side application that uses Red Hat build of OptaPlanner to optimize vehicle routes. Optimization is a CPU-intensive computation that must avoid any I/O operations in order to perform to its full potential. Because one of the chief objectives is to minimize travel cost, either time or distance, OptaWeb Vehicle Routing keeps the travel cost information in RAM. While solving, OptaPlanner needs to know the travel cost between every pair of locations entered by the user. This information is stored in a structure called the distance matrix . When you enter a new location, OptaWeb Vehicle Routing calculates the travel cost between the new location and every other location that has been entered so far, and stores the travel cost in the distance matrix. The travel cost calculation is performed by the GraphHopper routing engine. The back-end module implements the following additional functionality: Persistence WebSocket connection for the front end Data set loading, export, and import To learn more about the back-end code architecture, see Section 16.8, "OptaWeb Vehicle Routing back-end architecture" . The following sections describe how to configure and run the back end in development mode. 16.7.2.1. Running the OptaWeb Vehicle Routing back-end module You can run the back-end module in Quarkus development mode. Prerequisites OptaWeb Vehicle Routing has been configured as described in Section 16.4, "Configure and run OptaWeb Vehicle Routing manually" . Procedure Change directory to rhpam-7.13.5-kogito-and-optaplanner-quickstarts/optaweb-8.13.0.Final-redhat-00013/optaweb-vehicle-routing/optaweb-vehicle-routing-backend . To run the back end in development mode, enter the following command: mvn compile quarkus:dev 16.7.2.2. Running the OptaWeb Vehicle Routing back-end module from IntelliJ IDEA Ultimate You can use IntelliJ IDEA Ultimate to run the OptaWeb Vehicle Routing back-end module to make it easier to develop your project. IntelliJ IDEA Ultimate includes a Quarkus plug-in that automatically creates run configurations for modules that use the Quarkus framework. Procedure Use the optaweb-vehicle-routing-backend run configuration to run the back end. Additional resources For more information, see Run the Quarkus application . 16.7.2.3. Quarkus development mode In development mode, if there are changes to the back-end source code or configuration and you refresh the browser tab where the front end runs, the back end automatically restarts. Learn more about Quarkus development mode . 16.7.2.4. Changing OptaWeb Vehicle Routing back-end module system property values You can temporarily or permanently override the default system property values of the OptaWeb Vehicle Routing back-end module. The OptaWeb Vehicle Routing back-end module system properties are stored in the /src/main/resources/application.properties file. This file is under version control. 
Use it to permanently store default configuration property values and to define Quarkus profiles. Prerequisites The OptaWeb Vehicle Routing starter application has been downloaded and extracted. For information, see Section 16.2, "Download and build the OptaWeb Vehicle Routing deployment files" . Procedure To temporarily override a default system property value, include the -D<PROPERTY>=<VALUE> argument when you run the mvn or java command, where <PROPERTY> is the name of the property that you want to change and <VALUE> is the value that you want to temporarily assign to that property. The following example shows how to temporarily change the value of the quarkus.http.port system property to 8181 when you use Maven to compile a Quarkus project in dev mode: This temporarily changes the value of the property stored in the /src/main/resources/application.properties file. To change a configuration value permanently, for example to store a configuration that is specific to your development environment, copy the contents of the env-example file to the optaweb-vehicle-routing-backend/.env file. This file is excluded from version control and therefore it does not exist when you clone the repository. You can make changes in the .env file without affecting the Git working tree. Additional resources For a complete list of OptaWeb Vehicle Routing configuration properties, see Section 16.9, "OptaWeb Vehicle Routing back-end configuration properties" . 16.7.2.5. OptaWeb Vehicle Routing backend logging OptaWeb Vehicle Routing uses the SLF4J API and Logback as the logging framework. For more information, see Quarkus - Configuring Logging . 16.7.3. Working with the OptaWeb Vehicle Routing front-end module The front-end project was bootstrapped with Create React App . Create React App provides a number of scripts and dependencies that help with development and with building the application for production. Prerequisites The OptaWeb Vehicle Routing starter application has been downloaded and extracted. For information, see Section 16.2, "Download and build the OptaWeb Vehicle Routing deployment files" . Procedure On Fedora, enter the following command to set up the development environment: sudo dnf install npm See Downloading and installing Node.js and npm for more information about installing npm. Change directory to rhpam-7.13.5-kogito-and-optaplanner-quickstarts/optaweb-8.13.0.Final-redhat-00013/optaweb-vehicle-routing/optaweb-vehicle-routing-frontend . Install npm dependencies: npm install Unlike Maven, the npm package manager installs dependencies in node_modules under the project directory and does that only when you execute npm install . Whenever the dependencies listed in package.json change, for example when you pull changes to the master branch, you must execute npm install before you run the development server. Enter the following command to run the development server: npm start If it does not open automatically, open http://localhost:3000/ in a web browser. By default, the npm start command attempts to open this URL in your default browser. Note If you do not want the npm start command to open a new browser tab each time you run it, export the BROWSER=none environment variable. You can use .env.local file to make this preference permanent. To do that, enter the following command: echo BROWSER=none >> .env.local The browser refreshes the page whenever you make changes in the front-end source code. 
The development server process running in the terminal picks up the changes as well and prints compilation and lint errors to the console. Enter the following command to run tests: Change the value of the REACT_APP_BACKEND_URL environment variable to specify the location of the back-end project to be used by npm when you execute npm start or npm run build , for example: Note Environment variables are hard coded inside the JavaScript bundle during the npm build process, so you must specify the back-end location before you build and deploy the front end. To learn more about the React environment variables, see Adding Custom Environment Variables . To build the front end, enter one of the following commands: 16.8. OptaWeb Vehicle Routing back-end architecture Domain model and use cases are essential for the application. The OptaWeb Vehicle Routing domain model is at the center of the architecture and is surround by the application layer that embeds use cases. Functions such as route optimization, distance calculation, persistence, and network communication are considered implementation details and are placed at the outermost layer of the architecture. Figure 16.2. Diagram of application layers 16.8.1. Code organization The back-end code is organized in three layers, illustrated in the preceding graphic. The service package contains the application layer that implements use cases. The plugin package contains the infrastructure layer. Code in each layer is further organized by function. This means that each service or plug-in has its own package. 16.8.2. Dependency rules Compile-time dependencies are only allowed to point from outer layers towards the center. Following this rule helps to keep the domain model independent of underlying frameworks and other implementation details and models the behavior of business entities more precisely. With presentation and persistence being pushed out to the periphery, it is easier to test the behavior of business entities and use cases. The domain has no dependencies. Services only depend on the domain. If a service needs to send a result (for example to the database or to the client), it uses an output boundary interface. Its implementation is injected by the contexts and dependency injection (CDI) container. Plug-ins depend on services in two ways. First, they invoke services based on events such as a user input or a route update coming from the optimization engine. Services are injected into plug-ins which moves the burden of their construction and dependency resolution to the IoC container. Second, plug-ins implement service output boundary interfaces to handle use case results, for example persisting changes to the database or sending a response to the web UI. 16.8.3. The domain package The domain package contains business objects that model the domain of this project, for example Location , Vehicle , Route . These objects are strictly business-oriented and must not be influenced by any tools and frameworks, for example object-relational mapping tools and web service frameworks. 16.8.4. The service package The service package contains classes that implement use cases . A use case describes something that you want to do, for example adding a new location, changing vehicle capacity, or finding coordinates for an address. The business rules that govern use cases are expressed using the domain objects. Services often need to interact with plug-ins in the outer layer, such as persistence, web, and optimization. 
To satisfy the dependency rules between layers, the interaction between services and plug-ins is expressed in terms of interfaces that define the dependencies of a service. A plug-in can satisfy a dependency of a service by providing a bean that implements the boundary interface of the service. The CDI container creates an instance of the plug-in bean and injects it to the service at runtime. This is an example of the inversion of control principle. 16.8.5. The plugin package The plugin package contains infrastructure functions such as optimization, persistence, routing, and network. 16.9. OptaWeb Vehicle Routing back-end configuration properties You can set the OptaWeb Vehicle Routing application properties listed in the following table. Property Type Example Description app.demo.data-set-dir Relative or absolute path /home/user/.optaweb-vehicle-routing/dataset Custom data sets are loaded from this directory. Defaults to local/dataset . app.persistence.h2-dir Relative or absolute path /home/user/.optaweb-vehicle-routing/db The directory used by H2 to store the database file. Defaults to local/db . app.region.country-codes List of ISO 3166-1 alpha-2 country codes US , GB,IE , DE,AT,CH , may be empty Restricts geosearch results. app.routing.engine Enumeration air , graphhopper Routing engine implementation. Defaults to graphhopper . app.routing.gh-dir Relative or absolute path /home/user/.optaweb-vehicle-routing/graphhopper The directory used by GraphHopper to store road network graphs. Defaults to local/graphhopper . app.routing.osm-dir Relative or absolute path /home/user/.optaweb-vehicle-routing/openstreetmap The directory that contains OSM files. Defaults to local/openstreetmap . app.routing.osm-file File name belgium-latest.osm.pbf Name of the OSM file to be loaded by GraphHopper. The file must be placed under app.routing.osm-dir . optaplanner.solver.termination.spent-limit java.time.Duration 1m 150s P2dT21h ( PnDTnHnMn.nS ) How long the solver should run after a location change occurs. server.address IP address or hostname 10.0.0.123 , my-vrp.geo-1.openshiftapps.com Network address to which to bind the server. server.port Port number 4000 , 8081 Server HTTP port.
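For illustration only, the following minimal sketch combines several of the properties from the table above: it assumes the default data directory, the Belgium OSM file from Geofabrik, and the standalone JAR built as described in Section 16.2. The download URL, file name, and country code are assumptions; adjust them for your own region.

# Illustrative sketch: fetch the Belgium OSM file into the default data directory
# and start the standalone JAR with the region-related properties listed above.
mkdir -p $HOME/.optaweb-vehicle-routing/openstreetmap
wget -P $HOME/.optaweb-vehicle-routing/openstreetmap \
  http://download.geofabrik.de/europe/belgium-latest.osm.pbf
java \
  -Dapp.routing.osm-dir=$HOME/.optaweb-vehicle-routing/openstreetmap \
  -Dapp.routing.osm-file=belgium-latest.osm.pbf \
  -Dapp.region.country-codes=BE \
  -jar quarkus-app/quarkus-run.jar

As in Section 16.4, run the java command from the optaweb-vehicle-routing-standalone/target directory so that quarkus-app/quarkus-run.jar resolves correctly.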
[ "mvn clean package -DskipTests", "./runLocally.sh", "http://localhost:8080", "./runLocally.sh -i", "http://localhost:8080", "./runLocally.sh <OSM_FILE_NAME>", "USDHOME/.optaweb-vehicle-routing └── openstreetmap", "USDHOME/.optaweb-vehicle-routing ├── db │ └── vrp.mv.db ├── graphhopper │ └── belgium-latest └── openstreetmap └── belgium-latest.osm.pbf", "java -Dapp.demo.data-set-dir=USDHOME/.optaweb-vehicle-routing/dataset -Dapp.persistence.h2-dir=USDHOME/.optaweb-vehicle-routing/db -Dapp.routing.gh-dir=USDHOME/.optaweb-vehicle-routing/graphhopper -Dapp.routing.osm-dir=USDHOME/.optaweb-vehicle-routing/openstreetmap -Dapp.routing.osm-file=<OSM_FILE_NAME> -Dapp.region.country-codes=<COUNTRY_CODE_LIST> -jar quarkus-app/quarkus-run.jar", "java -Dapp.demo.data-set-dir=USDHOME/.optaweb-vehicle-routing/dataset -Dapp.persistence.h2-dir=USDHOME/.optaweb-vehicle-routing/db -Dapp.routing.gh-dir=USDHOME/.optaweb-vehicle-routing/graphhopper -Dapp.routing.osm-dir=USDHOME/.optaweb-vehicle-routing/openstreetmap -Dapp.routing.osm-file=entral-america-latest.osm.pbf -Dapp.region.country-codes=BZ,GT -jar quarkus-app/quarkus-run.jar", "http://localhost:8080", "new-project <PROJECT_NAME>", "./runOnOpenShift.sh <OSM_FILE_NAME> <COUNTRY_CODE_LIST> <OSM_FILE_DOWNLOAD_URL>", "./runOnOpenShift.sh central-america-latest.osm.pbf BZ,GT http://download.geofabrik.de/europe/central-america-latest.osm.pbf", "start-build backend --from-dir=. --follow", "start-build frontend --from-dir=docker --follow", "mvn compile quarkus:dev", "mvn compile quarkus:dev -Dquarkus.http.port=8181", "sudo dnf install npm", "npm install", "npm start", "echo BROWSER=none >> .env.local", "npm test", "REACT_APP_BACKEND_URL=http://10.0.0.123:8081", "./mvnw install", "mvn install", "org.optaweb.vehiclerouting ├── domain ├── plugin # Infrastructure layer │ ├── persistence │ ├── planner │ ├── routing │ └── rest └── service # Application layer ├── demo ├── distance ├── error ├── location ├── region ├── reload ├── route └── vehicle" ]
https://docs.redhat.com/en/documentation/red_hat_process_automation_manager/7.13/html/developing_solvers_with_red_hat_build_of_optaplanner_in_red_hat_process_automation_manager/assembly-business-optimizer-vrp
10.2.4. User and System Connections
10.2.4. User and System Connections NetworkManager connections are always either user connections or system connections . Depending on the system-specific policy that the administrator has configured, users may need root privileges to create and modify system connections. NetworkManager 's default policy enables users to create and modify user connections, but requires them to have root privileges to add, modify or delete system connections. User connections are so-called because they are specific to the user who creates them. In contrast to system connections, whose configurations are stored under the /etc/sysconfig/network-scripts/ directory (mainly in ifcfg- <network_type> interface configuration files), user connection settings are stored in the GConf configuration database and the GNOME keyring, and are only available during login sessions for the user who created them. Thus, logging out of the desktop session causes user-specific connections to become unavailable. Note Because NetworkManager uses the GConf and GNOME keyring applications to store user connection settings, and because these settings are specific to your desktop session, it is highly recommended to configure your personal VPN connections as user connections. If you do so, other Non- root users on the system cannot view or access these connections in any way. System connections, on the other hand, become available at boot time and can be used by other users on the system without first logging in to a desktop session. NetworkManager can quickly and conveniently convert user to system connections and vice versa. Converting a user connection to a system connection causes NetworkManager to create the relevant interface configuration files under the /etc/sysconfig/network-scripts/ directory, and to delete the GConf settings from the user's session. Conversely, converting a system to a user-specific connection causes NetworkManager to remove the system-wide configuration files and create the corresponding GConf/GNOME keyring settings. Figure 10.5. The Available to all users check box controls whether connections are user-specific or system-wide Procedure 10.2. Changing a Connection to be User-Specific instead of System-Wide, or Vice-Versa Note Depending on the system's policy, you may need root privileges on the system in order to change whether a connection is user-specific or system-wide. Right-click on the NetworkManager applet icon in the Notification Area and click Edit Connections . The Network Connections window appears. If needed, select the arrow head (on the left hand side) to hide and reveal the types of available network connections. Select the specific connection that you want to configure and click Edit . Check the Available to all users check box to ask NetworkManager to make the connection a system-wide connection. Depending on system policy, you may then be prompted for the root password by the PolicyKit application. If so, enter the root password to finalize the change. Conversely, uncheck the Available to all users check box to make the connection user-specific.
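As a quick, illustrative way to confirm the result of such a conversion, you can inspect the interface configuration files mentioned above. The file name ifcfg-eth0 below is only an example; the actual name depends on the connection.

# List the interface configuration files that back system connections:
ls /etc/sysconfig/network-scripts/ifcfg-*
# View one of them (the file name depends on the connection; eth0 is an example):
cat /etc/sysconfig/network-scripts/ifcfg-eth0

A connection made available to all users appears here as an ifcfg file, whereas a user-specific connection does not, because its settings are stored in GConf and the GNOME keyring.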
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/deployment_guide/sec-user_and_system_connections
8.6. Managed Resources
8.6. Managed Resources You can set a resource to unmanaged mode, which indicates that the resource is still in the configuration but Pacemaker does not manage the resource. The following command sets the indicated resources to unmanaged mode. The following command sets resources to managed mode, which is the default state. You can specify the name of a resource group with the pcs resource manage or pcs resource unmanage command. The command will act on all of the resources in the group, so that you can set all of the resources in a group to managed or unmanaged mode with a single command and then manage the contained resources individually.
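As an illustrative sketch, the following commands take a single resource and then a whole resource group out of Pacemaker's control and later return them to managed mode; the resource and group names are placeholders, not values required by the cluster.

# Stop Pacemaker from managing one resource and one resource group:
pcs resource unmanage VirtualIP
pcs resource unmanage apachegroup
# Unmanaged resources are flagged in the cluster status output:
pcs status
# Return them to the default managed mode when maintenance is finished:
pcs resource manage VirtualIP
pcs resource manage apachegroup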
[ "pcs resource unmanage resource1 [ resource2 ]", "pcs resource manage resource1 [ resource2 ]" ]
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/high_availability_add-on_reference/s1-managedresource-haar
Red Hat Virtualization Upgrade Overview
Red Hat Virtualization Upgrade Overview This guide explains how to upgrade the following environments to Red Hat Virtualization 4.3 or 4.4 : Self-hosted engine, local database : Both the Data Warehouse database and the Manager database are installed on the Manager. Standalone manager, local database : Both the Data Warehouse database and the Manager database are installed on the Manager. Standalone manager, remote database : Either the Data Warehouse database or the Manager database, or both, are on a separate machine. Note For a checklist of upgrade instructions, you can use the RHV Upgrade Helper . This application asks you to fill in a checklist for your upgrade path and current environment, and presents the applicable upgrade steps. Important Plan any necessary downtime in advance. After you update the clusters' compatibility versions during the upgrade, a new hardware configuration is automatically applied to each virtual machine once it reboots. You must reboot any running or suspended VMs as soon as possible to apply the configuration changes. Select the appropriate instructions for your environment from the following table. If your Manager and host versions differ (if you have previously upgraded the Manager but not the hosts), follow the instructions that match the Manager's version. Table 1. Supported Upgrade Paths Current Manager version Target Manager version Relevant section 4.3 4.4 Self-hosted engine, local database environment: Upgrading a self-Hosted engine from Red Hat Virtualization 4.3 to 4.4 Local database environment - Upgrading from Red Hat Virtualization 4.3 to 4.4 Remote database environment: Upgrading a Remote Database Environment from Red Hat Virtualization 4.3 to 4.4 4.2 4.3 Self-hosted engine, local database environment: Upgrading a Self-Hosted Engine from Red Hat Virtualization 4.2 to 4.3 Local database environment: Upgrading from Red Hat Virtualization 4.2 to 4.3 Remote database environment: Upgrading a Remote Database Environment from Red Hat Virtualization 4.2 to 4.3
null
https://docs.redhat.com/en/documentation/red_hat_virtualization/4.3/html/upgrade_guide/upgrade-overview
Chapter 4. Identity and Access Management
Chapter 4. Identity and Access Management Red Hat Ceph Storage provides identity and access management for: Ceph Storage Cluster User Access Ceph Object Gateway User Access Ceph Object Gateway LDAP/AD Authentication Ceph Object Gateway OpenStack Keystone Authentication 4.1. Ceph Storage Cluster User Access To identify users and protect against man-in-the-middle attacks, Ceph provides its cephx authentication system to authenticate users and daemons. For additional details on cephx , see Ceph user management . Important The cephx protocol DOES NOT address data encryption in transport or encryption at rest. Cephx uses shared secret keys for authentication, meaning both the client and the monitor cluster have a copy of the client's secret key. The authentication protocol is such that both parties are able to prove to each other they have a copy of the key without actually revealing it. This provides mutual authentication, which means the cluster is sure the user possesses the secret key, and the user is sure that the cluster has a copy of the secret key. Users are either individuals or system actors such as applications, which use Ceph clients to interact with the Red Hat Ceph Storage cluster daemons. Ceph runs with authentication and authorization enabled by default. Ceph clients may specify a user name and a keyring containing the secret key of the specified user- usually by using the command line. If the user and keyring are not provided as arguments, Ceph will use the client.admin administrative user as the default. If a keyring is not specified, Ceph will look for a keyring by using the keyring setting in the Ceph configuration. Important To harden a Ceph cluster, keyrings SHOULD ONLY have read and write permissions for the current user and root . The keyring containing the client.admin administrative user key must be restricted to the root user. For details on configuring the Red Hat Ceph Storage cluster to use authentication, see Configuration Guide for Red Hat Ceph Storage 4. More specifically, see CephX Configuration Reference . 4.2. Ceph Object Gateway User Access The Ceph Object Gateway provides a RESTful API service with its own user management that authenticates and authorizes users to access S3 and Swift APIs containing user data. Authentication consists of: S3 User: An access key and secret for a user of the S3 API. Swift User: An access key and secret for a user of the Swift API. The Swift user is a subuser of an S3 user. Deleting the S3 'parent' user will delete the Swift user. Administrative User: An access key and secret for a user of the administrative API. Administrative users should be created sparingly, as the administrative user will be able to access the Ceph Admin API and execute its functions, such as creating users, and giving them permissions to access buckets or containers and their objects among other things. The Ceph Object Gateway stores all user authentication information in Ceph Storage cluster pools. Additional information may be stored about users including names, email addresses, quotas and usage. For additional details, see User Management and Creating an Administrative User . 4.3. Ceph Object Gateway LDAP/AD Authentication Red Hat Ceph Storage supports Light-weight Directory Access Protocol (LDAP) servers for authenticating Ceph Object Gateway users. When configured to use LDAP or Active Directory, Ceph Object Gateway defers to an LDAP server to authenticate users of the Ceph Object Gateway. Ceph Object Gateway controls whether to use LDAP. 
However, once configured, it is the LDAP server that is responsible for authenticating users. To secure communications between the Ceph Object Gateway and the LDAP server, Red Hat recommends deploying configurations with LDAP Secure or LDAPS. Important When using LDAP, ensure that access to the rgw_ldap_secret = <path-to-secret> secret file is secure. For additional details, see the Ceph Object Gateway with LDAP/AD Guide . 4.4. Ceph Object Gateway OpenStack Keystone Authentication Red Hat Ceph Storage supports using OpenStack Keystone to authenticate Ceph Object Gateway Swift API users. The Ceph Object Gateway can accept a Keystone token, authenticate the user and create a corresponding Ceph Object Gateway user. When Keystone validates a token, the Ceph Object Gateway considers the user authenticated. Ceph Object Gateway controls whether to use OpenStack Keystone for authentication. However, once configured, it is the OpenStack Keystone service that is responsible for authenticating users. Configuring the Ceph Object Gateway to work with Keystone requires converting the OpenSSL certificates that Keystone uses for creating the requests to the nss db format. See Using Keystone to Authenticate Ceph Object Gateway Users for details.
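The following sketch illustrates the points above about cephx users, keyring permissions, and Ceph Object Gateway users. The client name, pool name, keyring path, and user ID are assumptions chosen for the example, not values required by Red Hat Ceph Storage.

# Create a cephx user whose capabilities are limited to one pool and write
# its keyring to a file (client name, pool, and path are examples):
ceph auth get-or-create client.appuser mon 'allow r' osd 'allow rw pool=app-pool' \
  -o /etc/ceph/ceph.client.appuser.keyring
# Restrict the keyring so only its owner and root can read or modify it:
chmod 600 /etc/ceph/ceph.client.appuser.keyring
# Create a Ceph Object Gateway (S3) user with radosgw-admin:
radosgw-admin user create --uid=appuser --display-name="Application User"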
null
https://docs.redhat.com/en/documentation/red_hat_ceph_storage/4/html/data_security_and_hardening_guide/assembly-identity-and-access-management
Part I. Developer Guide
Part I. Developer Guide This part contains information for developers.
null
https://docs.redhat.com/en/documentation/red_hat_fuse/7.13/html/deploying_into_apache_karaf/karaf-userguide
Chapter 5. Users and organizations
Chapter 5. Users and organizations Before creating repositories to contain your container images in Quay.io, you should consider how these repositories will be structured. With Quay.io, each repository requires a connection with either an Organization or a User . This affiliation defines ownership and access control for the repositories. 5.1. Tenancy model Organizations provide a way of sharing repositories under a common namespace that does not belong to a single user. Instead, these repositories belong to several users in a shared setting, such as a company. Teams provide a way for an Organization to delegate permissions. Permissions can be set at the global level (for example, across all repositories), or on specific repositories. They can also be set for specific sets, or groups, of users. Users can log in to a registry through the web UI or by using a client, such as Podman or Docker, using their respective login commands, for example, USD podman login . Each user automatically gets a user namespace, for example, <quay-server.example.com>/<user>/<username> , or quay.io/<username> . Robot accounts provide automated access to repositories for non-human users like pipeline tools. Robot accounts are similar to OpenShift Container Platform Service Accounts . Permissions can be granted to a robot account in a repository by adding that account like you would another user or team. 5.2. Logging into Quay A user account for Quay.io represents an individual with authenticated access to the platform's features and functionalities. Through this account, you gain the capability to create and manage repositories, upload and retrieve container images, and control access permissions for these resources. This account is pivotal for organizing and overseeing your container image management within Quay.io. Note Not all features on Quay.io require that users be logged in. For example, you can anonymously pull an image from Quay.io without being logged in, so long as the image you are pulling comes from a public repository. Users have two options for logging into Quay.io: By logging in through Quay.io. This option provides users with the legacy UI, as well as an option to use the beta UI environment, which adheres to PatternFly UI principles. By logging in through the Red Hat Hybrid Cloud Console . This option uses Red Hat SSO for authentication, and is a public managed service offering by Red Hat. This option always requires users to log in. Like other managed services, Quay on the Red Hat Hybrid Cloud Console enhances the user experience by adhering to PatternFly UI principles. Differences between using Quay.io directly and Quay on the Red Hat Hybrid Cloud Console are negligible, including for users on the free tier. Whether you are using Quay.io directly or on the Hybrid Cloud Console, features that require login, such as pushing to a repository, use your Quay.io username specifications. 5.2.1. Logging into Quay.io Use the following procedure to log into Quay.io. Prerequisites You have created a Red Hat account and a Quay.io account. For more information, see "Creating a Quay.io account". Procedure Navigate to Quay.io . In the navigation pane, select Sign In and log in using your Red Hat credentials. If it is your first time logging in, you must confirm the automatically-generated username. Click Confirm Username to log in. You are redirected to the Quay.io repository landing page. 5.2.2. 
Logging into Quay through the Hybrid Cloud Console Prerequisites You have created a Red Hat account and a Quay.io account. For more information, see "Creating a Quay.io account". Procedure Navigate to the Quay on the Red Hat Hybrid Cloud Console and log in using your Red Hat account. You are redirected to the Quay repository landing page: 5.3. Creating a repository A repository provides a central location for storing a related set of container images. These images can be used to build applications along with their dependencies in a standardized format. Repositories are organized by namespaces. Each namespace can have multiple repositories. For example, you might have a namespace for your personal projects, one for your company, or one for a specific team within your organization. With a paid plan, Quay.io provides users with access controls for their repositories. Users can make a repository public, meaning that anyone can pull, or download, the images from it, or users can make it private, restricting access to authorized users or teams. Note The free tier of Quay.io does not allow for private repositories. You must upgrade to a paid tier of Quay.io to create a private repository. For more information, see "Information about Quay.io pricing". There are two ways to create a repository in Quay.io: by pushing an image with the relevant docker or podman command, or by using the Quay.io UI. If you push an image through the command-line interface (CLI) without first creating a repository on the UI, the created repository is set to Private , regardless of the plan you have. Note It is recommended that you create a repository on the Quay.io UI before pushing an image. Quay.io checks the plan status and does not allow creation of a private repository if a plan is not active. 5.3.1. Creating an image repository by using the UI Use the following procedure to create a repository using the Quay.io UI. Procedure Log in to your user account through the web UI. On the Quay.io landing page, click Create New Repository . Alternatively, you can click the + icon New Repository . For example: On the Create New Repository page: Append a Repository Name to your username or to the Organization that you wish to use. Important Do not use the following words in your repository name: * build * trigger * tag When these words are used for repository names, users are unable access the repository, and are unable to permanently delete the repository. Attempting to delete these repositories returns the following error: Failed to delete repository <repository_name>, HTTP404 - Not Found. Optional. Click Click to set repository description to add a description of the repository. Click Public or Private depending on your needs. Optional. Select the desired repository initialization. Click Create Private Repository to create a new, empty repository. 5.3.2. Creating an image repository by using the CLI With the proper credentials, you can push an image to a repository using either Docker or Podman that does not yet exist in your Quay.io instance. Pushing an image refers to the process of uploading a container image from your local system or development environment to a container registry like Quay.io. After pushing an image to Quay.io, a repository is created. If you push an image through the command-line interface (CLI) without first creating a repository on the UI, the created repository is set to Private , regardless of the plan you have. 
Note It is recommended that you create a repository on the Quay.io UI before pushing an image. Quay.io checks the plan status and does not allow creation of a private repository if a plan is not active. Use the following procedure to create an image repository by pushing an image. Prerequisites You have downloaded and installed the podman CLI. You have logged into Quay.io. You have pulled an image, for example, busybox. Procedure Pull a sample image from an example registry. For example: Example output Trying to pull docker.io/library/busybox... Getting image source signatures Copying blob 4c892f00285e done Copying config 22667f5368 done Writing manifest to image destination Storing signatures 22667f53682a2920948d19c7133ab1c9c3f745805c14125859d20cede07f11f9 Tag the image on your local system with the new repository and image name. For example: USD podman tag docker.io/library/busybox quay.io/quayadmin/busybox:test Push the image to the registry. Following this step, you can use your browser to see the tagged image in your repository. USD podman push --tls-verify=false quay.io/quayadmin/busybox:test Example output Getting image source signatures Copying blob 6b245f040973 done Copying config 22667f5368 done Writing manifest to image destination Storing signatures 5.4. Managing access to repositories As a Quay.io user, you can create your own repositories and make them accessible to other users that are part of your instance. Alternatively, you can create a specific Organization to allow access to repositories based on defined teams. In both User and Organization repositories, you can allow access to those repositories by creating credentials associated with Robot Accounts. Robot Accounts make it easy for a variety of container clients, such as Docker or Podman, to access your repositories without requiring that the client have a Quay.io user account. 5.4.1. Allowing access to user repositories When you create a repository in a user namespace, you can add access to that repository to user accounts or through Robot Accounts. 5.4.1.1. Allowing user access to a user repository Use the following procedure to allow access to a repository associated with a user account. Procedure Log into Quay.io with your user account. Select a repository under your user namespace that will be shared across multiple users. Select Settings in the navigation pane. Type the name of the user to whom you want to grant access to your repository. As you type, the name should appear. For example: In the permissions box, select one of the following: Read . Allows the user to view and pull from the repository. Write . Allows the user to view the repository, pull images from the repository, or push images to the repository. Admin . Provides the user with all administrative settings to the repository, as well as all Read and Write permissions. Select the Add Permission button. The user now has the assigned permission. Optional. You can remove or change user permissions to the repository by selecting the Options icon, and then selecting Delete Permission . 5.4.1.2. Allowing robot access to a user repository Robot Accounts are used to set up automated access to the repositories in your Quay.io registry. They are similar to OpenShift Container Platform service accounts. Setting up a Robot Account results in the following: Credentials are generated that are associated with the Robot Account. Repositories and images that the Robot Account can push and pull images from are identified. 
Generated credentials can be copied and pasted to use with different container clients, such as Docker, Podman, Kubernetes, Mesos, and so on, to access each defined repository. Each Robot Account is limited to a single user namespace or Organization. For example, the Robot Account could provide access to all repositories for the user jsmith . However, it cannot provide access to repositories that are not in the user's list of repositories. Use the following procedure to set up a Robot Account that can allow access to your repositories. Procedure On the Repositories landing page, click the name of a user. Click Robot Accounts on the navigation pane. Click Create Robot Account . Provide a name for your Robot Account. Optional. Provide a description for your Robot Account. Click Create Robot Account . The name of your Robot Account becomes a combination of your username plus the name of the robot, for example, jsmith+robot Select the repositories that you want the Robot Account to be associated with. Set the permissions of the Robot Account to one of the following: None . The Robot Account has no permission to the repository. Read . The Robot Account can view and pull from the repository. Write . The Robot Account can read (pull) from and write (push) to the repository. Admin . Full access to pull from, and push to, the repository, plus the ability to do administrative tasks associated with the repository. Click the Add permissions button to apply the settings. On the Robot Accounts page, select the Robot Account to see credential information for that robot. Under the Robot Account option, copy the generated token for the robot by clicking Copy to Clipboard . To generate a new token, you can click Regenerate Token . Note Regenerating a token makes any tokens for this robot invalid. Obtain the resulting credentials in the following ways: Kubernetes Secret : Select this to download credentials in the form of a Kubernetes pull secret yaml file. rkt Configuration : Select this to download credentials for the rkt container runtime in the form of a .json file. Docker Login : Select this to copy a full docker login command line that includes the credentials. Docker Configuration : Select this to download a file to use as a Docker config.json file, to permanently store the credentials on your client system. Mesos Credentials : Select this to download a tarball that provides the credentials that can be identified in the URI field of a Mesos configuration file. 5.4.2. Organization repositories After you have created an Organization, you can associate a set of repositories directly to that Organization. An Organization's repository differs from a basic repository in that the Organization is intended to set up shared repositories through groups of users. In Quay.io, groups of users can be either Teams , or sets of users with the same permissions, or individual users . Other useful information about Organizations includes the following: You cannot have an Organization embedded within another Organization. To subdivide an Organization, you use teams. Organizations cannot contain users directly. You must first add a team, and then add one or more users to each team. Note Individual users can be added to specific repositories inside of an organization. Consequently, those users are not members of any team on the Repository Settings page. 
The Collaborators View on the Teams and Memberships page shows users who have direct access to specific repositories within the organization without needing to be part of that organization specifically. Teams can be set up in Organizations as just members who use the repositories and associated images, or as administrators with special privileges for managing the Organization. 5.4.2.1. Creating an Organization Use the following procedure to create an Organization. Procedure On the Repositories landing page, click Create New Organization . Under Organization Name , enter a name that is at least 2 characters long, and less than 225 characters long. Under Organization Email , enter an email that is different from your account's email. Choose a plan for your Organization, selecting either the free plan, or one of the paid plans. Click Create Organization to finalize creation. 5.4.2.1.1. Creating another Organization by using the API You can create another Organization by using the API. To do this, you must have created the first Organization by using the UI. You must also have generated an OAuth Access Token. Use the following procedure to create another Organization by using the Red Hat Quay API endpoint. Prerequisites You have already created at least one Organization by using the UI. You have generated an OAuth Access Token. For more information, see "Creating an OAuth Access Token". Procedure Create a file named data.json by entering the following command: USD touch data.json Add the following content to the file, which will be the name of the new Organization: {"name":"testorg1"} Enter the following command to create the new Organization using the API endpoint, passing in your OAuth Access Token and Red Hat Quay registry endpoint: USD curl -X POST -k -d @data.json -H "Authorization: Bearer <access_token>" -H "Content-Type: application/json" http://<quay-server.example.com>/api/v1/organization/ Example output "Created" 5.4.2.2. Adding a team to an organization When you create a team for your Organization you can select the team name, choose which repositories to make available to the team, and decide the level of access to the team. Use the following procedure to create a team for your Organization. Prerequisites You have created an organization. Procedure On the Repositories landing page, select an Organization to add teams to. In the navigation pane, select Teams and Membership . By default, an owners team exists with Admin privileges for the user who created the Organization. Click Create New Team . Enter a name for your new team. Note that the team must start with a lowercase letter. It can also only use lowercase letters and numbers. Capital letters or special characters are not allowed. Click Create team . Click the name of your team to be redirected to the Team page. Here, you can add a description of the team, and add team members, like registered users, robots, or email addresses. For more information, see "Adding users to a team". Click the No repositories text to bring up a list of available repositories. Select the box of each repository you will provide the team access to. Select the appropriate permissions that you want the team to have: None . Team members have no permission to the repository. Read . Team members can view and pull from the repository. Write . Team members can read (pull) from and write (push) to the repository. Admin . Full access to pull from, and push to, the repository, plus the ability to do administrative tasks associated with the repository. 
Click Add permissions to save the repository permissions for the team. 5.4.2.3. Setting a Team role After you have added a team, you can set the role of that team within the Organization. Prerequisites You have created a team. Procedure On the Repository landing page, click the name of your Organization. In the navigation pane, click Teams and Membership . Select the TEAM ROLE drop-down menu, as shown in the following figure: For the selected team, choose one of the following roles: Member . Inherits all permissions set for the team. Creator . All member permissions, plus the ability to create new repositories. Admin . Full administrative access to the organization, including the ability to create teams, add members, and set permissions. 5.4.2.4. Adding users to a Team With administrative privileges to an Organization, you can add users and robot accounts to a team. When you add a user, Quay.io sends an email to that user. The user remains pending until they accept the invitation. Use the following procedure to add users or robot accounts to a team. Procedure On the Repository landing page, click the name of your Organization. In the navigation pane, click Teams and Membership . Select the team you want to add users or robot accounts to. In the Team Members box, enter information for one of the following: A username from an account on the registry. The email address for a user account on the registry. The name of a robot account. The name must be in the form of <organization_name>+<robot_name>. Note Robot Accounts are immediately added to the team. For user accounts, an invitation to join is mailed to the user. Until the user accepts that invitation, the user remains in the INVITED TO JOIN state. After the user accepts the email invitation to join the team, they move from the INVITED TO JOIN list to the MEMBERS list for the Organization. 5.5. User settings The User Settings page provides users a way to set their email address, password, account type, set up desktop notifications, select an avatar, delete an account, adjust the time machine setting, and view billing information. 5.5.1. Navigating to the User Settings page Use the following procedure to navigate to the User Settings page. Procedure On Quay.io, click your username in the header. Select Account Settings . You are redirected to the User Settings page. 5.5.2. Adjusting user settings Use the following procedure to adjust user settings. Procedure To change your email address, select the current email address for Email Address . In the pop-up window, enter a new email address, then, click Change Email . A verification email will be sent before the change is applied. To change your password, click Change password . Enter the new password in both boxes, then click Change Password . Change the account type by clicking Individual Account , or the option to Account Type . In some cases, you might have to leave an organization prior to changing the account type. Adjust your desktop notifications by clicking the option to Desktop Notifications . Users can either enable, or disable, this feature. You can delete an account by clicking Begin deletion . You cannot delete an account if you have an active plan, or if you are a member of an organization where you are the only administrator. You must confirm deletion by entering the namespace. Important Deleting an account is not reversible and will delete all of the account's data including repositories, created build triggers, and notifications. 
You can set the time machine feature by clicking the drop-down box next to Time Machine . This feature dictates how long after a tag is deleted the tag remains accessible in time machine before being garbage collected. After selecting a time, click Save Expiration Time . 5.5.3. Billing information You can view billing information on the User Settings page. In this section, the following information is available: Current Plan . This section denotes the current Quay.io plan that you are signed up for. It also shows the number of private repositories you have. Invoices . If you are on a paid plan, you can click View Invoices to view a list of invoices. Receipts . If you are on a paid plan, you can select whether to have receipts for payment emailed to you, another user, or to opt out of receipts altogether.
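The team procedures above use the web UI. If you are automating organization setup in the same way the earlier API example creates an Organization, a team can in principle be created through the API as well. The endpoint path and payload below are assumptions modeled on the Organization example and are not verified here; confirm them against the Red Hat Quay API documentation before use.
# Assumed endpoint: PUT /api/v1/organization/<organization_name>/team/<team_name>
curl -X PUT -k \
  -H "Authorization: Bearer <access_token>" \
  -H "Content-Type: application/json" \
  -d '{"role": "member"}' \
  https://<quay-server.example.com>/api/v1/organization/testorg1/team/devteam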
[ "podman pull busybox", "Trying to pull docker.io/library/busybox Getting image source signatures Copying blob 4c892f00285e done Copying config 22667f5368 done Writing manifest to image destination Storing signatures 22667f53682a2920948d19c7133ab1c9c3f745805c14125859d20cede07f11f9", "podman tag docker.io/library/busybox quay.io/quayadmin/busybox:test", "podman push --tls-verify=false quay.io/quayadmin/busybox:test", "Getting image source signatures Copying blob 6b245f040973 done Copying config 22667f5368 done Writing manifest to image destination Storing signatures", "touch data.json", "{\"name\":\"testorg1\"}", "curl -X POST -k -d @data.json -H \"Authorization: Bearer <access_token>\" -H \"Content-Type: application/json\" http://<quay-server.example.com>/api/v1/organization/", "\"Created\"" ]
https://docs.redhat.com/en/documentation/red_hat_quay/3.10/html/about_quay_io/user-org-intro_quay-io
8.77. iscsi-initiator-utils
8.77. iscsi-initiator-utils 8.77.1. RHBA-2013:1700 - iscsi-initiator-utils bug fix and enhancement update Updated iscsi-initiator-utils packages that fix several bugs and add various enhancements are now available for Red Hat Enterprise Linux 6. The iscsi-initiator-utils packages provide the server daemon for the iSCSI protocol, as well as utilities used to manage the daemon. iSCSI (Internet Small Computer System Interface) is a protocol for distributed disk access using SCSI commands sent over Internet Protocol networks. Note The iscsi-initiator-utils packages have been upgraded to upstream version 6.2.0.873, which provides a number of bug fixes and enhancements over the previous version. (BZ# 916007 ) Bug Fixes BZ# 884427 Previously, database errors could occur if multiple node records in different formats were created for the same iSCSI target portal. Consequently, depending on the file-system-dependent return order of the readdir syscall, an error occasionally occurred causing an update operation to fail. To fix this bug, multiple node records in different formats are now detected at record creation time and prevented from existing simultaneously. Duplicate node entries no longer exist in the iSCSI database, and updates to records do not result in database errors. BZ# 983553 Prior to this update, a single unreachable target could block rescans of others. Consequently, the iscsiadm utility could halt in the D state and the rest of the targets could remain unscanned. To fix this bug, iscsiadm has been made terminable and all the targets have been updated. Now, functioning sessions will be rescanned properly without long delays. BZ# 1001705 When VDSM (Virtual Desktop Server Manager) attempted to add a new record to the iSCSI database, it failed with the following error: iscsiadm: Error while adding record: no available memory. Consequently, the host became non-operational when connecting to storage. An upstream patch has been applied and the /var/lib/iscsi file is now successfully attached. Enhancements BZ# 831003 For the bnx2i hardware and potentially other offloading solutions (complementary network technologies for delivering data originally targeted for cellular networks), support in the iscsistart tool for passing along the VLAN tag from iBFT (iSCSI Boot Firmware Table) to iface_rec (iscsi iface record name) has been implemented in this package. BZ# 917600 With this update, support for managing Flash nodes from the open-iscsi utility has been added to this package. Users of iscsi-initiator-utils are advised to upgrade to these updated packages, which fix these bugs and add these enhancements.
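The rescan behavior fixed in BZ# 983553 is exercised through the iscsiadm session commands. As a brief, hedged illustration only, the portal address and target IQN below are placeholders rather than values taken from this advisory:
# Discover targets offered by a portal
iscsiadm -m discovery -t sendtargets -p 192.0.2.10
# Log in to a discovered target
iscsiadm -m node -T iqn.2013-01.com.example:target0 -p 192.0.2.10 --login
# Rescan all active sessions; with this update, a single unreachable target no longer blocks the others
iscsiadm -m session --rescan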
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.5_technical_notes/iscsi-initiator-utils
Chapter 17. Business processes in Business Central
Chapter 17. Business processes in Business Central A business process application created by a citizen developer in Business Central depicts the flow of the business process as a series of steps. Each step executes according to the process flow chart. A process can consist of one or more smaller discrete tasks. As a knowledge worker, you work on processes and tasks that occur during business process execution. As an example, using Red Hat Process Automation Manager, the mortgage department of a financial institution can automate the complete business process for a mortgage loan. When a new mortgage request comes in, a new process instance is created in the mortgage application. Because all requests follow the same set of rules for processing, consistency in every step is ensured. This results in an efficient process that reduces processing time and effort. 17.1. Knowledge worker user Consider the example of a customer account representative processing mortgage loan requests at a financial institution. As a customer account representative, you can perform the following tasks: Accept and decline mortgage requests Search and filter through requests Delegate, claim, and release requests Set a due date and priority on requests View and comment on requests View the request history log
null
https://docs.redhat.com/en/documentation/red_hat_process_automation_manager/7.13/html/developing_process_services_in_red_hat_process_automation_manager/interacting_with_processes_overview_con
Hardware accelerators
Hardware accelerators OpenShift Container Platform 4.16 Hardware accelerators Red Hat OpenShift Documentation Team
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.16/html-single/hardware_accelerators/index
Chapter 11. User Defined Functions
Chapter 11. User Defined Functions 11.1. User Defined Functions Teiid Designer enables you to extend Teiid's function library by using custom or User Defined Functions (UDFs). You can define a UDF with the following properties. Function Name - When you create the function name, remember that: You cannot overload existing Teiid System functions. The function name must be unique among user defined functions in its model for the number of arguments. You can use the same function name for different numbers or types of arguments. Hence, you can overload your user defined functions. The function name cannot contain the '.' character. The function name cannot exceed 255 characters. Input Parameters - defines a type-specific signature list. All arguments are considered required. Return Type - the expected type of the returned scalar value. Pushdown - Indicates the expected pushdown behavior. It can be one of REQUIRED, NEVER, or ALLOWED. If NEVER or ALLOWED is specified, then a Java implementation of the function must be supplied. If REQUIRED is used, then the user must extend the Translator for the source and add this function to its pushdown function library. invocationClass/invocationMethod - optional properties indicating the method to invoke when the UDF is not pushed down. Deterministic - whether the method always returns the same result for the same input parameters. Defaults to false. It is important to mark the function as deterministic if it returns the same value for the same inputs, as this leads to better performance. Even pushdown-required functions need to be added as a UDF to allow Teiid to properly parse and resolve the function. Pushdown scalar functions differ from normal user defined functions in that no code is provided for evaluation in the engine. An exception is raised if a pushdown required function cannot be evaluated by the appropriate source.
null
https://docs.redhat.com/en/documentation/red_hat_jboss_data_virtualization/6.4/html/user_guide_volume_1_teiid_designer/chap-user_defined_functions
Chapter 7. August 2024
Chapter 7. August 2024 7.1. Product-wide updates 7.1.1. Published blogs and resources Customize RHEL images with RHEL system roles and Insights image builder by Brian Smith (August 14, 2024) Save, edit, and share blueprints in Insights image builder by Terry Bowling (August 16, 2024) 7.2. Red Hat Insights for Red Hat Enterprise Linux 7.2.1. advisor Recommended guidance for the End of Red Hat Enterprise Linux 6 Extended Lifecycle Support (ELS) period In light of the official end of the Extended Lifecycle Support (ELS) for RHEL 6, it is strongly recommended that all Red Hat Enterprise Linux 6 systems upgrade to Red Hat Enterprise Linux 7 or Red Hat Enterprise Linux 8. This is necessary to obtain full support. No other recommendations are available for these systems. For more information, see the following: Red Hat does not provide technical support services for Red Hat Enterprise Linux 6 Guidance for Upgrading RHEL6 past the RHEL6 ELS period Red Hat Enterprise Linux Lifecycle Issue prevention recommendations We released 9 new recommendations to prevent issues across various Red Hat Enterprise Linux system components. This includes issues such as firmware, kernel, SSSD, RAID5, in-place upgrade, NIC firmware, and grub2, which can cause system failures, crashes, or other challenges: System fails to boot due to the known issue in BIOS System boot failure occurs when the grub file is empty or missing Kernel crash occurs on the CephFS client due to a known bug in the running kernel SSSD enters a failed state RAID5 md device hang occurs Leapp fails to upgrade RHEL 7 systems to RHEL 8 when the grub is not configured correctly Multicast packet amplification Grub2 modification requires symbolic link 7.2.2. drift Drift end-of-life As of September 30 2024, the drift service, provided in Red Hat Insights for RHEL, will be removed from the product. For more information about the discontinuation of the drift service, contact: Red Hat customer service 7.2.3. Insights image builder Harness the power of image builder Image builder has a convenient landing page with an overview, interactive labs, links to documentation, blog posts and videos. Learn how this feature can help you ensure consistent provisioning and deployment across all environments. Manage images with the blueprints feature Insights image builder now enables you to alter an image with the blueprints feature. This feature is available in developer preview mode and is displayed in the left sidebar. You can save, edit, and download blueprints to share with colleagues. First boot scripts feature The first boot scripts feature is now in full production support mode. For more information, see the following: Add first boot scripts to golden images Learn about Red Hat Enterprise Linux and Insights image builder 7.2.4. inventory Notifications and integrations events in inventory The inventory service now triggers New system registered and System deleted events. These occur when a system is newly registered in inventory or removed. These events are triggered both manually and automatically. You can manually trigger these alerts when you add a new system to your inventory. Events might be automatically triggered when a system's state changes. For more information about system states and staleness and deletion, see the following: Systems lifecycle in the inventory application Modifying system staleness and deletion time limits in inventory You can configure responses to these events for each account. 
You can send emails to groups of users if they allow subscriptions in their user preferences. You can also forward these events to third-party applications such as Splunk, ServiceNow, Event-Driven Ansible, Slack, Microsoft Teams, and Google Chat, or forward them by using a generic webhook. For more information, see the following resources: Configuring user preferences for email notifications Integrating the Red Hat Hybrid Cloud Console with third-party applications These new events are particularly useful for driving automation and integrating Red Hat Insights into your operational workflows. They can automatically launch compliance or malware detection checks, validate system assignments to Workspaces, update external configuration management database (CMDB) records, or continuously monitor your Red Hat Enterprise Linux environment.
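The generic webhook integration delivers event payloads as HTTPS POST requests. The following is a minimal sketch for smoke-testing the endpoint that will receive Insights events before configuring the integration in the Hybrid Cloud Console; the URL and JSON body are hypothetical placeholders and do not represent the actual Insights event schema.
# Post a placeholder JSON document to the receiving endpoint to confirm it accepts requests
curl -X POST https://webhook.example.com/insights-events \
  -H "Content-Type: application/json" \
  -d '{"event_type": "new-system-registered", "test": true}'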
null
https://docs.redhat.com/en/documentation/red_hat_insights_overview/1-latest/html/release_notes/august-2024
Chapter 18. Authenticating Business Central through RH-SSO
Chapter 18. Authenticating Business Central through RH-SSO This chapter describes how to authenticate Business Central through RH-SSO. It includes the following sections: Section 18.1, "Creating the Business Central client for RH-SSO" Section 18.2, "Installing the RH-SSO client adapter for Business Central" Section 18.3, "Enabling access to external file systems and Git repository services for Business Central using RH-SSO" Prerequisites Business Central is installed in a Red Hat JBoss EAP 7.4 server, as described in Installing and configuring Red Hat Process Automation Manager on Red Hat JBoss EAP 7.4 . RH-SSO is installed as described in Chapter 16, Installing and configuring RH-SSO . You added Business Central users to RH-SSO as described in Section 17.1, "Adding Red Hat Process Automation Manager users" . Optional: To manage RH-SSO users from Business Central, you added all realm-management client roles in RH-SSO to the Business Central administrator user. Note Except for Section 18.1, "Creating the Business Central client for RH-SSO" , this section is intended for standalone installations. If you are integrating RH-SSO and Red Hat Process Automation Manager on Red Hat OpenShift Container Platform, complete only the steps in Section 18.1, "Creating the Business Central client for RH-SSO" and then deploy the Red Hat Process Automation Manager environment on Red Hat OpenShift Container Platform. For information about deploying Red Hat Process Automation Manager on Red Hat OpenShift Container Platform, see Deploying Red Hat Process Automation Manager on Red Hat OpenShift Container Platform . 18.1. Creating the Business Central client for RH-SSO After the RH-SSO server starts, use the RH-SSO Admin Console to create the Business Central client for RH-SSO. Procedure Enter http://localhost:8180/auth/admin in a web browser to open the RH-SSO Admin Console and log in using the admin credentials that you created while installing RH-SSO. Note If you are configuring RH-SSO with Red Hat OpenShift Container Platform, enter the URL that is exposed by the RH-SSO routes. Your OpenShift administrator can provide this URL if necessary. When you login for the first time, you can set up the initial user on the new user registration form. In the RH-SSO Admin Console, click the Realm Settings menu item. On the Realm Settings page, click Add Realm . The Add realm page opens. On the Add realm page, provide a name for the realm and click Create . Click the Clients menu item and click Create . The Add Client page opens. On the Add Client page, provide the required information to create a new client for your realm. For example: Client ID : kie Client protocol : openid-connect Root URL : http:// localhost :8080/business-central Note If you are configuring RH-SSO with Red Hat OpenShift Container Platform, enter the URL that is exposed by the KIE Server routes. Your OpenShift administrator can provide this URL if necessary. Click Save to save your changes. After you create a new client, its Access Type is set to public by default. Change it to confidential . The RH-SSO server is now configured with a realm with a client for Business Central applications and running and listening for HTTP connections at localhost:8180 . This realm provides different users, roles, and sessions for Business Central applications. Note The RH-SSO server client uses one URL to a single business-central deployment. The following error message might be displayed if there are two or more deployment configurations: We are sorry... 
Invalid parameter: redirect_uri To resolve this error, append /* to the Valid Redirect URIs field in the client configuration. On the Configure page, go to Clients > kie > Settings , and append the Valid Redirect URIs field with /* , for example: 18.2. Installing the RH-SSO client adapter for Business Central After you install RH-SSO, you must install the RH-SSO client adapter for Red Hat JBoss EAP and configure it for Business Central. Prerequisites Business Central is installed in a Red Hat JBoss EAP 7.4 instance, as described in Installing and configuring Red Hat Process Automation Manager on Red Hat JBoss EAP 7.4 . RH-SSO is installed as described in Chapter 16, Installing and configuring RH-SSO . A user with the admin role has been added to RH-SSO as described in Section 17.1, "Adding Red Hat Process Automation Manager users" . Procedure Navigate to the Software Downloads page in the Red Hat Customer Portal (login required) and then select the product and version from the drop-down options: Product: Red Hat Single Sign-On Version: 7.5 Select the Patches tab. Download Red Hat Single Sign-On 7.5 Client Adapter for EAP 7 ( rh-sso-7.5.0-eap7-adapter.zip or the latest version). Extract and install the adapter zip file. For installation instructions, see the "JBoss EAP Adapter" section of the Red Hat Single Sign On Securing Applications and Services Guide . Note Install the adapter with the -Dserver.config=standalone-full.xml property. Navigate to the EAP_HOME /standalone/configuration directory in your Red Hat JBoss EAP installation and open the standalone-full.xml file in a text editor. Add the system properties listed in the following example to <system-properties> : <system-properties> <property name="org.jbpm.workbench.kie_server.keycloak" value="true"/> <property name="org.uberfire.ext.security.management.api.userManagementServices" value="KCAdapterUserManagementService"/> <property name="org.uberfire.ext.security.management.keycloak.authServer" value="http://localhost:8180/auth"/> </system-properties> Optional: If you want to use client roles, add the following system property: <property name="org.uberfire.ext.security.management.keycloak.use-resource-role-mappings" value="true"/> By default, the client resource name is kie . The client resource name must be the same as the client name that you used to configure the client in RH-SSO. If you want to use a custom client resource name, add the following system property: <property name="org.uberfire.ext.security.management.keycloak.resource" value="customClient"/> Replace customClient with the client resource name. Add the RH-SSO subsystem configuration. For example: <subsystem xmlns="urn:jboss:domain:keycloak:1.1"> <secure-deployment name="business-central.war"> <realm>demo</realm> <realm-public-key>MIGfMA0GCSqGSIb3DQEBAQUAA4GNADCBiQKBgQCrVrCuTtArbgaZzL1hvh0xtL5mc7o0NqPVnYXkLvgcwiC3BjLGw1tGEGoJaXDuSaRllobm53JBhjx33UNv+5z/UMG4kytBWxheNVKnL6GgqlNabMaFfPLPCF8kAgKnsi79NMo+n6KnSY8YeUmec/p2vjO2NjsSAVcWEQMVhJ31LwIDAQAB</realm-public-key> <auth-server-url>http://localhost:8180/auth</auth-server-url> <ssl-required>external</ssl-required> <enable-basic-auth>true</enable-basic-auth> <resource>kie</resource> <credential name="secret">759514d0-dbb1-46ba-b7e7-ff76e63c6891</credential> <principal-attribute>preferred_username</principal-attribute> </secure-deployment> </subsystem> In this example: secure-deployment name is the name of your application's WAR file. realm is the name of the realm that you created for the applications to use. 
realm-public-key is the public key of the realm you created. You can find the key in the Keys tab in the Realm settings page of the realm you created in the RH-SSO Admin Console. If you do not provide a value for realm-public-key , the server retrieves it automatically. auth-server-url is the URL for the RH-SSO authentication server. enable-basic-auth is the setting to enable basic authentication mechanism, so that the clients can use both token-based and basic authentication approaches to perform the requests. resource is the name for the client that you created. To use client roles, set the client resource name that you used when configuring the client in RH-SSO. credential name is the secret key for the client you created. You can find the key in the Credentials tab on the Clients page of the RH-SSO Admin Console. principal-attribute is the attribute for displaying the user name in the application. If you do not provide this value, your User Id is displayed in the application instead of your user name. Note The RH-SSO server converts the user names to lower case. Therefore, after integration with RH-SSO, your user name will appear in lower case in Red Hat Process Automation Manager. If you have user names in upper case hard coded in business processes, the application might not be able to identify the upper case user. If you want to use client roles, also add the following setting under <secure-deployment> : <use-resource-role-mappings>true</use-resource-role-mappings> The Elytron subsystem provides a built-in policy provider based on JACC specification. To enable the JACC manually in the standalone.xml or in the file where Elytron is installed, do any of the following tasks: To create the policy provider, enter the following commands in the management command-line interface (CLI) of Red Hat JBoss EAP: For more information about the Red Hat JBoss EAP management CLI, see the Management CLI Guide for Red Hat JBoss EAP. Navigate to the EAP_HOME /standalone/configuration directory in your Red Hat JBoss EAP installation. Locate the Elytron and undertow subsystem configurations in the standalone.xml and standalone-full.xml files and enable JACC. For example: <subsystem xmlns="urn:jboss:domain:undertow:12.0" ... > ... <application-security-domains> <application-security-domain name="other" http-authentication-factory="keycloak-http-authentication"/> </application-security-domains> <subsystem xmlns="urn:jboss:domain:ejb3:9.0"> ... <application-security-domains> <application-security-domain name="other" security-domain="KeycloakDomain"/> </application-security-domains> Navigate to EAP_HOME /bin/ and enter the following command to start the Red Hat JBoss EAP server: Note You can also configure the RH-SSO adapter for Business Central by updating your application's WAR file to use the RH-SSO security subsystem. However, Red Hat recommends that you configure the adapter through the RH-SSO subsystem. Doing this updates the Red Hat JBoss EAP configuration instead of applying the configuration on each WAR file. 18.3. Enabling access to external file systems and Git repository services for Business Central using RH-SSO To enable Business Central to consume other remote services, such as file systems and Git repositories, using RH-SSO authentication, you must create a configuration file. Procedure Generate a JSON configuration file: Navigate to the RH-SSO Admin Console located at http://localhost:8180/auth/admin. Click Clients . Create a new client with the following settings: Set Client ID as kie-git . 
Set Access Type as confidential . Disable the Standard Flow Enabled option. Enable the Direct Access Grants Enabled option. Click Save . Click the Installation tab at the top of the client configuration screen and choose Keycloak OIDC JSON as a Format Option . Click Download . Move the downloaded JSON file to an accessible directory in the server's file system or add it to the application class path. The default name and location for this file is USDEAP_HOME/kie-git.json . Optional: In the EAP_HOME /standalone/configuration/standalone-full.xml file, under the <system-properties> tag, add the following system property: <property name="org.uberfire.ext.security.keycloak.keycloak-config-file" value="USDEAP_HOME/kie-git.json"/> Replace the USD EAP_HOME /kie-git.json value of the property with the absolute path or the class path ( classpath:/ EXAMPLE_PATH /kie-git.json ) to the new JSON configuration file. Note If you do not set the org.uberfire.ext.security.keycloak.keycloak-config-file property, Red Hat Process Automation Manager reads the USDEAP_HOME/kie-git.json file. Result All users authenticated through the RH-SSO server can clone internal GIT repositories. In the following command, replace USER_NAME with a RH-SSO user, for example admin : + Note The RH-SSO server client uses one URL to a single remote service deployment. The following error message might be displayed if there are two or more deployment configurations: We are sorry... Invalid parameter: redirect_uri To resolve this error, append /* to the Valid Redirect URIs field in the client configuration. On the Configure page, go to Clients > kie-git > Settings , and append the Valid Redirect URIs field with /* , for example:
[ "http://localhost:8080/business-central/*", "<system-properties> <property name=\"org.jbpm.workbench.kie_server.keycloak\" value=\"true\"/> <property name=\"org.uberfire.ext.security.management.api.userManagementServices\" value=\"KCAdapterUserManagementService\"/> <property name=\"org.uberfire.ext.security.management.keycloak.authServer\" value=\"http://localhost:8180/auth\"/> </system-properties>", "<property name=\"org.uberfire.ext.security.management.keycloak.use-resource-role-mappings\" value=\"true\"/>", "<property name=\"org.uberfire.ext.security.management.keycloak.resource\" value=\"customClient\"/>", "<subsystem xmlns=\"urn:jboss:domain:keycloak:1.1\"> <secure-deployment name=\"business-central.war\"> <realm>demo</realm> <realm-public-key>MIGfMA0GCSqGSIb3DQEBAQUAA4GNADCBiQKBgQCrVrCuTtArbgaZzL1hvh0xtL5mc7o0NqPVnYXkLvgcwiC3BjLGw1tGEGoJaXDuSaRllobm53JBhjx33UNv+5z/UMG4kytBWxheNVKnL6GgqlNabMaFfPLPCF8kAgKnsi79NMo+n6KnSY8YeUmec/p2vjO2NjsSAVcWEQMVhJ31LwIDAQAB</realm-public-key> <auth-server-url>http://localhost:8180/auth</auth-server-url> <ssl-required>external</ssl-required> <enable-basic-auth>true</enable-basic-auth> <resource>kie</resource> <credential name=\"secret\">759514d0-dbb1-46ba-b7e7-ff76e63c6891</credential> <principal-attribute>preferred_username</principal-attribute> </secure-deployment> </subsystem>", "<use-resource-role-mappings>true</use-resource-role-mappings>", "/subsystem=undertow/application-security-domain=other:remove() /subsystem=undertow/application-security-domain=other:add(http-authentication-factory=\"keycloak-http-authentication\") /subsystem=ejb3/application-security-domain=other:write-attribute(name=security-domain, value=KeycloakDomain)", "<subsystem xmlns=\"urn:jboss:domain:undertow:12.0\" ... > <application-security-domains> <application-security-domain name=\"other\" http-authentication-factory=\"keycloak-http-authentication\"/> </application-security-domains>", "<subsystem xmlns=\"urn:jboss:domain:ejb3:9.0\"> <application-security-domains> <application-security-domain name=\"other\" security-domain=\"KeycloakDomain\"/> </application-security-domains>", "./standalone.sh -c standalone-full.xml", "<property name=\"org.uberfire.ext.security.keycloak.keycloak-config-file\" value=\"USDEAP_HOME/kie-git.json\"/>", "git clone ssh://USER_NAME@localhost:8001/system", "http://localhost:8080/remote-system/*" ]
https://docs.redhat.com/en/documentation/red_hat_process_automation_manager/7.13/html/integrating_red_hat_process_automation_manager_with_other_products_and_components/sso-central-proc_integrate-sso
Chapter 6. LVM Configuration Examples
Chapter 6. LVM Configuration Examples This chapter provides some basic LVM configuration examples. 6.1. Creating an LVM Logical Volume on Three Disks This example creates an LVM logical volume called new_logical_volume that consists of the disks at /dev/sda1 , /dev/sdb1 , and /dev/sdc1 . 6.1.1. Creating the Physical Volumes To use disks in a volume group, you label them as LVM physical volumes. Warning This command destroys any data on /dev/sda1 , /dev/sdb1 , and /dev/sdc1 . 6.1.2. Creating the Volume Group The following command creates the volume group new_vol_group . You can use the vgs command to display the attributes of the new volume group. 6.1.3. Creating the Logical Volume The following command creates the logical volume new_logical_volume from the volume group new_vol_group . This example creates a logical volume that uses 2GB of the volume group. 6.1.4. Creating the File System The following command creates a GFS2 file system on the logical volume. The following commands mount the logical volume and report the file system disk space usage.
[ "pvcreate /dev/sda1 /dev/sdb1 /dev/sdc1 Physical volume \"/dev/sda1\" successfully created Physical volume \"/dev/sdb1\" successfully created Physical volume \"/dev/sdc1\" successfully created", "vgcreate new_vol_group /dev/sda1 /dev/sdb1 /dev/sdc1 Volume group \"new_vol_group\" successfully created", "vgs VG #PV #LV #SN Attr VSize VFree new_vol_group 3 0 0 wz--n- 51.45G 51.45G", "lvcreate -L2G -n new_logical_volume new_vol_group Logical volume \"new_logical_volume\" created", "mkfs.gfs2 -plock_nolock -j 1 /dev/new_vol_group/new_logical_volume This will destroy any data on /dev/new_vol_group/new_logical_volume. Are you sure you want to proceed? [y/n] y Device: /dev/new_vol_group/new_logical_volume Blocksize: 4096 Filesystem Size: 491460 Journals: 1 Resource Groups: 8 Locking Protocol: lock_nolock Lock Table: Syncing All Done", "mount /dev/new_vol_group/new_logical_volume /mnt df Filesystem 1K-blocks Used Available Use% Mounted on /dev/new_vol_group/new_logical_volume 1965840 20 1965820 1% /mnt" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/logical_volume_manager_administration/LVM_examples
Chapter 67. Stub
Chapter 67. Stub Both producer and consumer are supported The Stub component provides a simple way to stub out any physical endpoints while in development or testing, allowing you for example to run a route without needing to actually connect to a specific specific SMTP or HTTP endpoint. Just add stub: in front of any endpoint URI to stub out the endpoint. Internally the Stub component creates VM endpoints. The main difference between Stub and VM is that VM will validate the URI and parameters you give it, so putting vm: in front of a typical URI with query arguments will usually fail. Stub won't though, as it basically ignores all query parameters to let you quickly stub out one or more endpoints in your route temporarily. 67.1. URI format Where someUri can be any URI with any query parameters. 67.2. Configuring Options Camel components are configured on two separate levels: component level endpoint level 67.2.1. Configuring Component Options The component level is the highest level which holds general and common configurations that are inherited by the endpoints. For example a component may have security settings, credentials for authentication, urls for network connection and so forth. Some components only have a few options, and others may have many. Because components typically have pre configured defaults that are commonly used, then you may often only need to configure a few options on a component; or none at all. Configuring components can be done with the Component DSL , in a configuration file (application.properties|yaml), or directly with Java code. 67.2.1.1. Configuring Endpoint Options Where you find yourself configuring the most is on endpoints, as endpoints often have many options, which allows you to configure what you need the endpoint to do. The options are also categorized into whether the endpoint is used as consumer (from) or as a producer (to), or used for both. Configuring endpoints is most often done directly in the endpoint URI as path and query parameters. You can also use the Endpoint DSL as a type safe way of configuring endpoints. A good practice when configuring options is to use Property Placeholders , which allows to not hardcode urls, port numbers, sensitive information, and other settings. In other words placeholders allows to externalize the configuration from your code, and gives more flexibility and reuse. The following two sections lists all the options, firstly for the component followed by the endpoint. 67.3. Component Options The Stub component supports 10 options, which are listed below. Name Description Default Type bridgeErrorHandler (consumer) Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false boolean concurrentConsumers (consumer) Sets the default number of concurrent threads processing exchanges. 1 int defaultPollTimeout (consumer (advanced)) The timeout (in milliseconds) used when polling. When a timeout occurs, the consumer can check whether it is allowed to continue running. Setting a lower value allows the consumer to react more quickly upon shutdown. 
1000 int defaultBlockWhenFull (producer) Whether a thread that sends messages to a full SEDA queue will block until the queue's capacity is no longer exhausted. By default, an exception will be thrown stating that the queue is full. By enabling this option, the calling thread will instead block and wait until the message can be accepted. false boolean defaultDiscardWhenFull (producer) Whether a thread that sends messages to a full SEDA queue will be discarded. By default, an exception will be thrown stating that the queue is full. By enabling this option, the calling thread will give up sending and continue, meaning that the message was not sent to the SEDA queue. false boolean defaultOfferTimeout (producer) Whether a thread that sends messages to a full SEDA queue will block until the queue's capacity is no longer exhausted. By default, an exception will be thrown stating that the queue is full. By enabling this option, where a configured timeout can be added to the block case. Utilizing the .offer(timeout) method of the underlining java queue. long lazyStartProducer (producer) Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false boolean autowiredEnabled (advanced) Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true boolean defaultQueueFactory (advanced) Sets the default queue factory. BlockingQueueFactory queueSize (advanced) Sets the default maximum capacity of the SEDA queue (i.e., the number of messages it can hold). 1000 int 67.4. Endpoint Options The Stub endpoint is configured using URI syntax: with the following path and query parameters: 67.4.1. Path Parameters (1 parameters) Name Description Default Type name (common) Required Name of queue. String 67.4.2. Query Parameters (18 parameters) Name Description Default Type size (common) The maximum capacity of the SEDA queue (i.e., the number of messages it can hold). Will by default use the defaultSize set on the SEDA component. 1000 int bridgeErrorHandler (consumer) Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false boolean concurrentConsumers (consumer) Number of concurrent threads processing exchanges. 1 int exceptionHandler (consumer (advanced)) To let the consumer use a custom ExceptionHandler. Notice if the option bridgeErrorHandler is enabled then this option is not in use. 
By default the consumer will deal with exceptions, that will be logged at WARN or ERROR level and ignored. ExceptionHandler exchangePattern (consumer (advanced)) Sets the exchange pattern when the consumer creates an exchange. Enum values: InOnly InOut InOptionalOut ExchangePattern limitConcurrentConsumers (consumer (advanced)) Whether to limit the number of concurrentConsumers to the maximum of 500. By default, an exception will be thrown if an endpoint is configured with a greater number. You can disable that check by turning this option off. true boolean multipleConsumers (consumer (advanced)) Specifies whether multiple consumers are allowed. If enabled, you can use SEDA for Publish-Subscribe messaging. That is, you can send a message to the SEDA queue and have each consumer receive a copy of the message. When enabled, this option should be specified on every consumer endpoint. false boolean pollTimeout (consumer (advanced)) The timeout (in milliseconds) used when polling. When a timeout occurs, the consumer can check whether it is allowed to continue running. Setting a lower value allows the consumer to react more quickly upon shutdown. 1000 int purgeWhenStopping (consumer (advanced)) Whether to purge the task queue when stopping the consumer/route. This allows to stop faster, as any pending messages on the queue is discarded. false boolean blockWhenFull (producer) Whether a thread that sends messages to a full SEDA queue will block until the queue's capacity is no longer exhausted. By default, an exception will be thrown stating that the queue is full. By enabling this option, the calling thread will instead block and wait until the message can be accepted. false boolean discardIfNoConsumers (producer) Whether the producer should discard the message (do not add the message to the queue), when sending to a queue with no active consumers. Only one of the options discardIfNoConsumers and failIfNoConsumers can be enabled at the same time. false boolean discardWhenFull (producer) Whether a thread that sends messages to a full SEDA queue will be discarded. By default, an exception will be thrown stating that the queue is full. By enabling this option, the calling thread will give up sending and continue, meaning that the message was not sent to the SEDA queue. false boolean failIfNoConsumers (producer) Whether the producer should fail by throwing an exception, when sending to a queue with no active consumers. Only one of the options discardIfNoConsumers and failIfNoConsumers can be enabled at the same time. false boolean lazyStartProducer (producer) Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false boolean offerTimeout (producer) Offer timeout (in milliseconds) can be added to the block case when queue is full. You can disable timeout by using 0 or a negative value. long timeout (producer) Timeout (in milliseconds) before a SEDA producer will stop waiting for an asynchronous task to complete. You can disable timeout by using 0 or a negative value. 
30000 long waitForTaskToComplete (producer) Option to specify whether the caller should wait for the async task to complete or not before continuing. The following three options are supported: Always, Never or IfReplyExpected. The first two values are self-explanatory. The last value, IfReplyExpected, will only wait if the message is Request Reply based. The default option is IfReplyExpected. Enum values: Never IfReplyExpected Always IfReplyExpected WaitForTaskToComplete queue (advanced) Define the queue instance which will be used by the endpoint. BlockingQueue 67.5. Examples Here are a few samples of stubbing endpoint uris 67.6. Spring Boot Auto-Configuration When using stub with Spring Boot make sure to use the following Maven dependency to have support for auto configuration: <dependency> <groupId>org.apache.camel.springboot</groupId> <artifactId>camel-stub-starter</artifactId> </dependency> The component supports 11 options, which are listed below. Name Description Default Type camel.component.stub.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.stub.bridge-error-handler Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false Boolean camel.component.stub.concurrent-consumers Sets the default number of concurrent threads processing exchanges. 1 Integer camel.component.stub.default-block-when-full Whether a thread that sends messages to a full SEDA queue will block until the queue's capacity is no longer exhausted. By default, an exception will be thrown stating that the queue is full. By enabling this option, the calling thread will instead block and wait until the message can be accepted. false Boolean camel.component.stub.default-discard-when-full Whether a thread that sends messages to a full SEDA queue will be discarded. By default, an exception will be thrown stating that the queue is full. By enabling this option, the calling thread will give up sending and continue, meaning that the message was not sent to the SEDA queue. false Boolean camel.component.stub.default-offer-timeout Whether a thread that sends messages to a full SEDA queue will block until the queue's capacity is no longer exhausted. By default, an exception will be thrown stating that the queue is full. By enabling this option, where a configured timeout can be added to the block case. Utilizing the .offer(timeout) method of the underlining java queue. Long camel.component.stub.default-poll-timeout The timeout (in milliseconds) used when polling. When a timeout occurs, the consumer can check whether it is allowed to continue running. Setting a lower value allows the consumer to react more quickly upon shutdown. 1000 Integer camel.component.stub.default-queue-factory Sets the default queue factory. The option is a org.apache.camel.component.seda.BlockingQueueFactory<org.apache.camel.Exchange> type. 
BlockingQueueFactory camel.component.stub.enabled Whether to enable auto configuration of the stub component. This is enabled by default. Boolean camel.component.stub.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.stub.queue-size Sets the default maximum capacity of the SEDA queue (i.e., the number of messages it can hold). 1000 Integer
[ "stub:someUri", "stub:name", "stub:smtp://somehost.foo.com?user=whatnot&something=else stub:http://somehost.bar.com/something", "<dependency> <groupId>org.apache.camel.springboot</groupId> <artifactId>camel-stub-starter</artifactId> </dependency>" ]
https://docs.redhat.com/en/documentation/red_hat_build_of_apache_camel_for_spring_boot/3.20/html/camel_spring_boot_reference/csb-camel-stub-component-starter
11.5. Sub-collections
11.5. Sub-collections 11.5.1. Network VNIC Profile Sub-Collection VNIC (Virtual Network Interface Controller) profiles, also referred to as virtual machine interface profiles, are customized profiles applied to users and groups to limit network bandwidth. Each vnicprofile contains the following elements: Table 11.2. Elements for vnic profiles Element Type Description name string The unique identifier for the profile. description string A plain text description of the profile. network string The unique identifier of the logical network to which the profile applies. port_mirroring Boolean: true or false The default is false . Example 11.6. An XML representation of the network's vnicprofile sub-collection 11.5.2. Network Labels Sub-Collection Network labels are plain text, human-readable labels that allow you to automate the association of logical networks with physical host network interfaces. Each label contains the following elements: Table 11.3. Elements for labels Element Type Description network string The href and id of the networks to which the label is attached. Example 11.7. An XML representation of the network's labels sub-collection 11.5.3. Methods 11.5.3.1. Attach Label to Logical Network Action You can attach labels to a logical network to automate the association of that logical network with physical host network interfaces to which the same label has been attached. Example 11.8. Action to attach a label to a logical network 11.5.3.2. Removing a Label From a Logical Network Removal of a label from a logical network requires a DELETE request. Example 11.9. Removing a label from a logical network
[ "<vnic_profile href= \"/ovirt-engine/api/vnicprofiles/f9c2f9f1-3ae2-4100-a9a5-285ebb755c0d\" id=\"f9c2f9f1-3ae2-4100-a9a5-285ebb755c0d\"> <name>Peanuts</name> <description>shelled</description> <network href= \"/ovirt-engine/api/networks/00000000-0000-0000-0000-000000000009\" id=\"00000000-0000-0000-0000-000000000009\"/> <port_mirroring>false</port_mirroring> </vnic_profile> </vnic_profiles>", "<labels> <label href=\"/ovirt-engine/api/networks/00000000-0000-0000-0000-000000000000/labels/eth0\" id=\"eth0\"> <network href=\"/ovirt-engine/api/networks/00000000-0000-0000-0000-000000000000\" id=\"00000000-0000-0000-0000-000000000000\"/> </label> </labels>", "POST /ovirt-engine/api/networks/00000000-0000-0000-0000-000000000000/labels/ HTTP/1.1 Accept: application/xml Content-type: application/xml <label id=\"Label_001\" />", "DELETE /ovirt-engine/api/networks/00000000-0000-0000-0000-000000000000/labels/ [label_id] HTTP/1.1 HTTP/1.1 204 No Content" ]
https://docs.redhat.com/en/documentation/red_hat_virtualization/4.3/html/version_3_rest_api_guide/sect-sub-collections
Providing feedback on Red Hat documentation
Providing feedback on Red Hat documentation We appreciate and prioritize your feedback regarding our documentation. Provide as much detail as possible, so that your request can be quickly addressed. Prerequisites You are logged in to the Red Hat Customer Portal. Procedure To provide feedback, perform the following steps: Click the following link: Create Issue . Describe the issue or enhancement in the Summary text box. Provide details about the issue or requested enhancement in the Description text box. Type your name in the Reporter text box. Click the Create button. This action creates a documentation ticket and routes it to the appropriate documentation team. Thank you for taking the time to provide feedback.
null
https://docs.redhat.com/en/documentation/hybrid_committed_spend/1-latest/html/integrating_microsoft_azure_data_into_hybrid_committed_spend/proc-providing-feedback-on-redhat-documentation
2.2. Defining Directory Needs
2.2. Defining Directory Needs When designing the directory data, think not only of the data that is currently required but also how the directory (and organization) is going to change over time. Considering the future needs of the directory during the design process influences how the data in the directory are structured and distributed. Look at these points: What should be put in the directory today? What immediate problem is solved by deploying a directory? What are the immediate needs of the directory-enabled application being used? What information is going to be added to the directory in the near future? For example, an enterprise might use an accounting package that does not currently support LDAP but will be LDAP-enabled in a few months. Identify the data used by LDAP-compatible applications, and plan for the migration of the data into the directory as the technology becomes available. What information might be stored in the directory in the future? For example, a hosting company may have future customers with different data requirements than their current customers, such as needing to store images or media files. While this is the hardest answer to anticipate, doing so may pay off in unexpected ways. At a minimum, this kind of planning helps identify data sources that might not otherwise have been considered.
null
https://docs.redhat.com/en/documentation/red_hat_directory_server/11/html/deployment_guide/planning_directory_data-defining_directory_needs
27.3. Defining the Console
27.3. Defining the Console The pam_console.so module uses the /etc/security/console.perms file to determine the permissions for users at the system console. The syntax of the file is very flexible; you can edit the file so that these instructions no longer apply. However, the default file has a line that looks like this: When users log in, they are attached to some sort of named terminal, either an X server with a name like :0 or mymachine.example.com:1.0 , or a device like /dev/ttyS0 or /dev/pts/2 . The default is to define that local virtual consoles and local X servers are considered local, but if you want to consider the serial terminal next to you on port /dev/ttyS1 to also be local, you can change that line to read:
[ "<console>=tty[0-9][0-9]* vc/[0-9][0-9]* :[0-9]\\.[0-9] :[0-9]", "<console>=tty[0-9][0-9]* vc/[0-9][0-9]* :[0-9]\\.[0-9] :[0-9] /dev/ttyS1" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/system_administration_guide/console_access-defining_the_console
Release Notes for Red Hat build of Apache Camel for Spring Boot
Release Notes for Red Hat build of Apache Camel for Spring Boot Red Hat build of Apache Camel 4.8 What's new in Red Hat build of Apache Camel Red Hat build of Apache Camel Documentation Team
null
https://docs.redhat.com/en/documentation/red_hat_build_of_apache_camel/4.8/html/release_notes_for_red_hat_build_of_apache_camel_for_spring_boot/index
19.5. Installing and Configuring Red Hat Single Sign-On
19.5. Installing and Configuring Red Hat Single Sign-On To use Red Hat Single Sign-On as your authorization method, you need to: Install Red Hat SSO. Configure the LDAP group mapper. Configure Apache on the Manager. Configure OVN provider credentials. Note If Red Hat SSO is configured, LDAP sign-ons will not work, as only a single authorization protocol may be used at a time. 19.5.1. Installing Red Hat Single Sign-On You can install Red Hat Single Sign-On by downloading a ZIP file and unpacking it, or by using an RPM file. Follow the installation instructions at Red Hat SSO Installation . Prepare the following information: Path/location of the Open ID Connect server. The subscription channel for the correct repositories. Valid Red Hat subscription login credentials. 19.5.2. Configuring the LDAP group mapper Add the LDAP groups mapper with the following information: Name : ldapgroups Mapper Type : group-ldap-mapper LDAP Groups DN : ou=groups,dc=example,dc=com Group Object Classes : groupofuniquenames ( adapt this class according to your LDAP server setup ) Membership LDAP Attribute : uniquemember ( adapt this class according to your LDAP server setup ) Click Save . Click Sync LDAP Groups to KeyCloak . At the bottom of the User Federation Provider page, click Synchronize all users . In the Clients tab, under Add Client , add ovirt-engine as the Client ID , and enter the engine URL as the Root URL . Modify the Client Protocol to openid-connect and the Access Type to confidential . In the Clients tab, under Ovirt-engine > Advanced Settings , increase the Access Token Lifespan . Add https://rhvm.example.com:443/* as a valid redirect URI. The client secret is generated, and can be viewed in the Credentials tab. In the Clients tab under Create Mapper Protocol , create a mapper with the following settings: Name : groups Mapper Type : Group Membership Token Claim Name : groups Full group path : ON Add to ID token : ON Add to access token : ON Add to userinfo : ON Add the Builtin Protocol Mapper for username . Create the scopes needed by ovirt-engine , ovirt-app-api and ovirt-app-admin . Use the scopes created in the previous step to set up optional client scopes for the ovirt-engine client. 19.5.3. Configuring Apache in the Manager Configure Apache in the Manager. Create a new httpd config file ovirt-openidc.conf in /etc/httpd/conf.d with the following content: To save the configuration changes, restart httpd and ovirt-engine : Create the file openidc-authn.properties in /etc/ovirt-engine/extensions.d/ with the following content: Create the file openidc-http-mapping.properties in /etc/ovirt-engine/extensions.d/ with the following content: Create the file openidc-authz.properties in /etc/ovirt-engine/extensions.d/ with the following content: Create the file 99-enable-external-auth.conf in /etc/ovirt-engine/engine.conf.d/ with the following content: 19.5.4. Configuring OVN If you configured the ovirt-ovn-provider in the Manager, you need to configure the OVN provider credentials. Create the file 20-setup-ovirt-provider-ovn.conf in /etc/ovirt-provider-ovn/conf.d/ with the following content, where user1 belongs to the LDAP group ovirt-administrator , and openidchttp is the profile configured for aaa-ldap-misc . Restart the ovirt-provider-ovn : Log in to the Administration Portal, navigate to Administration → Providers , select ovirt-provider-ovn , and click Edit to update the password for the ovn provider.
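Before configuring mod_auth_openidc and the engine extensions, it can help to confirm that the OIDC discovery document referenced by OIDCProviderMetadataURL is reachable from the Manager. This quick check is not part of the official procedure; the host name matches the examples used in this section, and -k mirrors the disabled SSL validation in the sample configuration.
# Fetch the OpenID Connect discovery document from the Red Hat SSO server
curl -sk https://SSO.example.com/auth/realms/master/.well-known/openid-configuration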
[ "yum install mod_auth_openidc", "LoadModule auth_openidc_module modules/mod_auth_openidc.so OIDCProviderMetadataURL https://SSO.example.com/auth/realms/master/.well-known/openid-configuration OIDCSSLValidateServer Off OIDCClientID ovirt-engine OIDCClientSecret <client_SSO _generated_key> OIDCRedirectURI https://rhvm.example.com/ovirt-engine/callback OIDCDefaultURL https://rhvm.example.com/ovirt-engine/login?scope=ovirt-app-admin+ovirt-app-portal+ovirt-ext%3Dauth%3Asequence-priority%3D%7E maps the prefered_username claim to the REMOTE_USER environment variable: OIDCRemoteUserClaim <preferred_username> OIDCCryptoPassphrase <random1234> <LocationMatch ^/ovirt-engine/sso/(interactive-login-negotiate|oauth/token-http-auth)|^/ovirt-engine/callback> <If \"req('Authorization') !~ /^(Bearer|Basic)/i\"> Require valid-user AuthType openid-connect ErrorDocument 401 \"<html><meta http-equiv=\\\"refresh\\\" content=\\\"0; url=/ovirt-engine/sso/login-unauthorized\\\"/><body><a href=\\\"/ovirt-engine/sso/login-unauthorized\\\">Here</a></body></html>\" </If> </LocationMatch> OIDCOAuthIntrospectionEndpoint https://SSO.example.com/auth/realms/master/protocol/openid-connect/token/introspect OIDCOAuthSSLValidateServer Off OIDCOAuthIntrospectionEndpointParams token_type_hint=access_token OIDCOAuthClientID ovirt-engine OIDCOAuthClientSecret <client_SSO _generated_key> OIDCOAuthRemoteUserClaim sub <LocationMatch ^/ovirt-engine/(apiUSD|api/)> AuthType oauth20 Require valid-user </LocationMatch>", "systemctl restart httpd systemctl restart ovirt-engine", "ovirt.engine.extension.name = openidc-authn ovirt.engine.extension.bindings.method = jbossmodule ovirt.engine.extension.binding.jbossmodule.module = org.ovirt.engine-extensions.aaa.misc ovirt.engine.extension.binding.jbossmodule.class = org.ovirt.engineextensions.aaa.misc.http.AuthnExtension ovirt.engine.extension.provides = org.ovirt.engine.api.extensions.aaa.Authn ovirt.engine.aaa.authn.profile.name = openidchttp ovirt.engine.aaa.authn.authz.plugin = openidc-authz ovirt.engine.aaa.authn.mapping.plugin = openidc-http-mapping config.artifact.name = HEADER config.artifact.arg = OIDC_CLAIM_preferred_username", "ovirt.engine.extension.name = openidc-http-mapping ovirt.engine.extension.bindings.method = jbossmodule ovirt.engine.extension.binding.jbossmodule.module = org.ovirt.engine-extensions.aaa.misc ovirt.engine.extension.binding.jbossmodule.class = org.ovirt.engineextensions.aaa.misc.mapping.MappingExtension ovirt.engine.extension.provides = org.ovirt.engine.api.extensions.aaa.Mapping config.mapAuthRecord.type = regex config.mapAuthRecord.regex.mustMatch = false config.mapAuthRecord.regex.pattern = ^(?<user>.*?)((\\\\\\\\(?<at>@)(?<suffix>.*?)@.*)|(?<realm>@.*))USD config.mapAuthRecord.regex.replacement = USD{user}USD{at}USD{suffix}", "ovirt.engine.extension.name = openidc-authz ovirt.engine.extension.bindings.method = jbossmodule ovirt.engine.extension.binding.jbossmodule.module = org.ovirt.engine-extensions.aaa.misc ovirt.engine.extension.binding.jbossmodule.class = org.ovirt.engineextensions.aaa.misc.http.AuthzExtension ovirt.engine.extension.provides = org.ovirt.engine.api.extensions.aaa.Authz config.artifact.name.arg = OIDC_CLAIM_preferred_username config.artifact.groups.arg = OIDC_CLAIM_groups", "ENGINE_SSO_ENABLE_EXTERNAL_SSO=true ENGINE_SSO_EXTERNAL_SSO_LOGOUT_URI=\"USD{ENGINE_URI}/callback\" EXTERNAL_OIDC_USER_INFO_END_POINT=https://SSO.example.com/auth/realms/master/protocol/openid-connect/userinfo 
EXTERNAL_OIDC_TOKEN_END_POINT=https://SSO.example.com/auth/realms/master/protocol/openid-connect/token EXTERNAL_OIDC_LOGOUT_END_POINT=https://SSO.example.com/auth/realms/master/protocol/openid-connect/logout EXTERNAL_OIDC_CLIENT_ID=ovirt-engine EXTERNAL_OIDC_CLIENT_SECRET=\"<client_SSO _generated_key>\" EXTERNAL_OIDC_HTTPS_PKI_TRUST_STORE=\"/etc/pki/java/cacerts\" EXTERNAL_OIDC_HTTPS_PKI_TRUST_STORE_PASSWORD=\"\" EXTERNAL_OIDC_SSL_VERIFY_CHAIN=false EXTERNAL_OIDC_SSL_VERIFY_HOST=false", "ovirt-admin-user-name=user1@openidchttp", "systemctl restart ovirt-provider-ovn" ]
https://docs.redhat.com/en/documentation/red_hat_virtualization/4.3/html/administration_guide/configuring_red_hat_sso
Chapter 15. Configuring real-time compute
Chapter 15. Configuring real-time compute As a cloud administrator, you might need instances on your Compute nodes to adhere to low-latency policies and perform real-time processing. Real-time Compute nodes include a real-time capable kernel, specific virtualization modules, and optimized deployment parameters, to facilitate real-time processing requirements and minimize latency. The process to enable Real-time Compute includes: configuring the BIOS settings of the Compute nodes building a real-time image with real-time kernel and Real-Time KVM (RT-KVM) kernel module assigning the ComputeRealTime role to the Compute nodes For a use-case example of real-time Compute deployment for NFV workloads, see the Example: Configuring OVS-DPDK with ODL and VXLAN tunnelling section in the Network Functions Virtualization Planning and Configuration Guide . Note Real-time Compute nodes are supported only with Red Hat Enterprise Linux version 7.5 or later. 15.1. Preparing Compute nodes for real-time Before you can deploy Real-time Compute in your overcloud, you must enable Red Hat Enterprise Linux Real-Time KVM (RT-KVM), configure your BIOS to support real-time, and build the real-time overcloud image. Prerequisites You must use Red Hat certified servers for your RT-KVM Compute nodes. See Red Hat Enterprise Linux for Real Time 7 certified servers for details. You need a separate subscription to Red Hat OpenStack Platform for Real Time to access the rhel-8-for-x86_64-nfv-rpms repository. For details on managing repositories and subscriptions for your undercloud, see Registering the undercloud and attaching subscriptions in the Director Installation and Usage guide. Procedure To build the real-time overcloud image, you must enable the rhel-8-for-x86_64-nfv-rpms repository for RT-KVM. To check which packages will be installed from the repository, enter the following command: To build the overcloud image for Real-time Compute nodes, install the libguestfs-tools package on the undercloud to get the virt-customize tool: Important If you install the libguestfs-tools package on the undercloud, disable iscsid.socket to avoid port conflicts with the tripleo_iscsid service on the undercloud: Extract the images: Copy the default image: Register the image and configure the required subscriptions: Replace the username and password values with your Red Hat customer account details. For general information about building a Real-time overcloud image, see the knowledgebase article Modifying the Red Hat Enterprise Linux OpenStack Platform Overcloud Image with virt-customize . Find the SKU of the Red Hat OpenStack Platform for Real Time subscription. The SKU might be located on a system that is already registered to the Red Hat Subscription Manager with the same account and credentials: Attach the Red Hat OpenStack Platform for Real Time subscription to the image: Create a script to configure rt on the image: Run the script to configure the real-time image: Re-label SELinux: Extract vmlinuz and initrd . For example: Note The software version in the vmlinuz and initramfs filenames vary with the kernel version. Upload the image: You now have a real-time image you can use with the ComputeRealTime composable role on select Compute nodes. To reduce latency on your Real-time Compute nodes, you must modify the BIOS settings in the Compute nodes. 
You should disable all options for the following components in your Compute node BIOS settings: Power Management Hyper-Threading CPU sleep states Logical processors See Setting BIOS parameters for descriptions of these settings and the impact of disabling them. See your hardware manufacturer documentation for complete details on how to change BIOS settings. 15.2. Deploying the Real-time Compute role Red Hat OpenStack Platform (RHOSP) director provides the template for the ComputeRealTime role, which you can use to deploy real-time Compute nodes. You must perform additional steps to designate Compute nodes for real-time. Procedure Based on the /usr/share/openstack-tripleo-heat-templates/environments/compute-real-time-example.yaml file, create a compute-real-time.yaml environment file that sets the parameters for the ComputeRealTime role. The file must include values for the following parameters: IsolCpusList and NovaComputeCpuDedicatedSet : List of isolated CPU cores and virtual CPU pins to reserve for real-time workloads. This value depends on the CPU hardware of your real-time Compute nodes. NovaComputeCpuSharedSet : List of host CPUs to reserve for emulator threads. KernelArgs : Arguments to pass to the kernel of the Real-time Compute nodes. For example, you can use default_hugepagesz=1G hugepagesz=1G hugepages=<number_of_1G_pages_to_reserve> hugepagesz=2M hugepages=<number_of_2M_pages> to define the memory requirements of guests that have huge pages with multiple sizes. In this example, the default size is 1GB but you can also reserve 2M huge pages. NovaComputeDisableIrqBalance : Ensure that this parameter is set to true for Real-time Compute nodes, because the tuned service manages IRQ balancing for real-time deployments instead of the irqbalance service. Add the ComputeRealTime role to your roles data file and regenerate the file. For example: This command generates a ComputeRealTime role with contents similar to the following example, and also sets the ImageDefault option to overcloud-realtime-compute . For general information about custom roles and about the roles-data.yaml , see Roles . Create the compute-realtime flavor to tag nodes that you want to designate for real-time workloads. For example: Tag each node that you want to designate for real-time workloads with the compute-realtime profile. Map the ComputeRealTime role to the compute-realtime flavor by creating an environment file with the following content: Add your environment files and the new roles file to the stack with your other environment files and deploy the overcloud: 15.3. Sample deployment and testing scenario The following example procedure uses a simple single-node deployment to test that the environment variables and other supporting configuration is set up correctly. Actual performance results might vary, depending on the number of nodes and instances that you deploy in your cloud. Procedure Create the compute-real-time.yaml file with the following parameters: Create a new rt_roles_data.yaml file with the ComputeRealTime role: Add compute-real-time.yaml and rt_roles_data.yaml to the stack with your other environment files and deploy the overcloud: This command deploys one Controller node and one Real-time Compute node. Log into the Real-time Compute node and check the following parameters: 15.4. Launching and tuning real-time instances After you deploy and configure Real-time Compute nodes, you can launch real-time instances on those nodes. 
You can further configure these real-time instances with CPU pinning, NUMA topology filters, and huge pages. Prerequisites The compute-realtime flavor exists on the overcloud, as described in Deploying the Real-time Compute role . Procedure Launch the real-time instance: Optional: Verify that the instance uses the assigned emulator threads: Pinning CPUs and setting emulator thread policy To ensure that there are enough CPUs on each Real-time Compute node for real-time workloads, you need to pin at least one virtual CPU (vCPU) for an instance to a physical CPU (pCPUs) on the host. The emulator threads for that vCPU then remain dedicated to that pCPU. Configure your flavor to use a dedicated CPU policy. To do so, set the hw:cpu_policy parameter to dedicated on the flavor. For example: Note Make sure that your resources quota has enough pCPUs for the Real-time Compute nodes to consume. Optimizing your network configuration Depending on the needs of your deployment, you might need to set parameters in the network-environment.yaml file to tune your network for certain real-time workloads. To review an example configuration optimized for OVS-DPDK, see the Configuring the OVS-DPDK parameters section of the Network Functions Virtualization Planning and Configuration Guide . Configuring huge pages It is recommended to set the default huge pages size to 1GB. Otherwise, TLB flushes might create jitter in the vCPU execution. For general information about using huge pages, see the Running DPDK applications web page. Disabling Performance Monitoring Unit (PMU) emulation Instances can provide PMU metrics by specifying an image or flavor with a vPMU. Providing PMU metrics introduces latency. Note The vPMU defaults to enabled when NovaLibvirtCPUMode is set to host-passthrough . If you do not need PMU metrics, then disable the vPMU to reduce latency by setting the PMU property to "False" in the image or flavor used to create the instance: Image: hw_pmu=False Flavor: hw:pmu=False
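Taken together, the tuning options above are usually applied to the guest flavor used for real-time instances. The following sketch combines a dedicated CPU policy, 1 GB huge pages, a shared emulator thread policy, and a disabled vPMU; the flavor name is a placeholder and the property values are illustrative only, so adjust them to your deployment and verify them against your Compute service version.
# pin vCPUs, back guest memory with 1GB huge pages, keep emulator threads on the shared CPU set, and disable the vPMU
openstack flavor set <realtime_guest_flavor> \
  --property hw:cpu_policy=dedicated \
  --property hw:emulator_threads_policy=share \
  --property hw:mem_page_size=1GB \
  --property hw:pmu=false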
[ "dnf repo-pkgs rhel-8-for-x86_64-nfv-rpms list Loaded plugins: product-id, search-disabled-repos, subscription-manager Available Packages kernel-rt.x86_64 4.18.0-80.7.1.rt9.153.el8_0 rhel-8-for-x86_64-nfv-rpms kernel-rt-debug.x86_64 4.18.0-80.7.1.rt9.153.el8_0 rhel-8-for-x86_64-nfv-rpms kernel-rt-debug-devel.x86_64 4.18.0-80.7.1.rt9.153.el8_0 rhel-8-for-x86_64-nfv-rpms kernel-rt-debug-kvm.x86_64 4.18.0-80.7.1.rt9.153.el8_0 rhel-8-for-x86_64-nfv-rpms kernel-rt-devel.x86_64 4.18.0-80.7.1.rt9.153.el8_0 rhel-8-for-x86_64-nfv-rpms kernel-rt-doc.noarch 4.18.0-80.7.1.rt9.153.el8_0 rhel-8-for-x86_64-nfv-rpms kernel-rt-kvm.x86_64 4.18.0-80.7.1.rt9.153.el8_0 rhel-8-for-x86_64-nfv-rpms [ output omitted...]", "(undercloud)USD sudo dnf install libguestfs-tools", "sudo systemctl disable --now iscsid.socket", "(undercloud)USD tar -xf /usr/share/rhosp-director-images/overcloud-full.tar (undercloud)USD tar -xf /usr/share/rhosp-director-images/ironic-python-agent.tar", "(undercloud)USD cp overcloud-full.qcow2 overcloud-realtime-compute.qcow2", "(undercloud)USD virt-customize -a overcloud-realtime-compute.qcow2 --run-command 'subscription-manager register --username=<username> --password=<password>' [ 0.0] Examining the guest [ 10.0] Setting a random seed [ 10.0] Running: subscription-manager register --username=<username> --password=<password> [ 24.0] Finishing off", "sudo subscription-manager list", "(undercloud)USD virt-customize -a overcloud-realtime-compute.qcow2 --run-command 'subscription-manager attach --pool [subscription-pool]'", "(undercloud)USD cat rt.sh #!/bin/bash set -eux subscription-manager repos --enable=[REPO_ID] dnf -v -y --setopt=protected_packages= erase kernel.USD(uname -m) dnf -v -y install kernel-rt kernel-rt-kvm tuned-profiles-nfv-host # END OF SCRIPT", "(undercloud)USD virt-customize -a overcloud-realtime-compute.qcow2 -v --run rt.sh 2>&1 | tee virt-customize.log", "(undercloud)USD virt-customize -a overcloud-realtime-compute.qcow2 --selinux-relabel", "(undercloud)USD mkdir image (undercloud)USD guestmount -a overcloud-realtime-compute.qcow2 -i --ro image (undercloud)USD cp image/boot/vmlinuz-4.18.0-80.7.1.rt9.153.el8_0.x86_64 ./overcloud-realtime-compute.vmlinuz (undercloud)USD cp image/boot/initramfs-4.18.0-80.7.1.rt9.153.el8_0.x86_64.img ./overcloud-realtime-compute.initrd (undercloud)USD guestunmount image", "(undercloud)USD openstack overcloud image upload --update-existing --os-image-name overcloud-realtime-compute.qcow2", "cp /usr/share/openstack-tripleo-heat-templates/environments/compute-real-time-example.yaml /home/stack/templates/compute-real-time.yaml", "openstack overcloud roles generate -o /home/stack/templates/rt_roles_data.yaml Controller Compute ComputeRealTime", "- name: ComputeRealTime description: | Compute role that is optimized for real-time behaviour. When using this role it is mandatory that an overcloud-realtime-compute image is available and the role specific parameters IsolCpusList, NovaComputeCpuDedicatedSet and NovaComputeCpuSharedSet are set accordingly to the hardware of the real-time compute nodes. 
CountDefault: 1 networks: InternalApi: subnet: internal_api_subnet Tenant: subnet: tenant_subnet Storage: subnet: storage_subnet HostnameFormatDefault: '%stackname%-computerealtime-%index%' ImageDefault: overcloud-realtime-compute RoleParametersDefault: TunedProfileName: \"realtime-virtual-host\" KernelArgs: \"\" # these must be set in an environment file IsolCpusList: \"\" # or similar according to the hardware NovaComputeCpuDedicatedSet: \"\" # of real-time nodes NovaComputeCpuSharedSet: \"\" # NovaLibvirtMemStatsPeriodSeconds: 0 ServicesDefault: - OS::TripleO::Services::Aide - OS::TripleO::Services::AuditD - OS::TripleO::Services::BootParams - OS::TripleO::Services::CACerts - OS::TripleO::Services::CephClient - OS::TripleO::Services::CephExternal - OS::TripleO::Services::CertmongerUser - OS::TripleO::Services::Collectd - OS::TripleO::Services::ComputeCeilometerAgent - OS::TripleO::Services::ComputeNeutronCorePlugin - OS::TripleO::Services::ComputeNeutronL3Agent - OS::TripleO::Services::ComputeNeutronMetadataAgent - OS::TripleO::Services::ComputeNeutronOvsAgent - OS::TripleO::Services::Docker - OS::TripleO::Services::Fluentd - OS::TripleO::Services::IpaClient - OS::TripleO::Services::Ipsec - OS::TripleO::Services::Iscsid - OS::TripleO::Services::Kernel - OS::TripleO::Services::LoginDefs - OS::TripleO::Services::MetricsQdr - OS::TripleO::Services::MySQLClient - OS::TripleO::Services::NeutronBgpVpnBagpipe - OS::TripleO::Services::NeutronLinuxbridgeAgent - OS::TripleO::Services::NeutronVppAgent - OS::TripleO::Services::NovaCompute - OS::TripleO::Services::NovaLibvirt - OS::TripleO::Services::NovaLibvirtGuests - OS::TripleO::Services::NovaMigrationTarget - OS::TripleO::Services::ContainersLogrotateCrond - OS::TripleO::Services::OpenDaylightOvs - OS::TripleO::Services::Podman - OS::TripleO::Services::Rhsm - OS::TripleO::Services::RsyslogSidecar - OS::TripleO::Services::Securetty - OS::TripleO::Services::SensuClient - OS::TripleO::Services::SkydiveAgent - OS::TripleO::Services::Snmp - OS::TripleO::Services::Sshd - OS::TripleO::Services::Timesync - OS::TripleO::Services::Timezone - OS::TripleO::Services::TripleoFirewall - OS::TripleO::Services::TripleoPackages - OS::TripleO::Services::Vpp - OS::TripleO::Services::OVNController - OS::TripleO::Services::OVNMetadataAgent", "source ~/stackrc openstack flavor create --id auto --ram 6144 --disk 40 --vcpus 4 compute-realtime openstack flavor set --property \"cpu_arch\"=\"x86_64\" --property \"capabilities:boot_option\"=\"local\" --property \"capabilities:profile\"=\"compute-realtime\" compute-realtime", "openstack baremetal node set --property capabilities='profile:compute-realtime,boot_option:local' <node_uuid>", "parameter_defaults: OvercloudComputeRealTimeFlavor: compute-realtime", "(undercloud)USD openstack overcloud deploy --templates -e [your environment files] -r /home/stack/templates/rt~/my_roles_data.yaml -e home/stack/templates/compute-real-time.yaml", "parameter_defaults: ComputeRealTimeParameters: IsolCpusList: \"1\" NovaComputeCpuDedicatedSet: \"1\" NovaComputeCpuSharedSet: \"0\" KernelArgs: \"default_hugepagesz=1G hugepagesz=1G hugepages=16\" NovaComputeDisableIrqBalance: true", "openstack overcloud roles generate -o ~/rt_roles_data.yaml Controller ComputeRealTime", "(undercloud)USD openstack overcloud deploy --templates -r /home/stack/rt_roles_data.yaml -e [your environment files] -e /home/stack/templates/compute-real-time.yaml", "uname -a Linux overcloud-computerealtime-0 4.18.0-80.7.1.rt9.153.el8_0.x86_64 #1 SMP PREEMPT RT Wed Dec 13 
13:37:53 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux cat /proc/cmdline BOOT_IMAGE=/boot/vmlinuz-4.18.0-80.7.1.rt9.153.el8_0.x86_64 root=UUID=45ae42d0-58e7-44fe-b5b1-993fe97b760f ro console=tty0 crashkernel=auto console=ttyS0,115200 default_hugepagesz= 1G hugepagesz= 1G hugepages= 16 tuned-adm active Current active profile: realtime-virtual-host grep ^isolated_cores /etc/tuned/realtime-virtual-host-variables.conf isolated_cores=1 cat /usr/lib/tuned/realtime-virtual-host/lapic_timer_adv_ns 4000 # The returned value must not be 0 cat /sys/module/kvm/parameters/lapic_timer_advance_ns 4000 # The returned value must not be 0 To validate hugepages at a host level: cat /proc/meminfo | grep -E HugePages_Total|Hugepagesize HugePages_Total: 64 Hugepagesize: 1048576 kB To validate hugepages on a per NUMA level (below example is a two NUMA compute host): cat /sys/devices/system/node/node0/hugepages/hugepages-1048576kB/nr_hugepages 32 cat /sys/devices/system/node/node1/hugepages/hugepages-1048576kB/nr_hugepages 32 crudini --get /var/lib/config-data/puppet-generated/nova_libvirt/etc/nova/nova.conf compute cpu_dedicated_set 1 crudini --get /var/lib/config-data/puppet-generated/nova_libvirt/etc/nova/nova.conf compute cpu_shared_set 0 systemctl status irqbalance ● irqbalance.service - irqbalance daemon Loaded: loaded (/usr/lib/systemd/system/irqbalance.service; enabled; vendor preset: enabled) Active: inactive (dead) since Tue 2021-03-30 13:36:31 UTC; 2s ago", "openstack server create --image <rhel> --flavor r1.small --nic net-id=<dpdk_net> test-rt", "virsh dumpxml <instance_id> | grep vcpu -A1 <vcpu placement='static'>4</vcpu> <cputune> <vcpupin vcpu='0' cpuset='1'/> <vcpupin vcpu='1' cpuset='3'/> <vcpupin vcpu='2' cpuset='5'/> <vcpupin vcpu='3' cpuset='7'/> <emulatorpin cpuset='0-1'/> <vcpusched vcpus='2-3' scheduler='fifo' priority='1'/> </cputune>", "openstack flavor set --property hw:cpu_policy=dedicated 99" ]
https://docs.redhat.com/en/documentation/red_hat_openstack_platform/16.2/html/configuring_the_compute_service_for_instance_creation/assembly_configuring-realtime-compute_real-time-compute
Chapter 3. Creating an entitlement certificate and a client configuration RPM
Chapter 3. Creating an entitlement certificate and a client configuration RPM RHUI uses entitlement certificates to ensure that the client making requests on the repositories is authorized by the cloud provider to access those repositories. The entitlement certificate must be signed by the cloud provider's Certificate Authority (CA) Certificate. The CA Certificate is installed on the CDS as part of its configuration. 3.1. Creating a client entitlement certificate with the Red Hat Update Infrastructure Management Tool When Red Hat issues the original entitlement certificate, it grants access to the repositories you requested. When you create client entitlement certificates, you decide how to subdivide your clients and create a separate certificate for each one. Each certificate can then be used to create individual RPMs. Prerequisites The entitlement certificate must be signed by the cloud provider's CA Certificate. Procedure Navigate to the Red Hat Update Infrastructure Management Tool home screen: Press e to select create entitlement certificates and client configuration RPMs . Press e to select generate an entitlement certificate . Select which repositories to include in the entitlement certificate by typing the number of the repository at the prompt. Typing the number of a repository places an x next to the name of that repository. Continue until all repositories you want to add have been checked. Important Include only repositories for a single RHEL version in a single entitlement. Adding repositories for multiple RHEL versions leads to an unusable yum configuration file. Press c at the prompt to confirm. Enter a name for the certificate. This name helps identify the certificate within the Red Hat Update Infrastructure Management Tool and generate the name of the certificate and key files. Enter a path to save the certificate. Leave the field blank to save it to the current working directory. Enter the number of days the certificate should be valid for. Leave the field blank for 365 days. The details of the repositories to be included in the certificate display. Press y at the prompt to confirm the information and create the entitlement certificate. Verification You will see a similar message if the entitlement certificate was created: 3.2. Creating a client entitlement certificate with the CLI When Red Hat issues the original entitlement certificate, it grants access to the repositories you requested. When you create client entitlement certificates, you decide how to subdivide your clients and create a separate certificate for each one. Each certificate can then be used to create individual RPMs. Prerequisites The entitlement certificate must be signed by the cloud provider's CA Certificate. Procedure Use the following command to create an entitlement certificate from the RHUI CLI: Note Use Red Hat repository labels, not IDs. To get a list of all labels, run the rhui-manager client labels command. If you include a protected custom repository in the certificate, use the repository's ID instead. Verification A similar message displays if you successfully created an entitlement certificate: 3.3. Verifying whether the client entitlement certificate is compliant with the FUTURE cryptographic policy You can verify which cryptographic policies your instance of RHUI is compliant with by checking the client entitlement certificate: Certificates that are generated by RHUI versions 3.1 to 4.0 are compliant with FIPS and DEFAULT cryptographic policies.
Certificates that are generated by RHUI versions 4.1 and later are compliant with FIPS , DEFAULT and FUTURE cryptographic policy. Prerequisites Ensure that you know the location of the client entitlement certificate. The default location is /etc/pki/rhui/product/content.crt . Procedure In your client RPM, or on the machine where the RPM is installed, run the following command specifying the path where the client entitlement certificate is stored: Check the RSA key length: If the length is 2048 bits, then the client entitlement certificate is not compliant with the FUTURE policy. If the length is 4096 bits, then the client entitlement certificate is compliant with the FUTURE policy. Additional resources Creating a client entitlement certificate with the Red Hat Update Infrastructure Management Tool Creating a client entitlement certificate with the CLI 3.4. Changing the repository ID prefix in a client configuration RPM using the CLI When creating RPMs, you can either set a custom repository ID prefix or remove it entirely. By default, the prefix is rhui- . Procedure On the RHUA node, use the RHUI installer command to set or remove the prefix: Set a custom prefix: Remove the prefix entirely by using two quotation marks instead of the prefix. 3.5. Creating a client configuration RPM with the Red Hat Update Infrastructure Management Tool When Red Hat issues the original entitlement certificate, it grants access to the repositories you requested. When you create client entitlement certificates, you need to decide how to subdivide your clients and create a separate certificate for each one. You can then use each certificate to create individual RPMs for installation on the appropriate guest images. Use this procedure to create RPMs with the RHUI Management Tool. Procedure Navigate to the Red Hat Update Infrastructure Management Tool home screen: Press e to select create entitlement certificates and client configuration RPMs . From the Client Entitlement Management screen, press c to select create a client configuration RPM from an entitlement certificate . Enter the full path of a local directory to save the configuration files to: Enter the name of the RPM. Enter the version of the configuration RPM. The default version is 2.0. Enter the release of the configuration RPM. The default release is 1. Enter the full path to the entitlement certificate authorizing the client to access specific repositories. Enter the full path to the private key for the entitlement certificate. Select any unprotected custom repositories to be included in the client configuration. Press c to confirm selections or ? for more commands. Verification A similar message displays if the RPM was successfully created: 3.6. Creating a client configuration RPM with the CLI When Red Hat issues the original entitlement certificate, it grants access to the repositories you requested. When you create client entitlement certificates, you need to decide how to subdivide your clients and create a separate certificate for each one. You can then use each certificate to create individual RPMs for installation on the appropriate guest images. Use this procedure to create RPMs with the CLI. Procedure Use the following command to create an RPM with the RHUI CLI: Note When using the CLI, you can also specify the URL of the proxy server to use with RHUI repositories, or you can use _none_ (including the underscores) to override any global yum settings on a client machine. To specify a proxy, use the --proxy parameter. 
Verification A similar message displays if you successfully created a client configuration RPM: 3.7. Typical client RPM workflow As a CCSP, you can offer various versions of Red Hat Enterprise Linux and a variety of layered products available on top of it. In addition to the Red Hat repositories that provide this content, you will need custom repositories to provide updates to client configuration RPMs for these Red Hat Enterprise Linux versions and layered products. You must create a custom repository for each Red Hat Enterprise Linux version and each layered product sold separately. For example, you will need separate custom repositories for the base Red Hat Enterprise Linux 8 offering and for SAP on Red Hat Enterprise Linux. These custom repositories will store the corresponding client configuration RPMs. Whenever you update these RPMs-for example, to add a new repository or to update an expiring certificate-you will upload newer versions to the corresponding custom repositories. It is good practice to sign all RPMs with a GPG key, ensuring that users are installing official packages from you that have not been tampered with. However, signing packages is outside the scope of RHUI, so you need to sign your client configuration RPMs using tools available in your company. To create the custom repository, you only need the public GPG key on the RHUA to configure it for use with the custom repository. Note that rhui-manager will automatically include the key in the client configuration RPM and use it for the custom repository in dnf configuration. Procedure In the following example, you will create a custom repository for the client configuration RPM for base Red Hat Enterprise Linux 8 on the x86_64 architecture: You can use a different repository ID and display name if desired, and ensure you specify the actual GPG key file. Add the relevant Red Hat repositories. The following YAML file contains the typical set of repositories for base Red Hat Enterprise Linux 8 on the x86_64 architecture, using unversioned repositories: To add and synchronize all these repositories using the YAML file above, run the following command: Create an entitlement certificate. You will need a list of repository labels that are to be allowed in the certificate. Repository labels are often identical to repository IDs, except when the repository ID contains a specific Red Hat Enterprise Linux minor version, in which case the label does not contain the minor version but only the major version. In the case of base Red Hat Enterprise Linux repositories, the IDs are identical, so you can extract them from the YAML file above, using the following Python code: Copy the output to the clipboard and store it as an environment variable; for example, USDlabels: In addition to the Red Hat Enterprise Linux repository labels, you also need to add the custom repository to the comma-separated list of labels when creating the entitlement certificate. Run the following command to create the entitlement certificate allowing access to both the Red Hat Enterprise Linux repositories and the custom repository: If your company's policy allows certificates to be valid for only one year, two years, etc., change the value of the --days argument accordingly. This command creates the files /root/rhel-8-x86_64.crt and /root/rhel-8-x86_64.key . You will need them in the next step. Create a client configuration RPM: Use an RPM name or version of your choice.
With the values above, the command creates the RPM and prints its location, which is: /tmp/rhui-client-rhel-8-x86_64-1.0/build/RPMS/noarch/rhui-client-rhel-8-x86_64-1.0-1.noarch.rpm Transfer this RPM from the RHUA to your system and sign it with the appropriate GPG key-the private key that corresponds to the public key that you used as the --gpg_public_keys parameter when you created the custom repository. You can then, for example, have the signed RPM preinstalled on Red Hat Enterprise Linux 8 x86_64 images in your cloud environment. You also need to transfer the signed RPM back to the RHUA and upload it to the custom repository for Red Hat Enterprise Linux 8 on x86_64: Verification Check the contents of the custom repository: This command should print the RPM file that you have uploaded. Once you have configured your CDS and HAProxy nodes, which is described later in this guide, you can also install the client configuration RPM on a test VM and verify access to all the relevant repositories by running the following command on the test VM: This command should print the configured Red Hat Enterprise Linux 8 repositories and the custom repository for client configuration RPMs. Updating the client configuration RPM When it is necessary to rebuild the client configuration RPM, increase the version number. If you used 1.0 in the previous invocation, use, for example, 2.0 now, and keep the rest of the parameters: Then, again, sign the newer RPM, transfer it to the RHUA, and upload it to the custom repository: Client VMs on which the previous version of the RPM is installed will now be able to update to the newer version. Note that it may be necessary to clean the dnf cache on the client VM to make dnf reload the repodata, which was updated when the newer RPM was uploaded. Note Do not combine x86_64 and ARM64 repositories in one entitlement certificate. The client configuration RPM created by rhui-manager using such a certificate would provide access to both architectures on the target client VM, which might cause conflicts. You would have to modify the rh-cloud.repo file and rebuild the RPM outside of rhui-manager . Note that, as long as you used --dir /tmp when creating the client configuration RPM, the artifacts are now stored in /tmp/rhui-client-rhel-8-x86_64-1.0/build/ . For detailed information about rebuilding RPMs, see Packaging and distributing software in the Red Hat Enterprise Linux documentation. Note It is currently impossible to make rhui-manager create the rh-cloud.repo file with certain repositories-for example, -debug and -source repositories-disabled by default. You would have to modify the rh-cloud.repo file and rebuild the RPM outside of rhui-manager . This issue is tracked in BZ#1772156 . Additional resources See also How to sign rpms with GPG for general information about package signing using basic tools.
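Because package signing is outside the scope of RHUI, the exact tooling is up to you. One possible approach on a Red Hat Enterprise Linux 8 system is rpmsign; the sketch below assumes the private half of the hypothetical my-cloud GPG key pair is already imported into the signing user's GPG keyring, and the key name shown is a placeholder.
# tell rpmsign which GPG key to use (the name must match the imported private key)
echo '%_gpg_name My Cloud <[email protected]>' >> ~/.rpmmacros
# sign the client configuration RPM in place
rpmsign --addsign /tmp/rhui-client-rhel-8-x86_64-1.0/build/RPMS/noarch/rhui-client-rhel-8-x86_64-1.0-1.noarch.rpm
# verify the signature after importing the public key into the RPM database
rpm --import /root/RPM-GPG-KEY-my-cloud
rpm -K /tmp/rhui-client-rhel-8-x86_64-1.0/build/RPMS/noarch/rhui-client-rhel-8-x86_64-1.0-1.noarch.rpm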
[ "rhui-manager", "Name of the certificate. This will be used as the name of the certificate file (name.crt) and its associated private key (name.key). Choose something that will help identify the products contained with it.", "Repositories to be included in the entitlement certificate: Red Hat Repositories Red Hat Enterprise Linux 8 for ARM 64 - AppStream (Debug RPMs) from RHUI Red Hat Enterprise Linux 8 for ARM 64 - AppStream (RPMs) from RHUI Red Hat Enterprise Linux 8 for ARM 64 - AppStream (Source RPMs) from RHUI Proceed? (y/n)", "..........................+++++ ....+++++ Entitlement certificate created at ./rhel8-for-rhui4.crt ------------------------------------------------------------------------------", "rhui-manager client cert --repo_label rhel-8-for-x86_64-appstream-eus-rhui-source-rpms --name rhuiclientexample --days 365 --dir /root/clientcert .............................................+++++ ...............................................................................+++++ Entitlement certificate created at /root/clientcert/rhuiclientexample.crt", "Entitlement certificate created at /root/clientcert/rhuiclientexample.crt", "openssl x509 -noout -text -in /etc/pki/rhui/product/content.crt | grep bit", "rhui-installer --rerun --client-repo-prefix CUSTOM_PREFIX", "rhui-installer --rerun --client-repo-prefix \"\"", "rhui-manager", "Full path to local directory in which the client configuration files generated by this tool should be stored (if this directory does not exist, it will be created):", "Successfully created client configuration RPM. Location: /tmp/clientrpmtest-2.0/build/RPMS/noarch/clientrpmtest-2.0-1.noarch.rpm", "rhui-manager client rpm --entitlement_cert /root/clientcert/rhuiclientexample.crt --private_key /root/clientcert/rhuiclientexample.key --rpm_name clientrpmtest --dir /tmp --unprotected_repos unprotected_repo1 Successfully created client configuration RPM. Location: /tmp/clientrpmtest-2.0/build/RPMS/noarch/clientrpmtest-2.0-1.noarch.rpm", "Successfully created client configuration RPM. 
Location: /tmp/clientrpmtest-2.0/build/RPMS/noarch/clientrpmtest-2.0-1.noarch.rpm", "rhui-manager repo create_custom --protected --repo_id client-config-rhel-8-x86_64 --display_name \"RHUI Client Configuration for RHEL 8 on x86_64\" --gpg_public_keys /root/RPM-GPG-KEY-my-cloud", "cat rhel-8-x86_64.yaml name: Red Hat Enterprise Linux 8 on x86_64 repo_ids: - codeready-builder-for-rhel-8-x86_64-rhui-debug-rpms-8 - codeready-builder-for-rhel-8-x86_64-rhui-rpms-8 - codeready-builder-for-rhel-8-x86_64-rhui-source-rpms-8 - rhel-8-for-x86_64-appstream-rhui-debug-rpms-8 - rhel-8-for-x86_64-appstream-rhui-rpms-8 - rhel-8-for-x86_64-appstream-rhui-source-rpms-8 - rhel-8-for-x86_64-baseos-rhui-debug-rpms-8 - rhel-8-for-x86_64-baseos-rhui-rpms-8 - rhel-8-for-x86_64-baseos-rhui-source-rpms-8 - rhel-8-for-x86_64-supplementary-rhui-debug-rpms-8 - rhel-8-for-x86_64-supplementary-rhui-rpms-8 - rhel-8-for-x86_64-supplementary-rhui-source-rpms-8", "rhui-manager repo add_by_file --file rhel-8-x86_64.yaml --sync_now", "import yaml with open(\"rhel-8-x86_64.yaml\") as repoyaml: repodata = yaml.safe_load(repoyaml) print(\",\".join(repodata[\"repo_ids\"]))", "labels=<paste the contents of the clipboard here>", "rhui-manager client cert --name rhel-8-x86_64 --dir /root --days 3650 --repo_label USDlabels,client-config-rhel-8-x86_64", "rhui-manager client rpm --dir /tmp --rpm_name rhui-client-rhel-8-x86_64 --rpm_version 1.0 --entitlement_cert /root/rhel-8-x86_64.crt --private_key /root/rhel-8-x86_64.key", "rhui-manager packages upload --repo_id client-config-rhel-8-x86_64 --packages /root/signed/rhui-client-rhel-8-x86_64-1.0-1.noarch.rpm", "rhui-manager packages list --repo_id client-config-rhel-8-x86_64", "yum -v repolist", "rhui-manager client rpm --dir /tmp --rpm_name rhui-client-rhel-8-x86_64 --rpm_version 2.0", "rhui-manager packages upload --repo_id client-config-rhel-8-x86_64 --packages /root/signed/rhui-client-rhel-8-x86_64-2.0-1.noarch.rpm" ]
https://docs.redhat.com/en/documentation/red_hat_update_infrastructure/4/html/configuring_and_managing_red_hat_update_infrastructure/assembly_cmg-creating-client-ent-cert-config-rpm_configuring-and-managing-red-hat-update-infrastructure
Chapter 8. ServiceNow Custom actions in Red Hat Developer Hub
Chapter 8. ServiceNow Custom actions in Red Hat Developer Hub Important These features are for Technology Preview only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs), might not be functionally complete, and Red Hat does not recommend using them for production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information on Red Hat Technology Preview features, see Technology Preview Features Scope . In Red Hat Developer Hub, you can access ServiceNow custom actions (custom actions) for fetching and registering resources in the catalog. The custom actions in Developer Hub enable you to facilitate and automate the management of records. Using the custom actions, you can perform the following actions: Create, update, or delete a record Retrieve information about a single record or multiple records 8.1. Enabling ServiceNow custom actions plugin in Red Hat Developer Hub In Red Hat Developer Hub, the ServiceNow custom actions are provided as a pre-loaded plugin, which is disabled by default. You can enable the custom actions plugin using the following procedure. Prerequisites Red Hat Developer Hub is installed and running. For more information about installing the Developer Hub, see Installing Red Hat Developer Hub on OpenShift Container Platform with the Helm chart . You have created a project in the Developer Hub. Procedure To activate the custom actions plugin, add a package with plugin name and update the disabled field in your Helm Chart as follows: global: dynamic: includes: - dynamic-plugins.default.yaml plugins: - package: ./dynamic-plugins/dist/janus-idp-backstage-scaffolder-backend-module-servicenow-dynamic disabled: false Note The default configuration for a plugin is extracted from the dynamic-plugins.default.yaml file, however, you can use a pluginConfig entry to override the default configuration. Set the following variables in the Helm Chart to access the custom actions: servicenow: # The base url of the ServiceNow instance. baseUrl: USD{SERVICENOW_BASE_URL} # The username to use for authentication. username: USD{SERVICENOW_USERNAME} # The password to use for authentication. password: USD{SERVICENOW_PASSWORD} 8.2. Supported ServiceNow custom actions in Red Hat Developer Hub The ServiceNow custom actions enable you to manage records in the Red Hat Developer Hub. The custom actions support the following HTTP methods for API requests: GET : Retrieves specified information from a specified resource endpoint POST : Creates or updates a resource PUT : Modify a resource PATCH : Updates a resource DELETE : Deletes a resource 8.2.1. ServiceNow custom actions [GET] servicenow:now:table:retrieveRecord Retrieves information of a specified record from a table in the Developer Hub. Table 8.1. Input parameters Name Type Requirement Description tableName string Required Name of the table to retrieve the record from sysId string Required Unique identifier of the record to retrieve sysparmDisplayValue enum("true", "false", "all") Optional Returns field display values such as true , actual values as false , or both. The default value is false . sysparmExcludeReferenceLink boolean Optional Set as true to exclude Table API links for reference fields. The default value is false . 
sysparmFields string[] Optional Array of fields to return in the response sysparmView string Optional Renders the response according to the specified UI view. You can override this parameter using sysparm_fields . sysparmQueryNoDomain boolean Optional Set as true to access data across domains if authorized. The default value is false . Table 8.2. Output parameters Name Type Description result Record<PropertyKey, unknown> The response body of the request [GET] servicenow:now:table:retrieveRecords Retrieves information about multiple records from a table in the Developer Hub. Table 8.3. Input parameters Name Type Requirement Description tableName string Required Name of the table to retrieve the records from sysparamQuery string Optional Encoded query string used to filter the results sysparmDisplayValue enum("true", "false", "all") Optional Returns field display values such as true , actual values as false , or both. The default value is false . sysparmExcludeReferenceLink boolean Optional Set as true to exclude Table API links for reference fields. The default value is false . sysparmSuppressPaginationHeader boolean Optional Set as true to suppress pagination header. The default value is false . sysparmFields string[] Optional Array of fields to return in the response sysparmLimit int Optional Maximum number of results returned per page. The default value is 10,000 . sysparmView string Optional Renders the response according to the specified UI view. You can override this parameter using sysparm_fields . sysparmQueryCategory string Optional Name of the query category to use for queries sysparmQueryNoDomain boolean Optional Set as true to access data across domains if authorized. The default value is false . sysparmNoCount boolean Optional Does not execute a select count(*) on the table. The default value is false . Table 8.4. Output parameters Name Type Description result Record<PropertyKey, unknown> The response body of the request [POST] servicenow:now:table:createRecord Creates a record in a table in the Developer Hub. Table 8.5. Input parameters Name Type Requirement Description tableName string Required Name of the table to save the record in requestBody Record<PropertyKey, unknown> Optional Field name and associated value for each parameter to define in the specified record sysparmDisplayValue enum("true", "false", "all") Optional Returns field display values such as true , actual values as false , or both. The default value is false . sysparmExcludeReferenceLink boolean Optional Set as true to exclude Table API links for reference fields. The default value is false . sysparmFields string[] Optional Array of fields to return in the response sysparmInputDisplayValue boolean Optional Set field values using their display value such as true or actual value as false . The default value is false . sysparmSuppressAutoSysField boolean Optional Set as true to suppress auto-generation of system fields. The default value is false . sysparmView string Optional Renders the response according to the specified UI view. You can override this parameter using sysparm_fields . Table 8.6. Output parameters Name Type Description result Record<PropertyKey, unknown> The response body of the request [PUT] servicenow:now:table:modifyRecord Modifies a record in a table in the Developer Hub. Table 8.7. 
Input parameters Name Type Requirement Description tableName string Required Name of the table to modify the record from sysId string Required Unique identifier of the record to modify requestBody Record<PropertyKey, unknown> Optional Field name and associated value for each parameter to define in the specified record sysparmDisplayValue enum("true", "false", "all") Optional Returns field display values such as true , actual values as false , or both. The default value is false . sysparmExcludeReferenceLink boolean Optional Set as true to exclude Table API links for reference fields. The default value is false . sysparmFields string[] Optional Array of fields to return in the response sysparmInputDisplayValue boolean Optional Set field values using their display value such as true or actual value as false . The default value is false . sysparmSuppressAutoSysField boolean Optional Set as true to suppress auto-generation of system fields. The default value is false . sysparmView string Optional Renders the response according to the specified UI view. You can override this parameter using sysparm_fields . sysparmQueryNoDomain boolean Optional Set as true to access data across domains if authorized. The default value is false . Table 8.8. Output parameters Name Type Description result Record<PropertyKey, unknown> The response body of the request [PATCH] servicenow:now:table:updateRecord Updates a record in a table in the Developer Hub. Table 8.9. Input parameters Name Type Requirement Description tableName string Required Name of the table to update the record in sysId string Required Unique identifier of the record to update requestBody Record<PropertyKey, unknown> Optional Field name and associated value for each parameter to define in the specified record sysparmDisplayValue enum("true", "false", "all") Optional Returns field display values such as true , actual values as false , or both. The default value is false . sysparmExcludeReferenceLink boolean Optional Set as true to exclude Table API links for reference fields. The default value is false . sysparmFields string[] Optional Array of fields to return in the response sysparmInputDisplayValue boolean Optional Set field values using their display value such as true or actual value as false . The default value is false . sysparmSuppressAutoSysField boolean Optional Set as true to suppress auto-generation of system fields. The default value is false . sysparmView string Optional Renders the response according to the specified UI view. You can override this parameter using sysparm_fields . sysparmQueryNoDomain boolean Optional Set as true to access data across domains if authorized. The default value is false . Table 8.10. Output parameters Name Type Description result Record<PropertyKey, unknown> The response body of the request [DELETE] servicenow:now:table:deleteRecord Deletes a record from a table in the Developer Hub. Table 8.11. Input parameters Name Type Requirement Description tableName string Required Name of the table to delete the record from sysId string Required Unique identifier of the record to delete sysparmQueryNoDomain boolean Optional Set as true to access data across domains if authorized. The default value is false .
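The input parameters listed above mirror the query parameters of the standard ServiceNow Table API that these actions call. As a rough, hand-written equivalent of retrieveRecord only, assuming the same placeholder instance URL and credentials used in the plugin configuration and a hypothetical incident record, the request could also be made directly with curl:
# fetch one incident record, returning display values for a limited set of fields
curl -s -u "${SERVICENOW_USERNAME}:${SERVICENOW_PASSWORD}" \
  -H "Accept: application/json" \
  "${SERVICENOW_BASE_URL}/api/now/table/incident/<sys_id>?sysparm_display_value=true&sysparm_fields=number,short_description"
This is intended only to illustrate how parameters such as sysparmDisplayValue and sysparmFields map onto the sysparm_display_value and sysparm_fields query parameters; in Developer Hub itself you invoke the actions from scaffolder templates rather than calling the API directly.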
[ "global: dynamic: includes: - dynamic-plugins.default.yaml plugins: - package: ./dynamic-plugins/dist/janus-idp-backstage-scaffolder-backend-module-servicenow-dynamic disabled: false", "servicenow: # The base url of the ServiceNow instance. baseUrl: USD{SERVICENOW_BASE_URL} # The username to use for authentication. username: USD{SERVICENOW_USERNAME} # The password to use for authentication. password: USD{SERVICENOW_PASSWORD}" ]
https://docs.redhat.com/en/documentation/red_hat_developer_hub/1.3/html/getting_started_with_red_hat_developer_hub/con-servicenow-custom-actions_assembly-customize-rhdh-theme
Chapter 3. Installing SAP application server instances
Chapter 3. Installing SAP application server instances 3.1. Configuration options used in this document Below are the configuration options that will be used for instances in this document. Please adapt these options according to your local requirements. For the HA cluster nodes and the (A)SCS and ERS instances managed by the HA cluster, the following values are used: 1st HA cluster node name: node1 2nd HA cluster node name: node2 SID: S4H ASCS Instance number: 20 ASCS virtual hostname: s4ascs ASCS virtual IP address: 192.168.200.101 ERS Instance number: 29 ERS virtual hostname: s4ers ERS virtual IP address: 192.168.200.102 For the optional primary application server (PAS) and additional application server (AAS) instances, the following values are used: PAS Instance number: 21 PAS virtual hostname: s4pas PAS virtual IP address: 192.168.200.103 AAS Instance number: 22 AAS virtual hostname: s4aas AAS virtual IP address: 192.168.200.104 3.2. Preparing the cluster nodes for installation of the SAP instances Before starting the installation, ensure that: RHEL 9 is installed and configured on all HA cluster nodes according to the recommendations from SAP and Red Hat for running SAP application server instances on RHEL 9. The RHEL for SAP Applications or RHEL for SAP Solutions subscriptions are activated, and the required repositories are enabled on all HA cluster nodes, as documented in RHEL for SAP Subscriptions and Repositories . Shared storage and instance directories are present at the correct mount points. The virtual hostnames and IP addresses used by the SAP instances can be resolved in both directions, and the virtual IP addresses must be accessible. The SAP installation media are accessible on each HA cluster node where a SAP instance will be installed. These setup steps can be partially automated using Ansible and rhel-system-roles-sap system roles . For more information on this, please check out Red Hat Enterprise Linux System Roles for SAP . 3.3. Installing SAP instances Using software provisioning manager (SWPM), install instances in the following order: (A)SCS instance ERS instance DB instance PAS instance AAS instances The following sections provide some specific recommendations that should be followed when installing SAP instances that will be managed by the HA cluster setup described in this document. Please check the official SAP installation guides for detailed instructions on how to install SAP NetWeaver or S/4HANA application server instances. 3.3.1. Installing (A)SCS on node1 The local directories and mount points required by the SAP instance must be created on the HA cluster node where the (A)SCS instance will be installed: /sapmnt/ /usr/sap/ /usr/sap/SYS/ /usr/sap/trans/ /usr/sap/S4H/ASCS20/ The shared directories and the instance directory must be manually mounted before starting the installation. Also, the virtual IP address for the (A)SCS instance must be enabled on node 1, and it must have been verified that the virtual hostname for the (A)SCS instance resolves to the virtual IP address. When running the SAP installer, please make sure to specify the virtual hostname that should be used for the (A)SCS instance: [root@node1]# ./sapinst SAPINST_USE_HOSTNAME=s4ascs Select the High-Availability System option for the installation of the (A)SCS instance: 3.3.2.
Installing ERS on node2 The local directories and mount points required by the SAP instance must be created on the HA cluster node where the ERS instance will be installed: /sapmnt/ /usr/sap/ /usr/sap/SYS/ /usr/sap/trans/ /usr/sap/S4H/ERS29 The shared directories and the instance directory must be manually mounted before starting the installation. Also, the virtual IP address for the ERS instance must be enabled on node 2, and it must have been verified that the virtual hostname for the ERS instance resolves to the virtual IP address. Make sure to specify the virtual hostname for the ERS instance when starting the installation: [root@node2]# ./sapinst SAPINST_USE_HOSTNAME=s4ers Select the High-Availability System option for the installation of the ERS instance: 3.3.3. Installing primary/additional application server instances The local directories and mount points required by the SAP instance must be created on the HA cluster node where the primary or additional application server instance(s) will be installed: /sapmnt/ /usr/sap/ /usr/sap/SYS/ /usr/sap/trans/ /usr/sap/S4H/ /usr/sap/S4H/D<Ins#> The shared directories and the instance directory must be manually mounted before starting the installation. Also, the virtual IP address for the application server instance must be enabled on the HA cluster node, and it must have been verified that the virtual hostname for the application server instance resolves to the virtual IP address. Specify the virtual hostname for the instance when starting the installer: [root@node<X>]# ./sapinst SAPINST_USE_HOSTNAME=<virtual hostname of instance> Select the High-Availability System option for the installation of the application server instance. 3.4. Post Installation 3.4.1. (A)SCS profile modification The (A)SCS instance profile has to be modified to prevent the automatic restart of the enqueue server process by the sapstartsrv process of the instance, since the instance will be managed by the cluster. To modify the (A)SCS instance profile, run the following command: [root@node1]# sed -i -e 's/Restart_Program_01/Start_Program_01/' /sapmnt/S4H/profile/S4H_ASCS20_s4ascs 3.4.2. ERS profile modification The ERS instance profile has to be modified to prevent the automatic restart of the enqueue replication server process by the sapstartsrv process of the instance since the ERS instance will be managed by the cluster. To modify the ERS instance profile, run the following command: [root@node2]# sed -i -e 's/Restart_Program_00/Start_Program_00/' /sapmnt/S4H/profile/S4H_ERS29_s4ers 3.4.3. Updating the /usr/sap/sapservices file To prevent the SAP instances that will be managed by the HA cluster from being started outside of the control of the HA cluster, make sure the following lines are commented out in the /usr/sap/sapservices file on all cluster nodes: #LD_LIBRARY_PATH=/usr/sap/S4H/ERS29/exe:USDLD_LIBRARY_PATH; export LD_LIBRARY_PATH; /usr/sap/S4H/ERS29/exe/sapstartsrv pf=/usr/sap/S4H/SYS/profile/S4H_ERS29_s4ers -D -u s4hadm #LD_LIBRARY_PATH=/usr/sap/S4H/ASCS20/exe:USDLD_LIBRARY_PATH; export LD_LIBRARY_PATH; /usr/sap/S4H/ASCS20/exe/sapstartsrv pf=/usr/sap/S4H/SYS/profile/S4H_ASCS20_s4ascs -D -u s4hadm #LD_LIBRARY_PATH=/usr/sap/S4H/D21/exe:USDLD_LIBRARY_PATH; export LD_LIBRARY_PATH; /usr/sap/S4H/D21/exe/sapstartsrv pf=/usr/sap/S4H/SYS/profile/S4H_D21_s4hpas -D -u s4hadm #LD_LIBRARY_PATH=/usr/sap/S4H/D22/exe:USDLD_LIBRARY_PATH; export LD_LIBRARY_PATH; /usr/sap/S4H/D22/exe/sapstartsrv pf=/usr/sap/S4H/SYS/profile/S4H_D22_s4haas -D -u s4hadm 3.4.4.
Creating mount points for the instance specific directories on the failover node The mount points where the instance-specific directories will be mounted have to be created and the user and group ownership must be set to the <sid>adm user and the sapsys group on all HA cluster nodes: [root@node1]# mkdir /usr/sap/S4H/ERS29/ [root@node1]# chown s4hadm:sapsys /usr/sap/S4H/ERS29/ [root@node2]# mkdir /usr/sap/S4H/ASCS20 [root@node2]# chown s4hadm:sapsys /usr/sap/S4H/ASCS20 [root@node<x>]# mkdir /usr/sap/S4H/D<Ins#> [root@node<x>]# chown s4hadm:sapsys /usr/sap/S4H/D<Ins#> 3.4.5. Verifying that the SAP instances can be started and stopped on all cluster nodes Stop the (A)SCS and ERS instances using sapcontrol , unmount the instance specific directories and then mount them on the other node: /usr/sap/S4H/ASCS20/ /usr/sap/S4H/ERS29/ /usr/sap/S4H/D<Ins#>/ Verify that manual starting and stopping of all SAP instances using sapcontrol works on all HA cluster nodes and that the SAP instances are running correctly using the tools provided by SAP. 3.4.6. Verifying that the correct version of SAP Host Agent is installed on all HA cluster nodes Run the following command on each cluster node to verify that SAP Host Agent has the same version and meets the minimum version requirement: [root@node<x>]# /usr/sap/hostctrl/exe/saphostexec -version Please check SAP Note 1031096-Installing Package SAPHOSTAGENT in case SAP Host Agent needs to be updated. 3.4.7. Installing permanent SAP license keys To ensure that the SAP instances continue to run after a failover, it might be necessary to install several SAP license keys based on the hardware key of each cluster node. Please see SAP Note 1178686 - Linux: Alternative method to generate a SAP hardware key for more information. 3.4.8. Additional changes required when using systemd enabled SAP instances If the SAP instances that will be managed by the cluster are systemd enabled , additional configuration changes are required to ensure that systemd does not interfere with the management of the SAP instances by the HA cluster. Please check out section 2. Red Hat HA Solutions for SAP in The Systemd-Based SAP Startup Framework for information.
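As an illustration of the verification described in section 3.4.5, the following sketch stops the (A)SCS instance on node1 and starts and checks it on node2 after the instance directory has been remounted there. The instance number matches the example configuration in this chapter; run the equivalent commands for the ERS and application server instances as well, and treat this as an example sequence rather than a complete procedure.
# stop the (A)SCS instance and its sapstartsrv on node1
[root@node1]# su - s4hadm -c "sapcontrol -nr 20 -function Stop"
[root@node1]# su - s4hadm -c "sapcontrol -nr 20 -function StopService"
# after unmounting /usr/sap/S4H/ASCS20 on node1 and mounting it on node2:
[root@node2]# su - s4hadm -c "sapcontrol -nr 20 -function StartService S4H"
[root@node2]# su - s4hadm -c "sapcontrol -nr 20 -function Start"
[root@node2]# su - s4hadm -c "sapcontrol -nr 20 -function GetProcessList"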
[ "1st HA cluster node name: node1 2nd HA cluster node name: node2 SID: S4H ASCS Instance number: 20 ASCS virtual hostname: s4ascs ASCS virtual IP address: 192.168.200.101 ERS Instance number: 29 ERS virtual hostname: s4ers ASCS virtual IP address: 192.168.200.102", "PAS Instance number: 21 PAS virtual hostname: s4pas PAS virtual IP address: 192.168.200.103 AAS Instance number: 22 AAS virtual hostname: s4aas AAS virtual IP address: 192.168.200.104", "/sapmnt/ /usr/sap/ /usr/sap/SYS/ /usr/sap/trans/ /usr/sap/S4H/ASCS20/", "./sapinst SAPINST_USE_HOSTNAME=s4ascs", "/sapmnt/ /usr/sap/ /usr/sap/SYS/ /usr/sap/trans/ /usr/sap/S4H/ERS29", "./sapinst SAPINST_USE_HOSTNAME=s4ers", "/sapmnt/ /usr/sap/ /usr/sap/SYS/ /usr/sap/trans/ /usr/sap/S4H/ /usr/sap/S4H/D<Ins#>", "./sapinst SAPINST_USE_HOSTNAME=<virtual hostname of instance>", "sed -i -e 's/Restart_Program_01/Start_Program_01/' /sapmnt/S4H/profile/S4H_ASCS20_s4ascs", "sed -i -e 's/Restart_Program_00/Start_Program_00/' /sapmnt/S4H/profile/S4H_ERS29_s4ers", "#LD_LIBRARY_PATH=/usr/sap/S4H/ERS29/exe:USDLD_LIBRARY_PATH; export LD_LIBRARY_PATH; /usr/sap/S4H/ERS29/exe/sapstartsrv pf=/usr/sap/S4H/SYS/profile/S4H_ERS29_s4ers -D -u s4hadm #LD_LIBRARY_PATH=/usr/sap/S4H/ASCS20/exe:USDLD_LIBRARY_PATH; export LD_LIBRARY_PATH; /usr/sap/S4H/ASCS20/exe/sapstartsrv pf=/usr/sap/S4H/SYS/profile/S4H_ASCS20_s4ascs -D -u s4hadm #LD_LIBRARY_PATH=/usr/sap/S4H/D21/exe:USDLD_LIBRARY_PATH; export LD_LIBRARY_PATH; /usr/sap/S4H/D21/exe/sapstartsrv pf=/usr/sap/S4H/SYS/profile/S4H_D21_s4hpas -D -u s4hadm #LD_LIBRARY_PATH=/usr/sap/S4H/D22/exe:USDLD_LIBRARY_PATH; export LD_LIBRARY_PATH; /usr/sap/S4H/D22/exe/sapstartsrv pf=/usr/sap/S4H/SYS/profile/S4H_D22_s4haas -D -u s4hadm", "mkdir /usr/sap/S4H/ERS29/ chown s4hadm:sapsys /usr/sap/S4H/ERS29/ mkdir /usr/sap/S4H/ASCS20 chown s4hadm:sapsys /usr/sap/S4H/ASCS20 mkdir /usr/sap/S4H/D<Ins#> chown s4hadm:sapsys /usr/sap/S4H/D<Ins#>", "/usr/sap/S4H/ASCS20/ /usr/sap/S4H/ERS29/ /usr/sap/S4H/D<Ins#>/", "/usr/sap/hostctrl/exe/saphostexec -version" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux_for_sap_solutions/9/html/configuring_ha_clusters_to_manage_sap_netweaver_or_sap_s4hana_application_server_instances_using_the_rhel_ha_add-on/asmb_install_server_instance_configuring-clusters-to-manage
Chapter 2. Architectures
Chapter 2. Architectures Red Hat Enterprise Linux 8.1 is distributed with the kernel version 4.18.0-147, which provides support for the following architectures: AMD and Intel 64-bit architectures The 64-bit ARM architecture IBM Power Systems, Little Endian 64-bit IBM Z Make sure you purchase the appropriate subscription for each architecture. For more information, see Get Started with Red Hat Enterprise Linux - additional architectures . For a list of available subscriptions, see Subscription Utilization on the Customer Portal.
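If you need to confirm which of these architectures and which kernel build a particular system is running, a quick check from a shell is enough. This is a generic sketch, not specific to any one release:
uname -m                  # prints the hardware architecture, for example x86_64, aarch64, ppc64le, or s390x
uname -r                  # prints the running kernel version; on RHEL 8.1 this is a 4.18.0-147-based kernel
cat /etc/redhat-release   # prints the installed Red Hat Enterprise Linux release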
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/8.1_release_notes/architectures
Chapter 15. Red Hat Quay quota management and enforcement overview
Chapter 15. Red Hat Quay quota management and enforcement overview With Red Hat Quay, users have the ability to report storage consumption and to contain registry growth by establishing configured storage quota limits. On-premise Red Hat Quay users are now equipped with the following capabilities to manage the capacity limits of their environment: Quota reporting: With this feature, a superuser can track the storage consumption of all their organizations. Additionally, users can track the storage consumption of their assigned organization. Quota management: With this feature, a superuser can define soft and hard checks for Red Hat Quay users. Soft checks tell users if the storage consumption of an organization reaches their configured threshold. Hard checks prevent users from pushing to the registry when storage consumption reaches the configured limit. Together, these features allow service owners of a Red Hat Quay registry to define service level agreements and support a healthy resource budget. 15.1. Quota management limitations Quota management helps organizations to control resource consumption. One limitation of quota management is that calculating resource consumption on push results in the calculation becoming part of the push's critical path. Without this, usage data might drift. The maximum storage quota size is dependent on the selected database: Table 15.1. Maximum storage quota size by database Database Maximum quota size Postgres 8388608 TB MySQL 8388608 TB SQL Server 16777216 TB 15.2. Quota management for Red Hat Quay 3.9 If you are upgrading to Red Hat Quay 3.9, you must reconfigure the quota management feature. This is because with Red Hat Quay 3.9, calculation is done differently. As a result, totals prior to Red Hat Quay 3.9 are no longer valid. There are two methods for configuring quota management in Red Hat Quay 3.9, which are detailed in the following sections. Note This is a one-time calculation that must be done after you have upgraded to Red Hat Quay 3.9. Superuser privileges are required to create, update, and delete quotas. While quotas can be set for users as well as organizations, you cannot reconfigure the user quota using the Red Hat Quay UI and you must use the API instead. 15.2.1. Option A: Configuring quota management for Red Hat Quay 3.9 by adjusting the QUOTA_TOTAL_DELAY feature flag Use the following procedure to recalculate Red Hat Quay 3.9 quota management by adjusting the QUOTA_TOTAL_DELAY feature flag. Note With this recalculation option, the totals appear as 0.00 KB until the allotted time designated for QUOTA_TOTAL_DELAY . Prerequisites You have upgraded to Red Hat Quay 3.9. You are logged into Red Hat Quay 3.9 as a superuser. Procedure Deploy Red Hat Quay 3.9 with the following config.yaml settings: FEATURE_QUOTA_MANAGEMENT: true FEATURE_GARBAGE_COLLECTION: true PERMANENTLY_DELETE_TAGS: true QUOTA_TOTAL_DELAY_SECONDS: 1800 1 RESET_CHILD_MANIFEST_EXPIRATION: true 1 The QUOTA_TOTAL_DELAY_SECONDS flag defaults to 1800 seconds, or 30 minutes. This allows Red Hat Quay 3.9 to successfully deploy before the quota management feature begins calculating storage consumption for every blob that has been pushed. Setting this flag to a lower number might result in miscalculation; it must be set to a number that is greater than the time it takes your Red Hat Quay deployment to start. 1800 is the recommended setting; however, larger deployments that take longer than 30 minutes to start might require a longer duration than 1800 . 
Navigate to the Red Hat Quay UI and click the name of your Organization. The Total Quota Consumed should read 0.00 KB . Additionally, the Backfill Queued indicator should be present. After the allotted time, for example, 30 minutes, refresh your Red Hat Quay deployment page and return to your Organization. Now, the Total Quota Consumed should be present. 15.2.2. Option B: Configuring quota management for Red Hat Quay 3.9 by setting QUOTA_TOTAL_DELAY_SECONDS to 0 Use the following procedure to recalculate Red Hat Quay 3.9 quota management by setting QUOTA_TOTAL_DELAY_SECONDS to 0 . Note Using this option prevents the possibility of miscalculations; however, it is more time intensive. Use the following procedure when your Red Hat Quay deployment swaps the FEATURE_QUOTA_MANAGEMENT parameter from false to true . Prerequisites You have upgraded to Red Hat Quay 3.9. You are logged into Red Hat Quay 3.9 as a superuser. Procedure Deploy Red Hat Quay 3.9 with the following config.yaml settings: FEATURE_GARBAGE_COLLECTION: true FEATURE_QUOTA_MANAGEMENT: true QUOTA_BACKFILL: false QUOTA_TOTAL_DELAY_SECONDS: 0 PERMANENTLY_DELETE_TAGS: true RESET_CHILD_MANIFEST_EXPIRATION: true Navigate to the Red Hat Quay UI and click the name of your Organization. The Total Quota Consumed should read 0.00 KB . Redeploy Red Hat Quay with the QUOTA_BACKFILL flag set to true . For example: QUOTA_BACKFILL: true Note If you choose to disable quota management after it has calculated totals, Red Hat Quay marks those totals as stale. If you re-enable the quota management feature again in the future, those namespaces and repositories are recalculated by the backfill worker. 15.3. Testing quota management for Red Hat Quay 3.9 With quota management configured for Red Hat Quay 3.9, duplicative images are now only counted once towards the repository total. Use the following procedure to test that a duplicative image is only counted once toward the repository total. Prerequisites You have configured quota management for Red Hat Quay 3.9. Procedure Pull a sample image, for example, ubuntu:18.04 , by entering the following command: USD podman pull ubuntu:18.04 Tag the same image twice by entering the following commands: USD podman tag docker.io/library/ubuntu:18.04 quay-server.example.com/quota-test/ubuntu:tag1 USD podman tag docker.io/library/ubuntu:18.04 quay-server.example.com/quota-test/ubuntu:tag2 Push the sample image to your organization by entering the following commands: USD podman push --tls-verify=false quay-server.example.com/quota-test/ubuntu:tag1 USD podman push --tls-verify=false quay-server.example.com/quota-test/ubuntu:tag2 On the Red Hat Quay UI, navigate to Organization and click the Repository Name , for example, quota-test/ubuntu . Then, click Tags . There should be two repository tags, tag1 and tag2 , each with the same manifest. However, by clicking on the Organization link, you can see that the Total Quota Consumed is 27.94 MB , meaning that the Ubuntu image has only been accounted for once. If you delete one of the Ubuntu tags, the Total Quota Consumed remains the same. Note If you have configured the Red Hat Quay time machine to be longer than 0 seconds, subtraction will not happen until those tags pass the time machine window. If you want to expedite permanent deletion, see Permanently deleting an image tag in Red Hat Quay 3.9. 15.4. 
Setting default quota To specify a system-wide default storage quota that is applied to every organization and user, you can use the DEFAULT_SYSTEM_REJECT_QUOTA_BYTES configuration flag. If you configure a specific quota for an organization or user, and then delete that quota, the system-wide default quota will apply if one has been set. Similarly, if you have configured a specific quota for an organization or user, and then modify the system-wide default quota, the updated system-wide default will override any specific settings. For more information about the DEFAULT_SYSTEM_REJECT_QUOTA_BYTES flag, see the Red Hat Quay configuration documentation. 15.5. Establishing quota in Red Hat Quay UI The following procedure describes how you can report storage consumption and establish storage quota limits. Prerequisites A Red Hat Quay registry. A superuser account. Enough storage to meet the demands of quota limitations. Procedure Create a new organization or choose an existing one. Initially, no quota is configured, as can be seen on the Organization Settings tab: Log in to the registry as a superuser and navigate to the Manage Organizations tab on the Super User Admin Panel . Click the Options icon of the organization for which you want to create storage quota limits: Click Configure Quota and enter the initial quota, for example, 10 MB . Then click Apply and Close : Check that the quota consumed shows 0 of 10 MB on the Manage Organizations tab of the superuser panel: The consumed quota information is also available directly on the Organization page: Initial consumed quota To increase the quota to 100 MB, navigate to the Manage Organizations tab on the superuser panel. Click the Options icon and select Configure Quota , setting the quota to 100 MB. Click Apply and then Close : Pull a sample image by entering the following command: USD podman pull ubuntu:18.04 Tag the sample image by entering the following command: USD podman tag docker.io/library/ubuntu:18.04 example-registry-quay-quay-enterprise.apps.docs.gcp.quaydev.org/testorg/ubuntu:18.04 Push the sample image to the organization by entering the following command: USD podman push --tls-verify=false example-registry-quay-quay-enterprise.apps.docs.gcp.quaydev.org/testorg/ubuntu:18.04 On the superuser panel, the quota consumed per organization is displayed: The Organization page shows the total proportion of the quota used by the image: Total Quota Consumed for first image Pull a second sample image by entering the following command: USD podman pull nginx Tag the second image by entering the following command: USD podman tag docker.io/library/nginx example-registry-quay-quay-enterprise.apps.docs.gcp.quaydev.org/testorg/nginx Push the second image to the organization by entering the following command: USD podman push --tls-verify=false example-registry-quay-quay-enterprise.apps.docs.gcp.quaydev.org/testorg/nginx The Organization page shows the total proportion of the quota used by each repository in that organization: Total Quota Consumed for each repository Create reject and warning limits: From the superuser panel, navigate to the Manage Organizations tab. Click the Options icon for the organization and select Configure Quota . In the Quota Policy section, with the Action type set to Reject , set the Quota Threshold to 80 and click Add Limit : To create a warning limit, select Warning as the Action type, set the Quota Threshold to 70 and click Add Limit : Click Close on the quota popup. 
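Because the reject and warning limits are percentages of the configured quota, it can help to know the absolute sizes they correspond to. The following shell arithmetic is only an illustration (it is not part of Red Hat Quay) and uses the 100 MB quota configured above:
QUOTA_BYTES=104857600                  # 100 MB quota expressed in bytes
echo $(( QUOTA_BYTES * 80 / 100 ))     # reject limit of 80% triggers at 83886080 bytes
echo $(( QUOTA_BYTES * 70 / 100 ))     # warning limit of 70% triggers at 73400320 bytes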
The limits are viewable, but not editable, on the Settings tab of the Organization page: Push an image where the reject limit is exceeded: Because the reject limit (80%) has been set to below the current repository size (~83%), the pushed image is rejected automatically. Sample image push USD podman pull ubuntu:20.04 USD podman tag docker.io/library/ubuntu:20.04 example-registry-quay-quay-enterprise.apps.docs.gcp.quaydev.org/testorg/ubuntu:20.04 USD podman push --tls-verify=false example-registry-quay-quay-enterprise.apps.docs.gcp.quaydev.org/testorg/ubuntu:20.04 Sample output when quota exceeded Getting image source signatures Copying blob d4dfaa212623 [--------------------------------------] 8.0b / 3.5KiB Copying blob cba97cc5811c [--------------------------------------] 8.0b / 15.0KiB Copying blob 0c78fac124da [--------------------------------------] 8.0b / 71.8MiB WARN[0002] failed, retrying in 1s ... (1/3). Error: Error writing blob: Error initiating layer upload to /v2/testorg/ubuntu/blobs/uploads/ in example-registry-quay-quay-enterprise.apps.docs.gcp.quaydev.org: denied: Quota has been exceeded on namespace Getting image source signatures Copying blob d4dfaa212623 [--------------------------------------] 8.0b / 3.5KiB Copying blob cba97cc5811c [--------------------------------------] 8.0b / 15.0KiB Copying blob 0c78fac124da [--------------------------------------] 8.0b / 71.8MiB WARN[0005] failed, retrying in 1s ... (2/3). Error: Error writing blob: Error initiating layer upload to /v2/testorg/ubuntu/blobs/uploads/ in example-registry-quay-quay-enterprise.apps.docs.gcp.quaydev.org: denied: Quota has been exceeded on namespace Getting image source signatures Copying blob d4dfaa212623 [--------------------------------------] 8.0b / 3.5KiB Copying blob cba97cc5811c [--------------------------------------] 8.0b / 15.0KiB Copying blob 0c78fac124da [--------------------------------------] 8.0b / 71.8MiB WARN[0009] failed, retrying in 1s ... (3/3). Error: Error writing blob: Error initiating layer upload to /v2/testorg/ubuntu/blobs/uploads/ in example-registry-quay-quay-enterprise.apps.docs.gcp.quaydev.org: denied: Quota has been exceeded on namespace Getting image source signatures Copying blob d4dfaa212623 [--------------------------------------] 8.0b / 3.5KiB Copying blob cba97cc5811c [--------------------------------------] 8.0b / 15.0KiB Copying blob 0c78fac124da [--------------------------------------] 8.0b / 71.8MiB Error: Error writing blob: Error initiating layer upload to /v2/testorg/ubuntu/blobs/uploads/ in example-registry-quay-quay-enterprise.apps.docs.gcp.quaydev.org: denied: Quota has been exceeded on namespace When limits are exceeded, notifications are displayed in the UI: Quota notifications 15.6. Establishing quota for an organization with the Red Hat Quay API When an organization is first created, it does not have an established quota. You can use the API to check, create, change, or delete quota limitations for an organization. Prerequisites You have generated an OAuth access token. 
Procedure To set a quota for an organization, you can use the POST /api/v1/organization/{orgname}/quota endpoint: USD curl -X POST "https://<quay-server.example.com>/api/v1/organization/<orgname>/quota" \ -H "Authorization: Bearer <access_token>" \ -H "Content-Type: application/json" \ -d '{ "limit_bytes": 10737418240, "limits": "10 Gi" }' Example output "Created" Use the GET /api/v1/organization/{orgname}/quota command to see if your organization already has an established quota: USD curl -k -X GET -H "Authorization: Bearer <token>" -H 'Content-Type: application/json' https://example-registry-quay-quay-enterprise.apps.docs.gcp.quaydev.org/api/v1/organization/testorg/quota | jq Example output [{"id": 1, "limit_bytes": 10737418240, "limit": "10.0 GiB", "default_config": false, "limits": [], "default_config_exists": false}] You can use the PUT /api/v1/organization/{orgname}/quota/{quota_id} command to modify the existing quota limitation. For example: USD curl -X PUT "https://<quay-server.example.com>/api/v1/organization/<orgname>/quota/<quota_id>" \ -H "Authorization: Bearer <access_token>" \ -H "Content-Type: application/json" \ -d '{ "limit_bytes": <limit_in_bytes> }' Example output {"id": 1, "limit_bytes": 21474836480, "limit": "20.0 GiB", "default_config": false, "limits": [], "default_config_exists": false} 15.6.1. Pushing images To see the storage consumed, push various images to the organization. 15.6.1.1. Pushing ubuntu:18.04 Push ubuntu:18.04 to the organization from the command line: Sample commands USD podman pull ubuntu:18.04 USD podman tag docker.io/library/ubuntu:18.04 example-registry-quay-quay-enterprise.apps.docs.gcp.quaydev.org/testorg/ubuntu:18.04 USD podman push --tls-verify=false example-registry-quay-quay-enterprise.apps.docs.gcp.quaydev.org/testorg/ubuntu:18.04 15.6.1.2. Using the API to view quota usage To view the storage consumed, GET data from the /api/v1/repository endpoint: Sample command USD curl -k -X GET -H "Authorization: Bearer <token>" -H 'Content-Type: application/json' 'https://example-registry-quay-quay-enterprise.apps.docs.gcp.quaydev.org/api/v1/repository?last_modified=true&namespace=testorg&popularity=true&public=true' | jq Sample output { "repositories": [ { "namespace": "testorg", "name": "ubuntu", "description": null, "is_public": false, "kind": "image", "state": "NORMAL", "quota_report": { "quota_bytes": 27959066, "configured_quota": 104857600 }, "last_modified": 1651225630, "popularity": 0, "is_starred": false } ] } 15.6.1.3. 
Pushing another image Pull, tag, and push a second image, for example, nginx : Sample commands USD podman pull nginx USD podman tag docker.io/library/nginx example-registry-quay-quay-enterprise.apps.docs.gcp.quaydev.org/testorg/nginx USD podman push --tls-verify=false example-registry-quay-quay-enterprise.apps.docs.gcp.quaydev.org/testorg/nginx To view the quota report for the repositories in the organization, use the /api/v1/repository endpoint: Sample command USD curl -k -X GET -H "Authorization: Bearer <token>" -H 'Content-Type: application/json' 'https://example-registry-quay-quay-enterprise.apps.docs.gcp.quaydev.org/api/v1/repository?last_modified=true&namespace=testorg&popularity=true&public=true' Sample output { "repositories": [ { "namespace": "testorg", "name": "ubuntu", "description": null, "is_public": false, "kind": "image", "state": "NORMAL", "quota_report": { "quota_bytes": 27959066, "configured_quota": 104857600 }, "last_modified": 1651225630, "popularity": 0, "is_starred": false }, { "namespace": "testorg", "name": "nginx", "description": null, "is_public": false, "kind": "image", "state": "NORMAL", "quota_report": { "quota_bytes": 59231659, "configured_quota": 104857600 }, "last_modified": 1651229507, "popularity": 0, "is_starred": false } ] } To view the quota information in the organization details, use the /api/v1/organization/{orgname} endpoint: Sample command USD curl -k -X GET -H "Authorization: Bearer <token>" -H 'Content-Type: application/json' 'https://example-registry-quay-quay-enterprise.apps.docs.gcp.quaydev.org/api/v1/organization/testorg' | jq Sample output { "name": "testorg", ... "quotas": [ { "id": 1, "limit_bytes": 104857600, "limits": [] } ], "quota_report": { "quota_bytes": 87190725, "configured_quota": 104857600 } } 15.6.2. Rejecting pushes using quota limits If an image push exceeds defined quota limitations, a soft or hard check occurs: For a soft check, or warning , users are notified. For a hard check, or reject , the push is terminated. 15.6.2.1. Setting reject and warning limits To set reject and warning limits, POST data to the /api/v1/organization/{orgname}/quota/{quota_id}/limit endpoint: Sample reject limit command USD curl -k -X POST -H "Authorization: Bearer <token>" -H 'Content-Type: application/json' -d '{"type":"Reject","threshold_percent":80}' https://example-registry-quay-quay-enterprise.apps.docs.gcp.quaydev.org/api/v1/organization/testorg/quota/1/limit Sample warning limit command USD curl -k -X POST -H "Authorization: Bearer <token>" -H 'Content-Type: application/json' -d '{"type":"Warning","threshold_percent":50}' https://example-registry-quay-quay-enterprise.apps.docs.gcp.quaydev.org/api/v1/organization/testorg/quota/1/limit 15.6.2.2. Viewing reject and warning limits To view the reject and warning limits, use the /api/v1/organization/{orgname}/quota endpoint: View quota limits USD curl -k -X GET -H "Authorization: Bearer <token>" -H 'Content-Type: application/json' https://example-registry-quay-quay-enterprise.apps.docs.gcp.quaydev.org/api/v1/organization/testorg/quota | jq Sample output for quota limits [ { "id": 1, "limit_bytes": 104857600, "default_config": false, "limits": [ { "id": 2, "type": "Warning", "limit_percent": 50 }, { "id": 1, "type": "Reject", "limit_percent": 80 } ], "default_config_exists": false } ] 15.6.2.3. Pushing an image when the reject limit is exceeded In this example, the reject limit (80%) has been set to below the current repository size (~83%), so the push should automatically be rejected. 
Push a sample image to the organization from the command line: Sample image push USD podman pull ubuntu:20.04 USD podman tag docker.io/library/ubuntu:20.04 example-registry-quay-quay-enterprise.apps.docs.gcp.quaydev.org/testorg/ubuntu:20.04 USD podman push --tls-verify=false example-registry-quay-quay-enterprise.apps.docs.gcp.quaydev.org/testorg/ubuntu:20.04 Sample output when quota exceeded Getting image source signatures Copying blob d4dfaa212623 [--------------------------------------] 8.0b / 3.5KiB Copying blob cba97cc5811c [--------------------------------------] 8.0b / 15.0KiB Copying blob 0c78fac124da [--------------------------------------] 8.0b / 71.8MiB WARN[0002] failed, retrying in 1s ... (1/3). Error: Error writing blob: Error initiating layer upload to /v2/testorg/ubuntu/blobs/uploads/ in example-registry-quay-quay-enterprise.apps.docs.gcp.quaydev.org: denied: Quota has been exceeded on namespace Getting image source signatures Copying blob d4dfaa212623 [--------------------------------------] 8.0b / 3.5KiB Copying blob cba97cc5811c [--------------------------------------] 8.0b / 15.0KiB Copying blob 0c78fac124da [--------------------------------------] 8.0b / 71.8MiB WARN[0005] failed, retrying in 1s ... (2/3). Error: Error writing blob: Error initiating layer upload to /v2/testorg/ubuntu/blobs/uploads/ in example-registry-quay-quay-enterprise.apps.docs.gcp.quaydev.org: denied: Quota has been exceeded on namespace Getting image source signatures Copying blob d4dfaa212623 [--------------------------------------] 8.0b / 3.5KiB Copying blob cba97cc5811c [--------------------------------------] 8.0b / 15.0KiB Copying blob 0c78fac124da [--------------------------------------] 8.0b / 71.8MiB WARN[0009] failed, retrying in 1s ... (3/3). Error: Error writing blob: Error initiating layer upload to /v2/testorg/ubuntu/blobs/uploads/ in example-registry-quay-quay-enterprise.apps.docs.gcp.quaydev.org: denied: Quota has been exceeded on namespace Getting image source signatures Copying blob d4dfaa212623 [--------------------------------------] 8.0b / 3.5KiB Copying blob cba97cc5811c [--------------------------------------] 8.0b / 15.0KiB Copying blob 0c78fac124da [--------------------------------------] 8.0b / 71.8MiB Error: Error writing blob: Error initiating layer upload to /v2/testorg/ubuntu/blobs/uploads/ in example-registry-quay-quay-enterprise.apps.docs.gcp.quaydev.org: denied: Quota has been exceeded on namespace 15.6.2.4. Notifications for limits exceeded When limits are exceeded, a notification appears: Quota notifications 15.7. Calculating the total registry size in Red Hat Quay 3.9 Use the following procedure to queue a registry total calculation. Note This feature is done on-demand, and calculating a registry total is database intensive. Use with caution. Prerequisites You have upgraded to Red Hat Quay 3.9. You are logged in as a Red Hat Quay superuser. Procedure On the Red Hat Quay UI, click your username Super User Admin Panel . In the navigation pane, click Manage Organizations . Click Calculate , to Total Registry Size: 0.00 KB, Updated: Never , Calculation required . Then, click Ok . After a few minutes, depending on the size of your registry, refresh the page. Now, the Total Registry Size should be calculated. For example: 15.8. Permanently deleting an image tag In some cases, users might want to delete an image tag outside of the time machine window. Use the following procedure to manually delete an image tag permanently. 
Important The results of the following procedure cannot be undone. Use with caution. 15.8.1. Permanently deleting an image tag using the Red Hat Quay v2 UI Use the following procedure to permanently delete an image tag using the Red Hat Quay v2 UI. Prerequisites You have set FEATURE_UI_V2 to true in your config.yaml file. Procedure Ensure that the PERMANENTLY_DELETE_TAGS and RESET_CHILD_MANIFEST_EXPIRATION parameters are set to true in your config.yaml file. For example: PERMANENTLY_DELETE_TAGS: true RESET_CHILD_MANIFEST_EXPIRATION: true In the navigation pane, click Repositories . Click the name of the repository, for example, quayadmin/busybox . Check the box of the image tag that will be deleted, for example, test . Click Actions Permanently Delete . Important This action is permanent and cannot be undone. 15.8.2. Permanently deleting an image tag using the Red Hat Quay legacy UI Use the following procedure to permanently delete an image tag using the Red Hat Quay legacy UI. Procedure Ensure that the PERMANENTLY_DELETE_TAGS and RESET_CHILD_MANIFEST_EXPIRATION parameters are set to true in your config.yaml file. For example: PERMANENTLY_DELETE_TAGS: true RESET_CHILD_MANIFEST_EXPIRATION: true On the Red Hat Quay UI, click Repositories and the name of the repository that contains the image tag you will delete, for example, quayadmin/busybox . In the navigation pane, click Tags . Check the box of the name of the tag you want to delete, for example, test . Click the Actions drop down menu and select Delete Tags Delete Tag . Click Tag History in the navigation pane. On the name of the tag that was just deleted, for example, test , click Delete test under the Permanently Delete category. For example: Permanently delete image tag Important This action is permanent and cannot be undone.
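To watch consumption without opening the UI, the API endpoints used in this chapter can be combined into a small script. This is an illustrative sketch rather than part of Red Hat Quay; it assumes jq is installed and that the hostname, organization, and <token> placeholders are replaced with your own values:
#!/bin/bash
# Report how much of an organization's configured quota is in use by querying
# the GET /api/v1/organization/<orgname> endpoint shown earlier in this chapter.
ORG=testorg
HOST=quay-server.example.com
TOKEN=<token>
curl -sk -H "Authorization: Bearer ${TOKEN}" "https://${HOST}/api/v1/organization/${ORG}" | jq -r '.quota_report | "\(.quota_bytes) of \(.configured_quota) bytes used (\(.quota_bytes / .configured_quota * 100 | floor)%)"'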
[ "FEATURE_QUOTA_MANAGEMENT: true FEATURE_GARBAGE_COLLECTION: true PERMANENTLY_DELETE_TAGS: true QUOTA_TOTAL_DELAY_SECONDS: 1800 1 RESET_CHILD_MANIFEST_EXPIRATION: true", "FEATURE_GARBAGE_COLLECTION: true FEATURE_QUOTA_MANAGEMENT: true QUOTA_BACKFILL: false QUOTA_TOTAL_DELAY_SECONDS: 0 PERMANENTLY_DELETE_TAGS: true RESET_CHILD_MANIFEST_EXPIRATION: true", "QUOTA_BACKFILL: true", "podman pull ubuntu:18.04", "podman tag docker.io/library/ubuntu:18.04 quay-server.example.com/quota-test/ubuntu:tag1", "podman tag docker.io/library/ubuntu:18.04 quay-server.example.com/quota-test/ubuntu:tag2", "podman push --tls-verify=false quay-server.example.com/quota-test/ubuntu:tag1", "podman push --tls-verify=false quay-server.example.com/quota-test/ubuntu:tag2", "podman pull ubuntu:18.04", "podman tag docker.io/library/ubuntu:18.04 example-registry-quay-quay-enterprise.apps.docs.gcp.quaydev.org/testorg/ubuntu:18.04", "podman push --tls-verify=false example-registry-quay-quay-enterprise.apps.docs.gcp.quaydev.org/testorg/ubuntu:18.04", "podman pull nginx", "podman tag docker.io/library/nginx example-registry-quay-quay-enterprise.apps.docs.gcp.quaydev.org/testorg/nginx", "podman push --tls-verify=false example-registry-quay-quay-enterprise.apps.docs.gcp.quaydev.org/testorg/nginx", "podman pull ubuntu:20.04 podman tag docker.io/library/ubuntu:20.04 example-registry-quay-quay-enterprise.apps.docs.gcp.quaydev.org/testorg/ubuntu:20.04 podman push --tls-verify=false example-registry-quay-quay-enterprise.apps.docs.gcp.quaydev.org/testorg/ubuntu:20.04", "Getting image source signatures Copying blob d4dfaa212623 [--------------------------------------] 8.0b / 3.5KiB Copying blob cba97cc5811c [--------------------------------------] 8.0b / 15.0KiB Copying blob 0c78fac124da [--------------------------------------] 8.0b / 71.8MiB WARN[0002] failed, retrying in 1s ... (1/3). Error: Error writing blob: Error initiating layer upload to /v2/testorg/ubuntu/blobs/uploads/ in example-registry-quay-quay-enterprise.apps.docs.gcp.quaydev.org: denied: Quota has been exceeded on namespace Getting image source signatures Copying blob d4dfaa212623 [--------------------------------------] 8.0b / 3.5KiB Copying blob cba97cc5811c [--------------------------------------] 8.0b / 15.0KiB Copying blob 0c78fac124da [--------------------------------------] 8.0b / 71.8MiB WARN[0005] failed, retrying in 1s ... (2/3). Error: Error writing blob: Error initiating layer upload to /v2/testorg/ubuntu/blobs/uploads/ in example-registry-quay-quay-enterprise.apps.docs.gcp.quaydev.org: denied: Quota has been exceeded on namespace Getting image source signatures Copying blob d4dfaa212623 [--------------------------------------] 8.0b / 3.5KiB Copying blob cba97cc5811c [--------------------------------------] 8.0b / 15.0KiB Copying blob 0c78fac124da [--------------------------------------] 8.0b / 71.8MiB WARN[0009] failed, retrying in 1s ... (3/3). 
Error: Error writing blob: Error initiating layer upload to /v2/testorg/ubuntu/blobs/uploads/ in example-registry-quay-quay-enterprise.apps.docs.gcp.quaydev.org: denied: Quota has been exceeded on namespace Getting image source signatures Copying blob d4dfaa212623 [--------------------------------------] 8.0b / 3.5KiB Copying blob cba97cc5811c [--------------------------------------] 8.0b / 15.0KiB Copying blob 0c78fac124da [--------------------------------------] 8.0b / 71.8MiB Error: Error writing blob: Error initiating layer upload to /v2/testorg/ubuntu/blobs/uploads/ in example-registry-quay-quay-enterprise.apps.docs.gcp.quaydev.org: denied: Quota has been exceeded on namespace", "curl -X POST \"https://<quay-server.example.com>/api/v1/organization/<orgname>/quota\" -H \"Authorization: Bearer <access_token>\" -H \"Content-Type: application/json\" -d '{ \"limit_bytes\": 10737418240, \"limits\": \"10 Gi\" }'", "\"Created\"", "curl -k -X GET -H \"Authorization: Bearer <token>\" -H 'Content-Type: application/json' https://example-registry-quay-quay-enterprise.apps.docs.gcp.quaydev.org/api/v1/organization/testorg/quota | jq", "[{\"id\": 1, \"limit_bytes\": 10737418240, \"limit\": \"10.0 GiB\", \"default_config\": false, \"limits\": [], \"default_config_exists\": false}]", "curl -X PUT \"https://<quay-server.example.com>/api/v1/organization/<orgname>/quota/<quota_id>\" -H \"Authorization: Bearer <access_token>\" -H \"Content-Type: application/json\" -d '{ \"limit_bytes\": <limit_in_bytes> }'", "{\"id\": 1, \"limit_bytes\": 21474836480, \"limit\": \"20.0 GiB\", \"default_config\": false, \"limits\": [], \"default_config_exists\": false}", "podman pull ubuntu:18.04 podman tag docker.io/library/ubuntu:18.04 example-registry-quay-quay-enterprise.apps.docs.gcp.quaydev.org/testorg/ubuntu:18.04 podman push --tls-verify=false example-registry-quay-quay-enterprise.apps.docs.gcp.quaydev.org/testorg/ubuntu:18.04", "curl -k -X GET -H \"Authorization: Bearer <token>\" -H 'Content-Type: application/json' 'https://example-registry-quay-quay-enterprise.apps.docs.gcp.quaydev.org/api/v1/repository?last_modified=true&namespace=testorg&popularity=true&public=true' | jq", "{ \"repositories\": [ { \"namespace\": \"testorg\", \"name\": \"ubuntu\", \"description\": null, \"is_public\": false, \"kind\": \"image\", \"state\": \"NORMAL\", \"quota_report\": { \"quota_bytes\": 27959066, \"configured_quota\": 104857600 }, \"last_modified\": 1651225630, \"popularity\": 0, \"is_starred\": false } ] }", "podman pull nginx podman tag docker.io/library/nginx example-registry-quay-quay-enterprise.apps.docs.gcp.quaydev.org/testorg/nginx podman push --tls-verify=false example-registry-quay-quay-enterprise.apps.docs.gcp.quaydev.org/testorg/nginx", "curl -k -X GET -H \"Authorization: Bearer <token>\" -H 'Content-Type: application/json' 'https://example-registry-quay-quay-enterprise.apps.docs.gcp.quaydev.org/api/v1/repository?last_modified=true&namespace=testorg&popularity=true&public=true'", "{ \"repositories\": [ { \"namespace\": \"testorg\", \"name\": \"ubuntu\", \"description\": null, \"is_public\": false, \"kind\": \"image\", \"state\": \"NORMAL\", \"quota_report\": { \"quota_bytes\": 27959066, \"configured_quota\": 104857600 }, \"last_modified\": 1651225630, \"popularity\": 0, \"is_starred\": false }, { \"namespace\": \"testorg\", \"name\": \"nginx\", \"description\": null, \"is_public\": false, \"kind\": \"image\", \"state\": \"NORMAL\", \"quota_report\": { \"quota_bytes\": 59231659, \"configured_quota\": 104857600 }, 
\"last_modified\": 1651229507, \"popularity\": 0, \"is_starred\": false } ] }", "curl -k -X GET -H \"Authorization: Bearer <token>\" -H 'Content-Type: application/json' 'https://example-registry-quay-quay-enterprise.apps.docs.gcp.quaydev.org/api/v1/organization/testorg' | jq", "{ \"name\": \"testorg\", \"quotas\": [ { \"id\": 1, \"limit_bytes\": 104857600, \"limits\": [] } ], \"quota_report\": { \"quota_bytes\": 87190725, \"configured_quota\": 104857600 } }", "curl -k -X POST -H \"Authorization: Bearer <token>\" -H 'Content-Type: application/json' -d '{\"type\":\"Reject\",\"threshold_percent\":80}' https://example-registry-quay-quay-enterprise.apps.docs.gcp.quaydev.org/api/v1/organization/testorg/quota/1/limit", "curl -k -X POST -H \"Authorization: Bearer <token>\" -H 'Content-Type: application/json' -d '{\"type\":\"Warning\",\"threshold_percent\":50}' https://example-registry-quay-quay-enterprise.apps.docs.gcp.quaydev.org/api/v1/organization/testorg/quota/1/limit", "curl -k -X GET -H \"Authorization: Bearer <token>\" -H 'Content-Type: application/json' https://example-registry-quay-quay-enterprise.apps.docs.gcp.quaydev.org/api/v1/organization/testorg/quota | jq", "[ { \"id\": 1, \"limit_bytes\": 104857600, \"default_config\": false, \"limits\": [ { \"id\": 2, \"type\": \"Warning\", \"limit_percent\": 50 }, { \"id\": 1, \"type\": \"Reject\", \"limit_percent\": 80 } ], \"default_config_exists\": false } ]", "podman pull ubuntu:20.04 podman tag docker.io/library/ubuntu:20.04 example-registry-quay-quay-enterprise.apps.docs.gcp.quaydev.org/testorg/ubuntu:20.04 podman push --tls-verify=false example-registry-quay-quay-enterprise.apps.docs.gcp.quaydev.org/testorg/ubuntu:20.04", "Getting image source signatures Copying blob d4dfaa212623 [--------------------------------------] 8.0b / 3.5KiB Copying blob cba97cc5811c [--------------------------------------] 8.0b / 15.0KiB Copying blob 0c78fac124da [--------------------------------------] 8.0b / 71.8MiB WARN[0002] failed, retrying in 1s ... (1/3). Error: Error writing blob: Error initiating layer upload to /v2/testorg/ubuntu/blobs/uploads/ in example-registry-quay-quay-enterprise.apps.docs.gcp.quaydev.org: denied: Quota has been exceeded on namespace Getting image source signatures Copying blob d4dfaa212623 [--------------------------------------] 8.0b / 3.5KiB Copying blob cba97cc5811c [--------------------------------------] 8.0b / 15.0KiB Copying blob 0c78fac124da [--------------------------------------] 8.0b / 71.8MiB WARN[0005] failed, retrying in 1s ... (2/3). Error: Error writing blob: Error initiating layer upload to /v2/testorg/ubuntu/blobs/uploads/ in example-registry-quay-quay-enterprise.apps.docs.gcp.quaydev.org: denied: Quota has been exceeded on namespace Getting image source signatures Copying blob d4dfaa212623 [--------------------------------------] 8.0b / 3.5KiB Copying blob cba97cc5811c [--------------------------------------] 8.0b / 15.0KiB Copying blob 0c78fac124da [--------------------------------------] 8.0b / 71.8MiB WARN[0009] failed, retrying in 1s ... (3/3). 
Error: Error writing blob: Error initiating layer upload to /v2/testorg/ubuntu/blobs/uploads/ in example-registry-quay-quay-enterprise.apps.docs.gcp.quaydev.org: denied: Quota has been exceeded on namespace Getting image source signatures Copying blob d4dfaa212623 [--------------------------------------] 8.0b / 3.5KiB Copying blob cba97cc5811c [--------------------------------------] 8.0b / 15.0KiB Copying blob 0c78fac124da [--------------------------------------] 8.0b / 71.8MiB Error: Error writing blob: Error initiating layer upload to /v2/testorg/ubuntu/blobs/uploads/ in example-registry-quay-quay-enterprise.apps.docs.gcp.quaydev.org: denied: Quota has been exceeded on namespace", "PERMANENTLY_DELETE_TAGS: true RESET_CHILD_MANIFEST_EXPIRATION: true", "PERMANENTLY_DELETE_TAGS: true RESET_CHILD_MANIFEST_EXPIRATION: true" ]
https://docs.redhat.com/en/documentation/red_hat_quay/3.13/html/manage_red_hat_quay/red-hat-quay-quota-management-and-enforcement
12.5. iSCSI-based Storage Pools
12.5. iSCSI-based Storage Pools This section covers using iSCSI-based devices to store guest virtual machines. iSCSI (Internet Small Computer System Interface) is a network protocol for sharing storage devices. iSCSI connects initiators (storage clients) to targets (storage servers) using SCSI instructions over the IP layer. 12.5.1. Configuring a Software iSCSI Target The scsi-target-utils package provides a tool for creating software-backed iSCSI targets. Procedure 12.4. Creating an iSCSI target Install the required packages Install the scsi-target-utils package and all dependencies. Start the tgtd service The tgtd service hosts SCSI targets and uses the iSCSI protocol to serve the targets to initiators. Start the tgtd service and make the service persistent after restarting with the chkconfig command. Optional: Create LVM volumes LVM volumes are useful for iSCSI backing images. LVM snapshots and resizing can be beneficial for guest virtual machines. This example creates an LVM image named virtimage1 on a new volume group named virtstore on a RAID5 array for hosting guest virtual machines with iSCSI. Create the RAID array Creating software RAID5 arrays is covered by the Red Hat Enterprise Linux Deployment Guide . Create the LVM volume group Create a volume group named virtstore with the vgcreate command. Create an LVM logical volume Create a logical volume named virtimage1 on the virtstore volume group with a size of 20 GB using the lvcreate command. The new logical volume, virtimage1 , is ready to use for iSCSI. Optional: Create file-based images File-based storage is sufficient for testing but is not recommended for production environments or any significant I/O activity. This optional procedure creates a file-based image named virtimage2.img for an iSCSI target. Create a new directory for the image Create a new directory to store the image. The directory must have the correct SELinux contexts. Create the image file Create an image named virtimage2.img with a size of 10 GB. Configure SELinux file contexts Configure the correct SELinux context for the new image and directory. The new file-based image, virtimage2.img , is ready to use for iSCSI. Create targets Targets can be created by adding an XML entry to the /etc/tgt/targets.conf file. The target attribute requires an iSCSI Qualified Name (IQN). The IQN is in the format: Where: yyyy - mm represents the year and month the device was started (for example: 2010-05 ); reversed domain name is the host physical machine's domain name in reverse (for example, server1.example.com in an IQN would be com.example.server1 ); and optional identifier text is any text string, without spaces, that assists the administrator in identifying devices or hardware. This example creates iSCSI targets for the two types of images created in the optional steps on server1.example.com with an optional identifier trial . Add the following to the /etc/tgt/targets.conf file. Ensure that the /etc/tgt/targets.conf file contains the default-driver iscsi line to set the driver type as iSCSI. The driver uses iSCSI by default. Important This example creates a globally accessible target without access control. Refer to the scsi-target-utils documentation for information on implementing secure access. Restart the tgtd service Restart the tgtd service to reload the configuration changes. iptables configuration Open port 3260 for iSCSI access with iptables . Verify the new targets View the new targets to ensure the setup was successful with the tgt-admin --show command. 
Warning The ACL list is set to all. This allows all systems on the local network to access this device. It is recommended to set host physical machine access ACLs for production environments. Optional: Test discovery Test whether the new iSCSI device is discoverable. Optional: Test attaching the device Attach the new device ( iqn.2010-05.com.example.server1:iscsirhel6guest ) to determine whether the device can be attached. Detach the device. An iSCSI device is now ready to use for virtualization.
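With the target exported, the device can be consumed on the virtualization host, for example by defining a libvirt iSCSI storage pool that points at it. The following sketch assumes the example target and host name used above, and the pool name iscsipool is chosen only for illustration:
# define an iSCSI-backed storage pool that discovers LUNs from the example target
virsh pool-define-as iscsipool iscsi --source-host server1.example.com --source-dev iqn.2010-05.com.example.server1:iscsirhel6guest --target /dev/disk/by-path
# start the pool and make it start automatically on boot
virsh pool-start iscsipool
virsh pool-autostart iscsipool
# list the LUNs that libvirt discovered on the target
virsh vol-list iscsipool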
[ "yum install scsi-target-utils", "service tgtd start chkconfig tgtd on", "vgcreate virtstore /dev/md1", "lvcreate --size 20G -n virtimage1 virtstore", "mkdir -p /var/lib/tgtd/ virtualization", "dd if=/dev/zero of=/var/lib/tgtd/ virtualization / virtimage2.img bs=1M seek=10000 count=0", "restorecon -R /var/lib/tgtd", "iqn. yyyy - mm . reversed domain name : optional identifier text", "<target iqn.2010-05.com.example. server1 : iscsirhel6guest > backing-store /dev/ virtstore / virtimage1 #LUN 1 backing-store /var/lib/tgtd/ virtualization / virtimage2.img #LUN 2 write-cache off </target>", "service tgtd restart", "iptables -I INPUT -p tcp -m tcp --dport 3260 -j ACCEPT service iptables save service iptables restart", "tgt-admin --show Target 1: iqn.2010-05.com.example.server1:iscsirhel6guest System information: Driver: iscsi State: ready I_T nexus information: LUN information: LUN: 0 Type: controller SCSI ID: IET 00010000 SCSI SN: beaf10 Size: 0 MB Online: Yes Removable media: No Backing store type: rdwr Backing store path: None LUN: 1 Type: disk SCSI ID: IET 00010001 SCSI SN: beaf11 Size: 20000 MB Online: Yes Removable media: No Backing store type: rdwr Backing store path: /dev/ virtstore / virtimage1 LUN: 2 Type: disk SCSI ID: IET 00010002 SCSI SN: beaf12 Size: 10000 MB Online: Yes Removable media: No Backing store type: rdwr Backing store path: /var/lib/tgtd/ virtualization / virtimage2.img Account information: ACL information: ALL", "iscsiadm --mode discovery --type sendtargets --portal server1.example.com 127.0.0.1:3260,1 iqn.2010-05.com.example.server1:iscsirhel6guest", "iscsiadm -d2 -m node --login scsiadm: Max file limits 1024 1024 Logging in to [iface: default, target: iqn.2010-05.com.example.server1:iscsirhel6guest, portal: 10.0.0.1,3260] Login to [iface: default, target: iqn.2010-05.com.example.server1:iscsirhel6guest, portal: 10.0.0.1,3260] successful.", "iscsiadm -d2 -m node --logout scsiadm: Max file limits 1024 1024 Logging out of session [sid: 2, target: iqn.2010-05.com.example.server1:iscsirhel6guest, portal: 10.0.0.1,3260 Logout of [sid: 2, target: iqn.2010-05.com.example.server1:iscsirhel6guest, portal: 10.0.0.1,3260] successful." ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/virtualization_administration_guide/sect-Virtualization-Storage_Pools-Creating-iSCSI
5.2.30. /proc/uptime
5.2.30. /proc/uptime This file contains information detailing how long the system has been on since its last restart. The output of /proc/uptime is quite minimal: The first number is the total number of seconds the system has been up. The second number is how much of that time the machine has spent idle, in seconds.
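Because both values are plain seconds, they are easy to convert into something more readable. For example, the following one-liner (an illustration, not part of the procfs documentation) reports both figures in days:
awk '{ printf "up %.2f days, idle %.2f days\n", $1/86400, $2/86400 }' /proc/uptime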
[ "350735.47 234388.90" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/reference_guide/s2-proc-uptime
Chapter 1. Managed cluster advanced configuration
Chapter 1. Managed cluster advanced configuration With Red Hat Advanced Cluster Management for Kubernetes klusterlet add-ons, you can further configure your managed clusters to improve performance and add functionality to your applications. See the following enablement options: Enabling klusterlet add-ons on clusters for cluster management Configuring nodeSelectors and tolerations for klusterlet add-ons Enabling cluster-wide proxy on existing cluster add-ons 1.1. Enabling klusterlet add-ons on clusters for cluster management After you install Red Hat Advanced Cluster Management for Kubernetes and then create or import clusters with multicluster engine operator you can enable the klusterlet add-ons for those managed clusters. The klusterlet add-ons are not enabled by default if you created or imported clusters unless you create or import with the Red Hat Advanced Cluster Management console. See the following available klusterlet add-ons: application-manager cert-policy-controller config-policy-controller iam-policy-controller governance-policy-framework search-collector Complete the following steps to enable the klusterlet add-ons for the managed clusters after Red Hat Advanced Cluster Management is installed: Create a YAML file that is similar to the following KlusterletAddonConfig , with the spec value that represents the add-ons: apiVersion: agent.open-cluster-management.io/v1 kind: KlusterletAddonConfig metadata: name: <cluster_name> namespace: <cluster_name> spec: applicationManager: enabled: true certPolicyController: enabled: true policyController: 1 enabled: true searchCollector: enabled: true 1 The policy-controller add-on is divided into two add-ons: The governance-policy-framework and the config-policy-controller . As a result, the policyController controls the governance-policy-framework and the config-policy-controller managedClusterAddons . Save the file as klusterlet-addon-config.yaml . Apply the YAML by running the following command on the hub cluster: To verify whether the enabled managedClusterAddons are created after the KlusterletAddonConfig is created, run the following command: 1.2. Configuring nodeSelectors and tolerations for klusterlet add-ons In Red Hat Advanced Cluster Management, you can configure nodeSelector and tolerations for the following klusterlet add-ons: application-manager cert-policy-controller cluster-proxy config-policy-controller governance-policy-framework hypershift-addon iam-policy-controller managed-serviceaccount observability-controller search-collector submariner volsync work-manager Complete the following steps: Use the AddonDeploymentConfig API to create a configuration to specify the nodeSelector and tolerations on a certain namespace on the hub cluster. Create a file named addondeploymentconfig.yaml that is based on the following template: apiVersion: addon.open-cluster-management.io/v1alpha1 kind: AddOnDeploymentConfig metadata: name: config-name 1 namespace: config-name-space 2 spec: nodePlacement: nodeSelector: node-selector 3 tolerations: tolerations 4 1 Replace config-name with the name of the AddonDeploymentConfig that you just created. 2 Replace config-namespace with the namespace of the AddonDeploymentConfig that you just created. 3 Replace node-selector with your node selector. 4 Replace tolerations with your tolerations. 
A completed AddOnDeploymentConfig file might resemble the following example: apiVersion: addon.open-cluster-management.io/v1alpha1 kind: AddOnDeploymentConfig metadata: name: deploy-config namespace: open-cluster-management-hub spec: nodePlacement: nodeSelector: "node-dedicated": "acm-addon" tolerations: - effect: NoSchedule key: node-dedicated value: acm-addon operator: Equal Run the following command to apply the file that you created: Use the configuration that you created as the global default configuration for your add-on by running the following command: Replace addon-name with your add-on name. Replace config-name with the name of the AddonDeploymentConfig that you just created. Replace config-namespace with the namespace of the AddonDeploymentConfig that you just created. The nodeSelector and tolerations that you specified are applied to all of your add-ons on each of the managed clusters. You can also override the global default AddonDeploymentConfig configuration for your add-on on a certain managed cluster by using the following steps: Use the AddonDeploymentConfig API to create another configuration to specify the nodeSelector and tolerations on the hub cluster. Link the new configuration that you created to your add-on ManagedClusterAddon on a managed cluster. Replace managed-cluster with your managed cluster name. Replace addon-name with your add-on name. Replace config-namespace with the namespace of the AddonDeploymentConfig that you just created. Replace config-name with the name of the AddonDeploymentConfig that you just created. The new configuration that you referenced in the add-on ManagedClusterAddon overrides the global default configuration that you previously defined in the ClusterManagementAddon add-on. To make sure your content is deployed to the correct nodes, complete the steps in Optional: Configuring the klusterlet to run on specific nodes .
If the OpenShift Container Platform cluster is created with cluster wide proxy configured on the hub cluster, the cluster wide proxy configuration values are added to the pods of the klusterlet add-ons as environment variables when the following conditions are met: The .spec.policyController.proxyPolicy in the addon section is enabled and set to OCPGlobalProxy . The .spec.applicationManager.proxyPolicy is enabled and set to CustomProxy . Note: The default value of proxyPolicy in the addon section is Disabled . See the following examples of proxyPolicy entries: apiVersion: agent.open-cluster-management.io/v1 kind: KlusterletAddonConfig metadata: name: clusterName namespace: clusterName spec: proxyConfig: httpProxy: http://pxuser:[email protected]:3128 httpsProxy: http://pxuser:[email protected]:3128 noProxy: .cluster.local,.svc,10.128.0.0/14, example.com applicationManager: enabled: true proxyPolicy: CustomProxy policyController: enabled: true proxyPolicy: OCPGlobalProxy searchCollector: enabled: true proxyPolicy: Disabled certPolicyController: enabled: true proxyPolicy: Disabled Important: Global proxy settings do not impact alert forwarding. To set up alert forwarding for Red Hat Advanced Cluster Management hub clusters with a cluster-wide proxy, see Forwarding alerts for more details.
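After the KlusterletAddonConfig is updated, you can spot-check on the managed cluster that the proxy values actually reached the add-on pods. The following sketch assumes the add-on agents run in the default open-cluster-management-agent-addon namespace; the namespace and the grep pattern are assumptions to adjust for your environment:
# run these commands against the managed cluster
oc -n open-cluster-management-agent-addon get deployments
# list the environment variables of every add-on deployment and look for HTTP_PROXY, HTTPS_PROXY, and NO_PROXY
for d in $(oc -n open-cluster-management-agent-addon get deployments -o name); do echo "== ${d}"; oc -n open-cluster-management-agent-addon set env "${d}" --list | grep -i proxy; done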
[ "apiVersion: agent.open-cluster-management.io/v1 kind: KlusterletAddonConfig metadata: name: <cluster_name> namespace: <cluster_name> spec: applicationManager: enabled: true certPolicyController: enabled: true policyController: 1 enabled: true searchCollector: enabled: true", "apply -f klusterlet-addon-config.yaml", "get managedclusteraddons -n <cluster namespace>", "apiVersion: addon.open-cluster-management.io/v1alpha1 kind: AddOnDeploymentConfig metadata: name: config-name 1 namespace: config-name-space 2 spec: nodePlacement: nodeSelector: node-selector 3 tolerations: tolerations 4", "apiVersion: addon.open-cluster-management.io/v1alpha1 kind: AddOnDeploymentConfig metadata: name: deploy-config namespace: open-cluster-management-hub spec: nodePlacement: nodeSelector: \"node-dedicated\": \"acm-addon\" tolerations: - effect: NoSchedule key: node-dedicated value: acm-addon operator: Equal", "apply -f addondeploymentconfig", "patch clustermanagementaddons <addon-name> --type='json' -p='[{\"op\":\"add\", \"path\":\"/spec/supportedConfigs\", \"value\":[{\"group\":\"addon.open-cluster-management.io\",\"resource\":\"addondeploymentconfigs\", \"defaultConfig\":{\"name\":\"deploy-config\",\"namespace\":\"open-cluster-management-hub\"}}]}]'", "-n <managed-cluster> patch managedclusteraddons <addon-name> --type='json' -p='[{\"op\":\"add\", \"path\":\"/spec/configs\", \"value\":[ {\"group\":\"addon.open-cluster-management.io\",\"resource\":\"addondeploymentconfigs\",\"namespace\":\"<config-namespace>\",\"name\":\"<config-name>\"} ]}]'", "-n <my-cluster-name> edit klusterletaddonconfig <my-cluster-name>", "-n <my-cluster-name> edit klusterletaddonconfig", "spec proxyConfig: httpProxy: \"<proxy_not_secure>\" 1 httpsProxy: \"<proxy_secure>\" 2 noProxy: \"<no_proxy>\" 3", "apiVersion: agent.open-cluster-management.io/v1 kind: KlusterletAddonConfig metadata: name: clusterName namespace: clusterName spec: proxyConfig: httpProxy: http://pxuser:[email protected]:3128 httpsProxy: http://pxuser:[email protected]:3128 noProxy: .cluster.local,.svc,10.128.0.0/14, example.com applicationManager: enabled: true proxyPolicy: CustomProxy policyController: enabled: true proxyPolicy: OCPGlobalProxy searchCollector: enabled: true proxyPolicy: Disabled certPolicyController: enabled: true proxyPolicy: Disabled" ]
https://docs.redhat.com/en/documentation/red_hat_advanced_cluster_management_for_kubernetes/2.12/html/add-ons/acm-managed-adv-config
B.4. Logging In and Authentication Problems
B.4. Logging In and Authentication Problems B.4.1. Kerberos GSS Failures When Running ipa Commands Immediately after installing a server, Kerberos errors occur when attempting to run an ipa command. For example: What this means: DNS is not properly configured. To fix the problem: Verify your DNS configuration. See Section 2.1.5, "Host Name and DNS Configuration" for DNS requirements for IdM servers. See DNS and Realm Settings in the Windows Integration Guide for DNS requirements for Active Directory trust. B.4.2. SSH Connection Fails when Using GSS-API Users are unable to log in to IdM machines using SSH. What this means: When SSH attempts to connect to an IdM resource using GSS-API as the security method, GSS-API first verifies the DNS records. SSH failures are often caused by incorrect reverse DNS entries. The incorrect records prevent SSH from locating the IdM resource. To fix the problem: Verify your DNS configuration as described in Section 2.1.5, "Host Name and DNS Configuration" . As a temporary workaround, you can also disable reverse DNS lookups in the SSH configuration. To do this, set the GSSAPITrustDNS to no in the /etc/ssh/ssh_config file. Instead of using reverse DNS records, SSH will pass the given user name directly to GSS-API. B.4.3. OTP Token Out of Sync Authentication using OTP fails because the token is desynchronized. To fix the problem: Resynchronize the token. Any user can resynchronize their tokens regardless of the token type and whether or not the user has permission to modify the token settings. In the IdM web UI: Click Sync OTP Token on the login page. Figure B.1. Sync OTP Token From the command line: Run the ipa otptoken-sync command. Provide the information required to resynchronize the token. For example, IdM will ask you to provide your standard password and two subsequent token codes generated by the token. Note Resynchronization works even if the standard password is expired. After the token is resynchronized using an expired password, log in to IdM to let the system prompt you to change the password. B.4.4. Smart Card Authentication Fails with Timeout Error Messages The sssd_pam.log and sssd_ EXAMPLE.COM .log files contain timeout error messages, such as these: What this means: When using forwarded smart card readers or the Online Certificate Status Protocol (OCSP), you might need to adjust certain default values for users to be able to authenticate with smart cards. To fix the problem: On the server and on the client from which you want users to authenticate, make these changes in the /etc/sssd/sssd.conf file: In the [pam] section, increase the p11_child_timeout value to 60 seconds. In the [domain/ EXAMPLE.COM ] section, increase the krb5_auth_timeout value to 60 seconds. If you are using OCSP in the certificate, make sure the OCSP server is reachable. If the OCSP server is not directly reachable, configure a proxy OCSP server by adding the following options to /etc/sssd/sssd.conf : Replace nickname with the nickname of the OCSP signing certificate in the /etc/pki/nssdb/ directory. For details on these options, see the sssd.conf (5) man page. Restart SSSD:
[ "ipa: ERROR: Kerberos error: ('Unspecified GSS failure. Minor code may provide more information', 851968)/('Decrypt integrity check failed', -1765328353)", "Wed Jun 14 18:24:03 2017) [sssd[pam]] [child_handler_setup] (0x2000): Setting up signal handler up for pid [12370] (Wed Jun 14 18:24:03 2017) [sssd[pam]] [child_handler_setup] (0x2000): Signal handler set up for pid [12370] (Wed Jun 14 18:24:08 2017) [sssd[pam]] [pam_initgr_cache_remove] (0x2000): [idmeng] removed from PAM initgroup cache (Wed Jun 14 18:24:13 2017) [sssd[pam]] [p11_child_timeout] (0x0020): Timeout reached for p11_child. (Wed Jun 14 18:24:13 2017) [sssd[pam]] [pam_forwarder_cert_cb] (0x0040): get_cert request failed. (Wed Jun 14 18:24:13 2017) [sssd[pam]] [pam_reply] (0x0200): pam_reply called with result [4]: System error.", "certificate_verification = ocsp_default_responder= http://ocsp.proxy.url , ocsp_default_responder_signing_cert= nickname", "systemctl restart sssd.service" ]
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/linux_domain_identity_authentication_and_policy_guide/trouble-authentication
Chapter 1. Preparing to install on IBM Z(R) and IBM(R) LinuxONE
Chapter 1. Preparing to install on IBM Z(R) and IBM(R) LinuxONE 1.1. Prerequisites You reviewed details about the OpenShift Container Platform installation and update processes. You read the documentation on selecting a cluster installation method and preparing it for users . Before you begin the installation process, you must clean the installation directory. This ensures that the required installation files are created and updated during the installation process. You provisioned persistent storage using OpenShift Data Foundation or other supported storage protocols for your cluster. To deploy a private image registry, you must set up persistent storage with ReadWriteMany access. If you use a firewall, you configured it to allow the sites that your cluster requires access to. Note While this document refers only to IBM Z, all information in it also applies to IBM(R) LinuxONE. 1.2. Choosing a method to install OpenShift Container Platform on IBM Z or IBM(R) LinuxONE You can install OpenShift Container Platform with the Assisted Installer . This method requires no setup for the installer, and is ideal for connected environments like IBM Z. See Installing an on-premise cluster using the Assisted Installer for additional details. Note Installing OpenShift Container Platform with the Assisted Installer on IBM Z is supported only with RHEL KVM installations. You can also install OpenShift Container Platform on infrastructure that you provide. If you do not use infrastructure that the installation program provisions, you must manage and maintain the cluster resources yourself. See the Installation process for more information about Assisted Installer and user-provisioned installation processes. Important The steps for performing a user-provisioned infrastructure installation are provided as an example only. Installing a cluster with infrastructure you provide requires knowledge of the IBM Z platform and the installation process of OpenShift Container Platform. Use the user-provisioned infrastructure installation instructions as a guide; you are free to create the required resources through other methods. 1.2.1. User-provisioned infrastructure installation of OpenShift Container Platform on IBM Z User-provisioned infrastructure requires the user to provision all resources required by OpenShift Container Platform. Installing a cluster with z/VM on IBM Z and IBM(R) LinuxONE : You can install OpenShift Container Platform with z/VM on IBM Z or IBM(R) LinuxONE infrastructure that you provision. Installing a cluster with z/VM on IBM Z and IBM(R) LinuxONE in a restricted network : You can install OpenShift Container Platform with z/VM on IBM Z or IBM(R) LinuxONE infrastructure that you provision in a restricted or disconnected network, by using an internal mirror of the installation release content. You can use this method to install a cluster that does not require an active internet connection to obtain the software components. You can also use this installation method to ensure that your clusters only use container images that satisfy your organizational controls on external content. Installing a cluster with RHEL KVM on IBM Z and IBM(R) LinuxONE : You can install OpenShift Container Platform with KVM on IBM Z or IBM(R) LinuxONE infrastructure that you provision.
Installing a cluster with RHEL KVM on IBM Z and IBM(R) LinuxONE in a restricted network : You can install OpenShift Container Platform with RHEL KVM on IBM Z or IBM(R) LinuxONE infrastructure that you provision in a restricted or disconnected network, by using an internal mirror of the installation release content. You can use this method to install a cluster that does not require an active internet connection to obtain the software components. You can also use this installation method to ensure that your clusters only use container images that satisfy your organizational controls on external content.
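As a hedged illustration of the internal mirror mentioned above (not taken from this chapter), the release content for a restricted-network installation is typically mirrored with oc adm release mirror; the registry host, repository, version, and pull-secret path shown here are placeholders:
    oc adm release mirror \
      -a <pull_secret_file> \
      --from=quay.io/openshift-release-dev/ocp-release:<version>-s390x \
      --to=<local_registry>/<local_repository> \
      --to-release-image=<local_registry>/<local_repository>:<version>-s390x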
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.12/html/installing_on_ibm_z_and_ibm_linuxone/preparing-to-install-on-ibm-z
16.4.4. Shell Scripting with guestfish
16.4.4. Shell Scripting with guestfish Once you are familiar with using guestfish interactively, you may find it useful, depending on your needs, to write shell scripts with it. The following is a simple shell script to add a new MOTD (message of the day) to a guest:
[ "#!/bin/bash - set -e guestname=\"USD1\" guestfish -d \"USDguestname\" -i <<'EOF' write /etc/motd \"Welcome to Acme Incorporated.\" chmod 0644 /etc/motd EOF" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/virtualization_administration_guide/sect-shell-scripting-with-guestfish
Chapter 7. Red Hat Process Automation Manager reference implementations
Chapter 7. Red Hat Process Automation Manager reference implementations Red Hat Process Automation Manager provides reference implementations that you can use as starter applications. They are included in the Red Hat Process Automation Manager 7.13.5 Reference Implementations download, available on the Red Hat Process Automation Manager Software Downloads page in the Red Hat Customer Portal. Employee Rostering reference implementation The employee rostering reference implementation enables you to create an application that assigns employees to shifts on various positions in an organization. For example, you can use the application to distribute shifts in a hospital between nurses, guard duty shifts across a number of locations, or shifts on an assembly line between workers. Vehicle route planning reference implementation The vehicle route planning reference implementation enables you to create an application that solves a vehicle route planning problem with real-world maps, roads, and vehicles delivering goods to locations, each with a specific capacity. For more information, see the README file in the vehicle routing ZIP file, included in the reference implementation download. School timetable reference implementation The school timetable reference implementation enables you to build a REST application on Spring Boot that associates lessons with rooms and time slots and avoids conflicts by considering student and teacher constraints. High available event-driven decisioning reference implementation The high available event-driven decisioning reference implementation enables you to deploy Drools engine code that requires stateful processing, including rules developed with complex event processing, in an OpenShift environment. Doing this enables the decision engine to process complex event series with high availability.
null
https://docs.redhat.com/en/documentation/red_hat_process_automation_manager/7.13/html/installing_and_configuring_red_hat_process_automation_manager/reference-implementations-con_planning
function::task_state
function::task_state Name function::task_state - The state of the task Synopsis Arguments task task_struct pointer Description Return the state of the given task, one of: TASK_RUNNING (0), TASK_INTERRUPTIBLE (1), TASK_UNINTERRUPTIBLE (2), TASK_STOPPED (4), TASK_TRACED (8), EXIT_ZOMBIE (16), or EXIT_DEAD (32).
[ "task_state:long(task:long)" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/systemtap_tapset_reference/api-task-state
2.2. File System Fragmentation
2.2. File System Fragmentation While there is no defragmentation tool for GFS2 on Red Hat Enterprise Linux, you can defragment individual files by identifying them with the filefrag tool, copying them to temporary files, and renaming the temporary files to replace the originals.
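A minimal sketch of the copy-and-rename approach described above; the file path is hypothetical and the copy should be made while the file is not being written to:
    filefrag /mnt/gfs2/bigfile            # report the current number of extents
    cp -p /mnt/gfs2/bigfile /mnt/gfs2/bigfile.defrag
    mv /mnt/gfs2/bigfile.defrag /mnt/gfs2/bigfile
    filefrag /mnt/gfs2/bigfile            # verify that the extent count has dropped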
null
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/global_file_system_2/s1-filefragment-gfs2
6.2. I/O Scheduling with Red Hat Enterprise Linux as a Virtualization Guest
6.2. I/O Scheduling with Red Hat Enterprise Linux as a Virtualization Guest You can use I/O scheduling on a Red Hat Enterprise Linux guest virtual machine, regardless of the hypervisor on which the guest is running. The following is a list of benefits and issues that should be considered: Red Hat Enterprise Linux guests often benefit greatly from using the noop scheduler. The scheduler merges small requests from the guest operating system into larger requests before sending the I/O to the hypervisor. This enables the hypervisor to process the I/O requests more efficiently, which can significantly improve the guest's I/O performance. Depending on the workload I/O and how storage devices are attached, schedulers like deadline can be more beneficial than noop . Red Hat recommends performance testing to verify which scheduler offers the best performance impact. Guests that use storage accessed by iSCSI, SR-IOV, or physical device passthrough should not use the noop scheduler. These methods do not allow the host to optimize I/O requests to the underlying physical device. Note In virtualized environments, it is sometimes not beneficial to schedule I/O on both the host and guest layers. If multiple guests use storage on a file system or block device managed by the host operating system, the host may be able to schedule I/O more efficiently because it is aware of requests from all guests. In addition, the host knows the physical layout of storage, which may not map linearly to the guests' virtual storage. All scheduler tuning should be tested under normal operating conditions, as synthetic benchmarks typically do not accurately compare performance of systems using shared resources in virtual environments. 6.2.1. Configuring the I/O Scheduler for Red Hat Enterprise Linux 7 The default scheduler used on a Red Hat Enterprise Linux 7 system is deadline . However, on a Red Hat Enterprise Linux 7 guest machine, it may be beneficial to change the scheduler to noop , by doing the following: In the /etc/default/grub file, change the elevator=deadline string on the GRUB_CMDLINE_LINUX line to elevator=noop . If there is no elevator= string, add elevator=noop at the end of the line. The following shows the /etc/default/grub file after a successful change. Rebuild the /boot/grub2/grub.cfg file. On a BIOS-based machine: On an UEFI-based machine:
[ "cat /etc/default/grub [...] GRUB_CMDLINE_LINUX=\"crashkernel=auto rd.lvm.lv=vg00/lvroot rhgb quiet elevator=noop\" [...]", "grub2-mkconfig -o /boot/grub2/grub.cfg", "grub2-mkconfig -o /boot/efi/EFI/redhat/grub.cfg" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/virtualization_tuning_and_optimization_guide/sect-virtualization_tuning_optimization_guide-io-scheduler-guest
Chapter 8. LDAP Authentication Tutorial
Chapter 8. LDAP Authentication Tutorial Abstract This tutorial explains how to set up an X.500 directory server and configure the OSGi container to use LDAP authentication. 8.1. Tutorial Overview Goals In this tutorial you will: Install 389 Directory Server Add user entries to the LDAP server Add groups to manage security roles Configure Fuse to use LDAP authentication Configure Fuse to use roles for authorization Configure SSL/TLS connections to the LDAP server 8.2. Set-up a Directory Server and Console This stage of the tutorial explains how to install the X.500 directory server and the management console from the Fedora 389 Directory Server project. If you already have access to a 389 Directory Server instance, you can skip the instructions for installing the 389 Directory Server and install the 389 Management Console instead. Prerequisites If you are installing on a Red Hat Enterprise Linux platform, you must first install the Extra Packages for Enterprise Linux (EPEL) . See the installation notes under RHEL/CentOS/EPEL (RHEL 6, RHEL 7, CentOS 6, CentOS 7) on the fedoraproject.org site. Install 389 Directory Server If you do not have access to an existing 389 Directory Server instance, you can install 389 Directory Server on your local machine, as follows: On Red Hat Enterprise Linux and Fedora platforms, use the standard dnf package management utility to install 389 Directory Server . Enter the following command at a command prompt (you must have administrator privileges on your machine): Note The required 389-ds and 389-console RPM packages are available for Fedora, RHEL6+EPEL, and CentOS7+EPEL platforms. At the time of writing, the 389-console package is not yet available for RHEL 7. After installing the 389 directory server packages, enter the following command to configure the directory server: The script is interactive and prompts you to provide the basic configuration settings for the 389 directory server. When the script is complete, it automatically launches the 389 directory server in the background. For more details about how to install 389 Directory Server , see the Download page. Install 389 Management Console If you already have access to a 389 Directory Server instance, you only need to install the 389 Management Console, which enables you to log in and manage the server remotely. You can install the 389 Management Console, as follows: On Red Hat Enterprise Linux and Fedora platforms - use the standard dnf package management utility to install the 389 Management Console. Enter the following command at a command prompt (you must have administrator privileges on your machine): On Windows platforms - see the Windows Console download instructions from fedoraproject.org . Connect the console to the server To connect the 389 Directory Server Console to the LDAP server: Enter the following command to start up the 389 Management Console: A login dialog appears. Fill in the LDAP login credentials in the User ID and Password fields, and customize the hostname in the Administration URL field to connect to your 389 management server instance (port 9830 is the default port for the 389 management server instance). The 389 Management Console window appears. Select the Servers and Applications tab. In the left-hand pane, drill down to the Directory Server icon. Select the Directory Server icon in the left-hand pane and click Open , to open the 389 Directory Server Console . In the 389 Directory Server Console , click the Directory tab, to view the Directory Information Tree (DIT).
Expand the root node, YourDomain (usually named after a hostname, and shown as localdomain in the following screenshot), to view the DIT. 8.3. Add User Entries to the Directory Server The basic prerequisite for using LDAP authentication with the OSGi container is to have an X.500 directory server running and configured with a collection of user entries. For many use cases, you will also want to configure a number of groups to manage user roles. Alternative to adding user entries If you already have user entries and groups defined in your LDAP server, you might prefer to map the existing LDAP groups to JAAS roles using the roles.mapping property in the LDAPLoginModule configuration, instead of creating new entries. For details, see Section 2.1.7, "JAAS LDAP Login Module" . Goals In this portion of the tutorial you will Add three user entries to the LDAP server Add four groups to the LDAP server Adding user entries Perform the following steps to add user entries to the directory server: Ensure that the LDAP server and console are running. See Section 8.2, "Set-up a Directory Server and Console" . In the Directory Server Console , click on the Directory tab, and drill down to the People node, under the YourDomain node (where YourDomain is shown as localdomain in the following screenshots). Right-click the People node, and select New > User from the context menu, to open the Create New User dialog. Select the User tab in the left-hand pane of the Create New User dialog. Fill in the fields of the User tab, as follows: Set the First Name field to John . Set the Last Name field to Doe . Set the User ID field to jdoe . Enter the password, secret , in the Password field. Enter the password, secret , in the Confirm Password field. Click OK . Add a user Jane Doe by following Step 3 to Step 6 . In Step 5.e , use janedoe for the new user's User ID and use the password, secret , for the password fields. Add a user Camel Rider by following Step 3 to Step 6 . In Step 5.e , use crider for the new user's User ID and use the password, secret , for the password fields. Adding groups for the roles To add the groups that define the roles: In the Directory tab of the Directory Server Console , drill down to the Groups node, under the YourDomain node. Right-click the Groups node, and select New > Group from the context menu, to open the Create New Group dialog. Select the General tab in the left-hand pane of the Create New Group dialog. Fill in the fields of the General tab, as follows: Set the Group Name field to admin . Optionally, enter a description in the Description field. Select the Members tab in the left-hand pane of the Create New Group dialog. Click Add to open the Search users and groups dialog. In the Search field, select Users from the drop-down menu, and click the Search button. From the list of users that is now displayed, select John Doe . Click OK , to close the Search users and groups dialog. Click OK , to close the Create New Group dialog. Add a manager role by following Step 2 to Step 10 . In Step 4 , enter manager in the Group Name field. In Step 8 , select Jane Doe . Add a viewer role by following Step 2 to Step 10 . In Step 4 , enter viewer in the Group Name field. In Step 8 , select Camel Rider . Add an ssh role by following Step 2 to Step 10 . In Step 4 , enter ssh in the Group Name field. In Step 8 , select all of the users, John Doe , Jane Doe , and Camel Rider . 8.4.
Enable LDAP Authentication in the OSGi Container This section explains how to configure an LDAP realm in the OSGi container. The new realm overrides the default karaf realm, so that the container authenticates credentials based on user entries stored in the X.500 directory server. References More detailed documentation is available on LDAP authentication, as follows: LDAPLoginModule options -are described in detail in Section 2.1.7, "JAAS LDAP Login Module" . Configurations for other directory servers -this tutorial covers only 389-DS . For details of how to configure other directory servers, such as Microsoft Active Directory, see the section called "Filter settings for different directory servers" . Procedure for standalone OSGi container To enable LDAP authentication in a standalone OSGi container: Ensure that the X.500 directory server is running. Start the Karaf container by entering the following command in a terminal window: Create a file called ldap-module.xml . Copy Example 8.1, "JAAS Realm for Standalone" into ldap-module.xml . Example 8.1. JAAS Realm for Standalone <?xml version="2.0" encoding="UTF-8"?> <blueprint xmlns="http://www.osgi.org/xmlns/blueprint/v1.0.0" xmlns:jaas="http://karaf.apache.org/xmlns/jaas/v1.0.0" xmlns:ext="http://aries.apache.org/blueprint/xmlns/blueprint-ext/v1.0.0"> <jaas:config name="karaf" rank="200"> <jaas:module className="org.apache.karaf.jaas.modules.ldap.LDAPLoginModule" flags="required"> initialContextFactory=com.sun.jndi.ldap.LdapCtxFactory connection.url=ldap://localhost:389 connection.username=cn=Directory Manager connection.password=DIRECTORY_MANAGER_PASSWORD connection.protocol= user.base.dn=ou=People,dc=localdomain user.filter=(&amp;(objectClass=inetOrgPerson)(uid=%u)) user.search.subtree=true role.base.dn=ou=Groups,dc=localdomain role.name.attribute=cn role.filter=(uniquemember=%fqdn) role.search.subtree=true authentication=simple </jaas:module> </jaas:config> </blueprint> You must customize the following settings in the ldap-module.xml file: connection.url Set this URL to the actual location of your directory server instance. Normally, this URL has the format, ldap:// Hostname : Port . For example, the default port for the 389 Directory Server is IP port 389 . connection.username Specifies the username that is used to authenticate the connection to the directory server. For 389 Directory Server, the default is usually cn=Directory Manager . connection.password Specifies the password part of the credentials for connecting to the directory server. authentication You can specify either of the following alternatives for the authentication protocol: simple implies that user credentials are supplied and you are obliged to set the connection.username and connection.password options in this case. none implies that authentication is not performed. You must not set the connection.username and connection.password options in this case. This login module creates a JAAS realm called karaf , which is the same name as the default JAAS realm used by Fuse. By redefining this realm with a rank attribute value greater than 0 , it overrides the standard karaf realm which has the rank 0 . For more details about how to configure Fuse to use LDAP, see Section 2.1.7, "JAAS LDAP Login Module" . Important When setting the JAAS properties above, do not enclose the property values in double quotes. To deploy the new LDAP module, copy the ldap-module.xml into the Karaf container's deploy/ directory (hot deploy). The LDAP module is automatically activated. 
Note Subsequently, if you need to undeploy the LDAP module, you can do so by deleting the ldap-module.xml file from the deploy/ directory while the Karaf container is running . Test the LDAP authentication Test the new LDAP realm by connecting to the running container using the Karaf client utility, as follows: Open a new command prompt. Change directory to the Karaf InstallDir /bin directory. Enter the following command to log on to the running container instance using the identity jdoe : You should successfully log into the container's remote console. At the command console, type jaas: followed by the [Tab] key (to activate content completion): You should see that jdoe has access to all of the jaas commands (consistent with the admin role). Log off the remote console by entering the logout command. Enter the following command to log on to the running container instance using the identity janedoe : You should successfully log into the container's remote console. At the command console, type jaas: followed by the [Tab] key (to activate content completion): You should see that janedoe has access to almost all of the jaas commands (consistent with the manager role). Log off the remote console by entering the logout command. Enter the following command to log on to the running container instance using the identity crider : You should successfully log into the container's remote console. At the command console, type jaas: followed by the [Tab] key (to activate content completion): You should see that crider has access to only five of the jaas commands (consistent with the viewer role). Log off the remote console by entering the logout command. Troubleshooting If you run into any difficulties while testing the LDAP connection, increase the logging level to DEBUG to get a detailed trace of what is happening on the connection to the LDAP server. Perform the following steps: From the Karaf console, enter the following command to increase the logging level to DEBUG : Observe the Karaf log in real time: To escape from the log listing, type Ctrl-C.
[ "sudo dnf install 389-ds", "sudo setup-ds-admin.pl", "sudo dnf install 389-console", "389-console", "./bin/fuse", "<?xml version=\"2.0\" encoding=\"UTF-8\"?> <blueprint xmlns=\"http://www.osgi.org/xmlns/blueprint/v1.0.0\" xmlns:jaas=\"http://karaf.apache.org/xmlns/jaas/v1.0.0\" xmlns:ext=\"http://aries.apache.org/blueprint/xmlns/blueprint-ext/v1.0.0\"> <jaas:config name=\"karaf\" rank=\"200\"> <jaas:module className=\"org.apache.karaf.jaas.modules.ldap.LDAPLoginModule\" flags=\"required\"> initialContextFactory=com.sun.jndi.ldap.LdapCtxFactory connection.url=ldap://localhost:389 connection.username=cn=Directory Manager connection.password=DIRECTORY_MANAGER_PASSWORD connection.protocol= user.base.dn=ou=People,dc=localdomain user.filter=(&amp;(objectClass=inetOrgPerson)(uid=%u)) user.search.subtree=true role.base.dn=ou=Groups,dc=localdomain role.name.attribute=cn role.filter=(uniquemember=%fqdn) role.search.subtree=true authentication=simple </jaas:module> </jaas:config> </blueprint>", "./client -u jdoe -p secret", "jdoe@root()> jaas: Display all 31 possibilities? (31 lines)? jaas:cancel jaas:group-add jaas:whoami", "./client -u janedoe -p secret", "janedoe@root()> jaas: Display all 25 possibilities? (25 lines)? jaas:cancel jaas:group-add jaas:users", "./client -u crider -p secret", "crider@root()> jaas: jaas:manage jaas:realm-list jaas:realm-manage jaas:realms jaas:user-list jaas:users", "log:set DEBUG", "log:tail" ]
https://docs.redhat.com/en/documentation/red_hat_fuse/7.13/html/apache_karaf_security_guide/FESBLDAPTutorial
9.4. Configuring Authentication and Role Mapping using JBoss EAP Login Modules
9.4. Configuring Authentication and Role Mapping using JBoss EAP Login Modules When using a Red Hat JBoss EAP login module for querying roles from LDAP, you must implement your own mapping of Principals to Roles, as JBoss EAP uses its own custom classes. The following example demonstrates how to map a principal obtained from a JBoss EAP login module to a role. It maps the user principal name to a role, performing a similar action to the IdentityRoleMapper : Example 9.1. Mapping a Principal from JBoss EAP's Login Module Example 9.2. Example of JBoss EAP LDAP login module configuration Example 9.3. Example of JBoss EAP Login Module Configuration When using GSSAPI authentication, this would typically involve using LDAP for role mapping, with the JBoss EAP server authenticating itself to the LDAP server via GSSAPI. For more information on how to configure this, see the JBoss EAP Administration and Configuration Guide . Important For information about how to configure JBoss EAP login modules, see the JBoss EAP Administration and Configuration Guide , and see the Red Hat Directory Server Administration Guide for information on how to configure the LDAP server and specify users and their role mapping.
[ "public class SimplePrincipalGroupRoleMapper implements PrincipalRoleMapper { @Override public Set<String> principalToRoles(Principal principal) { if (principal instanceof SimpleGroup) { Enumeration<Principal> members = ((SimpleGroup) principal).members(); if (members.hasMoreElements()) { Set<String> roles = new HashSet<String>(); while (members.hasMoreElements()) { Principal innerPrincipal = members.nextElement(); if (innerPrincipal instanceof SimplePrincipal) { SimplePrincipal sp = (SimplePrincipal) innerPrincipal; roles.add(sp.getName()); } } return roles; } } return null; } }", "<security-domain name=\"ispn-secure\" cache-type=\"default\"> <authentication> <login-module code=\"org.jboss.security.auth.spi.LdapLoginModule\" flag=\"required\"> <module-option name=\"java.naming.factory.initial\" value=\"com.sun.jndi.ldap.LdapCtxFactory\"/> <module-option name=\"java.naming.provider.url\" value=\"ldap://localhost:389\"/> <module-option name=\"java.naming.security.authentication\" value=\"simple\"/> <module-option name=\"principalDNPrefix\" value=\"uid=\"/> <module-option name=\"principalDNSuffix\" value=\",ou=People,dc=infinispan,dc=org\"/> <module-option name=\"rolesCtxDN\" value=\"ou=Roles,dc=infinispan,dc=org\"/> <module-option name=\"uidAttributeID\" value=\"member\"/> <module-option name=\"matchOnUserDN\" value=\"true\"/> <module-option name=\"roleAttributeID\" value=\"cn\"/> <module-option name=\"roleAttributeIsDN\" value=\"false\"/> <module-option name=\"searchScope\" value=\"ONELEVEL_SCOPE\"/> </login-module> </authentication> </security-domain>", "<security-domain name=\"krb-admin\" cache-type=\"default\"> <authentication> <login-module code=\"Kerberos\" flag=\"required\"> <module-option name=\"useKeyTab\" value=\"true\"/> <module-option name=\"principal\" value=\"[email protected]\"/> <module-option name=\"keyTab\" value=\"USD{basedir}/keytab/admin.keytab\"/> </login-module> </authentication> </security-domain>" ]
https://docs.redhat.com/en/documentation/red_hat_data_grid/6.6/html/developer_guide/Configuring_Authentication_and_Role_Mapping_using_JBoss_EAP_Login_Modules
Chapter 3. Deploying an Identity Management Replica in a Container
Chapter 3. Deploying an Identity Management Replica in a Container This chapter describes how you can install an Identity Management replica. For example, creating a container-based replica can be useful if you want to gradually transfer the workload in your existing topology to container-based servers. Before you begin, read Section 3.1, "Prerequisites" and Section 3.2, "Available Configuration in Server and Replica Containers" . Choose one of the following installation procedures. If you are not sure which certificate authority (CA) configuration fits your situation, see Determining What CA Configuration to Use in the Linux Domain Identity, Authentication, and Policy Guide . Section 3.3, "Installing an Identity Management Replica in a Container: Basic Installation" Section 3.4, "Installing an Identity Management Replica in a Container: Without a CA" After you are done, read Section 3.5, " Steps After Installation" . 3.1. Prerequisites Upgrade the Atomic Host system before installing the container. See Upgrading and Downgrading in the Red Hat Enterprise Linux Atomic Host 7 Installation and Configuration Guide . 3.2. Available Configuration in Server and Replica Containers What Is Available Domain level 1 or higher Domain level 0 is not available for containers. See also Displaying and Raising the Domain Level . As a consequence, servers running in containers can be joined in a replication agreement only with Identity Management servers based on Red Hat Enterprise Linux 7.3 or later. Mixed container and non-container deployments A single Identity Management domain topology can include both container-based and RPM-based servers. What Is Not Available Changing server components in a deployed container Do not make runtime modifications of deployed containers. If you need to change or reinstall a server component, such as integrated DNS or Vault, create a new replica. Upgrading between different Linux distributions Do not change the platform on which an ipa-server container image runs. For example, do not change an image running on Red Hat Enterprise Linux to Fedora, Ubuntu, or CentOS. Similarly, do not change an image running on Fedora, Ubuntu, or CentOS to Red Hat Enterprise Linux. Identity Management supports only upgrades to later versions of Red Hat Enterprise Linux. Downgrading the system with a running container Do not downgrade the system on which an ipa-server container image runs. Upstream containers on Atomic Host Do not install upstream container images, such as the FreeIPA ipa-server image, on Atomic Host. Install only the container images available in Red Hat Enterprise Linux. Multiple containers on a single Atomic Host Install only one ipa-server container image on a single Atomic Host. 3.3. Installing an Identity Management Replica in a Container: Basic Installation This procedure shows how to install a containerized Identity Management server in the default certificate authority (CA) configuration with an integrated CA. Before You Start Note that the container installation uses the same default configuration as a non-container installation using ipa-replica-install . To specify custom configuration, add additional options to the atomic install command used in the procedure below: Atomic options available for the ipa-server container. For a complete list, see the container help page. 
Identity Management installer options accepted by ipa-replica-install , described in Installing and Uninstalling Identity Management Replicas in the Linux Domain Identity, Authentication, and Policy Guide . You must have an installed server available: either on a bare metal machine, or on another Atomic Host system. Procedure If you want to install a replica against a master server in a container, enable two-way communication to the master container over the ports specified in Installing and Uninstalling an Identity Management Server in the Linux Domain Identity, Authentication, and Policy Guide . Use the atomic install rhel7/ipa-server publish --hostname fully_qualified_domain_name ipa-replica-install command to start the installation. Include the --server and --domain options to specify the host name and domain name of your Identity Management server. The container requires its own host name. Use a different host name for the container than the host name of the Atomic Host system. The container's host name must be resolvable via DNS or the /etc/hosts file. Note Installing a server or replica container does not enroll the Atomic Host system itself to the Identity Management domain. If you use the Atomic Host system's host name for the server or replica, you will be unable to enroll the Atomic Host system later. Important Always use the --hostname option with atomic install when installing the server or replica container. Because --hostname is considered an Atomic option in this case, not an Identity Management installer option, use it before the ipa-server-install option. The installation ignores --hostname when used after ipa-server-install . If you are installing a server with integrated DNS, add also the --ip-address option to specify the public IP address of the Atomic Host that is reachable from the network. You can use --ip-address multiple times. Due to a known issue in the interactive replica installation mode , add standard ipa-replica-install options to specify one of the following: A privileged user's credentials. See Example 3.1, "Installation Command Examples" . Random password for bulk enrollment. See Installing a Replica Using a Random Password in the Linux Domain Identity, Authentication, and Policy Guide . Warning Unless you want to install the container for testing purposes only, always use the publish option. Without publish , no ports will be published to the Atomic Host system, and the server will not be reachable from outside the container. Example 3.1. Installation Command Examples Command syntax for installing the ipa-server container: To install a replica container named replica-container using the administrator's credentials, while using default values for the Identity Management replica settings: 3.4. Installing an Identity Management Replica in a Container: Without a CA This procedure describes how to install a server without an integrated Identity Management certificate authority (CA). A containerized Identity Management server and the Atomic Host system share only the parts of the file system that are mounted using a bind mount into the container. Therefore, operations related to external files must be performed from within this volume. The ipa-server container image uses the /var/lib/<container_name>/ directory to store persistent files on the Atomic Host file system. The persistent storage volume maps to the /data/ directory inside the container. 
Before You Start Note that the container installation uses the same default configuration as a non-container installation using ipa-replica-install . To specify custom configuration, add additional options to the atomic install command used in the procedure below: Atomic options available for the ipa-server container. For a complete list, see the container help page. Identity Management installer options accepted by ipa-replica-install , described in Installing and Uninstalling Identity Management Replicas in the Linux Domain Identity, Authentication, and Policy Guide . You must have an installed server available: either on a bare metal machine, or on another Atomic Host system. Procedure If you want to install a replica against a master server in a container, enable two-way communication to the master container over the ports specified in Installing and Uninstalling an Identity Management Server in the Linux Domain Identity, Authentication, and Policy Guide . Manually create the persistent storage directory for the container at /var/lib/<container_name>/ : Copy the files containing the certificate chain into the directory: See Installing Without a CA in the Linux Domain Identity, Authentication, and Policy Guide for details on the required files. Use the atomic install rhel7/ipa-server publish --hostname fully_qualified_domain_name ipa-replica-install command, include the --server and --domain options to specify the host name and domain name of your Identity Management server, and provide the required certificates from the third-party authority: Note The paths to the certificates include /data/ because the persistent storage volume maps to /data/ inside the container. The container requires its own host name. Use a different host name for the container than the host name of the Atomic Host system. The container's host name must be resolvable via DNS or the /etc/hosts file. Note Installing a server or replica container does not enroll the Atomic Host system itself to the Identity Management domain. If you use the Atomic Host system's host name for the server or replica, you will be unable to enroll the Atomic Host system later. Important Always use the --hostname option with atomic install when installing the server or replica container. Because --hostname is considered an Atomic option in this case, not an Identity Management installer option, use it before the ipa-server-install option. The installation ignores --hostname when used after ipa-server-install . If you are installing a server with integrated DNS, add also the --ip-address option to specify the public IP address of the Atomic Host that is reachable from the network. You can use --ip-address multiple times. Due to a known issue in the interactive replica installation mode , add standard ipa-replica-install options to specify one of the following: A privileged user's credentials. See Example 3.1, "Installation Command Examples" . Random password for bulk enrollment. See Installing a Replica Using a Random Password in the Linux Domain Identity, Authentication, and Policy Guide . Warning Unless you want to install the container for testing purposes only, always use the publish option. Without publish , no ports will be published to the Atomic Host system, and the server will not be reachable from outside the container. 3.5. 
Steps After Installation To run the container, use the atomic run command: If you specified a name for the container when you installed it: A running ipa-server container works in the same way as in a standard Identity Management deployment on bare-metal or virtual machine systems. For example, you can enroll hosts to the domain or manage the topology using the command-line interface, the web UI, or JSONRPC-API in the same way as RPM-based Identity Management systems.
[ "atomic install [ --name <container_name> ] rhel7/ipa-server [ Atomic options ] [ ipa-server-install | ipa-replica-install ] [ ipa-server-install or ipa-replica-install options ]", "atomic install --name replica-container rhel7/ipa-server publish --hostname replica.example.com ipa-replica-install --server server.example.com --domain example.com --ip-address 2001:DB8::1111 --principal admin --admin-password <admin_password>", "mkdir -p /var/lib/ipa-server", "cp /root/server-*.p12 /var/lib/ipa-server/.", "atomic install --name replica-container rhel7/ipa-server publish --hostname replica.example.com ipa-replica-install --server server.example.com --domain example.com --dirsrv-cert-file=/data/replica-dirsrv-cert.p12 --dirsrv-pin=1234 --http-cert-file=/data/replica-http-cert.p12 --http-pin=1234 --pkinit-cert-file=/data/replica-pkinit-cert.p12 --pkinit-pin=1234", "atomic run rhel7/ipa-server", "atomic run --name replica-container rhel7/ipa-server" ]
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/using_containerized_identity_management_services/deploying-an-identity-management-replica-in-a-container
Chapter 3. PodTemplate [v1]
Chapter 3. PodTemplate [v1] Description PodTemplate describes a template for creating copies of a predefined pod. Type object 3.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata template object PodTemplateSpec describes the data a pod should have when created from a template 3.1.1. .template Description PodTemplateSpec describes the data a pod should have when created from a template Type object Property Type Description metadata ObjectMeta Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata spec object PodSpec is a description of a pod. 3.1.2. .template.spec Description PodSpec is a description of a pod. Type object Required containers Property Type Description activeDeadlineSeconds integer Optional duration in seconds the pod may be active on the node relative to StartTime before the system will actively try to mark it failed and kill associated containers. Value must be a positive integer. affinity object Affinity is a group of affinity scheduling rules. automountServiceAccountToken boolean AutomountServiceAccountToken indicates whether a service account token should be automatically mounted. containers array List of containers belonging to the pod. Containers cannot currently be added or removed. There must be at least one container in a Pod. Cannot be updated. containers[] object A single application container that you want to run within a pod. dnsConfig object PodDNSConfig defines the DNS parameters of a pod in addition to those generated from DNSPolicy. dnsPolicy string Set DNS policy for the pod. Defaults to "ClusterFirst". Valid values are 'ClusterFirstWithHostNet', 'ClusterFirst', 'Default' or 'None'. DNS parameters given in DNSConfig will be merged with the policy selected with DNSPolicy. To have DNS options set along with hostNetwork, you have to specify DNS policy explicitly to 'ClusterFirstWithHostNet'. Possible enum values: - "ClusterFirst" indicates that the pod should use cluster DNS first unless hostNetwork is true, if it is available, then fall back on the default (as determined by kubelet) DNS settings. - "ClusterFirstWithHostNet" indicates that the pod should use cluster DNS first, if it is available, then fall back on the default (as determined by kubelet) DNS settings. - "Default" indicates that the pod should use the default (as determined by kubelet) DNS settings. - "None" indicates that the pod should use empty DNS settings. DNS parameters such as nameservers and search paths should be defined via DNSConfig. enableServiceLinks boolean EnableServiceLinks indicates whether information about services should be injected into pod's environment variables, matching the syntax of Docker links. Optional: Defaults to true. 
ephemeralContainers array List of ephemeral containers run in this pod. Ephemeral containers may be run in an existing pod to perform user-initiated actions such as debugging. This list cannot be specified when creating a pod, and it cannot be modified by updating the pod spec. In order to add an ephemeral container to an existing pod, use the pod's ephemeralcontainers subresource. ephemeralContainers[] object An EphemeralContainer is a temporary container that you may add to an existing Pod for user-initiated activities such as debugging. Ephemeral containers have no resource or scheduling guarantees, and they will not be restarted when they exit or when a Pod is removed or restarted. The kubelet may evict a Pod if an ephemeral container causes the Pod to exceed its resource allocation. To add an ephemeral container, use the ephemeralcontainers subresource of an existing Pod. Ephemeral containers may not be removed or restarted. hostAliases array HostAliases is an optional list of hosts and IPs that will be injected into the pod's hosts file if specified. hostAliases[] object HostAlias holds the mapping between IP and hostnames that will be injected as an entry in the pod's hosts file. hostIPC boolean Use the host's ipc namespace. Optional: Default to false. hostNetwork boolean Host networking requested for this pod. Use the host's network namespace. If this option is set, the ports that will be used must be specified. Default to false. hostPID boolean Use the host's pid namespace. Optional: Default to false. hostUsers boolean Use the host's user namespace. Optional: Default to true. If set to true or not present, the pod will be run in the host user namespace, useful for when the pod needs a feature only available to the host user namespace, such as loading a kernel module with CAP_SYS_MODULE. When set to false, a new userns is created for the pod. Setting false is useful for mitigating container breakout vulnerabilities even allowing users to run their containers as root without actually having root privileges on the host. This field is alpha-level and is only honored by servers that enable the UserNamespacesSupport feature. hostname string Specifies the hostname of the Pod If not specified, the pod's hostname will be set to a system-defined value. imagePullSecrets array ImagePullSecrets is an optional list of references to secrets in the same namespace to use for pulling any of the images used by this PodSpec. If specified, these secrets will be passed to individual puller implementations for them to use. More info: https://kubernetes.io/docs/concepts/containers/images#specifying-imagepullsecrets-on-a-pod imagePullSecrets[] object LocalObjectReference contains enough information to let you locate the referenced object inside the same namespace. initContainers array List of initialization containers belonging to the pod. Init containers are executed in order prior to containers being started. If any init container fails, the pod is considered to have failed and is handled according to its restartPolicy. The name for an init container or normal container must be unique among all containers. Init containers may not have Lifecycle actions, Readiness probes, Liveness probes, or Startup probes. The resourceRequirements of an init container are taken into account during scheduling by finding the highest request/limit for each resource type, and then using the max of of that value or the sum of the normal containers. Limits are applied to init containers in a similar fashion. 
Init containers cannot currently be added or removed. Cannot be updated. More info: https://kubernetes.io/docs/concepts/workloads/pods/init-containers/ initContainers[] object A single application container that you want to run within a pod. nodeName string NodeName is a request to schedule this pod onto a specific node. If it is non-empty, the scheduler simply schedules this pod onto that node, assuming that it fits resource requirements. nodeSelector object (string) NodeSelector is a selector which must be true for the pod to fit on a node. Selector which must match a node's labels for the pod to be scheduled on that node. More info: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/ os object PodOS defines the OS parameters of a pod. overhead object (Quantity) Overhead represents the resource overhead associated with running a pod for a given RuntimeClass. This field will be autopopulated at admission time by the RuntimeClass admission controller. If the RuntimeClass admission controller is enabled, overhead must not be set in Pod create requests. The RuntimeClass admission controller will reject Pod create requests which have the overhead already set. If RuntimeClass is configured and selected in the PodSpec, Overhead will be set to the value defined in the corresponding RuntimeClass, otherwise it will remain unset and treated as zero. More info: https://git.k8s.io/enhancements/keps/sig-node/688-pod-overhead/README.md preemptionPolicy string PreemptionPolicy is the Policy for preempting pods with lower priority. One of Never, PreemptLowerPriority. Defaults to PreemptLowerPriority if unset. Possible enum values: - "Never" means that pod never preempts other pods with lower priority. - "PreemptLowerPriority" means that pod can preempt other pods with lower priority. priority integer The priority value. Various system components use this field to find the priority of the pod. When Priority Admission Controller is enabled, it prevents users from setting this field. The admission controller populates this field from PriorityClassName. The higher the value, the higher the priority. priorityClassName string If specified, indicates the pod's priority. "system-node-critical" and "system-cluster-critical" are two special keywords which indicate the highest priorities with the former being the highest priority. Any other name must be defined by creating a PriorityClass object with that name. If not specified, the pod priority will be default or zero if there is no default. readinessGates array If specified, all readiness gates will be evaluated for pod readiness. A pod is ready when all its containers are ready AND all conditions specified in the readiness gates have status equal to "True" More info: https://git.k8s.io/enhancements/keps/sig-network/580-pod-readiness-gates readinessGates[] object PodReadinessGate contains the reference to a pod condition resourceClaims array ResourceClaims defines which ResourceClaims must be allocated and reserved before the Pod is allowed to start. The resources will be made available to those containers which consume them by name. This is an alpha field and requires enabling the DynamicResourceAllocation feature gate. This field is immutable. resourceClaims[] object PodResourceClaim references exactly one ResourceClaim through a ClaimSource. It adds a name to it that uniquely identifies the ResourceClaim inside the Pod. Containers that need access to the ResourceClaim reference it with this name. 
restartPolicy string Restart policy for all containers within the pod. One of Always, OnFailure, Never. In some contexts, only a subset of those values may be permitted. Default to Always. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle/#restart-policy Possible enum values: - "Always" - "Never" - "OnFailure" runtimeClassName string RuntimeClassName refers to a RuntimeClass object in the node.k8s.io group, which should be used to run this pod. If no RuntimeClass resource matches the named class, the pod will not be run. If unset or empty, the "legacy" RuntimeClass will be used, which is an implicit class with an empty definition that uses the default runtime handler. More info: https://git.k8s.io/enhancements/keps/sig-node/585-runtime-class schedulerName string If specified, the pod will be dispatched by specified scheduler. If not specified, the pod will be dispatched by default scheduler. schedulingGates array SchedulingGates is an opaque list of values that if specified will block scheduling the pod. If schedulingGates is not empty, the pod will stay in the SchedulingGated state and the scheduler will not attempt to schedule the pod. SchedulingGates can only be set at pod creation time, and be removed only afterwards. schedulingGates[] object PodSchedulingGate is associated to a Pod to guard its scheduling. securityContext object PodSecurityContext holds pod-level security attributes and common container settings. Some fields are also present in container.securityContext. Field values of container.securityContext take precedence over field values of PodSecurityContext. serviceAccount string DeprecatedServiceAccount is a deprecated alias for ServiceAccountName. Deprecated: Use serviceAccountName instead. serviceAccountName string ServiceAccountName is the name of the ServiceAccount to use to run this pod. More info: https://kubernetes.io/docs/tasks/configure-pod-container/configure-service-account/ setHostnameAsFQDN boolean If true the pod's hostname will be configured as the pod's FQDN, rather than the leaf name (the default). In Linux containers, this means setting the FQDN in the hostname field of the kernel (the nodename field of struct utsname). In Windows containers, this means setting the registry value of hostname for the registry key HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters to FQDN. If a pod does not have FQDN, this has no effect. Default to false. shareProcessNamespace boolean Share a single process namespace between all of the containers in a pod. When this is set containers will be able to view and signal processes from other containers in the same pod, and the first process in each container will not be assigned PID 1. HostPID and ShareProcessNamespace cannot both be set. Optional: Default to false. subdomain string If specified, the fully qualified Pod hostname will be "<hostname>.<subdomain>.<pod namespace>.svc.<cluster domain>". If not specified, the pod will not have a domainname at all. terminationGracePeriodSeconds integer Optional duration in seconds the pod needs to terminate gracefully. May be decreased in delete request. Value must be non-negative integer. The value zero indicates stop immediately via the kill signal (no opportunity to shut down). If this value is nil, the default grace period will be used instead. The grace period is the duration in seconds after the processes running in the pod are sent a termination signal and the time when the processes are forcibly halted with a kill signal. 
Set this value longer than the expected cleanup time for your process. Defaults to 30 seconds. tolerations array If specified, the pod's tolerations. tolerations[] object The pod this Toleration is attached to tolerates any taint that matches the triple <key,value,effect> using the matching operator <operator>. topologySpreadConstraints array TopologySpreadConstraints describes how a group of pods ought to spread across topology domains. Scheduler will schedule pods in a way which abides by the constraints. All topologySpreadConstraints are ANDed. topologySpreadConstraints[] object TopologySpreadConstraint specifies how to spread matching pods among the given topology. volumes array List of volumes that can be mounted by containers belonging to the pod. More info: https://kubernetes.io/docs/concepts/storage/volumes volumes[] object Volume represents a named volume in a pod that may be accessed by any container in the pod. 3.1.3. .template.spec.affinity Description Affinity is a group of affinity scheduling rules. Type object Property Type Description nodeAffinity object Node affinity is a group of node affinity scheduling rules. podAffinity object Pod affinity is a group of inter pod affinity scheduling rules. podAntiAffinity object Pod anti affinity is a group of inter pod anti affinity scheduling rules. 3.1.4. .template.spec.affinity.nodeAffinity Description Node affinity is a group of node affinity scheduling rules. Type object Property Type Description preferredDuringSchedulingIgnoredDuringExecution array The scheduler will prefer to schedule pods to nodes that satisfy the affinity expressions specified by this field, but it may choose a node that violates one or more of the expressions. The node that is most preferred is the one with the greatest sum of weights, i.e. for each node that meets all of the scheduling requirements (resource request, requiredDuringScheduling affinity expressions, etc.), compute a sum by iterating through the elements of this field and adding "weight" to the sum if the node matches the corresponding matchExpressions; the node(s) with the highest sum are the most preferred. preferredDuringSchedulingIgnoredDuringExecution[] object An empty preferred scheduling term matches all objects with implicit weight 0 (i.e. it's a no-op). A null preferred scheduling term matches no objects (i.e. is also a no-op). requiredDuringSchedulingIgnoredDuringExecution object A node selector represents the union of the results of one or more label queries over a set of nodes; that is, it represents the OR of the selectors represented by the node selector terms. 3.1.5. .template.spec.affinity.nodeAffinity.preferredDuringSchedulingIgnoredDuringExecution Description The scheduler will prefer to schedule pods to nodes that satisfy the affinity expressions specified by this field, but it may choose a node that violates one or more of the expressions. The node that is most preferred is the one with the greatest sum of weights, i.e. for each node that meets all of the scheduling requirements (resource request, requiredDuringScheduling affinity expressions, etc.), compute a sum by iterating through the elements of this field and adding "weight" to the sum if the node matches the corresponding matchExpressions; the node(s) with the highest sum are the most preferred. Type array 3.1.6. .template.spec.affinity.nodeAffinity.preferredDuringSchedulingIgnoredDuringExecution[] Description An empty preferred scheduling term matches all objects with implicit weight 0 (i.e. it's a no-op). 
A null preferred scheduling term matches no objects (i.e. is also a no-op). Type object Required weight preference Property Type Description preference object A null or empty node selector term matches no objects. The requirements of them are ANDed. The TopologySelectorTerm type implements a subset of the NodeSelectorTerm. weight integer Weight associated with matching the corresponding nodeSelectorTerm, in the range 1-100. 3.1.7. .template.spec.affinity.nodeAffinity.preferredDuringSchedulingIgnoredDuringExecution[].preference Description A null or empty node selector term matches no objects. The requirements of them are ANDed. The TopologySelectorTerm type implements a subset of the NodeSelectorTerm. Type object Property Type Description matchExpressions array A list of node selector requirements by node's labels. matchExpressions[] object A node selector requirement is a selector that contains values, a key, and an operator that relates the key and values. matchFields array A list of node selector requirements by node's fields. matchFields[] object A node selector requirement is a selector that contains values, a key, and an operator that relates the key and values. 3.1.8. .template.spec.affinity.nodeAffinity.preferredDuringSchedulingIgnoredDuringExecution[].preference.matchExpressions Description A list of node selector requirements by node's labels. Type array 3.1.9. .template.spec.affinity.nodeAffinity.preferredDuringSchedulingIgnoredDuringExecution[].preference.matchExpressions[] Description A node selector requirement is a selector that contains values, a key, and an operator that relates the key and values. Type object Required key operator Property Type Description key string The label key that the selector applies to. operator string Represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists, DoesNotExist, Gt, and Lt. Possible enum values: - "DoesNotExist" - "Exists" - "Gt" - "In" - "Lt" - "NotIn" values array (string) An array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. If the operator is Gt or Lt, the values array must have a single element, which will be interpreted as an integer. This array is replaced during a strategic merge patch. 3.1.10. .template.spec.affinity.nodeAffinity.preferredDuringSchedulingIgnoredDuringExecution[].preference.matchFields Description A list of node selector requirements by node's fields. Type array 3.1.11. .template.spec.affinity.nodeAffinity.preferredDuringSchedulingIgnoredDuringExecution[].preference.matchFields[] Description A node selector requirement is a selector that contains values, a key, and an operator that relates the key and values. Type object Required key operator Property Type Description key string The label key that the selector applies to. operator string Represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists, DoesNotExist, Gt, and Lt. Possible enum values: - "DoesNotExist" - "Exists" - "Gt" - "In" - "Lt" - "NotIn" values array (string) An array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. If the operator is Gt or Lt, the values array must have a single element, which will be interpreted as an integer. This array is replaced during a strategic merge patch.
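As a concrete illustration of the preferred node affinity terms described above and the required terms described next, a pod template might carry node affinity such as the following minimal sketch; the disktype label, the ssd value, and the zone name us-east-1a are assumptions chosen for illustration and are not part of the API definition.

affinity:
  nodeAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
      nodeSelectorTerms:
      - matchExpressions:
        - key: disktype                      # assumed node label
          operator: In
          values:
          - ssd
    preferredDuringSchedulingIgnoredDuringExecution:
    - weight: 80
      preference:
        matchExpressions:
        - key: topology.kubernetes.io/zone
          operator: In
          values:
          - us-east-1a                       # prefer, but do not require, this zone

The required term must be satisfied for the pod to schedule at all, while the weighted preference only biases the scheduler toward matching nodes.
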
3.1.12. .template.spec.affinity.nodeAffinity.requiredDuringSchedulingIgnoredDuringExecution Description A node selector represents the union of the results of one or more label queries over a set of nodes; that is, it represents the OR of the selectors represented by the node selector terms. Type object Required nodeSelectorTerms Property Type Description nodeSelectorTerms array Required. A list of node selector terms. The terms are ORed. nodeSelectorTerms[] object A null or empty node selector term matches no objects. The requirements of them are ANDed. The TopologySelectorTerm type implements a subset of the NodeSelectorTerm. 3.1.13. .template.spec.affinity.nodeAffinity.requiredDuringSchedulingIgnoredDuringExecution.nodeSelectorTerms Description Required. A list of node selector terms. The terms are ORed. Type array 3.1.14. .template.spec.affinity.nodeAffinity.requiredDuringSchedulingIgnoredDuringExecution.nodeSelectorTerms[] Description A null or empty node selector term matches no objects. The requirements of them are ANDed. The TopologySelectorTerm type implements a subset of the NodeSelectorTerm. Type object Property Type Description matchExpressions array A list of node selector requirements by node's labels. matchExpressions[] object A node selector requirement is a selector that contains values, a key, and an operator that relates the key and values. matchFields array A list of node selector requirements by node's fields. matchFields[] object A node selector requirement is a selector that contains values, a key, and an operator that relates the key and values. 3.1.15. .template.spec.affinity.nodeAffinity.requiredDuringSchedulingIgnoredDuringExecution.nodeSelectorTerms[].matchExpressions Description A list of node selector requirements by node's labels. Type array 3.1.16. .template.spec.affinity.nodeAffinity.requiredDuringSchedulingIgnoredDuringExecution.nodeSelectorTerms[].matchExpressions[] Description A node selector requirement is a selector that contains values, a key, and an operator that relates the key and values. Type object Required key operator Property Type Description key string The label key that the selector applies to. operator string Represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists, DoesNotExist, Gt, and Lt. Possible enum values: - "DoesNotExist" - "Exists" - "Gt" - "In" - "Lt" - "NotIn" values array (string) An array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. If the operator is Gt or Lt, the values array must have a single element, which will be interpreted as an integer. This array is replaced during a strategic merge patch. 3.1.17. .template.spec.affinity.nodeAffinity.requiredDuringSchedulingIgnoredDuringExecution.nodeSelectorTerms[].matchFields Description A list of node selector requirements by node's fields. Type array 3.1.18. .template.spec.affinity.nodeAffinity.requiredDuringSchedulingIgnoredDuringExecution.nodeSelectorTerms[].matchFields[] Description A node selector requirement is a selector that contains values, a key, and an operator that relates the key and values. Type object Required key operator Property Type Description key string The label key that the selector applies to. operator string Represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists, DoesNotExist, Gt, and Lt.
Possible enum values: - "DoesNotExist" - "Exists" - "Gt" - "In" - "Lt" - "NotIn" values array (string) An array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. If the operator is Gt or Lt, the values array must have a single element, which will be interpreted as an integer. This array is replaced during a strategic merge patch. 3.1.19. .template.spec.affinity.podAffinity Description Pod affinity is a group of inter pod affinity scheduling rules. Type object Property Type Description preferredDuringSchedulingIgnoredDuringExecution array The scheduler will prefer to schedule pods to nodes that satisfy the affinity expressions specified by this field, but it may choose a node that violates one or more of the expressions. The node that is most preferred is the one with the greatest sum of weights, i.e. for each node that meets all of the scheduling requirements (resource request, requiredDuringScheduling affinity expressions, etc.), compute a sum by iterating through the elements of this field and adding "weight" to the sum if the node has pods which matches the corresponding podAffinityTerm; the node(s) with the highest sum are the most preferred. preferredDuringSchedulingIgnoredDuringExecution[] object The weights of all of the matched WeightedPodAffinityTerm fields are added per-node to find the most preferred node(s) requiredDuringSchedulingIgnoredDuringExecution array If the affinity requirements specified by this field are not met at scheduling time, the pod will not be scheduled onto the node. If the affinity requirements specified by this field cease to be met at some point during pod execution (e.g. due to a pod label update), the system may or may not try to eventually evict the pod from its node. When there are multiple elements, the lists of nodes corresponding to each podAffinityTerm are intersected, i.e. all terms must be satisfied. requiredDuringSchedulingIgnoredDuringExecution[] object Defines a set of pods (namely those matching the labelSelector relative to the given namespace(s)) that this pod should be co-located (affinity) or not co-located (anti-affinity) with, where co-located is defined as running on a node whose value of the label with key <topologyKey> matches that of any node on which a pod of the set of pods is running 3.1.20. .template.spec.affinity.podAffinity.preferredDuringSchedulingIgnoredDuringExecution Description The scheduler will prefer to schedule pods to nodes that satisfy the affinity expressions specified by this field, but it may choose a node that violates one or more of the expressions. The node that is most preferred is the one with the greatest sum of weights, i.e. for each node that meets all of the scheduling requirements (resource request, requiredDuringScheduling affinity expressions, etc.), compute a sum by iterating through the elements of this field and adding "weight" to the sum if the node has pods which matches the corresponding podAffinityTerm; the node(s) with the highest sum are the most preferred. Type array 3.1.21. 
.template.spec.affinity.podAffinity.preferredDuringSchedulingIgnoredDuringExecution[] Description The weights of all of the matched WeightedPodAffinityTerm fields are added per-node to find the most preferred node(s) Type object Required weight podAffinityTerm Property Type Description podAffinityTerm object Defines a set of pods (namely those matching the labelSelector relative to the given namespace(s)) that this pod should be co-located (affinity) or not co-located (anti-affinity) with, where co-located is defined as running on a node whose value of the label with key <topologyKey> matches that of any node on which a pod of the set of pods is running weight integer weight associated with matching the corresponding podAffinityTerm, in the range 1-100. 3.1.22. .template.spec.affinity.podAffinity.preferredDuringSchedulingIgnoredDuringExecution[].podAffinityTerm Description Defines a set of pods (namely those matching the labelSelector relative to the given namespace(s)) that this pod should be co-located (affinity) or not co-located (anti-affinity) with, where co-located is defined as running on a node whose value of the label with key <topologyKey> matches that of any node on which a pod of the set of pods is running Type object Required topologyKey Property Type Description labelSelector LabelSelector A label query over a set of resources, in this case pods. If it's null, this PodAffinityTerm matches with no Pods. matchLabelKeys array (string) MatchLabelKeys is a set of pod label keys to select which pods will be taken into consideration. The keys are used to lookup values from the incoming pod labels, those key-value labels are merged with labelSelector as key in (value) to select the group of existing pods which pods will be taken into consideration for the incoming pod's pod (anti) affinity. Keys that don't exist in the incoming pod labels will be ignored. The default value is empty. The same key is forbidden to exist in both matchLabelKeys and labelSelector. Also, matchLabelKeys cannot be set when labelSelector isn't set. This is an alpha field and requires enabling MatchLabelKeysInPodAffinity feature gate. mismatchLabelKeys array (string) MismatchLabelKeys is a set of pod label keys to select which pods will be taken into consideration. The keys are used to lookup values from the incoming pod labels, those key-value labels are merged with labelSelector as key notin (value) to select the group of existing pods which pods will be taken into consideration for the incoming pod's pod (anti) affinity. Keys that don't exist in the incoming pod labels will be ignored. The default value is empty. The same key is forbidden to exist in both mismatchLabelKeys and labelSelector. Also, mismatchLabelKeys cannot be set when labelSelector isn't set. This is an alpha field and requires enabling MatchLabelKeysInPodAffinity feature gate. namespaceSelector LabelSelector A label query over the set of namespaces that the term applies to. The term is applied to the union of the namespaces selected by this field and the ones listed in the namespaces field. null selector and null or empty namespaces list means "this pod's namespace". An empty selector ({}) matches all namespaces. namespaces array (string) namespaces specifies a static list of namespace names that the term applies to. The term is applied to the union of the namespaces listed in this field and the ones selected by namespaceSelector. null or empty namespaces list and null namespaceSelector means "this pod's namespace". 
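As a minimal sketch, a weighted pod affinity term under .template.spec.affinity.podAffinity might look like the following; the app: cache label and the zone topology key are illustrative values, not API defaults.

affinity:
  podAffinity:
    preferredDuringSchedulingIgnoredDuringExecution:
    - weight: 50
      podAffinityTerm:
        labelSelector:
          matchLabels:
            app: cache                               # illustrative label selector
        topologyKey: topology.kubernetes.io/zone     # co-locate with matching pods in the same zone

Because the term is only preferred, pods matching the selector attract the new pod onto nodes in the same zone, but the scheduler may still place it elsewhere.
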
topologyKey string This pod should be co-located (affinity) or not co-located (anti-affinity) with the pods matching the labelSelector in the specified namespaces, where co-located is defined as running on a node whose value of the label with key topologyKey matches that of any node on which any of the selected pods is running. Empty topologyKey is not allowed. 3.1.23. .template.spec.affinity.podAffinity.requiredDuringSchedulingIgnoredDuringExecution Description If the affinity requirements specified by this field are not met at scheduling time, the pod will not be scheduled onto the node. If the affinity requirements specified by this field cease to be met at some point during pod execution (e.g. due to a pod label update), the system may or may not try to eventually evict the pod from its node. When there are multiple elements, the lists of nodes corresponding to each podAffinityTerm are intersected, i.e. all terms must be satisfied. Type array 3.1.24. .template.spec.affinity.podAffinity.requiredDuringSchedulingIgnoredDuringExecution[] Description Defines a set of pods (namely those matching the labelSelector relative to the given namespace(s)) that this pod should be co-located (affinity) or not co-located (anti-affinity) with, where co-located is defined as running on a node whose value of the label with key <topologyKey> matches that of any node on which a pod of the set of pods is running Type object Required topologyKey Property Type Description labelSelector LabelSelector A label query over a set of resources, in this case pods. If it's null, this PodAffinityTerm matches with no Pods. matchLabelKeys array (string) MatchLabelKeys is a set of pod label keys to select which pods will be taken into consideration. The keys are used to lookup values from the incoming pod labels, those key-value labels are merged with labelSelector as key in (value) to select the group of existing pods which pods will be taken into consideration for the incoming pod's pod (anti) affinity. Keys that don't exist in the incoming pod labels will be ignored. The default value is empty. The same key is forbidden to exist in both matchLabelKeys and labelSelector. Also, matchLabelKeys cannot be set when labelSelector isn't set. This is an alpha field and requires enabling MatchLabelKeysInPodAffinity feature gate. mismatchLabelKeys array (string) MismatchLabelKeys is a set of pod label keys to select which pods will be taken into consideration. The keys are used to lookup values from the incoming pod labels, those key-value labels are merged with labelSelector as key notin (value) to select the group of existing pods which pods will be taken into consideration for the incoming pod's pod (anti) affinity. Keys that don't exist in the incoming pod labels will be ignored. The default value is empty. The same key is forbidden to exist in both mismatchLabelKeys and labelSelector. Also, mismatchLabelKeys cannot be set when labelSelector isn't set. This is an alpha field and requires enabling MatchLabelKeysInPodAffinity feature gate. namespaceSelector LabelSelector A label query over the set of namespaces that the term applies to. The term is applied to the union of the namespaces selected by this field and the ones listed in the namespaces field. null selector and null or empty namespaces list means "this pod's namespace". An empty selector ({}) matches all namespaces. namespaces array (string) namespaces specifies a static list of namespace names that the term applies to. 
The term is applied to the union of the namespaces listed in this field and the ones selected by namespaceSelector. null or empty namespaces list and null namespaceSelector means "this pod's namespace". topologyKey string This pod should be co-located (affinity) or not co-located (anti-affinity) with the pods matching the labelSelector in the specified namespaces, where co-located is defined as running on a node whose value of the label with key topologyKey matches that of any node on which any of the selected pods is running. Empty topologyKey is not allowed. 3.1.25. .template.spec.affinity.podAntiAffinity Description Pod anti affinity is a group of inter pod anti affinity scheduling rules. Type object Property Type Description preferredDuringSchedulingIgnoredDuringExecution array The scheduler will prefer to schedule pods to nodes that satisfy the anti-affinity expressions specified by this field, but it may choose a node that violates one or more of the expressions. The node that is most preferred is the one with the greatest sum of weights, i.e. for each node that meets all of the scheduling requirements (resource request, requiredDuringScheduling anti-affinity expressions, etc.), compute a sum by iterating through the elements of this field and adding "weight" to the sum if the node has pods which matches the corresponding podAffinityTerm; the node(s) with the highest sum are the most preferred. preferredDuringSchedulingIgnoredDuringExecution[] object The weights of all of the matched WeightedPodAffinityTerm fields are added per-node to find the most preferred node(s) requiredDuringSchedulingIgnoredDuringExecution array If the anti-affinity requirements specified by this field are not met at scheduling time, the pod will not be scheduled onto the node. If the anti-affinity requirements specified by this field cease to be met at some point during pod execution (e.g. due to a pod label update), the system may or may not try to eventually evict the pod from its node. When there are multiple elements, the lists of nodes corresponding to each podAffinityTerm are intersected, i.e. all terms must be satisfied. requiredDuringSchedulingIgnoredDuringExecution[] object Defines a set of pods (namely those matching the labelSelector relative to the given namespace(s)) that this pod should be co-located (affinity) or not co-located (anti-affinity) with, where co-located is defined as running on a node whose value of the label with key <topologyKey> matches that of any node on which a pod of the set of pods is running 3.1.26. .template.spec.affinity.podAntiAffinity.preferredDuringSchedulingIgnoredDuringExecution Description The scheduler will prefer to schedule pods to nodes that satisfy the anti-affinity expressions specified by this field, but it may choose a node that violates one or more of the expressions. The node that is most preferred is the one with the greatest sum of weights, i.e. for each node that meets all of the scheduling requirements (resource request, requiredDuringScheduling anti-affinity expressions, etc.), compute a sum by iterating through the elements of this field and adding "weight" to the sum if the node has pods which matches the corresponding podAffinityTerm; the node(s) with the highest sum are the most preferred. Type array 3.1.27. 
.template.spec.affinity.podAntiAffinity.preferredDuringSchedulingIgnoredDuringExecution[] Description The weights of all of the matched WeightedPodAffinityTerm fields are added per-node to find the most preferred node(s) Type object Required weight podAffinityTerm Property Type Description podAffinityTerm object Defines a set of pods (namely those matching the labelSelector relative to the given namespace(s)) that this pod should be co-located (affinity) or not co-located (anti-affinity) with, where co-located is defined as running on a node whose value of the label with key <topologyKey> matches that of any node on which a pod of the set of pods is running weight integer weight associated with matching the corresponding podAffinityTerm, in the range 1-100. 3.1.28. .template.spec.affinity.podAntiAffinity.preferredDuringSchedulingIgnoredDuringExecution[].podAffinityTerm Description Defines a set of pods (namely those matching the labelSelector relative to the given namespace(s)) that this pod should be co-located (affinity) or not co-located (anti-affinity) with, where co-located is defined as running on a node whose value of the label with key <topologyKey> matches that of any node on which a pod of the set of pods is running Type object Required topologyKey Property Type Description labelSelector LabelSelector A label query over a set of resources, in this case pods. If it's null, this PodAffinityTerm matches with no Pods. matchLabelKeys array (string) MatchLabelKeys is a set of pod label keys to select which pods will be taken into consideration. The keys are used to lookup values from the incoming pod labels, those key-value labels are merged with labelSelector as key in (value) to select the group of existing pods which pods will be taken into consideration for the incoming pod's pod (anti) affinity. Keys that don't exist in the incoming pod labels will be ignored. The default value is empty. The same key is forbidden to exist in both matchLabelKeys and labelSelector. Also, matchLabelKeys cannot be set when labelSelector isn't set. This is an alpha field and requires enabling MatchLabelKeysInPodAffinity feature gate. mismatchLabelKeys array (string) MismatchLabelKeys is a set of pod label keys to select which pods will be taken into consideration. The keys are used to lookup values from the incoming pod labels, those key-value labels are merged with labelSelector as key notin (value) to select the group of existing pods which pods will be taken into consideration for the incoming pod's pod (anti) affinity. Keys that don't exist in the incoming pod labels will be ignored. The default value is empty. The same key is forbidden to exist in both mismatchLabelKeys and labelSelector. Also, mismatchLabelKeys cannot be set when labelSelector isn't set. This is an alpha field and requires enabling MatchLabelKeysInPodAffinity feature gate. namespaceSelector LabelSelector A label query over the set of namespaces that the term applies to. The term is applied to the union of the namespaces selected by this field and the ones listed in the namespaces field. null selector and null or empty namespaces list means "this pod's namespace". An empty selector ({}) matches all namespaces. namespaces array (string) namespaces specifies a static list of namespace names that the term applies to. The term is applied to the union of the namespaces listed in this field and the ones selected by namespaceSelector. null or empty namespaces list and null namespaceSelector means "this pod's namespace". 
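A minimal sketch of a preferred pod anti-affinity term under .template.spec.affinity.podAntiAffinity follows; the app: web label value is an assumption used only for illustration.

affinity:
  podAntiAffinity:
    preferredDuringSchedulingIgnoredDuringExecution:
    - weight: 100
      podAffinityTerm:
        labelSelector:
          matchExpressions:
          - key: app
            operator: In
            values:
            - web                             # illustrative label value
        topologyKey: kubernetes.io/hostname   # prefer spreading matching replicas across nodes
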
topologyKey string This pod should be co-located (affinity) or not co-located (anti-affinity) with the pods matching the labelSelector in the specified namespaces, where co-located is defined as running on a node whose value of the label with key topologyKey matches that of any node on which any of the selected pods is running. Empty topologyKey is not allowed. 3.1.29. .template.spec.affinity.podAntiAffinity.requiredDuringSchedulingIgnoredDuringExecution Description If the anti-affinity requirements specified by this field are not met at scheduling time, the pod will not be scheduled onto the node. If the anti-affinity requirements specified by this field cease to be met at some point during pod execution (e.g. due to a pod label update), the system may or may not try to eventually evict the pod from its node. When there are multiple elements, the lists of nodes corresponding to each podAffinityTerm are intersected, i.e. all terms must be satisfied. Type array 3.1.30. .template.spec.affinity.podAntiAffinity.requiredDuringSchedulingIgnoredDuringExecution[] Description Defines a set of pods (namely those matching the labelSelector relative to the given namespace(s)) that this pod should be co-located (affinity) or not co-located (anti-affinity) with, where co-located is defined as running on a node whose value of the label with key <topologyKey> matches that of any node on which a pod of the set of pods is running Type object Required topologyKey Property Type Description labelSelector LabelSelector A label query over a set of resources, in this case pods. If it's null, this PodAffinityTerm matches with no Pods. matchLabelKeys array (string) MatchLabelKeys is a set of pod label keys to select which pods will be taken into consideration. The keys are used to lookup values from the incoming pod labels, those key-value labels are merged with labelSelector as key in (value) to select the group of existing pods which pods will be taken into consideration for the incoming pod's pod (anti) affinity. Keys that don't exist in the incoming pod labels will be ignored. The default value is empty. The same key is forbidden to exist in both matchLabelKeys and labelSelector. Also, matchLabelKeys cannot be set when labelSelector isn't set. This is an alpha field and requires enabling MatchLabelKeysInPodAffinity feature gate. mismatchLabelKeys array (string) MismatchLabelKeys is a set of pod label keys to select which pods will be taken into consideration. The keys are used to lookup values from the incoming pod labels, those key-value labels are merged with labelSelector as key notin (value) to select the group of existing pods which pods will be taken into consideration for the incoming pod's pod (anti) affinity. Keys that don't exist in the incoming pod labels will be ignored. The default value is empty. The same key is forbidden to exist in both mismatchLabelKeys and labelSelector. Also, mismatchLabelKeys cannot be set when labelSelector isn't set. This is an alpha field and requires enabling MatchLabelKeysInPodAffinity feature gate. namespaceSelector LabelSelector A label query over the set of namespaces that the term applies to. The term is applied to the union of the namespaces selected by this field and the ones listed in the namespaces field. null selector and null or empty namespaces list means "this pod's namespace". An empty selector ({}) matches all namespaces. namespaces array (string) namespaces specifies a static list of namespace names that the term applies to. 
The term is applied to the union of the namespaces listed in this field and the ones selected by namespaceSelector. null or empty namespaces list and null namespaceSelector means "this pod's namespace". topologyKey string This pod should be co-located (affinity) or not co-located (anti-affinity) with the pods matching the labelSelector in the specified namespaces, where co-located is defined as running on a node whose value of the label with key topologyKey matches that of any node on which any of the selected pods is running. Empty topologyKey is not allowed. 3.1.31. .template.spec.containers Description List of containers belonging to the pod. Containers cannot currently be added or removed. There must be at least one container in a Pod. Cannot be updated. Type array 3.1.32. .template.spec.containers[] Description A single application container that you want to run within a pod. Type object Required name Property Type Description args array (string) Arguments to the entrypoint. The container image's CMD is used if this is not provided. Variable references $(VAR_NAME) are expanded using the container's environment. If a variable cannot be resolved, the reference in the input string will be unchanged. Double $$ are reduced to a single $, which allows for escaping the $(VAR_NAME) syntax: i.e. "$$(VAR_NAME)" will produce the string literal "$(VAR_NAME)". Escaped references will never be expanded, regardless of whether the variable exists or not. Cannot be updated. More info: https://kubernetes.io/docs/tasks/inject-data-application/define-command-argument-container/#running-a-command-in-a-shell command array (string) Entrypoint array. Not executed within a shell. The container image's ENTRYPOINT is used if this is not provided. Variable references $(VAR_NAME) are expanded using the container's environment. If a variable cannot be resolved, the reference in the input string will be unchanged. Double $$ are reduced to a single $, which allows for escaping the $(VAR_NAME) syntax: i.e. "$$(VAR_NAME)" will produce the string literal "$(VAR_NAME)". Escaped references will never be expanded, regardless of whether the variable exists or not. Cannot be updated. More info: https://kubernetes.io/docs/tasks/inject-data-application/define-command-argument-container/#running-a-command-in-a-shell env array List of environment variables to set in the container. Cannot be updated. env[] object EnvVar represents an environment variable present in a Container. envFrom array List of sources to populate environment variables in the container. The keys defined within a source must be a C_IDENTIFIER. All invalid keys will be reported as an event when the container is starting. When a key exists in multiple sources, the value associated with the last source will take precedence. Values defined by an Env with a duplicate key will take precedence. Cannot be updated. envFrom[] object EnvFromSource represents the source of a set of ConfigMaps image string Container image name. More info: https://kubernetes.io/docs/concepts/containers/images This field is optional to allow higher level config management to default or override container images in workload controllers like Deployments and StatefulSets. imagePullPolicy string Image pull policy. One of Always, Never, IfNotPresent. Defaults to Always if :latest tag is specified, or IfNotPresent otherwise. Cannot be updated.
More info: https://kubernetes.io/docs/concepts/containers/images#updating-images Possible enum values: - "Always" means that kubelet always attempts to pull the latest image. Container will fail if the pull fails. - "IfNotPresent" means that kubelet pulls if the image isn't present on disk. Container will fail if the image isn't present and the pull fails. - "Never" means that kubelet never pulls an image, but only uses a local image. Container will fail if the image isn't present. lifecycle object Lifecycle describes actions that the management system should take in response to container lifecycle events. For the PostStart and PreStop lifecycle handlers, management of the container blocks until the action is complete, unless the container process fails, in which case the handler is aborted. livenessProbe object Probe describes a health check to be performed against a container to determine whether it is alive or ready to receive traffic. name string Name of the container specified as a DNS_LABEL. Each container in a pod must have a unique name (DNS_LABEL). Cannot be updated. ports array List of ports to expose from the container. Not specifying a port here DOES NOT prevent that port from being exposed. Any port which is listening on the default "0.0.0.0" address inside a container will be accessible from the network. Modifying this array with strategic merge patch may corrupt the data. For more information, see https://github.com/kubernetes/kubernetes/issues/108255 . Cannot be updated. ports[] object ContainerPort represents a network port in a single container. readinessProbe object Probe describes a health check to be performed against a container to determine whether it is alive or ready to receive traffic. resizePolicy array Resources resize policy for the container. resizePolicy[] object ContainerResizePolicy represents resource resize policy for the container. resources object ResourceRequirements describes the compute resource requirements. restartPolicy string RestartPolicy defines the restart behavior of individual containers in a pod. This field may only be set for init containers, and the only allowed value is "Always". For non-init containers or when this field is not specified, the restart behavior is defined by the Pod's restart policy and the container type. Setting the RestartPolicy as "Always" for the init container will have the following effect: this init container will be continually restarted on exit until all regular containers have terminated. Once all regular containers have completed, all init containers with restartPolicy "Always" will be shut down. This lifecycle differs from normal init containers and is often referred to as a "sidecar" container. Although this init container still starts in the init container sequence, it does not wait for the container to complete before proceeding to the next init container. Instead, the next init container starts immediately after this init container is started, or after any startupProbe has successfully completed. securityContext object SecurityContext holds security configuration that will be applied to a container. Some fields are present in both SecurityContext and PodSecurityContext. When both are set, the values in SecurityContext take precedence. startupProbe object Probe describes a health check to be performed against a container to determine whether it is alive or ready to receive traffic. stdin boolean Whether this container should allocate a buffer for stdin in the container runtime.
If this is not set, reads from stdin in the container will always result in EOF. Default is false. stdinOnce boolean Whether the container runtime should close the stdin channel after it has been opened by a single attach. When stdin is true, the stdin stream will remain open across multiple attach sessions. If stdinOnce is set to true, stdin is opened on container start, is empty until the first client attaches to stdin, and then remains open and accepts data until the client disconnects, at which time stdin is closed and remains closed until the container is restarted. If this flag is false, a container process that reads from stdin will never receive an EOF. Default is false. terminationMessagePath string Optional: Path at which the file to which the container's termination message will be written is mounted into the container's filesystem. Message written is intended to be brief final status, such as an assertion failure message. Will be truncated by the node if greater than 4096 bytes. The total message length across all containers will be limited to 12kb. Defaults to /dev/termination-log. Cannot be updated. terminationMessagePolicy string Indicate how the termination message should be populated. File will use the contents of terminationMessagePath to populate the container status message on both success and failure. FallbackToLogsOnError will use the last chunk of container log output if the termination message file is empty and the container exited with an error. The log output is limited to 2048 bytes or 80 lines, whichever is smaller. Defaults to File. Cannot be updated. Possible enum values: - "FallbackToLogsOnError" will read the most recent contents of the container logs for the container status message when the container exits with an error and the terminationMessagePath has no contents. - "File" is the default behavior and will set the container status message to the contents of the container's terminationMessagePath when the container exits. tty boolean Whether this container should allocate a TTY for itself, also requires 'stdin' to be true. Default is false. volumeDevices array volumeDevices is the list of block devices to be used by the container. volumeDevices[] object volumeDevice describes a mapping of a raw block device within a container. volumeMounts array Pod volumes to mount into the container's filesystem. Cannot be updated. volumeMounts[] object VolumeMount describes a mounting of a Volume within a container. workingDir string Container's working directory. If not specified, the container runtime's default will be used, which might be configured in the container image. Cannot be updated. 3.1.33. .template.spec.containers[].env Description List of environment variables to set in the container. Cannot be updated. Type array 3.1.34. .template.spec.containers[].env[] Description EnvVar represents an environment variable present in a Container. Type object Required name Property Type Description name string Name of the environment variable. Must be a C_IDENTIFIER. value string Variable references $(VAR_NAME) are expanded using the previously defined environment variables in the container and any service environment variables. If a variable cannot be resolved, the reference in the input string will be unchanged. Double $$ are reduced to a single $, which allows for escaping the $(VAR_NAME) syntax: i.e. "$$(VAR_NAME)" will produce the string literal "$(VAR_NAME)". Escaped references will never be expanded, regardless of whether the variable exists or not.
Defaults to "". valueFrom object EnvVarSource represents a source for the value of an EnvVar. 3.1.35. .template.spec.containers[].env[].valueFrom Description EnvVarSource represents a source for the value of an EnvVar. Type object Property Type Description configMapKeyRef object Selects a key from a ConfigMap. fieldRef object ObjectFieldSelector selects an APIVersioned field of an object. resourceFieldRef object ResourceFieldSelector represents container resources (cpu, memory) and their output format secretKeyRef object SecretKeySelector selects a key of a Secret. 3.1.36. .template.spec.containers[].env[].valueFrom.configMapKeyRef Description Selects a key from a ConfigMap. Type object Required key Property Type Description key string The key to select. name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names optional boolean Specify whether the ConfigMap or its key must be defined 3.1.37. .template.spec.containers[].env[].valueFrom.fieldRef Description ObjectFieldSelector selects an APIVersioned field of an object. Type object Required fieldPath Property Type Description apiVersion string Version of the schema the FieldPath is written in terms of, defaults to "v1". fieldPath string Path of the field to select in the specified API version. 3.1.38. .template.spec.containers[].env[].valueFrom.resourceFieldRef Description ResourceFieldSelector represents container resources (cpu, memory) and their output format Type object Required resource Property Type Description containerName string Container name: required for volumes, optional for env vars divisor Quantity Specifies the output format of the exposed resources, defaults to "1" resource string Required: resource to select 3.1.39. .template.spec.containers[].env[].valueFrom.secretKeyRef Description SecretKeySelector selects a key of a Secret. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names optional boolean Specify whether the Secret or its key must be defined 3.1.40. .template.spec.containers[].envFrom Description List of sources to populate environment variables in the container. The keys defined within a source must be a C_IDENTIFIER. All invalid keys will be reported as an event when the container is starting. When a key exists in multiple sources, the value associated with the last source will take precedence. Values defined by an Env with a duplicate key will take precedence. Cannot be updated. Type array 3.1.41. .template.spec.containers[].envFrom[] Description EnvFromSource represents the source of a set of ConfigMaps Type object Property Type Description configMapRef object ConfigMapEnvSource selects a ConfigMap to populate the environment variables with. The contents of the target ConfigMap's Data field will represent the key-value pairs as environment variables. prefix string An optional identifier to prepend to each key in the ConfigMap. Must be a C_IDENTIFIER. 
secretRef object SecretEnvSource selects a Secret to populate the environment variables with. The contents of the target Secret's Data field will represent the key-value pairs as environment variables. 3.1.42. .template.spec.containers[].envFrom[].configMapRef Description ConfigMapEnvSource selects a ConfigMap to populate the environment variables with. The contents of the target ConfigMap's Data field will represent the key-value pairs as environment variables. Type object Property Type Description name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names optional boolean Specify whether the ConfigMap must be defined 3.1.43. .template.spec.containers[].envFrom[].secretRef Description SecretEnvSource selects a Secret to populate the environment variables with. The contents of the target Secret's Data field will represent the key-value pairs as environment variables. Type object Property Type Description name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names optional boolean Specify whether the Secret must be defined 3.1.44. .template.spec.containers[].lifecycle Description Lifecycle describes actions that the management system should take in response to container lifecycle events. For the PostStart and PreStop lifecycle handlers, management of the container blocks until the action is complete, unless the container process fails, in which case the handler is aborted. Type object Property Type Description postStart object LifecycleHandler defines a specific action that should be taken in a lifecycle hook. One and only one of the fields, except TCPSocket must be specified. preStop object LifecycleHandler defines a specific action that should be taken in a lifecycle hook. One and only one of the fields, except TCPSocket must be specified. 3.1.45. .template.spec.containers[].lifecycle.postStart Description LifecycleHandler defines a specific action that should be taken in a lifecycle hook. One and only one of the fields, except TCPSocket must be specified. Type object Property Type Description exec object ExecAction describes a "run in container" action. httpGet object HTTPGetAction describes an action based on HTTP Get requests. sleep object SleepAction describes a "sleep" action. tcpSocket object TCPSocketAction describes an action based on opening a socket 3.1.46. .template.spec.containers[].lifecycle.postStart.exec Description ExecAction describes a "run in container" action. Type object Property Type Description command array (string) Command is the command line to execute inside the container, the working directory for the command is root ('/') in the container's filesystem. The command is simply exec'd, it is not run inside a shell, so traditional shell instructions ('|', etc) won't work. To use a shell, you need to explicitly call out to that shell. Exit status of 0 is treated as live/healthy and non-zero is unhealthy. 3.1.47. .template.spec.containers[].lifecycle.postStart.httpGet Description HTTPGetAction describes an action based on HTTP Get requests. 
Type object Required port Property Type Description host string Host name to connect to, defaults to the pod IP. You probably want to set "Host" in httpHeaders instead. httpHeaders array Custom headers to set in the request. HTTP allows repeated headers. httpHeaders[] object HTTPHeader describes a custom header to be used in HTTP probes path string Path to access on the HTTP server. port IntOrString Name or number of the port to access on the container. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME. scheme string Scheme to use for connecting to the host. Defaults to HTTP. Possible enum values: - "HTTP" means that the scheme used will be http:// - "HTTPS" means that the scheme used will be https:// 3.1.48. .template.spec.containers[].lifecycle.postStart.httpGet.httpHeaders Description Custom headers to set in the request. HTTP allows repeated headers. Type array 3.1.49. .template.spec.containers[].lifecycle.postStart.httpGet.httpHeaders[] Description HTTPHeader describes a custom header to be used in HTTP probes Type object Required name value Property Type Description name string The header field name. This will be canonicalized upon output, so case-variant names will be understood as the same header. value string The header field value 3.1.50. .template.spec.containers[].lifecycle.postStart.sleep Description SleepAction describes a "sleep" action. Type object Required seconds Property Type Description seconds integer Seconds is the number of seconds to sleep. 3.1.51. .template.spec.containers[].lifecycle.postStart.tcpSocket Description TCPSocketAction describes an action based on opening a socket Type object Required port Property Type Description host string Optional: Host name to connect to, defaults to the pod IP. port IntOrString Number or name of the port to access on the container. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME. 3.1.52. .template.spec.containers[].lifecycle.preStop Description LifecycleHandler defines a specific action that should be taken in a lifecycle hook. One and only one of the fields, except TCPSocket must be specified. Type object Property Type Description exec object ExecAction describes a "run in container" action. httpGet object HTTPGetAction describes an action based on HTTP Get requests. sleep object SleepAction describes a "sleep" action. tcpSocket object TCPSocketAction describes an action based on opening a socket 3.1.53. .template.spec.containers[].lifecycle.preStop.exec Description ExecAction describes a "run in container" action. Type object Property Type Description command array (string) Command is the command line to execute inside the container, the working directory for the command is root ('/') in the container's filesystem. The command is simply exec'd, it is not run inside a shell, so traditional shell instructions ('|', etc) won't work. To use a shell, you need to explicitly call out to that shell. Exit status of 0 is treated as live/healthy and non-zero is unhealthy. 3.1.54. .template.spec.containers[].lifecycle.preStop.httpGet Description HTTPGetAction describes an action based on HTTP Get requests. Type object Required port Property Type Description host string Host name to connect to, defaults to the pod IP. You probably want to set "Host" in httpHeaders instead. httpHeaders array Custom headers to set in the request. HTTP allows repeated headers. httpHeaders[] object HTTPHeader describes a custom header to be used in HTTP probes path string Path to access on the HTTP server. 
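A minimal sketch of postStart and preStop handlers for a container; the /warmup path, port 8080, and the sleep duration are assumed values.

lifecycle:
  postStart:
    httpGet:
      path: /warmup                  # illustrative warm-up endpoint
      port: 8080
      scheme: HTTP
  preStop:
    exec:
      command: ["/bin/sh", "-c", "sleep 10"]   # allow in-flight requests to drain before shutdown
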
port IntOrString Name or number of the port to access on the container. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME. scheme string Scheme to use for connecting to the host. Defaults to HTTP. Possible enum values: - "HTTP" means that the scheme used will be http:// - "HTTPS" means that the scheme used will be https:// 3.1.55. .template.spec.containers[].lifecycle.preStop.httpGet.httpHeaders Description Custom headers to set in the request. HTTP allows repeated headers. Type array 3.1.56. .template.spec.containers[].lifecycle.preStop.httpGet.httpHeaders[] Description HTTPHeader describes a custom header to be used in HTTP probes Type object Required name value Property Type Description name string The header field name. This will be canonicalized upon output, so case-variant names will be understood as the same header. value string The header field value 3.1.57. .template.spec.containers[].lifecycle.preStop.sleep Description SleepAction describes a "sleep" action. Type object Required seconds Property Type Description seconds integer Seconds is the number of seconds to sleep. 3.1.58. .template.spec.containers[].lifecycle.preStop.tcpSocket Description TCPSocketAction describes an action based on opening a socket Type object Required port Property Type Description host string Optional: Host name to connect to, defaults to the pod IP. port IntOrString Number or name of the port to access on the container. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME. 3.1.59. .template.spec.containers[].livenessProbe Description Probe describes a health check to be performed against a container to determine whether it is alive or ready to receive traffic. Type object Property Type Description exec object ExecAction describes a "run in container" action. failureThreshold integer Minimum consecutive failures for the probe to be considered failed after having succeeded. Defaults to 3. Minimum value is 1. grpc object GRPC specifies an action involving a GRPC port. httpGet object HTTPGetAction describes an action based on HTTP Get requests. initialDelaySeconds integer Number of seconds after the container has started before liveness probes are initiated. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes periodSeconds integer How often (in seconds) to perform the probe. Default to 10 seconds. Minimum value is 1. successThreshold integer Minimum consecutive successes for the probe to be considered successful after having failed. Defaults to 1. Must be 1 for liveness and startup. Minimum value is 1. tcpSocket object TCPSocketAction describes an action based on opening a socket terminationGracePeriodSeconds integer Optional duration in seconds the pod needs to terminate gracefully upon probe failure. The grace period is the duration in seconds after the processes running in the pod are sent a termination signal and the time when the processes are forcibly halted with a kill signal. Set this value longer than the expected cleanup time for your process. If this value is nil, the pod's terminationGracePeriodSeconds will be used. Otherwise, this value overrides the value provided by the pod spec. Value must be non-negative integer. The value zero indicates stop immediately via the kill signal (no opportunity to shut down). This is a beta field and requires enabling ProbeTerminationGracePeriod feature gate. Minimum value is 1. spec.terminationGracePeriodSeconds is used if unset. 
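A minimal sketch of an HTTP liveness probe; the /healthz path, port 8080, and the timing values are assumptions rather than API defaults.

livenessProbe:
  httpGet:
    path: /healthz                   # illustrative health endpoint
    port: 8080
  initialDelaySeconds: 15            # wait before the first probe
  periodSeconds: 10
  failureThreshold: 3
  timeoutSeconds: 1
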
timeoutSeconds integer Number of seconds after which the probe times out. Defaults to 1 second. Minimum value is 1. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes 3.1.60. .template.spec.containers[].livenessProbe.exec Description ExecAction describes a "run in container" action. Type object Property Type Description command array (string) Command is the command line to execute inside the container, the working directory for the command is root ('/') in the container's filesystem. The command is simply exec'd, it is not run inside a shell, so traditional shell instructions ('|', etc) won't work. To use a shell, you need to explicitly call out to that shell. Exit status of 0 is treated as live/healthy and non-zero is unhealthy. 3.1.61. .template.spec.containers[].livenessProbe.grpc Description GRPC specifies an action involving a GRPC port. Type object Required port Property Type Description port integer Port number of the gRPC service. Number must be in the range 1 to 65535. service string Service is the name of the service to place in the gRPC HealthCheckRequest (see https://github.com/grpc/grpc/blob/master/doc/health-checking.md ). If this is not specified, the default behavior is defined by gRPC. 3.1.62. .template.spec.containers[].livenessProbe.httpGet Description HTTPGetAction describes an action based on HTTP Get requests. Type object Required port Property Type Description host string Host name to connect to, defaults to the pod IP. You probably want to set "Host" in httpHeaders instead. httpHeaders array Custom headers to set in the request. HTTP allows repeated headers. httpHeaders[] object HTTPHeader describes a custom header to be used in HTTP probes path string Path to access on the HTTP server. port IntOrString Name or number of the port to access on the container. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME. scheme string Scheme to use for connecting to the host. Defaults to HTTP. Possible enum values: - "HTTP" means that the scheme used will be http:// - "HTTPS" means that the scheme used will be https:// 3.1.63. .template.spec.containers[].livenessProbe.httpGet.httpHeaders Description Custom headers to set in the request. HTTP allows repeated headers. Type array 3.1.64. .template.spec.containers[].livenessProbe.httpGet.httpHeaders[] Description HTTPHeader describes a custom header to be used in HTTP probes Type object Required name value Property Type Description name string The header field name. This will be canonicalized upon output, so case-variant names will be understood as the same header. value string The header field value 3.1.65. .template.spec.containers[].livenessProbe.tcpSocket Description TCPSocketAction describes an action based on opening a socket Type object Required port Property Type Description host string Optional: Host name to connect to, defaults to the pod IP. port IntOrString Number or name of the port to access on the container. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME. 3.1.66. .template.spec.containers[].ports Description List of ports to expose from the container. Not specifying a port here DOES NOT prevent that port from being exposed. Any port which is listening on the default "0.0.0.0" address inside a container will be accessible from the network. Modifying this array with strategic merge patch may corrupt the data. For more information See https://github.com/kubernetes/kubernetes/issues/108255 . Cannot be updated. Type array 3.1.67. 
.template.spec.containers[].ports[] Description ContainerPort represents a network port in a single container. Type object Required containerPort Property Type Description containerPort integer Number of port to expose on the pod's IP address. This must be a valid port number, 0 < x < 65536. hostIP string What host IP to bind the external port to. hostPort integer Number of port to expose on the host. If specified, this must be a valid port number, 0 < x < 65536. If HostNetwork is specified, this must match ContainerPort. Most containers do not need this. name string If specified, this must be an IANA_SVC_NAME and unique within the pod. Each named port in a pod must have a unique name. Name for the port that can be referred to by services. protocol string Protocol for port. Must be UDP, TCP, or SCTP. Defaults to "TCP". Possible enum values: - "SCTP" is the SCTP protocol. - "TCP" is the TCP protocol. - "UDP" is the UDP protocol. 3.1.68. .template.spec.containers[].readinessProbe Description Probe describes a health check to be performed against a container to determine whether it is alive or ready to receive traffic. Type object Property Type Description exec object ExecAction describes a "run in container" action. failureThreshold integer Minimum consecutive failures for the probe to be considered failed after having succeeded. Defaults to 3. Minimum value is 1. grpc object GRPC specifies an action involving a GRPC port. httpGet object HTTPGetAction describes an action based on HTTP Get requests. initialDelaySeconds integer Number of seconds after the container has started before liveness probes are initiated. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes periodSeconds integer How often (in seconds) to perform the probe. Default to 10 seconds. Minimum value is 1. successThreshold integer Minimum consecutive successes for the probe to be considered successful after having failed. Defaults to 1. Must be 1 for liveness and startup. Minimum value is 1. tcpSocket object TCPSocketAction describes an action based on opening a socket terminationGracePeriodSeconds integer Optional duration in seconds the pod needs to terminate gracefully upon probe failure. The grace period is the duration in seconds after the processes running in the pod are sent a termination signal and the time when the processes are forcibly halted with a kill signal. Set this value longer than the expected cleanup time for your process. If this value is nil, the pod's terminationGracePeriodSeconds will be used. Otherwise, this value overrides the value provided by the pod spec. Value must be non-negative integer. The value zero indicates stop immediately via the kill signal (no opportunity to shut down). This is a beta field and requires enabling ProbeTerminationGracePeriod feature gate. Minimum value is 1. spec.terminationGracePeriodSeconds is used if unset. timeoutSeconds integer Number of seconds after which the probe times out. Defaults to 1 second. Minimum value is 1. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes 3.1.69. .template.spec.containers[].readinessProbe.exec Description ExecAction describes a "run in container" action. Type object Property Type Description command array (string) Command is the command line to execute inside the container, the working directory for the command is root ('/') in the container's filesystem. 
The command is simply exec'd, it is not run inside a shell, so traditional shell instructions ('|', etc) won't work. To use a shell, you need to explicitly call out to that shell. Exit status of 0 is treated as live/healthy and non-zero is unhealthy. 3.1.70. .template.spec.containers[].readinessProbe.grpc Description GRPC specifies an action involving a GRPC port. Type object Required port Property Type Description port integer Port number of the gRPC service. Number must be in the range 1 to 65535. service string Service is the name of the service to place in the gRPC HealthCheckRequest (see https://github.com/grpc/grpc/blob/master/doc/health-checking.md ). If this is not specified, the default behavior is defined by gRPC. 3.1.71. .template.spec.containers[].readinessProbe.httpGet Description HTTPGetAction describes an action based on HTTP Get requests. Type object Required port Property Type Description host string Host name to connect to, defaults to the pod IP. You probably want to set "Host" in httpHeaders instead. httpHeaders array Custom headers to set in the request. HTTP allows repeated headers. httpHeaders[] object HTTPHeader describes a custom header to be used in HTTP probes path string Path to access on the HTTP server. port IntOrString Name or number of the port to access on the container. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME. scheme string Scheme to use for connecting to the host. Defaults to HTTP. Possible enum values: - "HTTP" means that the scheme used will be http:// - "HTTPS" means that the scheme used will be https:// 3.1.72. .template.spec.containers[].readinessProbe.httpGet.httpHeaders Description Custom headers to set in the request. HTTP allows repeated headers. Type array 3.1.73. .template.spec.containers[].readinessProbe.httpGet.httpHeaders[] Description HTTPHeader describes a custom header to be used in HTTP probes Type object Required name value Property Type Description name string The header field name. This will be canonicalized upon output, so case-variant names will be understood as the same header. value string The header field value 3.1.74. .template.spec.containers[].readinessProbe.tcpSocket Description TCPSocketAction describes an action based on opening a socket Type object Required port Property Type Description host string Optional: Host name to connect to, defaults to the pod IP. port IntOrString Number or name of the port to access on the container. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME. 3.1.75. .template.spec.containers[].resizePolicy Description Resources resize policy for the container. Type array 3.1.76. .template.spec.containers[].resizePolicy[] Description ContainerResizePolicy represents resource resize policy for the container. Type object Required resourceName restartPolicy Property Type Description resourceName string Name of the resource to which this resource resize policy applies. Supported values: cpu, memory. restartPolicy string Restart policy to apply when specified resource is resized. If not specified, it defaults to NotRequired. 3.1.77. .template.spec.containers[].resources Description ResourceRequirements describes the compute resource requirements. Type object Property Type Description claims array Claims lists the names of resources, defined in spec.resourceClaims, that are used by this container. This is an alpha field and requires enabling the DynamicResourceAllocation feature gate. This field is immutable. It can only be set for containers. 
claims[] object ResourceClaim references one entry in PodSpec.ResourceClaims. limits object (Quantity) Limits describes the maximum amount of compute resources allowed. More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/ requests object (Quantity) Requests describes the minimum amount of compute resources required. If Requests is omitted for a container, it defaults to Limits if that is explicitly specified, otherwise to an implementation-defined value. Requests cannot exceed Limits. More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/ 3.1.78. .template.spec.containers[].resources.claims Description Claims lists the names of resources, defined in spec.resourceClaims, that are used by this container. This is an alpha field and requires enabling the DynamicResourceAllocation feature gate. This field is immutable. It can only be set for containers. Type array 3.1.79. .template.spec.containers[].resources.claims[] Description ResourceClaim references one entry in PodSpec.ResourceClaims. Type object Required name Property Type Description name string Name must match the name of one entry in pod.spec.resourceClaims of the Pod where this field is used. It makes that resource available inside a container. 3.1.80. .template.spec.containers[].securityContext Description SecurityContext holds security configuration that will be applied to a container. Some fields are present in both SecurityContext and PodSecurityContext. When both are set, the values in SecurityContext take precedence. Type object Property Type Description allowPrivilegeEscalation boolean AllowPrivilegeEscalation controls whether a process can gain more privileges than its parent process. This bool directly controls if the no_new_privs flag will be set on the container process. AllowPrivilegeEscalation is true always when the container is: 1) run as Privileged 2) has CAP_SYS_ADMIN Note that this field cannot be set when spec.os.name is windows. appArmorProfile object AppArmorProfile defines a pod or container's AppArmor settings. capabilities object Adds and removes POSIX capabilities from running containers. privileged boolean Run container in privileged mode. Processes in privileged containers are essentially equivalent to root on the host. Defaults to false. Note that this field cannot be set when spec.os.name is windows. procMount string procMount denotes the type of proc mount to use for the containers. The default is DefaultProcMount which uses the container runtime defaults for readonly paths and masked paths. This requires the ProcMountType feature flag to be enabled. Note that this field cannot be set when spec.os.name is windows. Possible enum values: - "Default" uses the container runtime defaults for readonly and masked paths for /proc. Most container runtimes mask certain paths in /proc to avoid accidental security exposure of special devices or information. - "Unmasked" bypasses the default masking behavior of the container runtime and ensures the newly created /proc the container stays in tact with no modifications. readOnlyRootFilesystem boolean Whether this container has a read-only root filesystem. Default is false. Note that this field cannot be set when spec.os.name is windows. runAsGroup integer The GID to run the entrypoint of the container process. Uses runtime default if unset. May also be set in PodSecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. 
Note that this field cannot be set when spec.os.name is windows. runAsNonRoot boolean Indicates that the container must run as a non-root user. If true, the Kubelet will validate the image at runtime to ensure that it does not run as UID 0 (root) and fail to start the container if it does. If unset or false, no such validation will be performed. May also be set in PodSecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. runAsUser integer The UID to run the entrypoint of the container process. Defaults to user specified in image metadata if unspecified. May also be set in PodSecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. Note that this field cannot be set when spec.os.name is windows. seLinuxOptions object SELinuxOptions are the labels to be applied to the container seccompProfile object SeccompProfile defines a pod/container's seccomp profile settings. Only one profile source may be set. windowsOptions object WindowsSecurityContextOptions contain Windows-specific options and credentials. 3.1.81. .template.spec.containers[].securityContext.appArmorProfile Description AppArmorProfile defines a pod or container's AppArmor settings. Type object Required type Property Type Description localhostProfile string localhostProfile indicates a profile loaded on the node that should be used. The profile must be preconfigured on the node to work. Must match the loaded name of the profile. Must be set if and only if type is "Localhost". type string type indicates which kind of AppArmor profile will be applied. Valid options are: Localhost - a profile pre-loaded on the node. RuntimeDefault - the container runtime's default profile. Unconfined - no AppArmor enforcement. Possible enum values: - "Localhost" indicates that a profile pre-loaded on the node should be used. - "RuntimeDefault" indicates that the container runtime's default AppArmor profile should be used. - "Unconfined" indicates that no AppArmor profile should be enforced. 3.1.82. .template.spec.containers[].securityContext.capabilities Description Adds and removes POSIX capabilities from running containers. Type object Property Type Description add array (string) Added capabilities drop array (string) Removed capabilities 3.1.83. .template.spec.containers[].securityContext.seLinuxOptions Description SELinuxOptions are the labels to be applied to the container Type object Property Type Description level string Level is SELinux level label that applies to the container. role string Role is a SELinux role label that applies to the container. type string Type is a SELinux type label that applies to the container. user string User is a SELinux user label that applies to the container. 3.1.84. .template.spec.containers[].securityContext.seccompProfile Description SeccompProfile defines a pod/container's seccomp profile settings. Only one profile source may be set. Type object Required type Property Type Description localhostProfile string localhostProfile indicates a profile defined in a file on the node should be used. The profile must be preconfigured on the node to work. Must be a descending path, relative to the kubelet's configured seccomp profile location. Must be set if type is "Localhost". Must NOT be set for any other type. type string type indicates which kind of seccomp profile will be applied. Valid options are: Localhost - a profile defined in a file on the node should be used. 
RuntimeDefault - the container runtime default profile should be used. Unconfined - no profile should be applied. Possible enum values: - "Localhost" indicates a profile defined in a file on the node should be used. The file's location relative to <kubelet-root-dir>/seccomp. - "RuntimeDefault" represents the default container runtime seccomp profile. - "Unconfined" indicates no seccomp profile is applied (A.K.A. unconfined). 3.1.85. .template.spec.containers[].securityContext.windowsOptions Description WindowsSecurityContextOptions contain Windows-specific options and credentials. Type object Property Type Description gmsaCredentialSpec string GMSACredentialSpec is where the GMSA admission webhook ( https://github.com/kubernetes-sigs/windows-gmsa ) inlines the contents of the GMSA credential spec named by the GMSACredentialSpecName field. gmsaCredentialSpecName string GMSACredentialSpecName is the name of the GMSA credential spec to use. hostProcess boolean HostProcess determines if a container should be run as a 'Host Process' container. All of a Pod's containers must have the same effective HostProcess value (it is not allowed to have a mix of HostProcess containers and non-HostProcess containers). In addition, if HostProcess is true then HostNetwork must also be set to true. runAsUserName string The UserName in Windows to run the entrypoint of the container process. Defaults to the user specified in image metadata if unspecified. May also be set in PodSecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. 3.1.86. .template.spec.containers[].startupProbe Description Probe describes a health check to be performed against a container to determine whether it is alive or ready to receive traffic. Type object Property Type Description exec object ExecAction describes a "run in container" action. failureThreshold integer Minimum consecutive failures for the probe to be considered failed after having succeeded. Defaults to 3. Minimum value is 1. grpc object GRPC specifies an action involving a GRPC port. httpGet object HTTPGetAction describes an action based on HTTP Get requests. initialDelaySeconds integer Number of seconds after the container has started before liveness probes are initiated. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes periodSeconds integer How often (in seconds) to perform the probe. Default to 10 seconds. Minimum value is 1. successThreshold integer Minimum consecutive successes for the probe to be considered successful after having failed. Defaults to 1. Must be 1 for liveness and startup. Minimum value is 1. tcpSocket object TCPSocketAction describes an action based on opening a socket terminationGracePeriodSeconds integer Optional duration in seconds the pod needs to terminate gracefully upon probe failure. The grace period is the duration in seconds after the processes running in the pod are sent a termination signal and the time when the processes are forcibly halted with a kill signal. Set this value longer than the expected cleanup time for your process. If this value is nil, the pod's terminationGracePeriodSeconds will be used. Otherwise, this value overrides the value provided by the pod spec. Value must be non-negative integer. The value zero indicates stop immediately via the kill signal (no opportunity to shut down). This is a beta field and requires enabling ProbeTerminationGracePeriod feature gate. Minimum value is 1. 
spec.terminationGracePeriodSeconds is used if unset. timeoutSeconds integer Number of seconds after which the probe times out. Defaults to 1 second. Minimum value is 1. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes 3.1.87. .template.spec.containers[].startupProbe.exec Description ExecAction describes a "run in container" action. Type object Property Type Description command array (string) Command is the command line to execute inside the container, the working directory for the command is root ('/') in the container's filesystem. The command is simply exec'd, it is not run inside a shell, so traditional shell instructions ('|', etc) won't work. To use a shell, you need to explicitly call out to that shell. Exit status of 0 is treated as live/healthy and non-zero is unhealthy. 3.1.88. .template.spec.containers[].startupProbe.grpc Description GRPC specifies an action involving a GRPC port. Type object Required port Property Type Description port integer Port number of the gRPC service. Number must be in the range 1 to 65535. service string Service is the name of the service to place in the gRPC HealthCheckRequest (see https://github.com/grpc/grpc/blob/master/doc/health-checking.md ). If this is not specified, the default behavior is defined by gRPC. 3.1.89. .template.spec.containers[].startupProbe.httpGet Description HTTPGetAction describes an action based on HTTP Get requests. Type object Required port Property Type Description host string Host name to connect to, defaults to the pod IP. You probably want to set "Host" in httpHeaders instead. httpHeaders array Custom headers to set in the request. HTTP allows repeated headers. httpHeaders[] object HTTPHeader describes a custom header to be used in HTTP probes path string Path to access on the HTTP server. port IntOrString Name or number of the port to access on the container. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME. scheme string Scheme to use for connecting to the host. Defaults to HTTP. Possible enum values: - "HTTP" means that the scheme used will be http:// - "HTTPS" means that the scheme used will be https:// 3.1.90. .template.spec.containers[].startupProbe.httpGet.httpHeaders Description Custom headers to set in the request. HTTP allows repeated headers. Type array 3.1.91. .template.spec.containers[].startupProbe.httpGet.httpHeaders[] Description HTTPHeader describes a custom header to be used in HTTP probes Type object Required name value Property Type Description name string The header field name. This will be canonicalized upon output, so case-variant names will be understood as the same header. value string The header field value 3.1.92. .template.spec.containers[].startupProbe.tcpSocket Description TCPSocketAction describes an action based on opening a socket Type object Required port Property Type Description host string Optional: Host name to connect to, defaults to the pod IP. port IntOrString Number or name of the port to access on the container. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME. 3.1.93. .template.spec.containers[].volumeDevices Description volumeDevices is the list of block devices to be used by the container. Type array 3.1.94. .template.spec.containers[].volumeDevices[] Description volumeDevice describes a mapping of a raw block device within a container. 
Type object Required name devicePath Property Type Description devicePath string devicePath is the path inside of the container that the device will be mapped to. name string name must match the name of a persistentVolumeClaim in the pod 3.1.95. .template.spec.containers[].volumeMounts Description Pod volumes to mount into the container's filesystem. Cannot be updated. Type array 3.1.96. .template.spec.containers[].volumeMounts[] Description VolumeMount describes a mounting of a Volume within a container. Type object Required name mountPath Property Type Description mountPath string Path within the container at which the volume should be mounted. Must not contain ':'. mountPropagation string mountPropagation determines how mounts are propagated from the host to container and the other way around. When not set, MountPropagationNone is used. This field is beta in 1.10. When RecursiveReadOnly is set to IfPossible or to Enabled, MountPropagation must be None or unspecified (which defaults to None). Possible enum values: - "Bidirectional" means that the volume in a container will receive new mounts from the host or other containers, and its own mounts will be propagated from the container to the host or other containers. Note that this mode is recursively applied to all mounts in the volume ("rshared" in Linux terminology). - "HostToContainer" means that the volume in a container will receive new mounts from the host or other containers, but filesystems mounted inside the container won't be propagated to the host or other containers. Note that this mode is recursively applied to all mounts in the volume ("rslave" in Linux terminology). - "None" means that the volume in a container will not receive new mounts from the host or other containers, and filesystems mounted inside the container won't be propagated to the host or other containers. Note that this mode corresponds to "private" in Linux terminology. name string This must match the Name of a Volume. readOnly boolean Mounted read-only if true, read-write otherwise (false or unspecified). Defaults to false. recursiveReadOnly string RecursiveReadOnly specifies whether read-only mounts should be handled recursively. If ReadOnly is false, this field has no meaning and must be unspecified. If ReadOnly is true, and this field is set to Disabled, the mount is not made recursively read-only. If this field is set to IfPossible, the mount is made recursively read-only, if it is supported by the container runtime. If this field is set to Enabled, the mount is made recursively read-only if it is supported by the container runtime, otherwise the pod will not be started and an error will be generated to indicate the reason. If this field is set to IfPossible or Enabled, MountPropagation must be set to None (or be unspecified, which defaults to None). If this field is not specified, it is treated as an equivalent of Disabled. subPath string Path within the volume from which the container's volume should be mounted. Defaults to "" (volume's root). subPathExpr string Expanded path within the volume from which the container's volume should be mounted. Behaves similarly to SubPath but environment variable references USD(VAR_NAME) are expanded using the container's environment. Defaults to "" (volume's root). SubPathExpr and SubPath are mutually exclusive. 3.1.97. .template.spec.dnsConfig Description PodDNSConfig defines the DNS parameters of a pod in addition to those generated from DNSPolicy. 
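As an illustrative sketch only (the nameserver address and search domain are placeholders), dnsConfig is set alongside dnsPolicy in the pod template:

template:
  spec:
    dnsPolicy: "None"               # with "None", dnsConfig supplies all resolver settings
    dnsConfig:
      nameservers:
      - 192.0.2.10                  # placeholder DNS server address
      searches:
      - apps.example.internal       # placeholder search domain
      options:
      - name: ndots
        value: "2"

With any other dnsPolicy, the values given here are appended to or merged with the generated settings, as described in the property table below.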
Type object Property Type Description nameservers array (string) A list of DNS name server IP addresses. This will be appended to the base nameservers generated from DNSPolicy. Duplicated nameservers will be removed. options array A list of DNS resolver options. This will be merged with the base options generated from DNSPolicy. Duplicated entries will be removed. Resolution options given in Options will override those that appear in the base DNSPolicy. options[] object PodDNSConfigOption defines DNS resolver options of a pod. searches array (string) A list of DNS search domains for host-name lookup. This will be appended to the base search paths generated from DNSPolicy. Duplicated search paths will be removed. 3.1.98. .template.spec.dnsConfig.options Description A list of DNS resolver options. This will be merged with the base options generated from DNSPolicy. Duplicated entries will be removed. Resolution options given in Options will override those that appear in the base DNSPolicy. Type array 3.1.99. .template.spec.dnsConfig.options[] Description PodDNSConfigOption defines DNS resolver options of a pod. Type object Property Type Description name string Required. value string 3.1.100. .template.spec.ephemeralContainers Description List of ephemeral containers run in this pod. Ephemeral containers may be run in an existing pod to perform user-initiated actions such as debugging. This list cannot be specified when creating a pod, and it cannot be modified by updating the pod spec. In order to add an ephemeral container to an existing pod, use the pod's ephemeralcontainers subresource. Type array 3.1.101. .template.spec.ephemeralContainers[] Description An EphemeralContainer is a temporary container that you may add to an existing Pod for user-initiated activities such as debugging. Ephemeral containers have no resource or scheduling guarantees, and they will not be restarted when they exit or when a Pod is removed or restarted. The kubelet may evict a Pod if an ephemeral container causes the Pod to exceed its resource allocation. To add an ephemeral container, use the ephemeralcontainers subresource of an existing Pod. Ephemeral containers may not be removed or restarted. Type object Required name Property Type Description args array (string) Arguments to the entrypoint. The image's CMD is used if this is not provided. Variable references $(VAR_NAME) are expanded using the container's environment. If a variable cannot be resolved, the reference in the input string will be unchanged. Double $$ are reduced to a single $, which allows for escaping the $(VAR_NAME) syntax: i.e. "$$(VAR_NAME)" will produce the string literal "$(VAR_NAME)". Escaped references will never be expanded, regardless of whether the variable exists or not. Cannot be updated. More info: https://kubernetes.io/docs/tasks/inject-data-application/define-command-argument-container/#running-a-command-in-a-shell command array (string) Entrypoint array. Not executed within a shell. The image's ENTRYPOINT is used if this is not provided. Variable references $(VAR_NAME) are expanded using the container's environment. If a variable cannot be resolved, the reference in the input string will be unchanged. Double $$ are reduced to a single $, which allows for escaping the $(VAR_NAME) syntax: i.e. "$$(VAR_NAME)" will produce the string literal "$(VAR_NAME)". Escaped references will never be expanded, regardless of whether the variable exists or not. Cannot be updated.
More info: https://kubernetes.io/docs/tasks/inject-data-application/define-command-argument-container/#running-a-command-in-a-shell env array List of environment variables to set in the container. Cannot be updated. env[] object EnvVar represents an environment variable present in a Container. envFrom array List of sources to populate environment variables in the container. The keys defined within a source must be a C_IDENTIFIER. All invalid keys will be reported as an event when the container is starting. When a key exists in multiple sources, the value associated with the last source will take precedence. Values defined by an Env with a duplicate key will take precedence. Cannot be updated. envFrom[] object EnvFromSource represents the source of a set of ConfigMaps image string Container image name. More info: https://kubernetes.io/docs/concepts/containers/images imagePullPolicy string Image pull policy. One of Always, Never, IfNotPresent. Defaults to Always if :latest tag is specified, or IfNotPresent otherwise. Cannot be updated. More info: https://kubernetes.io/docs/concepts/containers/images#updating-images Possible enum values: - "Always" means that kubelet always attempts to pull the latest image. Container will fail If the pull fails. - "IfNotPresent" means that kubelet pulls if the image isn't present on disk. Container will fail if the image isn't present and the pull fails. - "Never" means that kubelet never pulls an image, but only uses a local image. Container will fail if the image isn't present lifecycle object Lifecycle describes actions that the management system should take in response to container lifecycle events. For the PostStart and PreStop lifecycle handlers, management of the container blocks until the action is complete, unless the container process fails, in which case the handler is aborted. livenessProbe object Probe describes a health check to be performed against a container to determine whether it is alive or ready to receive traffic. name string Name of the ephemeral container specified as a DNS_LABEL. This name must be unique among all containers, init containers and ephemeral containers. ports array Ports are not allowed for ephemeral containers. ports[] object ContainerPort represents a network port in a single container. readinessProbe object Probe describes a health check to be performed against a container to determine whether it is alive or ready to receive traffic. resizePolicy array Resources resize policy for the container. resizePolicy[] object ContainerResizePolicy represents resource resize policy for the container. resources object ResourceRequirements describes the compute resource requirements. restartPolicy string Restart policy for the container to manage the restart behavior of each container within a pod. This may only be set for init containers. You cannot set this field on ephemeral containers. securityContext object SecurityContext holds security configuration that will be applied to a container. Some fields are present in both SecurityContext and PodSecurityContext. When both are set, the values in SecurityContext take precedence. startupProbe object Probe describes a health check to be performed against a container to determine whether it is alive or ready to receive traffic. stdin boolean Whether this container should allocate a buffer for stdin in the container runtime. If this is not set, reads from stdin in the container will always result in EOF. Default is false. 
stdinOnce boolean Whether the container runtime should close the stdin channel after it has been opened by a single attach. When stdin is true the stdin stream will remain open across multiple attach sessions. If stdinOnce is set to true, stdin is opened on container start, is empty until the first client attaches to stdin, and then remains open and accepts data until the client disconnects, at which time stdin is closed and remains closed until the container is restarted. If this flag is false, a container processes that reads from stdin will never receive an EOF. Default is false targetContainerName string If set, the name of the container from PodSpec that this ephemeral container targets. The ephemeral container will be run in the namespaces (IPC, PID, etc) of this container. If not set then the ephemeral container uses the namespaces configured in the Pod spec. The container runtime must implement support for this feature. If the runtime does not support namespace targeting then the result of setting this field is undefined. terminationMessagePath string Optional: Path at which the file to which the container's termination message will be written is mounted into the container's filesystem. Message written is intended to be brief final status, such as an assertion failure message. Will be truncated by the node if greater than 4096 bytes. The total message length across all containers will be limited to 12kb. Defaults to /dev/termination-log. Cannot be updated. terminationMessagePolicy string Indicate how the termination message should be populated. File will use the contents of terminationMessagePath to populate the container status message on both success and failure. FallbackToLogsOnError will use the last chunk of container log output if the termination message file is empty and the container exited with an error. The log output is limited to 2048 bytes or 80 lines, whichever is smaller. Defaults to File. Cannot be updated. Possible enum values: - "FallbackToLogsOnError" will read the most recent contents of the container logs for the container status message when the container exits with an error and the terminationMessagePath has no contents. - "File" is the default behavior and will set the container status message to the contents of the container's terminationMessagePath when the container exits. tty boolean Whether this container should allocate a TTY for itself, also requires 'stdin' to be true. Default is false. volumeDevices array volumeDevices is the list of block devices to be used by the container. volumeDevices[] object volumeDevice describes a mapping of a raw block device within a container. volumeMounts array Pod volumes to mount into the container's filesystem. Subpath mounts are not allowed for ephemeral containers. Cannot be updated. volumeMounts[] object VolumeMount describes a mounting of a Volume within a container. workingDir string Container's working directory. If not specified, the container runtime's default will be used, which might be configured in the container image. Cannot be updated. 3.1.102. .template.spec.ephemeralContainers[].env Description List of environment variables to set in the container. Cannot be updated. Type array 3.1.103. .template.spec.ephemeralContainers[].env[] Description EnvVar represents an environment variable present in a Container. Type object Required name Property Type Description name string Name of the environment variable. Must be a C_IDENTIFIER. 
value string Variable references $(VAR_NAME) are expanded using the previously defined environment variables in the container and any service environment variables. If a variable cannot be resolved, the reference in the input string will be unchanged. Double $$ are reduced to a single $, which allows for escaping the $(VAR_NAME) syntax: i.e. "$$(VAR_NAME)" will produce the string literal "$(VAR_NAME)". Escaped references will never be expanded, regardless of whether the variable exists or not. Defaults to "". valueFrom object EnvVarSource represents a source for the value of an EnvVar. 3.1.104. .template.spec.ephemeralContainers[].env[].valueFrom Description EnvVarSource represents a source for the value of an EnvVar. Type object Property Type Description configMapKeyRef object Selects a key from a ConfigMap. fieldRef object ObjectFieldSelector selects an APIVersioned field of an object. resourceFieldRef object ResourceFieldSelector represents container resources (cpu, memory) and their output format secretKeyRef object SecretKeySelector selects a key of a Secret. 3.1.105. .template.spec.ephemeralContainers[].env[].valueFrom.configMapKeyRef Description Selects a key from a ConfigMap. Type object Required key Property Type Description key string The key to select. name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names optional boolean Specify whether the ConfigMap or its key must be defined 3.1.106. .template.spec.ephemeralContainers[].env[].valueFrom.fieldRef Description ObjectFieldSelector selects an APIVersioned field of an object. Type object Required fieldPath Property Type Description apiVersion string Version of the schema the FieldPath is written in terms of, defaults to "v1". fieldPath string Path of the field to select in the specified API version. 3.1.107. .template.spec.ephemeralContainers[].env[].valueFrom.resourceFieldRef Description ResourceFieldSelector represents container resources (cpu, memory) and their output format Type object Required resource Property Type Description containerName string Container name: required for volumes, optional for env vars divisor Quantity Specifies the output format of the exposed resources, defaults to "1" resource string Required: resource to select 3.1.108. .template.spec.ephemeralContainers[].env[].valueFrom.secretKeyRef Description SecretKeySelector selects a key of a Secret. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names optional boolean Specify whether the Secret or its key must be defined 3.1.109. .template.spec.ephemeralContainers[].envFrom Description List of sources to populate environment variables in the container. The keys defined within a source must be a C_IDENTIFIER. All invalid keys will be reported as an event when the container is starting. When a key exists in multiple sources, the value associated with the last source will take precedence. Values defined by an Env with a duplicate key will take precedence.
Cannot be updated. Type array 3.1.110. .template.spec.ephemeralContainers[].envFrom[] Description EnvFromSource represents the source of a set of ConfigMaps Type object Property Type Description configMapRef object ConfigMapEnvSource selects a ConfigMap to populate the environment variables with. The contents of the target ConfigMap's Data field will represent the key-value pairs as environment variables. prefix string An optional identifier to prepend to each key in the ConfigMap. Must be a C_IDENTIFIER. secretRef object SecretEnvSource selects a Secret to populate the environment variables with. The contents of the target Secret's Data field will represent the key-value pairs as environment variables. 3.1.111. .template.spec.ephemeralContainers[].envFrom[].configMapRef Description ConfigMapEnvSource selects a ConfigMap to populate the environment variables with. The contents of the target ConfigMap's Data field will represent the key-value pairs as environment variables. Type object Property Type Description name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names optional boolean Specify whether the ConfigMap must be defined 3.1.112. .template.spec.ephemeralContainers[].envFrom[].secretRef Description SecretEnvSource selects a Secret to populate the environment variables with. The contents of the target Secret's Data field will represent the key-value pairs as environment variables. Type object Property Type Description name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names optional boolean Specify whether the Secret must be defined 3.1.113. .template.spec.ephemeralContainers[].lifecycle Description Lifecycle describes actions that the management system should take in response to container lifecycle events. For the PostStart and PreStop lifecycle handlers, management of the container blocks until the action is complete, unless the container process fails, in which case the handler is aborted. Type object Property Type Description postStart object LifecycleHandler defines a specific action that should be taken in a lifecycle hook. One and only one of the fields, except TCPSocket must be specified. preStop object LifecycleHandler defines a specific action that should be taken in a lifecycle hook. One and only one of the fields, except TCPSocket must be specified. 3.1.114. .template.spec.ephemeralContainers[].lifecycle.postStart Description LifecycleHandler defines a specific action that should be taken in a lifecycle hook. One and only one of the fields, except TCPSocket must be specified. Type object Property Type Description exec object ExecAction describes a "run in container" action. httpGet object HTTPGetAction describes an action based on HTTP Get requests. sleep object SleepAction describes a "sleep" action. tcpSocket object TCPSocketAction describes an action based on opening a socket 3.1.115. .template.spec.ephemeralContainers[].lifecycle.postStart.exec Description ExecAction describes a "run in container" action. 
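The same LifecycleHandler and ExecAction shapes are also used under .template.spec.containers[].lifecycle. Purely as an illustration (the container name, image, and commands below are hypothetical), a postStart and preStop exec hook pair on an ordinary container looks like this:

template:
  spec:
    containers:
    - name: web                     # hypothetical container
      image: registry.example.com/web:latest
      lifecycle:
        postStart:
          exec:
            # exec'd directly, not via a shell; wrap in sh -c to use shell syntax
            command: ["/bin/sh", "-c", "echo started > /tmp/lifecycle"]
        preStop:
          exec:
            command: ["/bin/sh", "-c", "sleep 5"]   # allow in-flight requests to drain

If a postStart handler exits with a non-zero status, the kubelet kills the container, which is then subject to the pod's restart policy.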
Type object Property Type Description command array (string) Command is the command line to execute inside the container, the working directory for the command is root ('/') in the container's filesystem. The command is simply exec'd, it is not run inside a shell, so traditional shell instructions ('|', etc) won't work. To use a shell, you need to explicitly call out to that shell. Exit status of 0 is treated as live/healthy and non-zero is unhealthy. 3.1.116. .template.spec.ephemeralContainers[].lifecycle.postStart.httpGet Description HTTPGetAction describes an action based on HTTP Get requests. Type object Required port Property Type Description host string Host name to connect to, defaults to the pod IP. You probably want to set "Host" in httpHeaders instead. httpHeaders array Custom headers to set in the request. HTTP allows repeated headers. httpHeaders[] object HTTPHeader describes a custom header to be used in HTTP probes path string Path to access on the HTTP server. port IntOrString Name or number of the port to access on the container. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME. scheme string Scheme to use for connecting to the host. Defaults to HTTP. Possible enum values: - "HTTP" means that the scheme used will be http:// - "HTTPS" means that the scheme used will be https:// 3.1.117. .template.spec.ephemeralContainers[].lifecycle.postStart.httpGet.httpHeaders Description Custom headers to set in the request. HTTP allows repeated headers. Type array 3.1.118. .template.spec.ephemeralContainers[].lifecycle.postStart.httpGet.httpHeaders[] Description HTTPHeader describes a custom header to be used in HTTP probes Type object Required name value Property Type Description name string The header field name. This will be canonicalized upon output, so case-variant names will be understood as the same header. value string The header field value 3.1.119. .template.spec.ephemeralContainers[].lifecycle.postStart.sleep Description SleepAction describes a "sleep" action. Type object Required seconds Property Type Description seconds integer Seconds is the number of seconds to sleep. 3.1.120. .template.spec.ephemeralContainers[].lifecycle.postStart.tcpSocket Description TCPSocketAction describes an action based on opening a socket Type object Required port Property Type Description host string Optional: Host name to connect to, defaults to the pod IP. port IntOrString Number or name of the port to access on the container. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME. 3.1.121. .template.spec.ephemeralContainers[].lifecycle.preStop Description LifecycleHandler defines a specific action that should be taken in a lifecycle hook. One and only one of the fields, except TCPSocket must be specified. Type object Property Type Description exec object ExecAction describes a "run in container" action. httpGet object HTTPGetAction describes an action based on HTTP Get requests. sleep object SleepAction describes a "sleep" action. tcpSocket object TCPSocketAction describes an action based on opening a socket 3.1.122. .template.spec.ephemeralContainers[].lifecycle.preStop.exec Description ExecAction describes a "run in container" action. Type object Property Type Description command array (string) Command is the command line to execute inside the container, the working directory for the command is root ('/') in the container's filesystem. The command is simply exec'd, it is not run inside a shell, so traditional shell instructions ('|', etc) won't work. 
To use a shell, you need to explicitly call out to that shell. Exit status of 0 is treated as live/healthy and non-zero is unhealthy. 3.1.123. .template.spec.ephemeralContainers[].lifecycle.preStop.httpGet Description HTTPGetAction describes an action based on HTTP Get requests. Type object Required port Property Type Description host string Host name to connect to, defaults to the pod IP. You probably want to set "Host" in httpHeaders instead. httpHeaders array Custom headers to set in the request. HTTP allows repeated headers. httpHeaders[] object HTTPHeader describes a custom header to be used in HTTP probes path string Path to access on the HTTP server. port IntOrString Name or number of the port to access on the container. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME. scheme string Scheme to use for connecting to the host. Defaults to HTTP. Possible enum values: - "HTTP" means that the scheme used will be http:// - "HTTPS" means that the scheme used will be https:// 3.1.124. .template.spec.ephemeralContainers[].lifecycle.preStop.httpGet.httpHeaders Description Custom headers to set in the request. HTTP allows repeated headers. Type array 3.1.125. .template.spec.ephemeralContainers[].lifecycle.preStop.httpGet.httpHeaders[] Description HTTPHeader describes a custom header to be used in HTTP probes Type object Required name value Property Type Description name string The header field name. This will be canonicalized upon output, so case-variant names will be understood as the same header. value string The header field value 3.1.126. .template.spec.ephemeralContainers[].lifecycle.preStop.sleep Description SleepAction describes a "sleep" action. Type object Required seconds Property Type Description seconds integer Seconds is the number of seconds to sleep. 3.1.127. .template.spec.ephemeralContainers[].lifecycle.preStop.tcpSocket Description TCPSocketAction describes an action based on opening a socket Type object Required port Property Type Description host string Optional: Host name to connect to, defaults to the pod IP. port IntOrString Number or name of the port to access on the container. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME. 3.1.128. .template.spec.ephemeralContainers[].livenessProbe Description Probe describes a health check to be performed against a container to determine whether it is alive or ready to receive traffic. Type object Property Type Description exec object ExecAction describes a "run in container" action. failureThreshold integer Minimum consecutive failures for the probe to be considered failed after having succeeded. Defaults to 3. Minimum value is 1. grpc object GRPC specifies an action involving a GRPC port. httpGet object HTTPGetAction describes an action based on HTTP Get requests. initialDelaySeconds integer Number of seconds after the container has started before liveness probes are initiated. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes periodSeconds integer How often (in seconds) to perform the probe. Default to 10 seconds. Minimum value is 1. successThreshold integer Minimum consecutive successes for the probe to be considered successful after having failed. Defaults to 1. Must be 1 for liveness and startup. Minimum value is 1. tcpSocket object TCPSocketAction describes an action based on opening a socket terminationGracePeriodSeconds integer Optional duration in seconds the pod needs to terminate gracefully upon probe failure. 
The grace period is the duration in seconds after the processes running in the pod are sent a termination signal and the time when the processes are forcibly halted with a kill signal. Set this value longer than the expected cleanup time for your process. If this value is nil, the pod's terminationGracePeriodSeconds will be used. Otherwise, this value overrides the value provided by the pod spec. Value must be non-negative integer. The value zero indicates stop immediately via the kill signal (no opportunity to shut down). This is a beta field and requires enabling ProbeTerminationGracePeriod feature gate. Minimum value is 1. spec.terminationGracePeriodSeconds is used if unset. timeoutSeconds integer Number of seconds after which the probe times out. Defaults to 1 second. Minimum value is 1. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes 3.1.129. .template.spec.ephemeralContainers[].livenessProbe.exec Description ExecAction describes a "run in container" action. Type object Property Type Description command array (string) Command is the command line to execute inside the container, the working directory for the command is root ('/') in the container's filesystem. The command is simply exec'd, it is not run inside a shell, so traditional shell instructions ('|', etc) won't work. To use a shell, you need to explicitly call out to that shell. Exit status of 0 is treated as live/healthy and non-zero is unhealthy. 3.1.130. .template.spec.ephemeralContainers[].livenessProbe.grpc Description GRPC specifies an action involving a GRPC port. Type object Required port Property Type Description port integer Port number of the gRPC service. Number must be in the range 1 to 65535. service string Service is the name of the service to place in the gRPC HealthCheckRequest (see https://github.com/grpc/grpc/blob/master/doc/health-checking.md ). If this is not specified, the default behavior is defined by gRPC. 3.1.131. .template.spec.ephemeralContainers[].livenessProbe.httpGet Description HTTPGetAction describes an action based on HTTP Get requests. Type object Required port Property Type Description host string Host name to connect to, defaults to the pod IP. You probably want to set "Host" in httpHeaders instead. httpHeaders array Custom headers to set in the request. HTTP allows repeated headers. httpHeaders[] object HTTPHeader describes a custom header to be used in HTTP probes path string Path to access on the HTTP server. port IntOrString Name or number of the port to access on the container. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME. scheme string Scheme to use for connecting to the host. Defaults to HTTP. Possible enum values: - "HTTP" means that the scheme used will be http:// - "HTTPS" means that the scheme used will be https:// 3.1.132. .template.spec.ephemeralContainers[].livenessProbe.httpGet.httpHeaders Description Custom headers to set in the request. HTTP allows repeated headers. Type array 3.1.133. .template.spec.ephemeralContainers[].livenessProbe.httpGet.httpHeaders[] Description HTTPHeader describes a custom header to be used in HTTP probes Type object Required name value Property Type Description name string The header field name. This will be canonicalized upon output, so case-variant names will be understood as the same header. value string The header field value 3.1.134. 
.template.spec.ephemeralContainers[].livenessProbe.tcpSocket Description TCPSocketAction describes an action based on opening a socket Type object Required port Property Type Description host string Optional: Host name to connect to, defaults to the pod IP. port IntOrString Number or name of the port to access on the container. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME. 3.1.135. .template.spec.ephemeralContainers[].ports Description Ports are not allowed for ephemeral containers. Type array 3.1.136. .template.spec.ephemeralContainers[].ports[] Description ContainerPort represents a network port in a single container. Type object Required containerPort Property Type Description containerPort integer Number of port to expose on the pod's IP address. This must be a valid port number, 0 < x < 65536. hostIP string What host IP to bind the external port to. hostPort integer Number of port to expose on the host. If specified, this must be a valid port number, 0 < x < 65536. If HostNetwork is specified, this must match ContainerPort. Most containers do not need this. name string If specified, this must be an IANA_SVC_NAME and unique within the pod. Each named port in a pod must have a unique name. Name for the port that can be referred to by services. protocol string Protocol for port. Must be UDP, TCP, or SCTP. Defaults to "TCP". Possible enum values: - "SCTP" is the SCTP protocol. - "TCP" is the TCP protocol. - "UDP" is the UDP protocol. 3.1.137. .template.spec.ephemeralContainers[].readinessProbe Description Probe describes a health check to be performed against a container to determine whether it is alive or ready to receive traffic. Type object Property Type Description exec object ExecAction describes a "run in container" action. failureThreshold integer Minimum consecutive failures for the probe to be considered failed after having succeeded. Defaults to 3. Minimum value is 1. grpc object GRPC specifies an action involving a GRPC port. httpGet object HTTPGetAction describes an action based on HTTP Get requests. initialDelaySeconds integer Number of seconds after the container has started before liveness probes are initiated. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes periodSeconds integer How often (in seconds) to perform the probe. Default to 10 seconds. Minimum value is 1. successThreshold integer Minimum consecutive successes for the probe to be considered successful after having failed. Defaults to 1. Must be 1 for liveness and startup. Minimum value is 1. tcpSocket object TCPSocketAction describes an action based on opening a socket terminationGracePeriodSeconds integer Optional duration in seconds the pod needs to terminate gracefully upon probe failure. The grace period is the duration in seconds after the processes running in the pod are sent a termination signal and the time when the processes are forcibly halted with a kill signal. Set this value longer than the expected cleanup time for your process. If this value is nil, the pod's terminationGracePeriodSeconds will be used. Otherwise, this value overrides the value provided by the pod spec. Value must be non-negative integer. The value zero indicates stop immediately via the kill signal (no opportunity to shut down). This is a beta field and requires enabling ProbeTerminationGracePeriod feature gate. Minimum value is 1. spec.terminationGracePeriodSeconds is used if unset. timeoutSeconds integer Number of seconds after which the probe times out. 
Defaults to 1 second. Minimum value is 1. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes 3.1.138. .template.spec.ephemeralContainers[].readinessProbe.exec Description ExecAction describes a "run in container" action. Type object Property Type Description command array (string) Command is the command line to execute inside the container, the working directory for the command is root ('/') in the container's filesystem. The command is simply exec'd, it is not run inside a shell, so traditional shell instructions ('|', etc) won't work. To use a shell, you need to explicitly call out to that shell. Exit status of 0 is treated as live/healthy and non-zero is unhealthy. 3.1.139. .template.spec.ephemeralContainers[].readinessProbe.grpc Description GRPC specifies an action involving a GRPC port. Type object Required port Property Type Description port integer Port number of the gRPC service. Number must be in the range 1 to 65535. service string Service is the name of the service to place in the gRPC HealthCheckRequest (see https://github.com/grpc/grpc/blob/master/doc/health-checking.md ). If this is not specified, the default behavior is defined by gRPC. 3.1.140. .template.spec.ephemeralContainers[].readinessProbe.httpGet Description HTTPGetAction describes an action based on HTTP Get requests. Type object Required port Property Type Description host string Host name to connect to, defaults to the pod IP. You probably want to set "Host" in httpHeaders instead. httpHeaders array Custom headers to set in the request. HTTP allows repeated headers. httpHeaders[] object HTTPHeader describes a custom header to be used in HTTP probes path string Path to access on the HTTP server. port IntOrString Name or number of the port to access on the container. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME. scheme string Scheme to use for connecting to the host. Defaults to HTTP. Possible enum values: - "HTTP" means that the scheme used will be http:// - "HTTPS" means that the scheme used will be https:// 3.1.141. .template.spec.ephemeralContainers[].readinessProbe.httpGet.httpHeaders Description Custom headers to set in the request. HTTP allows repeated headers. Type array 3.1.142. .template.spec.ephemeralContainers[].readinessProbe.httpGet.httpHeaders[] Description HTTPHeader describes a custom header to be used in HTTP probes Type object Required name value Property Type Description name string The header field name. This will be canonicalized upon output, so case-variant names will be understood as the same header. value string The header field value 3.1.143. .template.spec.ephemeralContainers[].readinessProbe.tcpSocket Description TCPSocketAction describes an action based on opening a socket Type object Required port Property Type Description host string Optional: Host name to connect to, defaults to the pod IP. port IntOrString Number or name of the port to access on the container. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME. 3.1.144. .template.spec.ephemeralContainers[].resizePolicy Description Resources resize policy for the container. Type array 3.1.145. .template.spec.ephemeralContainers[].resizePolicy[] Description ContainerResizePolicy represents resource resize policy for the container. Type object Required resourceName restartPolicy Property Type Description resourceName string Name of the resource to which this resource resize policy applies. Supported values: cpu, memory. 
restartPolicy string Restart policy to apply when specified resource is resized. If not specified, it defaults to NotRequired. 3.1.146. .template.spec.ephemeralContainers[].resources Description ResourceRequirements describes the compute resource requirements. Type object Property Type Description claims array Claims lists the names of resources, defined in spec.resourceClaims, that are used by this container. This is an alpha field and requires enabling the DynamicResourceAllocation feature gate. This field is immutable. It can only be set for containers. claims[] object ResourceClaim references one entry in PodSpec.ResourceClaims. limits object (Quantity) Limits describes the maximum amount of compute resources allowed. More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/ requests object (Quantity) Requests describes the minimum amount of compute resources required. If Requests is omitted for a container, it defaults to Limits if that is explicitly specified, otherwise to an implementation-defined value. Requests cannot exceed Limits. More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/ 3.1.147. .template.spec.ephemeralContainers[].resources.claims Description Claims lists the names of resources, defined in spec.resourceClaims, that are used by this container. This is an alpha field and requires enabling the DynamicResourceAllocation feature gate. This field is immutable. It can only be set for containers. Type array 3.1.148. .template.spec.ephemeralContainers[].resources.claims[] Description ResourceClaim references one entry in PodSpec.ResourceClaims. Type object Required name Property Type Description name string Name must match the name of one entry in pod.spec.resourceClaims of the Pod where this field is used. It makes that resource available inside a container. 3.1.149. .template.spec.ephemeralContainers[].securityContext Description SecurityContext holds security configuration that will be applied to a container. Some fields are present in both SecurityContext and PodSecurityContext. When both are set, the values in SecurityContext take precedence. Type object Property Type Description allowPrivilegeEscalation boolean AllowPrivilegeEscalation controls whether a process can gain more privileges than its parent process. This bool directly controls if the no_new_privs flag will be set on the container process. AllowPrivilegeEscalation is true always when the container is: 1) run as Privileged 2) has CAP_SYS_ADMIN Note that this field cannot be set when spec.os.name is windows. appArmorProfile object AppArmorProfile defines a pod or container's AppArmor settings. capabilities object Adds and removes POSIX capabilities from running containers. privileged boolean Run container in privileged mode. Processes in privileged containers are essentially equivalent to root on the host. Defaults to false. Note that this field cannot be set when spec.os.name is windows. procMount string procMount denotes the type of proc mount to use for the containers. The default is DefaultProcMount which uses the container runtime defaults for readonly paths and masked paths. This requires the ProcMountType feature flag to be enabled. Note that this field cannot be set when spec.os.name is windows. Possible enum values: - "Default" uses the container runtime defaults for readonly and masked paths for /proc. Most container runtimes mask certain paths in /proc to avoid accidental security exposure of special devices or information. 
- "Unmasked" bypasses the default masking behavior of the container runtime and ensures the newly created /proc the container stays in tact with no modifications. readOnlyRootFilesystem boolean Whether this container has a read-only root filesystem. Default is false. Note that this field cannot be set when spec.os.name is windows. runAsGroup integer The GID to run the entrypoint of the container process. Uses runtime default if unset. May also be set in PodSecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. Note that this field cannot be set when spec.os.name is windows. runAsNonRoot boolean Indicates that the container must run as a non-root user. If true, the Kubelet will validate the image at runtime to ensure that it does not run as UID 0 (root) and fail to start the container if it does. If unset or false, no such validation will be performed. May also be set in PodSecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. runAsUser integer The UID to run the entrypoint of the container process. Defaults to user specified in image metadata if unspecified. May also be set in PodSecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. Note that this field cannot be set when spec.os.name is windows. seLinuxOptions object SELinuxOptions are the labels to be applied to the container seccompProfile object SeccompProfile defines a pod/container's seccomp profile settings. Only one profile source may be set. windowsOptions object WindowsSecurityContextOptions contain Windows-specific options and credentials. 3.1.150. .template.spec.ephemeralContainers[].securityContext.appArmorProfile Description AppArmorProfile defines a pod or container's AppArmor settings. Type object Required type Property Type Description localhostProfile string localhostProfile indicates a profile loaded on the node that should be used. The profile must be preconfigured on the node to work. Must match the loaded name of the profile. Must be set if and only if type is "Localhost". type string type indicates which kind of AppArmor profile will be applied. Valid options are: Localhost - a profile pre-loaded on the node. RuntimeDefault - the container runtime's default profile. Unconfined - no AppArmor enforcement. Possible enum values: - "Localhost" indicates that a profile pre-loaded on the node should be used. - "RuntimeDefault" indicates that the container runtime's default AppArmor profile should be used. - "Unconfined" indicates that no AppArmor profile should be enforced. 3.1.151. .template.spec.ephemeralContainers[].securityContext.capabilities Description Adds and removes POSIX capabilities from running containers. Type object Property Type Description add array (string) Added capabilities drop array (string) Removed capabilities 3.1.152. .template.spec.ephemeralContainers[].securityContext.seLinuxOptions Description SELinuxOptions are the labels to be applied to the container Type object Property Type Description level string Level is SELinux level label that applies to the container. role string Role is a SELinux role label that applies to the container. type string Type is a SELinux type label that applies to the container. user string User is a SELinux user label that applies to the container. 3.1.153. 
.template.spec.ephemeralContainers[].securityContext.seccompProfile Description SeccompProfile defines a pod/container's seccomp profile settings. Only one profile source may be set. Type object Required type Property Type Description localhostProfile string localhostProfile indicates a profile defined in a file on the node should be used. The profile must be preconfigured on the node to work. Must be a descending path, relative to the kubelet's configured seccomp profile location. Must be set if type is "Localhost". Must NOT be set for any other type. type string type indicates which kind of seccomp profile will be applied. Valid options are: Localhost - a profile defined in a file on the node should be used. RuntimeDefault - the container runtime default profile should be used. Unconfined - no profile should be applied. Possible enum values: - "Localhost" indicates a profile defined in a file on the node should be used. The file's location relative to <kubelet-root-dir>/seccomp. - "RuntimeDefault" represents the default container runtime seccomp profile. - "Unconfined" indicates no seccomp profile is applied (A.K.A. unconfined). 3.1.154. .template.spec.ephemeralContainers[].securityContext.windowsOptions Description WindowsSecurityContextOptions contain Windows-specific options and credentials. Type object Property Type Description gmsaCredentialSpec string GMSACredentialSpec is where the GMSA admission webhook ( https://github.com/kubernetes-sigs/windows-gmsa ) inlines the contents of the GMSA credential spec named by the GMSACredentialSpecName field. gmsaCredentialSpecName string GMSACredentialSpecName is the name of the GMSA credential spec to use. hostProcess boolean HostProcess determines if a container should be run as a 'Host Process' container. All of a Pod's containers must have the same effective HostProcess value (it is not allowed to have a mix of HostProcess containers and non-HostProcess containers). In addition, if HostProcess is true then HostNetwork must also be set to true. runAsUserName string The UserName in Windows to run the entrypoint of the container process. Defaults to the user specified in image metadata if unspecified. May also be set in PodSecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. 3.1.155. .template.spec.ephemeralContainers[].startupProbe Description Probe describes a health check to be performed against a container to determine whether it is alive or ready to receive traffic. Type object Property Type Description exec object ExecAction describes a "run in container" action. failureThreshold integer Minimum consecutive failures for the probe to be considered failed after having succeeded. Defaults to 3. Minimum value is 1. grpc object GRPC specifies an action involving a GRPC port. httpGet object HTTPGetAction describes an action based on HTTP Get requests. initialDelaySeconds integer Number of seconds after the container has started before liveness probes are initiated. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes periodSeconds integer How often (in seconds) to perform the probe. Default to 10 seconds. Minimum value is 1. successThreshold integer Minimum consecutive successes for the probe to be considered successful after having failed. Defaults to 1. Must be 1 for liveness and startup. Minimum value is 1. 
tcpSocket object TCPSocketAction describes an action based on opening a socket terminationGracePeriodSeconds integer Optional duration in seconds the pod needs to terminate gracefully upon probe failure. The grace period is the duration in seconds after the processes running in the pod are sent a termination signal and the time when the processes are forcibly halted with a kill signal. Set this value longer than the expected cleanup time for your process. If this value is nil, the pod's terminationGracePeriodSeconds will be used. Otherwise, this value overrides the value provided by the pod spec. Value must be non-negative integer. The value zero indicates stop immediately via the kill signal (no opportunity to shut down). This is a beta field and requires enabling ProbeTerminationGracePeriod feature gate. Minimum value is 1. spec.terminationGracePeriodSeconds is used if unset. timeoutSeconds integer Number of seconds after which the probe times out. Defaults to 1 second. Minimum value is 1. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes 3.1.156. .template.spec.ephemeralContainers[].startupProbe.exec Description ExecAction describes a "run in container" action. Type object Property Type Description command array (string) Command is the command line to execute inside the container, the working directory for the command is root ('/') in the container's filesystem. The command is simply exec'd, it is not run inside a shell, so traditional shell instructions ('|', etc) won't work. To use a shell, you need to explicitly call out to that shell. Exit status of 0 is treated as live/healthy and non-zero is unhealthy. 3.1.157. .template.spec.ephemeralContainers[].startupProbe.grpc Description GRPC specifies an action involving a GRPC port. Type object Required port Property Type Description port integer Port number of the gRPC service. Number must be in the range 1 to 65535. service string Service is the name of the service to place in the gRPC HealthCheckRequest (see https://github.com/grpc/grpc/blob/master/doc/health-checking.md ). If this is not specified, the default behavior is defined by gRPC. 3.1.158. .template.spec.ephemeralContainers[].startupProbe.httpGet Description HTTPGetAction describes an action based on HTTP Get requests. Type object Required port Property Type Description host string Host name to connect to, defaults to the pod IP. You probably want to set "Host" in httpHeaders instead. httpHeaders array Custom headers to set in the request. HTTP allows repeated headers. httpHeaders[] object HTTPHeader describes a custom header to be used in HTTP probes path string Path to access on the HTTP server. port IntOrString Name or number of the port to access on the container. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME. scheme string Scheme to use for connecting to the host. Defaults to HTTP. Possible enum values: - "HTTP" means that the scheme used will be http:// - "HTTPS" means that the scheme used will be https:// 3.1.159. .template.spec.ephemeralContainers[].startupProbe.httpGet.httpHeaders Description Custom headers to set in the request. HTTP allows repeated headers. Type array 3.1.160. .template.spec.ephemeralContainers[].startupProbe.httpGet.httpHeaders[] Description HTTPHeader describes a custom header to be used in HTTP probes Type object Required name value Property Type Description name string The header field name. 
This will be canonicalized upon output, so case-variant names will be understood as the same header. value string The header field value 3.1.161. .template.spec.ephemeralContainers[].startupProbe.tcpSocket Description TCPSocketAction describes an action based on opening a socket Type object Required port Property Type Description host string Optional: Host name to connect to, defaults to the pod IP. port IntOrString Number or name of the port to access on the container. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME. 3.1.162. .template.spec.ephemeralContainers[].volumeDevices Description volumeDevices is the list of block devices to be used by the container. Type array 3.1.163. .template.spec.ephemeralContainers[].volumeDevices[] Description volumeDevice describes a mapping of a raw block device within a container. Type object Required name devicePath Property Type Description devicePath string devicePath is the path inside of the container that the device will be mapped to. name string name must match the name of a persistentVolumeClaim in the pod 3.1.164. .template.spec.ephemeralContainers[].volumeMounts Description Pod volumes to mount into the container's filesystem. Subpath mounts are not allowed for ephemeral containers. Cannot be updated. Type array 3.1.165. .template.spec.ephemeralContainers[].volumeMounts[] Description VolumeMount describes a mounting of a Volume within a container. Type object Required name mountPath Property Type Description mountPath string Path within the container at which the volume should be mounted. Must not contain ':'. mountPropagation string mountPropagation determines how mounts are propagated from the host to container and the other way around. When not set, MountPropagationNone is used. This field is beta in 1.10. When RecursiveReadOnly is set to IfPossible or to Enabled, MountPropagation must be None or unspecified (which defaults to None). Possible enum values: - "Bidirectional" means that the volume in a container will receive new mounts from the host or other containers, and its own mounts will be propagated from the container to the host or other containers. Note that this mode is recursively applied to all mounts in the volume ("rshared" in Linux terminology). - "HostToContainer" means that the volume in a container will receive new mounts from the host or other containers, but filesystems mounted inside the container won't be propagated to the host or other containers. Note that this mode is recursively applied to all mounts in the volume ("rslave" in Linux terminology). - "None" means that the volume in a container will not receive new mounts from the host or other containers, and filesystems mounted inside the container won't be propagated to the host or other containers. Note that this mode corresponds to "private" in Linux terminology. name string This must match the Name of a Volume. readOnly boolean Mounted read-only if true, read-write otherwise (false or unspecified). Defaults to false. recursiveReadOnly string RecursiveReadOnly specifies whether read-only mounts should be handled recursively. If ReadOnly is false, this field has no meaning and must be unspecified. If ReadOnly is true, and this field is set to Disabled, the mount is not made recursively read-only. If this field is set to IfPossible, the mount is made recursively read-only, if it is supported by the container runtime. 
If this field is set to Enabled, the mount is made recursively read-only if it is supported by the container runtime, otherwise the pod will not be started and an error will be generated to indicate the reason. If this field is set to IfPossible or Enabled, MountPropagation must be set to None (or be unspecified, which defaults to None). If this field is not specified, it is treated as an equivalent of Disabled. subPath string Path within the volume from which the container's volume should be mounted. Defaults to "" (volume's root). subPathExpr string Expanded path within the volume from which the container's volume should be mounted. Behaves similarly to SubPath but environment variable references USD(VAR_NAME) are expanded using the container's environment. Defaults to "" (volume's root). SubPathExpr and SubPath are mutually exclusive. 3.1.166. .template.spec.hostAliases Description HostAliases is an optional list of hosts and IPs that will be injected into the pod's hosts file if specified. Type array 3.1.167. .template.spec.hostAliases[] Description HostAlias holds the mapping between IP and hostnames that will be injected as an entry in the pod's hosts file. Type object Required ip Property Type Description hostnames array (string) Hostnames for the above IP address. ip string IP address of the host file entry. 3.1.168. .template.spec.imagePullSecrets Description ImagePullSecrets is an optional list of references to secrets in the same namespace to use for pulling any of the images used by this PodSpec. If specified, these secrets will be passed to individual puller implementations for them to use. More info: https://kubernetes.io/docs/concepts/containers/images#specifying-imagepullsecrets-on-a-pod Type array 3.1.169. .template.spec.imagePullSecrets[] Description LocalObjectReference contains enough information to let you locate the referenced object inside the same namespace. Type object Property Type Description name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names 3.1.170. .template.spec.initContainers Description List of initialization containers belonging to the pod. Init containers are executed in order prior to containers being started. If any init container fails, the pod is considered to have failed and is handled according to its restartPolicy. The name for an init container or normal container must be unique among all containers. Init containers may not have Lifecycle actions, Readiness probes, Liveness probes, or Startup probes. The resourceRequirements of an init container are taken into account during scheduling by finding the highest request/limit for each resource type, and then using the max of that value or the sum of the normal containers. Limits are applied to init containers in a similar fashion. Init containers cannot currently be added or removed. Cannot be updated. More info: https://kubernetes.io/docs/concepts/workloads/pods/init-containers/ Type array 3.1.171. .template.spec.initContainers[] Description A single application container that you want to run within a pod. Type object Required name Property Type Description args array (string) Arguments to the entrypoint. The container image's CMD is used if this is not provided. Variable references USD(VAR_NAME) are expanded using the container's environment.
If a variable cannot be resolved, the reference in the input string will be unchanged. Double USD are reduced to a single USD, which allows for escaping the USD(VAR_NAME) syntax: i.e. "USDUSD(VAR_NAME)" will produce the string literal "USD(VAR_NAME)". Escaped references will never be expanded, regardless of whether the variable exists or not. Cannot be updated. More info: https://kubernetes.io/docs/tasks/inject-data-application/define-command-argument-container/#running-a-command-in-a-shell command array (string) Entrypoint array. Not executed within a shell. The container image's ENTRYPOINT is used if this is not provided. Variable references USD(VAR_NAME) are expanded using the container's environment. If a variable cannot be resolved, the reference in the input string will be unchanged. Double USD are reduced to a single USD, which allows for escaping the USD(VAR_NAME) syntax: i.e. "USDUSD(VAR_NAME)" will produce the string literal "USD(VAR_NAME)". Escaped references will never be expanded, regardless of whether the variable exists or not. Cannot be updated. More info: https://kubernetes.io/docs/tasks/inject-data-application/define-command-argument-container/#running-a-command-in-a-shell env array List of environment variables to set in the container. Cannot be updated. env[] object EnvVar represents an environment variable present in a Container. envFrom array List of sources to populate environment variables in the container. The keys defined within a source must be a C_IDENTIFIER. All invalid keys will be reported as an event when the container is starting. When a key exists in multiple sources, the value associated with the last source will take precedence. Values defined by an Env with a duplicate key will take precedence. Cannot be updated. envFrom[] object EnvFromSource represents the source of a set of ConfigMaps image string Container image name. More info: https://kubernetes.io/docs/concepts/containers/images This field is optional to allow higher level config management to default or override container images in workload controllers like Deployments and StatefulSets. imagePullPolicy string Image pull policy. One of Always, Never, IfNotPresent. Defaults to Always if :latest tag is specified, or IfNotPresent otherwise. Cannot be updated. More info: https://kubernetes.io/docs/concepts/containers/images#updating-images Possible enum values: - "Always" means that kubelet always attempts to pull the latest image. Container will fail if the pull fails. - "IfNotPresent" means that kubelet pulls if the image isn't present on disk. Container will fail if the image isn't present and the pull fails. - "Never" means that kubelet never pulls an image, but only uses a local image. Container will fail if the image isn't present lifecycle object Lifecycle describes actions that the management system should take in response to container lifecycle events. For the PostStart and PreStop lifecycle handlers, management of the container blocks until the action is complete, unless the container process fails, in which case the handler is aborted. livenessProbe object Probe describes a health check to be performed against a container to determine whether it is alive or ready to receive traffic. name string Name of the container specified as a DNS_LABEL. Each container in a pod must have a unique name (DNS_LABEL). Cannot be updated. ports array List of ports to expose from the container. Not specifying a port here DOES NOT prevent that port from being exposed.
Any port which is listening on the default "0.0.0.0" address inside a container will be accessible from the network. Modifying this array with strategic merge patch may corrupt the data. For more information see https://github.com/kubernetes/kubernetes/issues/108255 . Cannot be updated. ports[] object ContainerPort represents a network port in a single container. readinessProbe object Probe describes a health check to be performed against a container to determine whether it is alive or ready to receive traffic. resizePolicy array Resources resize policy for the container. resizePolicy[] object ContainerResizePolicy represents resource resize policy for the container. resources object ResourceRequirements describes the compute resource requirements. restartPolicy string RestartPolicy defines the restart behavior of individual containers in a pod. This field may only be set for init containers, and the only allowed value is "Always". For non-init containers or when this field is not specified, the restart behavior is defined by the Pod's restart policy and the container type. Setting the RestartPolicy as "Always" for the init container will have the following effect: this init container will be continually restarted on exit until all regular containers have terminated. Once all regular containers have completed, all init containers with restartPolicy "Always" will be shut down. This lifecycle differs from normal init containers and is often referred to as a "sidecar" container. Although this init container still starts in the init container sequence, it does not wait for the container to complete before proceeding to the next init container. Instead, the next init container starts immediately after this init container is started, or after any startupProbe has successfully completed. securityContext object SecurityContext holds security configuration that will be applied to a container. Some fields are present in both SecurityContext and PodSecurityContext. When both are set, the values in SecurityContext take precedence. startupProbe object Probe describes a health check to be performed against a container to determine whether it is alive or ready to receive traffic. stdin boolean Whether this container should allocate a buffer for stdin in the container runtime. If this is not set, reads from stdin in the container will always result in EOF. Default is false. stdinOnce boolean Whether the container runtime should close the stdin channel after it has been opened by a single attach. When stdin is true the stdin stream will remain open across multiple attach sessions. If stdinOnce is set to true, stdin is opened on container start, is empty until the first client attaches to stdin, and then remains open and accepts data until the client disconnects, at which time stdin is closed and remains closed until the container is restarted. If this flag is false, a container process that reads from stdin will never receive an EOF. Default is false terminationMessagePath string Optional: Path at which the file to which the container's termination message will be written is mounted into the container's filesystem. Message written is intended to be brief final status, such as an assertion failure message. Will be truncated by the node if greater than 4096 bytes. The total message length across all containers will be limited to 12kb. Defaults to /dev/termination-log. Cannot be updated. terminationMessagePolicy string Indicate how the termination message should be populated.
File will use the contents of terminationMessagePath to populate the container status message on both success and failure. FallbackToLogsOnError will use the last chunk of container log output if the termination message file is empty and the container exited with an error. The log output is limited to 2048 bytes or 80 lines, whichever is smaller. Defaults to File. Cannot be updated. Possible enum values: - "FallbackToLogsOnError" will read the most recent contents of the container logs for the container status message when the container exits with an error and the terminationMessagePath has no contents. - "File" is the default behavior and will set the container status message to the contents of the container's terminationMessagePath when the container exits. tty boolean Whether this container should allocate a TTY for itself, also requires 'stdin' to be true. Default is false. volumeDevices array volumeDevices is the list of block devices to be used by the container. volumeDevices[] object volumeDevice describes a mapping of a raw block device within a container. volumeMounts array Pod volumes to mount into the container's filesystem. Cannot be updated. volumeMounts[] object VolumeMount describes a mounting of a Volume within a container. workingDir string Container's working directory. If not specified, the container runtime's default will be used, which might be configured in the container image. Cannot be updated. 3.1.172. .template.spec.initContainers[].env Description List of environment variables to set in the container. Cannot be updated. Type array 3.1.173. .template.spec.initContainers[].env[] Description EnvVar represents an environment variable present in a Container. Type object Required name Property Type Description name string Name of the environment variable. Must be a C_IDENTIFIER. value string Variable references USD(VAR_NAME) are expanded using the previously defined environment variables in the container and any service environment variables. If a variable cannot be resolved, the reference in the input string will be unchanged. Double USD are reduced to a single USD, which allows for escaping the USD(VAR_NAME) syntax: i.e. "USDUSD(VAR_NAME)" will produce the string literal "USD(VAR_NAME)". Escaped references will never be expanded, regardless of whether the variable exists or not. Defaults to "". valueFrom object EnvVarSource represents a source for the value of an EnvVar. 3.1.174. .template.spec.initContainers[].env[].valueFrom Description EnvVarSource represents a source for the value of an EnvVar. Type object Property Type Description configMapKeyRef object Selects a key from a ConfigMap. fieldRef object ObjectFieldSelector selects an APIVersioned field of an object. resourceFieldRef object ResourceFieldSelector represents container resources (cpu, memory) and their output format secretKeyRef object SecretKeySelector selects a key of a Secret. 3.1.175. .template.spec.initContainers[].env[].valueFrom.configMapKeyRef Description Selects a key from a ConfigMap. Type object Required key Property Type Description key string The key to select. name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names optional boolean Specify whether the ConfigMap or its key must be defined
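As a minimal sketch of the env structure described above, the following example sets one literal value and one value read from a ConfigMap key. The init container name, image, ConfigMap name, and key are hypothetical; the remaining valueFrom sources (fieldRef, resourceFieldRef, and secretKeyRef) are covered in the subsections that follow.

initContainers:
- name: init-config                                # hypothetical init container name
  image: registry.example.com/tools/cli:latest     # hypothetical image reference
  env:
  - name: LOG_LEVEL
    value: "debug"                                 # literal value
  - name: DATABASE_HOST
    valueFrom:
      configMapKeyRef:
        name: app-config                           # hypothetical ConfigMap name
        key: database.host                         # hypothetical key in that ConfigMap
        optional: false                            # fail if the ConfigMap or key is missing

3.1.176.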
.template.spec.initContainers[].env[].valueFrom.fieldRef Description ObjectFieldSelector selects an APIVersioned field of an object. Type object Required fieldPath Property Type Description apiVersion string Version of the schema the FieldPath is written in terms of, defaults to "v1". fieldPath string Path of the field to select in the specified API version. 3.1.177. .template.spec.initContainers[].env[].valueFrom.resourceFieldRef Description ResourceFieldSelector represents container resources (cpu, memory) and their output format Type object Required resource Property Type Description containerName string Container name: required for volumes, optional for env vars divisor Quantity Specifies the output format of the exposed resources, defaults to "1" resource string Required: resource to select 3.1.178. .template.spec.initContainers[].env[].valueFrom.secretKeyRef Description SecretKeySelector selects a key of a Secret. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names optional boolean Specify whether the Secret or its key must be defined 3.1.179. .template.spec.initContainers[].envFrom Description List of sources to populate environment variables in the container. The keys defined within a source must be a C_IDENTIFIER. All invalid keys will be reported as an event when the container is starting. When a key exists in multiple sources, the value associated with the last source will take precedence. Values defined by an Env with a duplicate key will take precedence. Cannot be updated. Type array 3.1.180. .template.spec.initContainers[].envFrom[] Description EnvFromSource represents the source of a set of ConfigMaps Type object Property Type Description configMapRef object ConfigMapEnvSource selects a ConfigMap to populate the environment variables with. The contents of the target ConfigMap's Data field will represent the key-value pairs as environment variables. prefix string An optional identifier to prepend to each key in the ConfigMap. Must be a C_IDENTIFIER. secretRef object SecretEnvSource selects a Secret to populate the environment variables with. The contents of the target Secret's Data field will represent the key-value pairs as environment variables. 3.1.181. .template.spec.initContainers[].envFrom[].configMapRef Description ConfigMapEnvSource selects a ConfigMap to populate the environment variables with. The contents of the target ConfigMap's Data field will represent the key-value pairs as environment variables. Type object Property Type Description name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names optional boolean Specify whether the ConfigMap must be defined 3.1.182. .template.spec.initContainers[].envFrom[].secretRef Description SecretEnvSource selects a Secret to populate the environment variables with. The contents of the target Secret's Data field will represent the key-value pairs as environment variables. 
Type object Property Type Description name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names optional boolean Specify whether the Secret must be defined 3.1.183. .template.spec.initContainers[].lifecycle Description Lifecycle describes actions that the management system should take in response to container lifecycle events. For the PostStart and PreStop lifecycle handlers, management of the container blocks until the action is complete, unless the container process fails, in which case the handler is aborted. Type object Property Type Description postStart object LifecycleHandler defines a specific action that should be taken in a lifecycle hook. One and only one of the fields, except TCPSocket must be specified. preStop object LifecycleHandler defines a specific action that should be taken in a lifecycle hook. One and only one of the fields, except TCPSocket must be specified. 3.1.184. .template.spec.initContainers[].lifecycle.postStart Description LifecycleHandler defines a specific action that should be taken in a lifecycle hook. One and only one of the fields, except TCPSocket must be specified. Type object Property Type Description exec object ExecAction describes a "run in container" action. httpGet object HTTPGetAction describes an action based on HTTP Get requests. sleep object SleepAction describes a "sleep" action. tcpSocket object TCPSocketAction describes an action based on opening a socket 3.1.185. .template.spec.initContainers[].lifecycle.postStart.exec Description ExecAction describes a "run in container" action. Type object Property Type Description command array (string) Command is the command line to execute inside the container, the working directory for the command is root ('/') in the container's filesystem. The command is simply exec'd, it is not run inside a shell, so traditional shell instructions ('|', etc) won't work. To use a shell, you need to explicitly call out to that shell. Exit status of 0 is treated as live/healthy and non-zero is unhealthy. 3.1.186. .template.spec.initContainers[].lifecycle.postStart.httpGet Description HTTPGetAction describes an action based on HTTP Get requests. Type object Required port Property Type Description host string Host name to connect to, defaults to the pod IP. You probably want to set "Host" in httpHeaders instead. httpHeaders array Custom headers to set in the request. HTTP allows repeated headers. httpHeaders[] object HTTPHeader describes a custom header to be used in HTTP probes path string Path to access on the HTTP server. port IntOrString Name or number of the port to access on the container. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME. scheme string Scheme to use for connecting to the host. Defaults to HTTP. Possible enum values: - "HTTP" means that the scheme used will be http:// - "HTTPS" means that the scheme used will be https:// 3.1.187. .template.spec.initContainers[].lifecycle.postStart.httpGet.httpHeaders Description Custom headers to set in the request. HTTP allows repeated headers. Type array 3.1.188. .template.spec.initContainers[].lifecycle.postStart.httpGet.httpHeaders[] Description HTTPHeader describes a custom header to be used in HTTP probes Type object Required name value Property Type Description name string The header field name. 
This will be canonicalized upon output, so case-variant names will be understood as the same header. value string The header field value 3.1.189. .template.spec.initContainers[].lifecycle.postStart.sleep Description SleepAction describes a "sleep" action. Type object Required seconds Property Type Description seconds integer Seconds is the number of seconds to sleep. 3.1.190. .template.spec.initContainers[].lifecycle.postStart.tcpSocket Description TCPSocketAction describes an action based on opening a socket Type object Required port Property Type Description host string Optional: Host name to connect to, defaults to the pod IP. port IntOrString Number or name of the port to access on the container. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME. 3.1.191. .template.spec.initContainers[].lifecycle.preStop Description LifecycleHandler defines a specific action that should be taken in a lifecycle hook. One and only one of the fields, except TCPSocket must be specified. Type object Property Type Description exec object ExecAction describes a "run in container" action. httpGet object HTTPGetAction describes an action based on HTTP Get requests. sleep object SleepAction describes a "sleep" action. tcpSocket object TCPSocketAction describes an action based on opening a socket 3.1.192. .template.spec.initContainers[].lifecycle.preStop.exec Description ExecAction describes a "run in container" action. Type object Property Type Description command array (string) Command is the command line to execute inside the container, the working directory for the command is root ('/') in the container's filesystem. The command is simply exec'd, it is not run inside a shell, so traditional shell instructions ('|', etc) won't work. To use a shell, you need to explicitly call out to that shell. Exit status of 0 is treated as live/healthy and non-zero is unhealthy. 3.1.193. .template.spec.initContainers[].lifecycle.preStop.httpGet Description HTTPGetAction describes an action based on HTTP Get requests. Type object Required port Property Type Description host string Host name to connect to, defaults to the pod IP. You probably want to set "Host" in httpHeaders instead. httpHeaders array Custom headers to set in the request. HTTP allows repeated headers. httpHeaders[] object HTTPHeader describes a custom header to be used in HTTP probes path string Path to access on the HTTP server. port IntOrString Name or number of the port to access on the container. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME. scheme string Scheme to use for connecting to the host. Defaults to HTTP. Possible enum values: - "HTTP" means that the scheme used will be http:// - "HTTPS" means that the scheme used will be https:// 3.1.194. .template.spec.initContainers[].lifecycle.preStop.httpGet.httpHeaders Description Custom headers to set in the request. HTTP allows repeated headers. Type array 3.1.195. .template.spec.initContainers[].lifecycle.preStop.httpGet.httpHeaders[] Description HTTPHeader describes a custom header to be used in HTTP probes Type object Required name value Property Type Description name string The header field name. This will be canonicalized upon output, so case-variant names will be understood as the same header. value string The header field value 3.1.196. .template.spec.initContainers[].lifecycle.preStop.sleep Description SleepAction describes a "sleep" action. Type object Required seconds Property Type Description seconds integer Seconds is the number of seconds to sleep. 
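The following sketch illustrates the LifecycleHandler shapes described in the preceding subsections. Because the initContainers description above notes that init containers generally may not carry Lifecycle actions, the handlers are shown on a regular container. The container name, image, and command are hypothetical, and the preStop sleep handler assumes a cluster version that supports SleepAction.

containers:
- name: web                                        # hypothetical application container
  image: registry.example.com/app/web:1.0          # hypothetical image reference
  lifecycle:
    postStart:
      exec:
        command: ["/bin/sh", "-c", "echo started > /tmp/lifecycle"]   # runs right after the container starts
    preStop:
      sleep:
        seconds: 10                                # delay termination to allow connections to drain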
3.1.197. .template.spec.initContainers[].lifecycle.preStop.tcpSocket Description TCPSocketAction describes an action based on opening a socket Type object Required port Property Type Description host string Optional: Host name to connect to, defaults to the pod IP. port IntOrString Number or name of the port to access on the container. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME. 3.1.198. .template.spec.initContainers[].livenessProbe Description Probe describes a health check to be performed against a container to determine whether it is alive or ready to receive traffic. Type object Property Type Description exec object ExecAction describes a "run in container" action. failureThreshold integer Minimum consecutive failures for the probe to be considered failed after having succeeded. Defaults to 3. Minimum value is 1. grpc object GRPC specifies an action involving a GRPC port. httpGet object HTTPGetAction describes an action based on HTTP Get requests. initialDelaySeconds integer Number of seconds after the container has started before liveness probes are initiated. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes periodSeconds integer How often (in seconds) to perform the probe. Default to 10 seconds. Minimum value is 1. successThreshold integer Minimum consecutive successes for the probe to be considered successful after having failed. Defaults to 1. Must be 1 for liveness and startup. Minimum value is 1. tcpSocket object TCPSocketAction describes an action based on opening a socket terminationGracePeriodSeconds integer Optional duration in seconds the pod needs to terminate gracefully upon probe failure. The grace period is the duration in seconds after the processes running in the pod are sent a termination signal and the time when the processes are forcibly halted with a kill signal. Set this value longer than the expected cleanup time for your process. If this value is nil, the pod's terminationGracePeriodSeconds will be used. Otherwise, this value overrides the value provided by the pod spec. Value must be non-negative integer. The value zero indicates stop immediately via the kill signal (no opportunity to shut down). This is a beta field and requires enabling ProbeTerminationGracePeriod feature gate. Minimum value is 1. spec.terminationGracePeriodSeconds is used if unset. timeoutSeconds integer Number of seconds after which the probe times out. Defaults to 1 second. Minimum value is 1. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes 3.1.199. .template.spec.initContainers[].livenessProbe.exec Description ExecAction describes a "run in container" action. Type object Property Type Description command array (string) Command is the command line to execute inside the container, the working directory for the command is root ('/') in the container's filesystem. The command is simply exec'd, it is not run inside a shell, so traditional shell instructions ('|', etc) won't work. To use a shell, you need to explicitly call out to that shell. Exit status of 0 is treated as live/healthy and non-zero is unhealthy. 3.1.200. .template.spec.initContainers[].livenessProbe.grpc Description GRPC specifies an action involving a GRPC port. Type object Required port Property Type Description port integer Port number of the gRPC service. Number must be in the range 1 to 65535. 
service string Service is the name of the service to place in the gRPC HealthCheckRequest (see https://github.com/grpc/grpc/blob/master/doc/health-checking.md ). If this is not specified, the default behavior is defined by gRPC. 3.1.201. .template.spec.initContainers[].livenessProbe.httpGet Description HTTPGetAction describes an action based on HTTP Get requests. Type object Required port Property Type Description host string Host name to connect to, defaults to the pod IP. You probably want to set "Host" in httpHeaders instead. httpHeaders array Custom headers to set in the request. HTTP allows repeated headers. httpHeaders[] object HTTPHeader describes a custom header to be used in HTTP probes path string Path to access on the HTTP server. port IntOrString Name or number of the port to access on the container. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME. scheme string Scheme to use for connecting to the host. Defaults to HTTP. Possible enum values: - "HTTP" means that the scheme used will be http:// - "HTTPS" means that the scheme used will be https:// 3.1.202. .template.spec.initContainers[].livenessProbe.httpGet.httpHeaders Description Custom headers to set in the request. HTTP allows repeated headers. Type array 3.1.203. .template.spec.initContainers[].livenessProbe.httpGet.httpHeaders[] Description HTTPHeader describes a custom header to be used in HTTP probes Type object Required name value Property Type Description name string The header field name. This will be canonicalized upon output, so case-variant names will be understood as the same header. value string The header field value 3.1.204. .template.spec.initContainers[].livenessProbe.tcpSocket Description TCPSocketAction describes an action based on opening a socket Type object Required port Property Type Description host string Optional: Host name to connect to, defaults to the pod IP. port IntOrString Number or name of the port to access on the container. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME. 3.1.205. .template.spec.initContainers[].ports Description List of ports to expose from the container. Not specifying a port here DOES NOT prevent that port from being exposed. Any port which is listening on the default "0.0.0.0" address inside a container will be accessible from the network. Modifying this array with strategic merge patch may corrupt the data. For more information see https://github.com/kubernetes/kubernetes/issues/108255 . Cannot be updated. Type array 3.1.206. .template.spec.initContainers[].ports[] Description ContainerPort represents a network port in a single container. Type object Required containerPort Property Type Description containerPort integer Number of port to expose on the pod's IP address. This must be a valid port number, 0 < x < 65536. hostIP string What host IP to bind the external port to. hostPort integer Number of port to expose on the host. If specified, this must be a valid port number, 0 < x < 65536. If HostNetwork is specified, this must match ContainerPort. Most containers do not need this. name string If specified, this must be an IANA_SVC_NAME and unique within the pod. Each named port in a pod must have a unique name. Name for the port that can be referred to by services. protocol string Protocol for port. Must be UDP, TCP, or SCTP. Defaults to "TCP". Possible enum values: - "SCTP" is the SCTP protocol. - "TCP" is the TCP protocol. - "UDP" is the UDP protocol.
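For illustration, the ContainerPort entries described above might be declared as follows. The container name, image, port names, and numbers are hypothetical, and the same structure applies wherever the ports array appears in a container definition.

containers:
- name: web                                        # hypothetical container name
  image: registry.example.com/app/web:1.0          # hypothetical image reference
  ports:
  - name: http                                     # IANA_SVC_NAME, unique within the pod
    containerPort: 8080
    protocol: TCP
  - name: metrics
    containerPort: 9090
    protocol: TCP

3.1.207.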
.template.spec.initContainers[].readinessProbe Description Probe describes a health check to be performed against a container to determine whether it is alive or ready to receive traffic. Type object Property Type Description exec object ExecAction describes a "run in container" action. failureThreshold integer Minimum consecutive failures for the probe to be considered failed after having succeeded. Defaults to 3. Minimum value is 1. grpc object GRPC specifies an action involving a GRPC port. httpGet object HTTPGetAction describes an action based on HTTP Get requests. initialDelaySeconds integer Number of seconds after the container has started before liveness probes are initiated. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes periodSeconds integer How often (in seconds) to perform the probe. Default to 10 seconds. Minimum value is 1. successThreshold integer Minimum consecutive successes for the probe to be considered successful after having failed. Defaults to 1. Must be 1 for liveness and startup. Minimum value is 1. tcpSocket object TCPSocketAction describes an action based on opening a socket terminationGracePeriodSeconds integer Optional duration in seconds the pod needs to terminate gracefully upon probe failure. The grace period is the duration in seconds after the processes running in the pod are sent a termination signal and the time when the processes are forcibly halted with a kill signal. Set this value longer than the expected cleanup time for your process. If this value is nil, the pod's terminationGracePeriodSeconds will be used. Otherwise, this value overrides the value provided by the pod spec. Value must be non-negative integer. The value zero indicates stop immediately via the kill signal (no opportunity to shut down). This is a beta field and requires enabling ProbeTerminationGracePeriod feature gate. Minimum value is 1. spec.terminationGracePeriodSeconds is used if unset. timeoutSeconds integer Number of seconds after which the probe times out. Defaults to 1 second. Minimum value is 1. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes 3.1.208. .template.spec.initContainers[].readinessProbe.exec Description ExecAction describes a "run in container" action. Type object Property Type Description command array (string) Command is the command line to execute inside the container, the working directory for the command is root ('/') in the container's filesystem. The command is simply exec'd, it is not run inside a shell, so traditional shell instructions ('|', etc) won't work. To use a shell, you need to explicitly call out to that shell. Exit status of 0 is treated as live/healthy and non-zero is unhealthy. 3.1.209. .template.spec.initContainers[].readinessProbe.grpc Description GRPC specifies an action involving a GRPC port. Type object Required port Property Type Description port integer Port number of the gRPC service. Number must be in the range 1 to 65535. service string Service is the name of the service to place in the gRPC HealthCheckRequest (see https://github.com/grpc/grpc/blob/master/doc/health-checking.md ). If this is not specified, the default behavior is defined by gRPC. 3.1.210. .template.spec.initContainers[].readinessProbe.httpGet Description HTTPGetAction describes an action based on HTTP Get requests. Type object Required port Property Type Description host string Host name to connect to, defaults to the pod IP. 
You probably want to set "Host" in httpHeaders instead. httpHeaders array Custom headers to set in the request. HTTP allows repeated headers. httpHeaders[] object HTTPHeader describes a custom header to be used in HTTP probes path string Path to access on the HTTP server. port IntOrString Name or number of the port to access on the container. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME. scheme string Scheme to use for connecting to the host. Defaults to HTTP. Possible enum values: - "HTTP" means that the scheme used will be http:// - "HTTPS" means that the scheme used will be https:// 3.1.211. .template.spec.initContainers[].readinessProbe.httpGet.httpHeaders Description Custom headers to set in the request. HTTP allows repeated headers. Type array 3.1.212. .template.spec.initContainers[].readinessProbe.httpGet.httpHeaders[] Description HTTPHeader describes a custom header to be used in HTTP probes Type object Required name value Property Type Description name string The header field name. This will be canonicalized upon output, so case-variant names will be understood as the same header. value string The header field value 3.1.213. .template.spec.initContainers[].readinessProbe.tcpSocket Description TCPSocketAction describes an action based on opening a socket Type object Required port Property Type Description host string Optional: Host name to connect to, defaults to the pod IP. port IntOrString Number or name of the port to access on the container. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME. 3.1.214. .template.spec.initContainers[].resizePolicy Description Resources resize policy for the container. Type array 3.1.215. .template.spec.initContainers[].resizePolicy[] Description ContainerResizePolicy represents resource resize policy for the container. Type object Required resourceName restartPolicy Property Type Description resourceName string Name of the resource to which this resource resize policy applies. Supported values: cpu, memory. restartPolicy string Restart policy to apply when specified resource is resized. If not specified, it defaults to NotRequired. 3.1.216. .template.spec.initContainers[].resources Description ResourceRequirements describes the compute resource requirements. Type object Property Type Description claims array Claims lists the names of resources, defined in spec.resourceClaims, that are used by this container. This is an alpha field and requires enabling the DynamicResourceAllocation feature gate. This field is immutable. It can only be set for containers. claims[] object ResourceClaim references one entry in PodSpec.ResourceClaims. limits object (Quantity) Limits describes the maximum amount of compute resources allowed. More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/ requests object (Quantity) Requests describes the minimum amount of compute resources required. If Requests is omitted for a container, it defaults to Limits if that is explicitly specified, otherwise to an implementation-defined value. Requests cannot exceed Limits. More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/ 3.1.217. .template.spec.initContainers[].resources.claims Description Claims lists the names of resources, defined in spec.resourceClaims, that are used by this container. This is an alpha field and requires enabling the DynamicResourceAllocation feature gate. This field is immutable. It can only be set for containers. Type array
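A minimal sketch of the ResourceRequirements structure described above, shown on a hypothetical init container; the name, image, and quantities are illustrative only. As noted in the initContainers description, the highest request and limit for each resource across init containers is factored into pod scheduling.

initContainers:
- name: init-migrate                               # hypothetical init container name
  image: registry.example.com/app/migrations:1.0   # hypothetical image reference
  resources:
    requests:
      cpu: 250m                                    # minimum guaranteed CPU
      memory: 128Mi                                # minimum guaranteed memory
    limits:
      cpu: 500m                                    # maximum CPU allowed
      memory: 256Mi                                # maximum memory allowed

3.1.218.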
.template.spec.initContainers[].resources.claims[] Description ResourceClaim references one entry in PodSpec.ResourceClaims. Type object Required name Property Type Description name string Name must match the name of one entry in pod.spec.resourceClaims of the Pod where this field is used. It makes that resource available inside a container. 3.1.219. .template.spec.initContainers[].securityContext Description SecurityContext holds security configuration that will be applied to a container. Some fields are present in both SecurityContext and PodSecurityContext. When both are set, the values in SecurityContext take precedence. Type object Property Type Description allowPrivilegeEscalation boolean AllowPrivilegeEscalation controls whether a process can gain more privileges than its parent process. This bool directly controls if the no_new_privs flag will be set on the container process. AllowPrivilegeEscalation is true always when the container is: 1) run as Privileged 2) has CAP_SYS_ADMIN Note that this field cannot be set when spec.os.name is windows. appArmorProfile object AppArmorProfile defines a pod or container's AppArmor settings. capabilities object Adds and removes POSIX capabilities from running containers. privileged boolean Run container in privileged mode. Processes in privileged containers are essentially equivalent to root on the host. Defaults to false. Note that this field cannot be set when spec.os.name is windows. procMount string procMount denotes the type of proc mount to use for the containers. The default is DefaultProcMount which uses the container runtime defaults for readonly paths and masked paths. This requires the ProcMountType feature flag to be enabled. Note that this field cannot be set when spec.os.name is windows. Possible enum values: - "Default" uses the container runtime defaults for readonly and masked paths for /proc. Most container runtimes mask certain paths in /proc to avoid accidental security exposure of special devices or information. - "Unmasked" bypasses the default masking behavior of the container runtime and ensures the newly created /proc for the container stays intact with no modifications. readOnlyRootFilesystem boolean Whether this container has a read-only root filesystem. Default is false. Note that this field cannot be set when spec.os.name is windows. runAsGroup integer The GID to run the entrypoint of the container process. Uses runtime default if unset. May also be set in PodSecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. Note that this field cannot be set when spec.os.name is windows. runAsNonRoot boolean Indicates that the container must run as a non-root user. If true, the Kubelet will validate the image at runtime to ensure that it does not run as UID 0 (root) and fail to start the container if it does. If unset or false, no such validation will be performed. May also be set in PodSecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. runAsUser integer The UID to run the entrypoint of the container process. Defaults to user specified in image metadata if unspecified. May also be set in PodSecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. Note that this field cannot be set when spec.os.name is windows.
seLinuxOptions object SELinuxOptions are the labels to be applied to the container seccompProfile object SeccompProfile defines a pod/container's seccomp profile settings. Only one profile source may be set. windowsOptions object WindowsSecurityContextOptions contain Windows-specific options and credentials. 3.1.220. .template.spec.initContainers[].securityContext.appArmorProfile Description AppArmorProfile defines a pod or container's AppArmor settings. Type object Required type Property Type Description localhostProfile string localhostProfile indicates a profile loaded on the node that should be used. The profile must be preconfigured on the node to work. Must match the loaded name of the profile. Must be set if and only if type is "Localhost". type string type indicates which kind of AppArmor profile will be applied. Valid options are: Localhost - a profile pre-loaded on the node. RuntimeDefault - the container runtime's default profile. Unconfined - no AppArmor enforcement. Possible enum values: - "Localhost" indicates that a profile pre-loaded on the node should be used. - "RuntimeDefault" indicates that the container runtime's default AppArmor profile should be used. - "Unconfined" indicates that no AppArmor profile should be enforced. 3.1.221. .template.spec.initContainers[].securityContext.capabilities Description Adds and removes POSIX capabilities from running containers. Type object Property Type Description add array (string) Added capabilities drop array (string) Removed capabilities 3.1.222. .template.spec.initContainers[].securityContext.seLinuxOptions Description SELinuxOptions are the labels to be applied to the container Type object Property Type Description level string Level is SELinux level label that applies to the container. role string Role is a SELinux role label that applies to the container. type string Type is a SELinux type label that applies to the container. user string User is a SELinux user label that applies to the container. 3.1.223. .template.spec.initContainers[].securityContext.seccompProfile Description SeccompProfile defines a pod/container's seccomp profile settings. Only one profile source may be set. Type object Required type Property Type Description localhostProfile string localhostProfile indicates a profile defined in a file on the node should be used. The profile must be preconfigured on the node to work. Must be a descending path, relative to the kubelet's configured seccomp profile location. Must be set if type is "Localhost". Must NOT be set for any other type. type string type indicates which kind of seccomp profile will be applied. Valid options are: Localhost - a profile defined in a file on the node should be used. RuntimeDefault - the container runtime default profile should be used. Unconfined - no profile should be applied. Possible enum values: - "Localhost" indicates a profile defined in a file on the node should be used. The file's location relative to <kubelet-root-dir>/seccomp. - "RuntimeDefault" represents the default container runtime seccomp profile. - "Unconfined" indicates no seccomp profile is applied (A.K.A. unconfined). 3.1.224. .template.spec.initContainers[].securityContext.windowsOptions Description WindowsSecurityContextOptions contain Windows-specific options and credentials. 
Type object Property Type Description gmsaCredentialSpec string GMSACredentialSpec is where the GMSA admission webhook ( https://github.com/kubernetes-sigs/windows-gmsa ) inlines the contents of the GMSA credential spec named by the GMSACredentialSpecName field. gmsaCredentialSpecName string GMSACredentialSpecName is the name of the GMSA credential spec to use. hostProcess boolean HostProcess determines if a container should be run as a 'Host Process' container. All of a Pod's containers must have the same effective HostProcess value (it is not allowed to have a mix of HostProcess containers and non-HostProcess containers). In addition, if HostProcess is true then HostNetwork must also be set to true. runAsUserName string The UserName in Windows to run the entrypoint of the container process. Defaults to the user specified in image metadata if unspecified. May also be set in PodSecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. 3.1.225. .template.spec.initContainers[].startupProbe Description Probe describes a health check to be performed against a container to determine whether it is alive or ready to receive traffic. Type object Property Type Description exec object ExecAction describes a "run in container" action. failureThreshold integer Minimum consecutive failures for the probe to be considered failed after having succeeded. Defaults to 3. Minimum value is 1. grpc object GRPC specifies an action involving a GRPC port. httpGet object HTTPGetAction describes an action based on HTTP Get requests. initialDelaySeconds integer Number of seconds after the container has started before liveness probes are initiated. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes periodSeconds integer How often (in seconds) to perform the probe. Default to 10 seconds. Minimum value is 1. successThreshold integer Minimum consecutive successes for the probe to be considered successful after having failed. Defaults to 1. Must be 1 for liveness and startup. Minimum value is 1. tcpSocket object TCPSocketAction describes an action based on opening a socket terminationGracePeriodSeconds integer Optional duration in seconds the pod needs to terminate gracefully upon probe failure. The grace period is the duration in seconds after the processes running in the pod are sent a termination signal and the time when the processes are forcibly halted with a kill signal. Set this value longer than the expected cleanup time for your process. If this value is nil, the pod's terminationGracePeriodSeconds will be used. Otherwise, this value overrides the value provided by the pod spec. Value must be non-negative integer. The value zero indicates stop immediately via the kill signal (no opportunity to shut down). This is a beta field and requires enabling ProbeTerminationGracePeriod feature gate. Minimum value is 1. spec.terminationGracePeriodSeconds is used if unset. timeoutSeconds integer Number of seconds after which the probe times out. Defaults to 1 second. Minimum value is 1. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes 3.1.226. .template.spec.initContainers[].startupProbe.exec Description ExecAction describes a "run in container" action. Type object Property Type Description command array (string) Command is the command line to execute inside the container, the working directory for the command is root ('/') in the container's filesystem. 
The command is simply exec'd, it is not run inside a shell, so traditional shell instructions ('|', etc) won't work. To use a shell, you need to explicitly call out to that shell. Exit status of 0 is treated as live/healthy and non-zero is unhealthy. 3.1.227. .template.spec.initContainers[].startupProbe.grpc Description GRPC specifies an action involving a GRPC port. Type object Required port Property Type Description port integer Port number of the gRPC service. Number must be in the range 1 to 65535. service string Service is the name of the service to place in the gRPC HealthCheckRequest (see https://github.com/grpc/grpc/blob/master/doc/health-checking.md ). If this is not specified, the default behavior is defined by gRPC. 3.1.228. .template.spec.initContainers[].startupProbe.httpGet Description HTTPGetAction describes an action based on HTTP Get requests. Type object Required port Property Type Description host string Host name to connect to, defaults to the pod IP. You probably want to set "Host" in httpHeaders instead. httpHeaders array Custom headers to set in the request. HTTP allows repeated headers. httpHeaders[] object HTTPHeader describes a custom header to be used in HTTP probes path string Path to access on the HTTP server. port IntOrString Name or number of the port to access on the container. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME. scheme string Scheme to use for connecting to the host. Defaults to HTTP. Possible enum values: - "HTTP" means that the scheme used will be http:// - "HTTPS" means that the scheme used will be https:// 3.1.229. .template.spec.initContainers[].startupProbe.httpGet.httpHeaders Description Custom headers to set in the request. HTTP allows repeated headers. Type array 3.1.230. .template.spec.initContainers[].startupProbe.httpGet.httpHeaders[] Description HTTPHeader describes a custom header to be used in HTTP probes Type object Required name value Property Type Description name string The header field name. This will be canonicalized upon output, so case-variant names will be understood as the same header. value string The header field value 3.1.231. .template.spec.initContainers[].startupProbe.tcpSocket Description TCPSocketAction describes an action based on opening a socket Type object Required port Property Type Description host string Optional: Host name to connect to, defaults to the pod IP. port IntOrString Number or name of the port to access on the container. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME. 3.1.232. .template.spec.initContainers[].volumeDevices Description volumeDevices is the list of block devices to be used by the container. Type array 3.1.233. .template.spec.initContainers[].volumeDevices[] Description volumeDevice describes a mapping of a raw block device within a container. Type object Required name devicePath Property Type Description devicePath string devicePath is the path inside of the container that the device will be mapped to. name string name must match the name of a persistentVolumeClaim in the pod 3.1.234. .template.spec.initContainers[].volumeMounts Description Pod volumes to mount into the container's filesystem. Cannot be updated. Type array 3.1.235. .template.spec.initContainers[].volumeMounts[] Description VolumeMount describes a mounting of a Volume within a container. Type object Required name mountPath Property Type Description mountPath string Path within the container at which the volume should be mounted. Must not contain ':'. 
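As a hedged, non-normative example of the startupProbe fields documented above, the fragment below delays other health checking until a slow-starting container responds over HTTP. The container name, port, and path are hypothetical; the remaining volumeMounts properties continue after the sketch.

containers:
- name: web                      # hypothetical container
  image: registry.example.com/web:latest
  ports:
  - containerPort: 8080
  startupProbe:
    httpGet:
      path: /healthz             # path to access on the HTTP server
      port: 8080                 # a named port (IANA_SVC_NAME) is also accepted
      scheme: HTTP
    failureThreshold: 30         # with periodSeconds, allows up to 300 s for startup
    periodSeconds: 10
    timeoutSeconds: 1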
mountPropagation string mountPropagation determines how mounts are propagated from the host to container and the other way around. When not set, MountPropagationNone is used. This field is beta in 1.10. When RecursiveReadOnly is set to IfPossible or to Enabled, MountPropagation must be None or unspecified (which defaults to None). Possible enum values: - "Bidirectional" means that the volume in a container will receive new mounts from the host or other containers, and its own mounts will be propagated from the container to the host or other containers. Note that this mode is recursively applied to all mounts in the volume ("rshared" in Linux terminology). - "HostToContainer" means that the volume in a container will receive new mounts from the host or other containers, but filesystems mounted inside the container won't be propagated to the host or other containers. Note that this mode is recursively applied to all mounts in the volume ("rslave" in Linux terminology). - "None" means that the volume in a container will not receive new mounts from the host or other containers, and filesystems mounted inside the container won't be propagated to the host or other containers. Note that this mode corresponds to "private" in Linux terminology. name string This must match the Name of a Volume. readOnly boolean Mounted read-only if true, read-write otherwise (false or unspecified). Defaults to false. recursiveReadOnly string RecursiveReadOnly specifies whether read-only mounts should be handled recursively. If ReadOnly is false, this field has no meaning and must be unspecified. If ReadOnly is true, and this field is set to Disabled, the mount is not made recursively read-only. If this field is set to IfPossible, the mount is made recursively read-only, if it is supported by the container runtime. If this field is set to Enabled, the mount is made recursively read-only if it is supported by the container runtime, otherwise the pod will not be started and an error will be generated to indicate the reason. If this field is set to IfPossible or Enabled, MountPropagation must be set to None (or be unspecified, which defaults to None). If this field is not specified, it is treated as an equivalent of Disabled. subPath string Path within the volume from which the container's volume should be mounted. Defaults to "" (volume's root). subPathExpr string Expanded path within the volume from which the container's volume should be mounted. Behaves similarly to SubPath but environment variable references USD(VAR_NAME) are expanded using the container's environment. Defaults to "" (volume's root). SubPathExpr and SubPath are mutually exclusive. 3.1.236. .template.spec.os Description PodOS defines the OS parameters of a pod. Type object Required name Property Type Description name string Name is the name of the operating system. The currently supported values are linux and windows. Additional value may be defined in future and can be one of: https://github.com/opencontainers/runtime-spec/blob/master/config.md#platform-specific-configuration Clients should expect to handle additional values and treat unrecognized values in this field as os: null 3.1.237. .template.spec.readinessGates Description If specified, all readiness gates will be evaluated for pod readiness. A pod is ready when all its containers are ready AND all conditions specified in the readiness gates have status equal to "True" More info: https://git.k8s.io/enhancements/keps/sig-network/580-pod-readiness-gates Type array 3.1.238. 
.template.spec.readinessGates[] Description PodReadinessGate contains the reference to a pod condition Type object Required conditionType Property Type Description conditionType string ConditionType refers to a condition in the pod's condition list with matching type. 3.1.239. .template.spec.resourceClaims Description ResourceClaims defines which ResourceClaims must be allocated and reserved before the Pod is allowed to start. The resources will be made available to those containers which consume them by name. This is an alpha field and requires enabling the DynamicResourceAllocation feature gate. This field is immutable. Type array 3.1.240. .template.spec.resourceClaims[] Description PodResourceClaim references exactly one ResourceClaim through a ClaimSource. It adds a name to it that uniquely identifies the ResourceClaim inside the Pod. Containers that need access to the ResourceClaim reference it with this name. Type object Required name Property Type Description name string Name uniquely identifies this resource claim inside the pod. This must be a DNS_LABEL. source object ClaimSource describes a reference to a ResourceClaim. Exactly one of these fields should be set. Consumers of this type must treat an empty object as if it has an unknown value. 3.1.241. .template.spec.resourceClaims[].source Description ClaimSource describes a reference to a ResourceClaim. Exactly one of these fields should be set. Consumers of this type must treat an empty object as if it has an unknown value. Type object Property Type Description resourceClaimName string ResourceClaimName is the name of a ResourceClaim object in the same namespace as this pod. resourceClaimTemplateName string ResourceClaimTemplateName is the name of a ResourceClaimTemplate object in the same namespace as this pod. The template will be used to create a new ResourceClaim, which will be bound to this pod. When this pod is deleted, the ResourceClaim will also be deleted. The pod name and resource name, along with a generated component, will be used to form a unique name for the ResourceClaim, which will be recorded in pod.status.resourceClaimStatuses. This field is immutable and no changes will be made to the corresponding ResourceClaim by the control plane after creating the ResourceClaim. 3.1.242. .template.spec.schedulingGates Description SchedulingGates is an opaque list of values that if specified will block scheduling the pod. If schedulingGates is not empty, the pod will stay in the SchedulingGated state and the scheduler will not attempt to schedule the pod. SchedulingGates can only be set at pod creation time, and be removed only afterwards. Type array 3.1.243. .template.spec.schedulingGates[] Description PodSchedulingGate is associated to a Pod to guard its scheduling. Type object Required name Property Type Description name string Name of the scheduling gate. Each scheduling gate must have a unique name field. 3.1.244. .template.spec.securityContext Description PodSecurityContext holds pod-level security attributes and common container settings. Some fields are also present in container.securityContext. Field values of container.securityContext take precedence over field values of PodSecurityContext. Type object Property Type Description appArmorProfile object AppArmorProfile defines a pod or container's AppArmor settings. fsGroup integer A special supplemental group that applies to all containers in a pod. Some volume types allow the Kubelet to change the ownership of that volume to be owned by the pod: 1. 
The owning GID will be the FSGroup 2. The setgid bit is set (new files created in the volume will be owned by FSGroup) 3. The permission bits are OR'd with rw-rw---- If unset, the Kubelet will not modify the ownership and permissions of any volume. Note that this field cannot be set when spec.os.name is windows. fsGroupChangePolicy string fsGroupChangePolicy defines behavior of changing ownership and permission of the volume before being exposed inside Pod. This field will only apply to volume types which support fsGroup based ownership(and permissions). It will have no effect on ephemeral volume types such as: secret, configmaps and emptydir. Valid values are "OnRootMismatch" and "Always". If not specified, "Always" is used. Note that this field cannot be set when spec.os.name is windows. Possible enum values: - "Always" indicates that volume's ownership and permissions should always be changed whenever volume is mounted inside a Pod. This the default behavior. - "OnRootMismatch" indicates that volume's ownership and permissions will be changed only when permission and ownership of root directory does not match with expected permissions on the volume. This can help shorten the time it takes to change ownership and permissions of a volume. runAsGroup integer The GID to run the entrypoint of the container process. Uses runtime default if unset. May also be set in SecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence for that container. Note that this field cannot be set when spec.os.name is windows. runAsNonRoot boolean Indicates that the container must run as a non-root user. If true, the Kubelet will validate the image at runtime to ensure that it does not run as UID 0 (root) and fail to start the container if it does. If unset or false, no such validation will be performed. May also be set in SecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. runAsUser integer The UID to run the entrypoint of the container process. Defaults to user specified in image metadata if unspecified. May also be set in SecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence for that container. Note that this field cannot be set when spec.os.name is windows. seLinuxOptions object SELinuxOptions are the labels to be applied to the container seccompProfile object SeccompProfile defines a pod/container's seccomp profile settings. Only one profile source may be set. supplementalGroups array (integer) A list of groups applied to the first process run in each container, in addition to the container's primary GID, the fsGroup (if specified), and group memberships defined in the container image for the uid of the container process. If unspecified, no additional groups are added to any container. Note that group memberships defined in the container image for the uid of the container process are still effective, even if they are not included in this list. Note that this field cannot be set when spec.os.name is windows. sysctls array Sysctls hold a list of namespaced sysctls used for the pod. Pods with unsupported sysctls (by the container runtime) might fail to launch. Note that this field cannot be set when spec.os.name is windows. sysctls[] object Sysctl defines a kernel parameter to be set windowsOptions object WindowsSecurityContextOptions contain Windows-specific options and credentials. 3.1.245. 
.template.spec.securityContext.appArmorProfile Description AppArmorProfile defines a pod or container's AppArmor settings. Type object Required type Property Type Description localhostProfile string localhostProfile indicates a profile loaded on the node that should be used. The profile must be preconfigured on the node to work. Must match the loaded name of the profile. Must be set if and only if type is "Localhost". type string type indicates which kind of AppArmor profile will be applied. Valid options are: Localhost - a profile pre-loaded on the node. RuntimeDefault - the container runtime's default profile. Unconfined - no AppArmor enforcement. Possible enum values: - "Localhost" indicates that a profile pre-loaded on the node should be used. - "RuntimeDefault" indicates that the container runtime's default AppArmor profile should be used. - "Unconfined" indicates that no AppArmor profile should be enforced. 3.1.246. .template.spec.securityContext.seLinuxOptions Description SELinuxOptions are the labels to be applied to the container Type object Property Type Description level string Level is SELinux level label that applies to the container. role string Role is a SELinux role label that applies to the container. type string Type is a SELinux type label that applies to the container. user string User is a SELinux user label that applies to the container. 3.1.247. .template.spec.securityContext.seccompProfile Description SeccompProfile defines a pod/container's seccomp profile settings. Only one profile source may be set. Type object Required type Property Type Description localhostProfile string localhostProfile indicates a profile defined in a file on the node should be used. The profile must be preconfigured on the node to work. Must be a descending path, relative to the kubelet's configured seccomp profile location. Must be set if type is "Localhost". Must NOT be set for any other type. type string type indicates which kind of seccomp profile will be applied. Valid options are: Localhost - a profile defined in a file on the node should be used. RuntimeDefault - the container runtime default profile should be used. Unconfined - no profile should be applied. Possible enum values: - "Localhost" indicates a profile defined in a file on the node should be used. The file's location relative to <kubelet-root-dir>/seccomp. - "RuntimeDefault" represents the default container runtime seccomp profile. - "Unconfined" indicates no seccomp profile is applied (A.K.A. unconfined). 3.1.248. .template.spec.securityContext.sysctls Description Sysctls hold a list of namespaced sysctls used for the pod. Pods with unsupported sysctls (by the container runtime) might fail to launch. Note that this field cannot be set when spec.os.name is windows. Type array 3.1.249. .template.spec.securityContext.sysctls[] Description Sysctl defines a kernel parameter to be set Type object Required name value Property Type Description name string Name of a property to set value string Value of a property to set 3.1.250. .template.spec.securityContext.windowsOptions Description WindowsSecurityContextOptions contain Windows-specific options and credentials. Type object Property Type Description gmsaCredentialSpec string GMSACredentialSpec is where the GMSA admission webhook ( https://github.com/kubernetes-sigs/windows-gmsa ) inlines the contents of the GMSA credential spec named by the GMSACredentialSpecName field. gmsaCredentialSpecName string GMSACredentialSpecName is the name of the GMSA credential spec to use. 
hostProcess boolean HostProcess determines if a container should be run as a 'Host Process' container. All of a Pod's containers must have the same effective HostProcess value (it is not allowed to have a mix of HostProcess containers and non-HostProcess containers). In addition, if HostProcess is true then HostNetwork must also be set to true. runAsUserName string The UserName in Windows to run the entrypoint of the container process. Defaults to the user specified in image metadata if unspecified. May also be set in PodSecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. 3.1.251. .template.spec.tolerations Description If specified, the pod's tolerations. Type array 3.1.252. .template.spec.tolerations[] Description The pod this Toleration is attached to tolerates any taint that matches the triple <key,value,effect> using the matching operator <operator>. Type object Property Type Description effect string Effect indicates the taint effect to match. Empty means match all taint effects. When specified, allowed values are NoSchedule, PreferNoSchedule and NoExecute. Possible enum values: - "NoExecute" Evict any already-running pods that do not tolerate the taint. Currently enforced by NodeController. - "NoSchedule" Do not allow new pods to schedule onto the node unless they tolerate the taint, but allow all pods submitted to Kubelet without going through the scheduler to start, and allow all already-running pods to continue running. Enforced by the scheduler. - "PreferNoSchedule" Like TaintEffectNoSchedule, but the scheduler tries not to schedule new pods onto the node, rather than prohibiting new pods from scheduling onto the node entirely. Enforced by the scheduler. key string Key is the taint key that the toleration applies to. Empty means match all taint keys. If the key is empty, operator must be Exists; this combination means to match all values and all keys. operator string Operator represents a key's relationship to the value. Valid operators are Exists and Equal. Defaults to Equal. Exists is equivalent to wildcard for value, so that a pod can tolerate all taints of a particular category. Possible enum values: - "Equal" - "Exists" tolerationSeconds integer TolerationSeconds represents the period of time the toleration (which must be of effect NoExecute, otherwise this field is ignored) tolerates the taint. By default, it is not set, which means tolerate the taint forever (do not evict). Zero and negative values will be treated as 0 (evict immediately) by the system. value string Value is the taint value the toleration matches to. If the operator is Exists, the value should be empty, otherwise just a regular string. 3.1.253. .template.spec.topologySpreadConstraints Description TopologySpreadConstraints describes how a group of pods ought to spread across topology domains. Scheduler will schedule pods in a way which abides by the constraints. All topologySpreadConstraints are ANDed. Type array 3.1.254. .template.spec.topologySpreadConstraints[] Description TopologySpreadConstraint specifies how to spread matching pods among the given topology. Type object Required maxSkew topologyKey whenUnsatisfiable Property Type Description labelSelector LabelSelector LabelSelector is used to find matching pods. Pods that match this label selector are counted to determine the number of pods in their corresponding topology domain. 
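A short, illustrative tolerations fragment matching the fields above; the taint key dedicated=batch is hypothetical, while node.kubernetes.io/not-ready is a standard node condition taint. The remaining topologySpreadConstraint properties continue after the sketch.

spec:
  tolerations:
  - key: "dedicated"               # tolerate nodes tainted dedicated=batch:NoSchedule
    operator: "Equal"
    value: "batch"
    effect: "NoSchedule"
  - key: "node.kubernetes.io/not-ready"
    operator: "Exists"
    effect: "NoExecute"
    tolerationSeconds: 300         # evict 300 s after the node becomes not-ready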
matchLabelKeys array (string) MatchLabelKeys is a set of pod label keys to select the pods over which spreading will be calculated. The keys are used to lookup values from the incoming pod labels, those key-value labels are ANDed with labelSelector to select the group of existing pods over which spreading will be calculated for the incoming pod. The same key is forbidden to exist in both MatchLabelKeys and LabelSelector. MatchLabelKeys cannot be set when LabelSelector isn't set. Keys that don't exist in the incoming pod labels will be ignored. A null or empty list means only match against labelSelector. This is a beta field and requires the MatchLabelKeysInPodTopologySpread feature gate to be enabled (enabled by default). maxSkew integer MaxSkew describes the degree to which pods may be unevenly distributed. When whenUnsatisfiable=DoNotSchedule , it is the maximum permitted difference between the number of matching pods in the target topology and the global minimum. The global minimum is the minimum number of matching pods in an eligible domain or zero if the number of eligible domains is less than MinDomains. For example, in a 3-zone cluster, MaxSkew is set to 1, and pods with the same labelSelector spread as 2/2/1: In this case, the global minimum is 1. | zone1 | zone2 | zone3 | | P P | P P | P | - if MaxSkew is 1, incoming pod can only be scheduled to zone3 to become 2/2/2; scheduling it onto zone1(zone2) would make the ActualSkew(3-1) on zone1(zone2) violate MaxSkew(1). - if MaxSkew is 2, incoming pod can be scheduled onto any zone. When whenUnsatisfiable=ScheduleAnyway , it is used to give higher precedence to topologies that satisfy it. It's a required field. Default value is 1 and 0 is not allowed. minDomains integer MinDomains indicates a minimum number of eligible domains. When the number of eligible domains with matching topology keys is less than minDomains, Pod Topology Spread treats "global minimum" as 0, and then the calculation of Skew is performed. And when the number of eligible domains with matching topology keys equals or greater than minDomains, this value has no effect on scheduling. As a result, when the number of eligible domains is less than minDomains, scheduler won't schedule more than maxSkew Pods to those domains. If value is nil, the constraint behaves as if MinDomains is equal to 1. Valid values are integers greater than 0. When value is not nil, WhenUnsatisfiable must be DoNotSchedule. For example, in a 3-zone cluster, MaxSkew is set to 2, MinDomains is set to 5 and pods with the same labelSelector spread as 2/2/2: | zone1 | zone2 | zone3 | | P P | P P | P P | The number of domains is less than 5(MinDomains), so "global minimum" is treated as 0. In this situation, new pod with the same labelSelector cannot be scheduled, because computed skew will be 3(3 - 0) if new Pod is scheduled to any of the three zones, it will violate MaxSkew. nodeAffinityPolicy string NodeAffinityPolicy indicates how we will treat Pod's nodeAffinity/nodeSelector when calculating pod topology spread skew. Options are: - Honor: only nodes matching nodeAffinity/nodeSelector are included in the calculations. - Ignore: nodeAffinity/nodeSelector are ignored. All nodes are included in the calculations. If this value is nil, the behavior is equivalent to the Honor policy. This is a beta-level feature default enabled by the NodeInclusionPolicyInPodTopologySpread feature flag. Possible enum values: - "Honor" means use this scheduling directive when calculating pod topology spread skew. 
- "Ignore" means ignore this scheduling directive when calculating pod topology spread skew. nodeTaintsPolicy string NodeTaintsPolicy indicates how we will treat node taints when calculating pod topology spread skew. Options are: - Honor: nodes without taints, along with tainted nodes for which the incoming pod has a toleration, are included. - Ignore: node taints are ignored. All nodes are included. If this value is nil, the behavior is equivalent to the Ignore policy. This is a beta-level feature default enabled by the NodeInclusionPolicyInPodTopologySpread feature flag. Possible enum values: - "Honor" means use this scheduling directive when calculating pod topology spread skew. - "Ignore" means ignore this scheduling directive when calculating pod topology spread skew. topologyKey string TopologyKey is the key of node labels. Nodes that have a label with this key and identical values are considered to be in the same topology. We consider each <key, value> as a "bucket", and try to put balanced number of pods into each bucket. We define a domain as a particular instance of a topology. Also, we define an eligible domain as a domain whose nodes meet the requirements of nodeAffinityPolicy and nodeTaintsPolicy. e.g. If TopologyKey is "kubernetes.io/hostname", each Node is a domain of that topology. And, if TopologyKey is "topology.kubernetes.io/zone", each zone is a domain of that topology. It's a required field. whenUnsatisfiable string WhenUnsatisfiable indicates how to deal with a pod if it doesn't satisfy the spread constraint. - DoNotSchedule (default) tells the scheduler not to schedule it. - ScheduleAnyway tells the scheduler to schedule the pod in any location, but giving higher precedence to topologies that would help reduce the skew. A constraint is considered "Unsatisfiable" for an incoming pod if and only if every possible node assignment for that pod would violate "MaxSkew" on some topology. For example, in a 3-zone cluster, MaxSkew is set to 1, and pods with the same labelSelector spread as 3/1/1: | zone1 | zone2 | zone3 | | P P P | P | P | If WhenUnsatisfiable is set to DoNotSchedule, incoming pod can only be scheduled to zone2(zone3) to become 3/2/1(3/1/2) as ActualSkew(2-1) on zone2(zone3) satisfies MaxSkew(1). In other words, the cluster can still be imbalanced, but scheduler won't make it more imbalanced. It's a required field. Possible enum values: - "DoNotSchedule" instructs the scheduler not to schedule the pod when constraints are not satisfied. - "ScheduleAnyway" instructs the scheduler to schedule the pod even if constraints are not satisfied. 3.1.255. .template.spec.volumes Description List of volumes that can be mounted by containers belonging to the pod. More info: https://kubernetes.io/docs/concepts/storage/volumes Type array 3.1.256. .template.spec.volumes[] Description Volume represents a named volume in a pod that may be accessed by any container in the pod. Type object Required name Property Type Description awsElasticBlockStore object Represents a Persistent Disk resource in AWS. An AWS EBS disk must exist before mounting to a container. The disk must also be in the same AWS zone as the kubelet. An AWS EBS disk can only be mounted as read/write once. AWS EBS volumes support ownership management and SELinux relabeling. azureDisk object AzureDisk represents an Azure Data Disk mount on the host and bind mount to the pod. azureFile object AzureFile represents an Azure File Service mount on the host and bind mount to the pod. 
cephfs object Represents a Ceph Filesystem mount that lasts the lifetime of a pod Cephfs volumes do not support ownership management or SELinux relabeling. cinder object Represents a cinder volume resource in Openstack. A Cinder volume must exist before mounting to a container. The volume must also be in the same region as the kubelet. Cinder volumes support ownership management and SELinux relabeling. configMap object Adapts a ConfigMap into a volume. The contents of the target ConfigMap's Data field will be presented in a volume as files using the keys in the Data field as the file names, unless the items element is populated with specific mappings of keys to paths. ConfigMap volumes support ownership management and SELinux relabeling. csi object Represents a source location of a volume to mount, managed by an external CSI driver downwardAPI object DownwardAPIVolumeSource represents a volume containing downward API info. Downward API volumes support ownership management and SELinux relabeling. emptyDir object Represents an empty directory for a pod. Empty directory volumes support ownership management and SELinux relabeling. ephemeral object Represents an ephemeral volume that is handled by a normal storage driver. fc object Represents a Fibre Channel volume. Fibre Channel volumes can only be mounted as read/write once. Fibre Channel volumes support ownership management and SELinux relabeling. flexVolume object FlexVolume represents a generic volume resource that is provisioned/attached using an exec based plugin. flocker object Represents a Flocker volume mounted by the Flocker agent. One and only one of datasetName and datasetUUID should be set. Flocker volumes do not support ownership management or SELinux relabeling. gcePersistentDisk object Represents a Persistent Disk resource in Google Compute Engine. A GCE PD must exist before mounting to a container. The disk must also be in the same GCE project and zone as the kubelet. A GCE PD can only be mounted as read/write once or read-only many times. GCE PDs support ownership management and SELinux relabeling. gitRepo object Represents a volume that is populated with the contents of a git repository. Git repo volumes do not support ownership management. Git repo volumes support SELinux relabeling. DEPRECATED: GitRepo is deprecated. To provision a container with a git repo, mount an EmptyDir into an InitContainer that clones the repo using git, then mount the EmptyDir into the Pod's container. glusterfs object Represents a Glusterfs mount that lasts the lifetime of a pod. Glusterfs volumes do not support ownership management or SELinux relabeling. hostPath object Represents a host path mapped into a pod. Host path volumes do not support ownership management or SELinux relabeling. iscsi object Represents an ISCSI disk. ISCSI volumes can only be mounted as read/write once. ISCSI volumes support ownership management and SELinux relabeling. name string name of the volume. Must be a DNS_LABEL and unique within the pod. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names nfs object Represents an NFS mount that lasts the lifetime of a pod. NFS volumes do not support ownership management or SELinux relabeling. persistentVolumeClaim object PersistentVolumeClaimVolumeSource references the user's PVC in the same namespace. This volume finds the bound PV and mounts that volume for the pod. 
A PersistentVolumeClaimVolumeSource is, essentially, a wrapper around another type of volume that is owned by someone else (the system). photonPersistentDisk object Represents a Photon Controller persistent disk resource. portworxVolume object PortworxVolumeSource represents a Portworx volume resource. projected object Represents a projected volume source quobyte object Represents a Quobyte mount that lasts the lifetime of a pod. Quobyte volumes do not support ownership management or SELinux relabeling. rbd object Represents a Rados Block Device mount that lasts the lifetime of a pod. RBD volumes support ownership management and SELinux relabeling. scaleIO object ScaleIOVolumeSource represents a persistent ScaleIO volume secret object Adapts a Secret into a volume. The contents of the target Secret's Data field will be presented in a volume as files using the keys in the Data field as the file names. Secret volumes support ownership management and SELinux relabeling. storageos object Represents a StorageOS persistent volume resource. vsphereVolume object Represents a vSphere volume resource. 3.1.257. .template.spec.volumes[].awsElasticBlockStore Description Represents a Persistent Disk resource in AWS. An AWS EBS disk must exist before mounting to a container. The disk must also be in the same AWS zone as the kubelet. An AWS EBS disk can only be mounted as read/write once. AWS EBS volumes support ownership management and SELinux relabeling. Type object Required volumeID Property Type Description fsType string fsType is the filesystem type of the volume that you want to mount. Tip: Ensure that the filesystem type is supported by the host operating system. Examples: "ext4", "xfs", "ntfs". Implicitly inferred to be "ext4" if unspecified. More info: https://kubernetes.io/docs/concepts/storage/volumes#awselasticblockstore partition integer partition is the partition in the volume that you want to mount. If omitted, the default is to mount by volume name. Examples: For volume /dev/sda1, you specify the partition as "1". Similarly, the volume partition for /dev/sda is "0" (or you can leave the property empty). readOnly boolean readOnly value true will force the readOnly setting in VolumeMounts. More info: https://kubernetes.io/docs/concepts/storage/volumes#awselasticblockstore volumeID string volumeID is unique ID of the persistent disk resource in AWS (Amazon EBS volume). More info: https://kubernetes.io/docs/concepts/storage/volumes#awselasticblockstore 3.1.258. .template.spec.volumes[].azureDisk Description AzureDisk represents an Azure Data Disk mount on the host and bind mount to the pod. Type object Required diskName diskURI Property Type Description cachingMode string cachingMode is the Host Caching mode: None, Read Only, Read Write. Possible enum values: - "None" - "ReadOnly" - "ReadWrite" diskName string diskName is the Name of the data disk in the blob storage diskURI string diskURI is the URI of data disk in the blob storage fsType string fsType is Filesystem type to mount. Must be a filesystem type supported by the host operating system. Ex. "ext4", "xfs", "ntfs". Implicitly inferred to be "ext4" if unspecified. kind string kind expected values are Shared: multiple blob disks per storage account Dedicated: single blob disk per storage account Managed: azure managed data disk (only in managed availability set). defaults to shared Possible enum values: - "Dedicated" - "Managed" - "Shared" readOnly boolean readOnly Defaults to false (read/write). 
ReadOnly here will force the ReadOnly setting in VolumeMounts. 3.1.259. .template.spec.volumes[].azureFile Description AzureFile represents an Azure File Service mount on the host and bind mount to the pod. Type object Required secretName shareName Property Type Description readOnly boolean readOnly defaults to false (read/write). ReadOnly here will force the ReadOnly setting in VolumeMounts. secretName string secretName is the name of secret that contains Azure Storage Account Name and Key shareName string shareName is the azure share Name 3.1.260. .template.spec.volumes[].cephfs Description Represents a Ceph Filesystem mount that lasts the lifetime of a pod Cephfs volumes do not support ownership management or SELinux relabeling. Type object Required monitors Property Type Description monitors array (string) monitors is Required: Monitors is a collection of Ceph monitors More info: https://examples.k8s.io/volumes/cephfs/README.md#how-to-use-it path string path is Optional: Used as the mounted root, rather than the full Ceph tree, default is / readOnly boolean readOnly is Optional: Defaults to false (read/write). ReadOnly here will force the ReadOnly setting in VolumeMounts. More info: https://examples.k8s.io/volumes/cephfs/README.md#how-to-use-it secretFile string secretFile is Optional: SecretFile is the path to key ring for User, default is /etc/ceph/user.secret More info: https://examples.k8s.io/volumes/cephfs/README.md#how-to-use-it secretRef object LocalObjectReference contains enough information to let you locate the referenced object inside the same namespace. user string user is optional: User is the rados user name, default is admin More info: https://examples.k8s.io/volumes/cephfs/README.md#how-to-use-it 3.1.261. .template.spec.volumes[].cephfs.secretRef Description LocalObjectReference contains enough information to let you locate the referenced object inside the same namespace. Type object Property Type Description name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names 3.1.262. .template.spec.volumes[].cinder Description Represents a cinder volume resource in Openstack. A Cinder volume must exist before mounting to a container. The volume must also be in the same region as the kubelet. Cinder volumes support ownership management and SELinux relabeling. Type object Required volumeID Property Type Description fsType string fsType is the filesystem type to mount. Must be a filesystem type supported by the host operating system. Examples: "ext4", "xfs", "ntfs". Implicitly inferred to be "ext4" if unspecified. More info: https://examples.k8s.io/mysql-cinder-pd/README.md readOnly boolean readOnly defaults to false (read/write). ReadOnly here will force the ReadOnly setting in VolumeMounts. More info: https://examples.k8s.io/mysql-cinder-pd/README.md secretRef object LocalObjectReference contains enough information to let you locate the referenced object inside the same namespace. volumeID string volumeID used to identify the volume in cinder. More info: https://examples.k8s.io/mysql-cinder-pd/README.md 3.1.263. .template.spec.volumes[].cinder.secretRef Description LocalObjectReference contains enough information to let you locate the referenced object inside the same namespace. 
Type object Property Type Description name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names 3.1.264. .template.spec.volumes[].configMap Description Adapts a ConfigMap into a volume. The contents of the target ConfigMap's Data field will be presented in a volume as files using the keys in the Data field as the file names, unless the items element is populated with specific mappings of keys to paths. ConfigMap volumes support ownership management and SELinux relabeling. Type object Property Type Description defaultMode integer defaultMode is optional: mode bits used to set permissions on created files by default. Must be an octal value between 0000 and 0777 or a decimal value between 0 and 511. YAML accepts both octal and decimal values, JSON requires decimal values for mode bits. Defaults to 0644. Directories within the path are not affected by this setting. This might be in conflict with other options that affect the file mode, like fsGroup, and the result can be other mode bits set. items array items if unspecified, each key-value pair in the Data field of the referenced ConfigMap will be projected into the volume as a file whose name is the key and content is the value. If specified, the listed keys will be projected into the specified paths, and unlisted keys will not be present. If a key is specified which is not present in the ConfigMap, the volume setup will error unless it is marked optional. Paths must be relative and may not contain the '..' path or start with '..'. items[] object Maps a string key to a path within a volume. name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names optional boolean optional specify whether the ConfigMap or its keys must be defined 3.1.265. .template.spec.volumes[].configMap.items Description items if unspecified, each key-value pair in the Data field of the referenced ConfigMap will be projected into the volume as a file whose name is the key and content is the value. If specified, the listed keys will be projected into the specified paths, and unlisted keys will not be present. If a key is specified which is not present in the ConfigMap, the volume setup will error unless it is marked optional. Paths must be relative and may not contain the '..' path or start with '..'. Type array 3.1.266. .template.spec.volumes[].configMap.items[] Description Maps a string key to a path within a volume. Type object Required key path Property Type Description key string key is the key to project. mode integer mode is Optional: mode bits used to set permissions on this file. Must be an octal value between 0000 and 0777 or a decimal value between 0 and 511. YAML accepts both octal and decimal values, JSON requires decimal values for mode bits. If not specified, the volume defaultMode will be used. This might be in conflict with other options that affect the file mode, like fsGroup, and the result can be other mode bits set. path string path is the relative path of the file to map the key to. May not be an absolute path. May not contain the path element '..'. May not start with the string '..'. 
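For illustration, a hedged sketch combining the configMap volume source and the key-to-path items mapping described above. The ConfigMap name, key, and mount path are hypothetical.

spec:
  containers:
  - name: app
    image: registry.example.com/app:latest
    volumeMounts:
    - name: config
      mountPath: /etc/app
      readOnly: true
  volumes:
  - name: config
    configMap:
      name: app-config             # hypothetical ConfigMap
      defaultMode: 0440            # octal in YAML; JSON requires the decimal value 288
      items:
      - key: settings.yaml         # only this key is projected
        path: settings.yaml
      optional: false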
3.1.267. .template.spec.volumes[].csi Description Represents a source location of a volume to mount, managed by an external CSI driver Type object Required driver Property Type Description driver string driver is the name of the CSI driver that handles this volume. Consult with your admin for the correct name as registered in the cluster. fsType string fsType to mount. Ex. "ext4", "xfs", "ntfs". If not provided, the empty value is passed to the associated CSI driver which will determine the default filesystem to apply. nodePublishSecretRef object LocalObjectReference contains enough information to let you locate the referenced object inside the same namespace. readOnly boolean readOnly specifies a read-only configuration for the volume. Defaults to false (read/write). volumeAttributes object (string) volumeAttributes stores driver-specific properties that are passed to the CSI driver. Consult your driver's documentation for supported values. 3.1.268. .template.spec.volumes[].csi.nodePublishSecretRef Description LocalObjectReference contains enough information to let you locate the referenced object inside the same namespace. Type object Property Type Description name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names 3.1.269. .template.spec.volumes[].downwardAPI Description DownwardAPIVolumeSource represents a volume containing downward API info. Downward API volumes support ownership management and SELinux relabeling. Type object Property Type Description defaultMode integer Optional: mode bits used to set permissions on created files by default. Must be an octal value between 0000 and 0777 or a decimal value between 0 and 511. YAML accepts both octal and decimal values, JSON requires decimal values for mode bits. Defaults to 0644. Directories within the path are not affected by this setting. This might be in conflict with other options that affect the file mode, like fsGroup, and the result can be other mode bits set. items array Items is a list of downward API volume files items[] object DownwardAPIVolumeFile represents information to create the file containing the pod field 3.1.270. .template.spec.volumes[].downwardAPI.items Description Items is a list of downward API volume files Type array 3.1.271. .template.spec.volumes[].downwardAPI.items[] Description DownwardAPIVolumeFile represents information to create the file containing the pod field Type object Required path Property Type Description fieldRef object ObjectFieldSelector selects an APIVersioned field of an object. mode integer Optional: mode bits used to set permissions on this file, must be an octal value between 0000 and 0777 or a decimal value between 0 and 511. YAML accepts both octal and decimal values, JSON requires decimal values for mode bits. If not specified, the volume defaultMode will be used. This might be in conflict with other options that affect the file mode, like fsGroup, and the result can be other mode bits set. path string Required: Path is the relative path name of the file to be created. Must not be absolute or contain the '..' path. Must be utf-8 encoded. The first item of the relative path must not start with '..'
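As a non-normative example of the downward API volume fields above, the fragment below exposes pod labels and a container CPU limit as files; it uses the resourceFieldRef selector documented immediately after this sketch. The container name and paths are hypothetical.

spec:
  volumes:
  - name: podinfo
    downwardAPI:
      defaultMode: 0644
      items:
      - path: "labels"                 # relative path of the file to create
        fieldRef:
          fieldPath: metadata.labels
      - path: "cpu_limit"
        resourceFieldRef:
          containerName: app           # required for volumes
          resource: limits.cpu
          divisor: 1m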
resourceFieldRef object ResourceFieldSelector represents container resources (cpu, memory) and their output format 3.1.272. .template.spec.volumes[].downwardAPI.items[].fieldRef Description ObjectFieldSelector selects an APIVersioned field of an object. Type object Required fieldPath Property Type Description apiVersion string Version of the schema the FieldPath is written in terms of, defaults to "v1". fieldPath string Path of the field to select in the specified API version. 3.1.273. .template.spec.volumes[].downwardAPI.items[].resourceFieldRef Description ResourceFieldSelector represents container resources (cpu, memory) and their output format Type object Required resource Property Type Description containerName string Container name: required for volumes, optional for env vars divisor Quantity Specifies the output format of the exposed resources, defaults to "1" resource string Required: resource to select 3.1.274. .template.spec.volumes[].emptyDir Description Represents an empty directory for a pod. Empty directory volumes support ownership management and SELinux relabeling. Type object Property Type Description medium string medium represents what type of storage medium should back this directory. The default is "" which means to use the node's default medium. Must be an empty string (default) or Memory. More info: https://kubernetes.io/docs/concepts/storage/volumes#emptydir sizeLimit Quantity sizeLimit is the total amount of local storage required for this EmptyDir volume. The size limit is also applicable for memory medium. The maximum usage on memory medium EmptyDir would be the minimum value between the SizeLimit specified here and the sum of memory limits of all containers in a pod. The default is nil which means that the limit is undefined. More info: https://kubernetes.io/docs/concepts/storage/volumes#emptydir 3.1.275. .template.spec.volumes[].ephemeral Description Represents an ephemeral volume that is handled by a normal storage driver. Type object Property Type Description volumeClaimTemplate object PersistentVolumeClaimTemplate is used to produce PersistentVolumeClaim objects as part of an EphemeralVolumeSource. 3.1.276. .template.spec.volumes[].ephemeral.volumeClaimTemplate Description PersistentVolumeClaimTemplate is used to produce PersistentVolumeClaim objects as part of an EphemeralVolumeSource. Type object Required spec Property Type Description metadata ObjectMeta May contain labels and annotations that will be copied into the PVC when creating it. No other fields are allowed and will be rejected during validation. spec object PersistentVolumeClaimSpec describes the common attributes of storage devices and allows a Source for provider-specific attributes 3.1.277. .template.spec.volumes[].ephemeral.volumeClaimTemplate.spec Description PersistentVolumeClaimSpec describes the common attributes of storage devices and allows a Source for provider-specific attributes Type object Property Type Description accessModes array (string) accessModes contains the desired access modes the volume should have. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#access-modes-1 dataSource object TypedLocalObjectReference contains enough information to let you locate the typed referenced object inside the same namespace. dataSourceRef object dataSourceRef specifies the object from which to populate the volume with data, if a non-empty volume is desired. This may be any object from a non-empty API group (non core object) or a PersistentVolumeClaim object. 
When this field is specified, volume binding will only succeed if the type of the specified object matches some installed volume populator or dynamic provisioner. This field will replace the functionality of the dataSource field and as such if both fields are non-empty, they must have the same value. For backwards compatibility, when namespace isn't specified in dataSourceRef, both fields (dataSource and dataSourceRef) will be set to the same value automatically if one of them is empty and the other is non-empty. When namespace is specified in dataSourceRef, dataSource isn't set to the same value and must be empty. There are three important differences between dataSource and dataSourceRef: * While dataSource only allows two specific types of objects, dataSourceRef allows any non-core object, as well as PersistentVolumeClaim objects. * While dataSource ignores disallowed values (dropping them), dataSourceRef preserves all values, and generates an error if a disallowed value is specified. * While dataSource only allows local objects, dataSourceRef allows objects in any namespaces. (Beta) Using this field requires the AnyVolumeDataSource feature gate to be enabled. (Alpha) Using the namespace field of dataSourceRef requires the CrossNamespaceVolumeDataSource feature gate to be enabled. resources object VolumeResourceRequirements describes the storage resource requirements for a volume. selector LabelSelector selector is a label query over volumes to consider for binding. storageClassName string storageClassName is the name of the StorageClass required by the claim. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#class-1 volumeAttributesClassName string volumeAttributesClassName may be used to set the VolumeAttributesClass used by this claim. If specified, the CSI driver will create or update the volume with the attributes defined in the corresponding VolumeAttributesClass. This has a different purpose than storageClassName, it can be changed after the claim is created. An empty string value means that no VolumeAttributesClass will be applied to the claim but it's not allowed to reset this field to empty string once it is set. If unspecified and the PersistentVolumeClaim is unbound, the default VolumeAttributesClass will be set by the persistentvolume controller if it exists. If the resource referred to by volumeAttributesClass does not exist, this PersistentVolumeClaim will be set to a Pending state, as reflected by the modifyVolumeStatus field, until such as a resource exists. More info: https://kubernetes.io/docs/concepts/storage/volume-attributes-classes/ (Alpha) Using this field requires the VolumeAttributesClass feature gate to be enabled. volumeMode string volumeMode defines what type of volume is required by the claim. Value of Filesystem is implied when not included in claim spec. Possible enum values: - "Block" means the volume will not be formatted with a filesystem and will remain a raw block device. - "Filesystem" means the volume will be or is formatted with a filesystem. volumeName string volumeName is the binding reference to the PersistentVolume backing this claim. 3.1.278. .template.spec.volumes[].ephemeral.volumeClaimTemplate.spec.dataSource Description TypedLocalObjectReference contains enough information to let you locate the typed referenced object inside the same namespace. Type object Required kind name Property Type Description apiGroup string APIGroup is the group for the resource being referenced. 
If APIGroup is not specified, the specified Kind must be in the core API group. For any other third-party types, APIGroup is required. kind string Kind is the type of resource being referenced name string Name is the name of resource being referenced 3.1.279. .template.spec.volumes[].ephemeral.volumeClaimTemplate.spec.dataSourceRef Description dataSourceRef specifies the object from which to populate the volume with data, if a non-empty volume is desired. This may be any object from a non-empty API group (non core object) or a PersistentVolumeClaim object. When this field is specified, volume binding will only succeed if the type of the specified object matches some installed volume populator or dynamic provisioner. This field will replace the functionality of the dataSource field and as such if both fields are non-empty, they must have the same value. For backwards compatibility, when namespace isn't specified in dataSourceRef, both fields (dataSource and dataSourceRef) will be set to the same value automatically if one of them is empty and the other is non-empty. When namespace is specified in dataSourceRef, dataSource isn't set to the same value and must be empty. There are three important differences between dataSource and dataSourceRef: * While dataSource only allows two specific types of objects, dataSourceRef allows any non-core object, as well as PersistentVolumeClaim objects. * While dataSource ignores disallowed values (dropping them), dataSourceRef preserves all values, and generates an error if a disallowed value is specified. * While dataSource only allows local objects, dataSourceRef allows objects in any namespaces. (Beta) Using this field requires the AnyVolumeDataSource feature gate to be enabled. (Alpha) Using the namespace field of dataSourceRef requires the CrossNamespaceVolumeDataSource feature gate to be enabled. Type object Required kind name Property Type Description apiGroup string APIGroup is the group for the resource being referenced. If APIGroup is not specified, the specified Kind must be in the core API group. For any other third-party types, APIGroup is required. kind string Kind is the type of resource being referenced name string Name is the name of resource being referenced namespace string Namespace is the namespace of resource being referenced Note that when a namespace is specified, a gateway.networking.k8s.io/ReferenceGrant object is required in the referent namespace to allow that namespace's owner to accept the reference. See the ReferenceGrant documentation for details. (Alpha) This field requires the CrossNamespaceVolumeDataSource feature gate to be enabled. 3.1.280. .template.spec.volumes[].ephemeral.volumeClaimTemplate.spec.resources Description VolumeResourceRequirements describes the storage resource requirements for a volume. Type object Property Type Description limits object (Quantity) Limits describes the maximum amount of compute resources allowed. More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/ requests object (Quantity) Requests describes the minimum amount of compute resources required. If Requests is omitted for a container, it defaults to Limits if that is explicitly specified, otherwise to an implementation-defined value. Requests cannot exceed Limits. More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/ 3.1.281. .template.spec.volumes[].fc Description Represents a Fibre Channel volume. Fibre Channel volumes can only be mounted as read/write once. 
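For illustration, a hedged sketch of a generic ephemeral volume using the volumeClaimTemplate fields described above. The storage class name and sizes are hypothetical; the Fibre Channel volume description resumes after the sketch.

spec:
  containers:
  - name: app
    image: registry.example.com/app:latest
    volumeMounts:
    - name: scratch
      mountPath: /var/scratch
  volumes:
  - name: scratch
    ephemeral:
      volumeClaimTemplate:
        metadata:
          labels:
            type: scratch                  # copied into the generated PVC
        spec:
          accessModes: [ "ReadWriteOnce" ]
          storageClassName: standard-csi   # hypothetical StorageClass
          resources:
            requests:
              storage: 1Gi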
Fibre Channel volumes support ownership management and SELinux relabeling. Type object Property Type Description fsType string fsType is the filesystem type to mount. Must be a filesystem type supported by the host operating system. Ex. "ext4", "xfs", "ntfs". Implicitly inferred to be "ext4" if unspecified. lun integer lun is Optional: FC target lun number readOnly boolean readOnly is Optional: Defaults to false (read/write). ReadOnly here will force the ReadOnly setting in VolumeMounts. targetWWNs array (string) targetWWNs is Optional: FC target worldwide names (WWNs) wwids array (string) wwids Optional: FC volume world wide identifiers (wwids) Either wwids or combination of targetWWNs and lun must be set, but not both simultaneously. 3.1.282. .template.spec.volumes[].flexVolume Description FlexVolume represents a generic volume resource that is provisioned/attached using an exec based plugin. Type object Required driver Property Type Description driver string driver is the name of the driver to use for this volume. fsType string fsType is the filesystem type to mount. Must be a filesystem type supported by the host operating system. Ex. "ext4", "xfs", "ntfs". The default filesystem depends on FlexVolume script. options object (string) options is Optional: this field holds extra command options if any. readOnly boolean readOnly is Optional: defaults to false (read/write). ReadOnly here will force the ReadOnly setting in VolumeMounts. secretRef object LocalObjectReference contains enough information to let you locate the referenced object inside the same namespace. 3.1.283. .template.spec.volumes[].flexVolume.secretRef Description LocalObjectReference contains enough information to let you locate the referenced object inside the same namespace. Type object Property Type Description name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names 3.1.284. .template.spec.volumes[].flocker Description Represents a Flocker volume mounted by the Flocker agent. One and only one of datasetName and datasetUUID should be set. Flocker volumes do not support ownership management or SELinux relabeling. Type object Property Type Description datasetName string datasetName is Name of the dataset stored as metadata name on the dataset for Flocker should be considered as deprecated datasetUUID string datasetUUID is the UUID of the dataset. This is unique identifier of a Flocker dataset 3.1.285. .template.spec.volumes[].gcePersistentDisk Description Represents a Persistent Disk resource in Google Compute Engine. A GCE PD must exist before mounting to a container. The disk must also be in the same GCE project and zone as the kubelet. A GCE PD can only be mounted as read/write once or read-only many times. GCE PDs support ownership management and SELinux relabeling. Type object Required pdName Property Type Description fsType string fsType is filesystem type of the volume that you want to mount. Tip: Ensure that the filesystem type is supported by the host operating system. Examples: "ext4", "xfs", "ntfs". Implicitly inferred to be "ext4" if unspecified. More info: https://kubernetes.io/docs/concepts/storage/volumes#gcepersistentdisk partition integer partition is the partition in the volume that you want to mount. If omitted, the default is to mount by volume name. 
Examples: For volume /dev/sda1, you specify the partition as "1". Similarly, the volume partition for /dev/sda is "0" (or you can leave the property empty). More info: https://kubernetes.io/docs/concepts/storage/volumes#gcepersistentdisk pdName string pdName is unique name of the PD resource in GCE. Used to identify the disk in GCE. More info: https://kubernetes.io/docs/concepts/storage/volumes#gcepersistentdisk readOnly boolean readOnly here will force the ReadOnly setting in VolumeMounts. Defaults to false. More info: https://kubernetes.io/docs/concepts/storage/volumes#gcepersistentdisk 3.1.286. .template.spec.volumes[].gitRepo Description Represents a volume that is populated with the contents of a git repository. Git repo volumes do not support ownership management. Git repo volumes support SELinux relabeling. DEPRECATED: GitRepo is deprecated. To provision a container with a git repo, mount an EmptyDir into an InitContainer that clones the repo using git, then mount the EmptyDir into the Pod's container. Type object Required repository Property Type Description directory string directory is the target directory name. Must not contain or start with '..'. If '.' is supplied, the volume directory will be the git repository. Otherwise, if specified, the volume will contain the git repository in the subdirectory with the given name. repository string repository is the URL revision string revision is the commit hash for the specified revision. 3.1.287. .template.spec.volumes[].glusterfs Description Represents a Glusterfs mount that lasts the lifetime of a pod. Glusterfs volumes do not support ownership management or SELinux relabeling. Type object Required endpoints path Property Type Description endpoints string endpoints is the endpoint name that details Glusterfs topology. More info: https://examples.k8s.io/volumes/glusterfs/README.md#create-a-pod path string path is the Glusterfs volume path. More info: https://examples.k8s.io/volumes/glusterfs/README.md#create-a-pod readOnly boolean readOnly here will force the Glusterfs volume to be mounted with read-only permissions. Defaults to false. More info: https://examples.k8s.io/volumes/glusterfs/README.md#create-a-pod 3.1.288. .template.spec.volumes[].hostPath Description Represents a host path mapped into a pod. Host path volumes do not support ownership management or SELinux relabeling. Type object Required path Property Type Description path string path of the directory on the host. If the path is a symlink, it will follow the link to the real path. More info: https://kubernetes.io/docs/concepts/storage/volumes#hostpath type string type for HostPath Volume Defaults to "" More info: https://kubernetes.io/docs/concepts/storage/volumes#hostpath Possible enum values: - "" For backwards compatible, leave it empty if unset - "BlockDevice" A block device must exist at the given path - "CharDevice" A character device must exist at the given path - "Directory" A directory must exist at the given path - "DirectoryOrCreate" If nothing exists at the given path, an empty directory will be created there as needed with file mode 0755, having the same group and ownership with Kubelet. - "File" A file must exist at the given path - "FileOrCreate" If nothing exists at the given path, an empty file will be created there as needed with file mode 0644, having the same group and ownership with Kubelet. - "Socket" A UNIX socket must exist at the given path 3.1.289. .template.spec.volumes[].iscsi Description Represents an ISCSI disk. 
ISCSI volumes can only be mounted as read/write once. ISCSI volumes support ownership management and SELinux relabeling. Type object Required targetPortal iqn lun Property Type Description chapAuthDiscovery boolean chapAuthDiscovery defines whether support iSCSI Discovery CHAP authentication chapAuthSession boolean chapAuthSession defines whether support iSCSI Session CHAP authentication fsType string fsType is the filesystem type of the volume that you want to mount. Tip: Ensure that the filesystem type is supported by the host operating system. Examples: "ext4", "xfs", "ntfs". Implicitly inferred to be "ext4" if unspecified. More info: https://kubernetes.io/docs/concepts/storage/volumes#iscsi initiatorName string initiatorName is the custom iSCSI Initiator Name. If initiatorName is specified with iscsiInterface simultaneously, new iSCSI interface <target portal>:<volume name> will be created for the connection. iqn string iqn is the target iSCSI Qualified Name. iscsiInterface string iscsiInterface is the interface Name that uses an iSCSI transport. Defaults to 'default' (tcp). lun integer lun represents iSCSI Target Lun number. portals array (string) portals is the iSCSI Target Portal List. The portal is either an IP or ip_addr:port if the port is other than default (typically TCP ports 860 and 3260). readOnly boolean readOnly here will force the ReadOnly setting in VolumeMounts. Defaults to false. secretRef object LocalObjectReference contains enough information to let you locate the referenced object inside the same namespace. targetPortal string targetPortal is iSCSI Target Portal. The Portal is either an IP or ip_addr:port if the port is other than default (typically TCP ports 860 and 3260). 3.1.290. .template.spec.volumes[].iscsi.secretRef Description LocalObjectReference contains enough information to let you locate the referenced object inside the same namespace. Type object Property Type Description name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names 3.1.291. .template.spec.volumes[].nfs Description Represents an NFS mount that lasts the lifetime of a pod. NFS volumes do not support ownership management or SELinux relabeling. Type object Required server path Property Type Description path string path that is exported by the NFS server. More info: https://kubernetes.io/docs/concepts/storage/volumes#nfs readOnly boolean readOnly here will force the NFS export to be mounted with read-only permissions. Defaults to false. More info: https://kubernetes.io/docs/concepts/storage/volumes#nfs server string server is the hostname or IP address of the NFS server. More info: https://kubernetes.io/docs/concepts/storage/volumes#nfs 3.1.292. .template.spec.volumes[].persistentVolumeClaim Description PersistentVolumeClaimVolumeSource references the user's PVC in the same namespace. This volume finds the bound PV and mounts that volume for the pod. A PersistentVolumeClaimVolumeSource is, essentially, a wrapper around another type of volume that is owned by someone else (the system). Type object Required claimName Property Type Description claimName string claimName is the name of a PersistentVolumeClaim in the same namespace as the pod using this volume. 
More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#persistentvolumeclaims readOnly boolean readOnly Will force the ReadOnly setting in VolumeMounts. Default false. 3.1.293. .template.spec.volumes[].photonPersistentDisk Description Represents a Photon Controller persistent disk resource. Type object Required pdID Property Type Description fsType string fsType is the filesystem type to mount. Must be a filesystem type supported by the host operating system. Ex. "ext4", "xfs", "ntfs". Implicitly inferred to be "ext4" if unspecified. pdID string pdID is the ID that identifies Photon Controller persistent disk 3.1.294. .template.spec.volumes[].portworxVolume Description PortworxVolumeSource represents a Portworx volume resource. Type object Required volumeID Property Type Description fsType string fSType represents the filesystem type to mount Must be a filesystem type supported by the host operating system. Ex. "ext4", "xfs". Implicitly inferred to be "ext4" if unspecified. readOnly boolean readOnly defaults to false (read/write). ReadOnly here will force the ReadOnly setting in VolumeMounts. volumeID string volumeID uniquely identifies a Portworx volume 3.1.295. .template.spec.volumes[].projected Description Represents a projected volume source Type object Property Type Description defaultMode integer defaultMode are the mode bits used to set permissions on created files by default. Must be an octal value between 0000 and 0777 or a decimal value between 0 and 511. YAML accepts both octal and decimal values, JSON requires decimal values for mode bits. Directories within the path are not affected by this setting. This might be in conflict with other options that affect the file mode, like fsGroup, and the result can be other mode bits set. sources array sources is the list of volume projections sources[] object Projection that may be projected along with other supported volume types 3.1.296. .template.spec.volumes[].projected.sources Description sources is the list of volume projections Type array 3.1.297. .template.spec.volumes[].projected.sources[] Description Projection that may be projected along with other supported volume types Type object Property Type Description clusterTrustBundle object ClusterTrustBundleProjection describes how to select a set of ClusterTrustBundle objects and project their contents into the pod filesystem. configMap object Adapts a ConfigMap into a projected volume. The contents of the target ConfigMap's Data field will be presented in a projected volume as files using the keys in the Data field as the file names, unless the items element is populated with specific mappings of keys to paths. Note that this is identical to a configmap volume source without the default mode. downwardAPI object Represents downward API info for projecting into a projected volume. Note that this is identical to a downwardAPI volume source without the default mode. secret object Adapts a secret into a projected volume. The contents of the target Secret's Data field will be presented in a projected volume as files using the keys in the Data field as the file names. Note that this is identical to a secret volume source without the default mode. serviceAccountToken object ServiceAccountTokenProjection represents a projected service account token volume. This projection can be used to insert a service account token into the pods runtime filesystem for use against APIs (Kubernetes API Server or otherwise). 3.1.298. 
.template.spec.volumes[].projected.sources[].clusterTrustBundle Description ClusterTrustBundleProjection describes how to select a set of ClusterTrustBundle objects and project their contents into the pod filesystem. Type object Required path Property Type Description labelSelector LabelSelector Select all ClusterTrustBundles that match this label selector. Only has effect if signerName is set. Mutually-exclusive with name. If unset, interpreted as "match nothing". If set but empty, interpreted as "match everything". name string Select a single ClusterTrustBundle by object name. Mutually-exclusive with signerName and labelSelector. optional boolean If true, don't block pod startup if the referenced ClusterTrustBundle(s) aren't available. If using name, then the named ClusterTrustBundle is allowed not to exist. If using signerName, then the combination of signerName and labelSelector is allowed to match zero ClusterTrustBundles. path string Relative path from the volume root to write the bundle. signerName string Select all ClusterTrustBundles that match this signer name. Mutually-exclusive with name. The contents of all selected ClusterTrustBundles will be unified and deduplicated. 3.1.299. .template.spec.volumes[].projected.sources[].configMap Description Adapts a ConfigMap into a projected volume. The contents of the target ConfigMap's Data field will be presented in a projected volume as files using the keys in the Data field as the file names, unless the items element is populated with specific mappings of keys to paths. Note that this is identical to a configmap volume source without the default mode. Type object Property Type Description items array items if unspecified, each key-value pair in the Data field of the referenced ConfigMap will be projected into the volume as a file whose name is the key and content is the value. If specified, the listed keys will be projected into the specified paths, and unlisted keys will not be present. If a key is specified which is not present in the ConfigMap, the volume setup will error unless it is marked optional. Paths must be relative and may not contain the '..' path or start with '..'. items[] object Maps a string key to a path within a volume. name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names optional boolean optional specify whether the ConfigMap or its keys must be defined 3.1.300. .template.spec.volumes[].projected.sources[].configMap.items Description items if unspecified, each key-value pair in the Data field of the referenced ConfigMap will be projected into the volume as a file whose name is the key and content is the value. If specified, the listed keys will be projected into the specified paths, and unlisted keys will not be present. If a key is specified which is not present in the ConfigMap, the volume setup will error unless it is marked optional. Paths must be relative and may not contain the '..' path or start with '..'. Type array 3.1.301. .template.spec.volumes[].projected.sources[].configMap.items[] Description Maps a string key to a path within a volume. Type object Required key path Property Type Description key string key is the key to project. mode integer mode is Optional: mode bits used to set permissions on this file. 
Must be an octal value between 0000 and 0777 or a decimal value between 0 and 511. YAML accepts both octal and decimal values, JSON requires decimal values for mode bits. If not specified, the volume defaultMode will be used. This might be in conflict with other options that affect the file mode, like fsGroup, and the result can be other mode bits set. path string path is the relative path of the file to map the key to. May not be an absolute path. May not contain the path element '..'. May not start with the string '..'. 3.1.302. .template.spec.volumes[].projected.sources[].downwardAPI Description Represents downward API info for projecting into a projected volume. Note that this is identical to a downwardAPI volume source without the default mode. Type object Property Type Description items array Items is a list of DownwardAPIVolume file items[] object DownwardAPIVolumeFile represents information to create the file containing the pod field 3.1.303. .template.spec.volumes[].projected.sources[].downwardAPI.items Description Items is a list of DownwardAPIVolume file Type array 3.1.304. .template.spec.volumes[].projected.sources[].downwardAPI.items[] Description DownwardAPIVolumeFile represents information to create the file containing the pod field Type object Required path Property Type Description fieldRef object ObjectFieldSelector selects an APIVersioned field of an object. mode integer Optional: mode bits used to set permissions on this file, must be an octal value between 0000 and 0777 or a decimal value between 0 and 511. YAML accepts both octal and decimal values, JSON requires decimal values for mode bits. If not specified, the volume defaultMode will be used. This might be in conflict with other options that affect the file mode, like fsGroup, and the result can be other mode bits set. path string Required: Path is the relative path name of the file to be created. Must not be absolute or contain the '..' path. Must be utf-8 encoded. The first item of the relative path must not start with '..' resourceFieldRef object ResourceFieldSelector represents container resources (cpu, memory) and their output format 3.1.305. .template.spec.volumes[].projected.sources[].downwardAPI.items[].fieldRef Description ObjectFieldSelector selects an APIVersioned field of an object. Type object Required fieldPath Property Type Description apiVersion string Version of the schema the FieldPath is written in terms of, defaults to "v1". fieldPath string Path of the field to select in the specified API version. 3.1.306. .template.spec.volumes[].projected.sources[].downwardAPI.items[].resourceFieldRef Description ResourceFieldSelector represents container resources (cpu, memory) and their output format Type object Required resource Property Type Description containerName string Container name: required for volumes, optional for env vars divisor Quantity Specifies the output format of the exposed resources, defaults to "1" resource string Required: resource to select 3.1.307. .template.spec.volumes[].projected.sources[].secret Description Adapts a secret into a projected volume. The contents of the target Secret's Data field will be presented in a projected volume as files using the keys in the Data field as the file names. Note that this is identical to a secret volume source without the default mode. 
Type object Property Type Description items array items if unspecified, each key-value pair in the Data field of the referenced Secret will be projected into the volume as a file whose name is the key and content is the value. If specified, the listed keys will be projected into the specified paths, and unlisted keys will not be present. If a key is specified which is not present in the Secret, the volume setup will error unless it is marked optional. Paths must be relative and may not contain the '..' path or start with '..'. items[] object Maps a string key to a path within a volume. name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names optional boolean optional field specify whether the Secret or its key must be defined 3.1.308. .template.spec.volumes[].projected.sources[].secret.items Description items if unspecified, each key-value pair in the Data field of the referenced Secret will be projected into the volume as a file whose name is the key and content is the value. If specified, the listed keys will be projected into the specified paths, and unlisted keys will not be present. If a key is specified which is not present in the Secret, the volume setup will error unless it is marked optional. Paths must be relative and may not contain the '..' path or start with '..'. Type array 3.1.309. .template.spec.volumes[].projected.sources[].secret.items[] Description Maps a string key to a path within a volume. Type object Required key path Property Type Description key string key is the key to project. mode integer mode is Optional: mode bits used to set permissions on this file. Must be an octal value between 0000 and 0777 or a decimal value between 0 and 511. YAML accepts both octal and decimal values, JSON requires decimal values for mode bits. If not specified, the volume defaultMode will be used. This might be in conflict with other options that affect the file mode, like fsGroup, and the result can be other mode bits set. path string path is the relative path of the file to map the key to. May not be an absolute path. May not contain the path element '..'. May not start with the string '..'. 3.1.310. .template.spec.volumes[].projected.sources[].serviceAccountToken Description ServiceAccountTokenProjection represents a projected service account token volume. This projection can be used to insert a service account token into the pods runtime filesystem for use against APIs (Kubernetes API Server or otherwise). Type object Required path Property Type Description audience string audience is the intended audience of the token. A recipient of a token must identify itself with an identifier specified in the audience of the token, and otherwise should reject the token. The audience defaults to the identifier of the apiserver. expirationSeconds integer expirationSeconds is the requested duration of validity of the service account token. As the token approaches expiration, the kubelet volume plugin will proactively rotate the service account token. The kubelet will start trying to rotate the token if the token is older than 80 percent of its time to live or if the token is older than 24 hours.Defaults to 1 hour and must be at least 10 minutes. path string path is the path relative to the mount point of the file to project the token into. 3.1.311. 
.template.spec.volumes[].quobyte Description Represents a Quobyte mount that lasts the lifetime of a pod. Quobyte volumes do not support ownership management or SELinux relabeling. Type object Required registry volume Property Type Description group string group to map volume access to Default is no group readOnly boolean readOnly here will force the Quobyte volume to be mounted with read-only permissions. Defaults to false. registry string registry represents a single or multiple Quobyte Registry services specified as a string as host:port pair (multiple entries are separated with commas) which acts as the central registry for volumes tenant string tenant owning the given Quobyte volume in the Backend Used with dynamically provisioned Quobyte volumes, value is set by the plugin user string user to map volume access to Defaults to serivceaccount user volume string volume is a string that references an already created Quobyte volume by name. 3.1.312. .template.spec.volumes[].rbd Description Represents a Rados Block Device mount that lasts the lifetime of a pod. RBD volumes support ownership management and SELinux relabeling. Type object Required monitors image Property Type Description fsType string fsType is the filesystem type of the volume that you want to mount. Tip: Ensure that the filesystem type is supported by the host operating system. Examples: "ext4", "xfs", "ntfs". Implicitly inferred to be "ext4" if unspecified. More info: https://kubernetes.io/docs/concepts/storage/volumes#rbd image string image is the rados image name. More info: https://examples.k8s.io/volumes/rbd/README.md#how-to-use-it keyring string keyring is the path to key ring for RBDUser. Default is /etc/ceph/keyring. More info: https://examples.k8s.io/volumes/rbd/README.md#how-to-use-it monitors array (string) monitors is a collection of Ceph monitors. More info: https://examples.k8s.io/volumes/rbd/README.md#how-to-use-it pool string pool is the rados pool name. Default is rbd. More info: https://examples.k8s.io/volumes/rbd/README.md#how-to-use-it readOnly boolean readOnly here will force the ReadOnly setting in VolumeMounts. Defaults to false. More info: https://examples.k8s.io/volumes/rbd/README.md#how-to-use-it secretRef object LocalObjectReference contains enough information to let you locate the referenced object inside the same namespace. user string user is the rados user name. Default is admin. More info: https://examples.k8s.io/volumes/rbd/README.md#how-to-use-it 3.1.313. .template.spec.volumes[].rbd.secretRef Description LocalObjectReference contains enough information to let you locate the referenced object inside the same namespace. Type object Property Type Description name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names 3.1.314. .template.spec.volumes[].scaleIO Description ScaleIOVolumeSource represents a persistent ScaleIO volume Type object Required gateway system secretRef Property Type Description fsType string fsType is the filesystem type to mount. Must be a filesystem type supported by the host operating system. Ex. "ext4", "xfs", "ntfs". Default is "xfs". gateway string gateway is the host address of the ScaleIO API Gateway. protectionDomain string protectionDomain is the name of the ScaleIO Protection Domain for the configured storage. 
readOnly boolean readOnly Defaults to false (read/write). ReadOnly here will force the ReadOnly setting in VolumeMounts. secretRef object LocalObjectReference contains enough information to let you locate the referenced object inside the same namespace. sslEnabled boolean sslEnabled Flag enable/disable SSL communication with Gateway, default false storageMode string storageMode indicates whether the storage for a volume should be ThickProvisioned or ThinProvisioned. Default is ThinProvisioned. storagePool string storagePool is the ScaleIO Storage Pool associated with the protection domain. system string system is the name of the storage system as configured in ScaleIO. volumeName string volumeName is the name of a volume already created in the ScaleIO system that is associated with this volume source. 3.1.315. .template.spec.volumes[].scaleIO.secretRef Description LocalObjectReference contains enough information to let you locate the referenced object inside the same namespace. Type object Property Type Description name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names 3.1.316. .template.spec.volumes[].secret Description Adapts a Secret into a volume. The contents of the target Secret's Data field will be presented in a volume as files using the keys in the Data field as the file names. Secret volumes support ownership management and SELinux relabeling. Type object Property Type Description defaultMode integer defaultMode is Optional: mode bits used to set permissions on created files by default. Must be an octal value between 0000 and 0777 or a decimal value between 0 and 511. YAML accepts both octal and decimal values, JSON requires decimal values for mode bits. Defaults to 0644. Directories within the path are not affected by this setting. This might be in conflict with other options that affect the file mode, like fsGroup, and the result can be other mode bits set. items array items If unspecified, each key-value pair in the Data field of the referenced Secret will be projected into the volume as a file whose name is the key and content is the value. If specified, the listed keys will be projected into the specified paths, and unlisted keys will not be present. If a key is specified which is not present in the Secret, the volume setup will error unless it is marked optional. Paths must be relative and may not contain the '..' path or start with '..'. items[] object Maps a string key to a path within a volume. optional boolean optional field specify whether the Secret or its keys must be defined secretName string secretName is the name of the secret in the pod's namespace to use. More info: https://kubernetes.io/docs/concepts/storage/volumes#secret 3.1.317. .template.spec.volumes[].secret.items Description items If unspecified, each key-value pair in the Data field of the referenced Secret will be projected into the volume as a file whose name is the key and content is the value. If specified, the listed keys will be projected into the specified paths, and unlisted keys will not be present. If a key is specified which is not present in the Secret, the volume setup will error unless it is marked optional. Paths must be relative and may not contain the '..' path or start with '..'. Type array 3.1.318. 
.template.spec.volumes[].secret.items[] Description Maps a string key to a path within a volume. Type object Required key path Property Type Description key string key is the key to project. mode integer mode is Optional: mode bits used to set permissions on this file. Must be an octal value between 0000 and 0777 or a decimal value between 0 and 511. YAML accepts both octal and decimal values, JSON requires decimal values for mode bits. If not specified, the volume defaultMode will be used. This might be in conflict with other options that affect the file mode, like fsGroup, and the result can be other mode bits set. path string path is the relative path of the file to map the key to. May not be an absolute path. May not contain the path element '..'. May not start with the string '..'. 3.1.319. .template.spec.volumes[].storageos Description Represents a StorageOS persistent volume resource. Type object Property Type Description fsType string fsType is the filesystem type to mount. Must be a filesystem type supported by the host operating system. Ex. "ext4", "xfs", "ntfs". Implicitly inferred to be "ext4" if unspecified. readOnly boolean readOnly defaults to false (read/write). ReadOnly here will force the ReadOnly setting in VolumeMounts. secretRef object LocalObjectReference contains enough information to let you locate the referenced object inside the same namespace. volumeName string volumeName is the human-readable name of the StorageOS volume. Volume names are only unique within a namespace. volumeNamespace string volumeNamespace specifies the scope of the volume within StorageOS. If no namespace is specified then the Pod's namespace will be used. This allows the Kubernetes name scoping to be mirrored within StorageOS for tighter integration. Set VolumeName to any name to override the default behaviour. Set to "default" if you are not using namespaces within StorageOS. Namespaces that do not pre-exist within StorageOS will be created. 3.1.320. .template.spec.volumes[].storageos.secretRef Description LocalObjectReference contains enough information to let you locate the referenced object inside the same namespace. Type object Property Type Description name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names 3.1.321. .template.spec.volumes[].vsphereVolume Description Represents a vSphere volume resource. Type object Required volumePath Property Type Description fsType string fsType is filesystem type to mount. Must be a filesystem type supported by the host operating system. Ex. "ext4", "xfs", "ntfs". Implicitly inferred to be "ext4" if unspecified. storagePolicyID string storagePolicyID is the storage Policy Based Management (SPBM) profile ID associated with the StoragePolicyName. storagePolicyName string storagePolicyName is the storage Policy Based Management (SPBM) profile name. volumePath string volumePath is the path that identifies vSphere volume vmdk 3.2. API endpoints The following API endpoints are available: /api/v1/podtemplates GET : list or watch objects of kind PodTemplate /api/v1/watch/podtemplates GET : watch individual changes to a list of PodTemplate. deprecated: use the 'watch' parameter with a list operation instead. 
/api/v1/namespaces/{namespace}/podtemplates DELETE : delete collection of PodTemplate GET : list or watch objects of kind PodTemplate POST : create a PodTemplate /api/v1/watch/namespaces/{namespace}/podtemplates GET : watch individual changes to a list of PodTemplate. deprecated: use the 'watch' parameter with a list operation instead. /api/v1/namespaces/{namespace}/podtemplates/{name} DELETE : delete a PodTemplate GET : read the specified PodTemplate PATCH : partially update the specified PodTemplate PUT : replace the specified PodTemplate /api/v1/watch/namespaces/{namespace}/podtemplates/{name} GET : watch changes to an object of kind PodTemplate. deprecated: use the 'watch' parameter with a list operation instead, filtered to a single item with the 'fieldSelector' parameter. 3.2.1. /api/v1/podtemplates HTTP method GET Description list or watch objects of kind PodTemplate Table 3.1. HTTP responses HTTP code Reponse body 200 - OK PodTemplateList schema 401 - Unauthorized Empty 3.2.2. /api/v1/watch/podtemplates HTTP method GET Description watch individual changes to a list of PodTemplate. deprecated: use the 'watch' parameter with a list operation instead. Table 3.2. HTTP responses HTTP code Reponse body 200 - OK WatchEvent schema 401 - Unauthorized Empty 3.2.3. /api/v1/namespaces/{namespace}/podtemplates HTTP method DELETE Description delete collection of PodTemplate Table 3.3. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed Table 3.4. HTTP responses HTTP code Reponse body 200 - OK Status schema 401 - Unauthorized Empty HTTP method GET Description list or watch objects of kind PodTemplate Table 3.5. HTTP responses HTTP code Reponse body 200 - OK PodTemplateList schema 401 - Unauthorized Empty HTTP method POST Description create a PodTemplate Table 3.6. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 3.7. Body parameters Parameter Type Description body PodTemplate schema Table 3.8. 
HTTP responses HTTP code Reponse body 200 - OK PodTemplate schema 201 - Created PodTemplate schema 202 - Accepted PodTemplate schema 401 - Unauthorized Empty 3.2.4. /api/v1/watch/namespaces/{namespace}/podtemplates HTTP method GET Description watch individual changes to a list of PodTemplate. deprecated: use the 'watch' parameter with a list operation instead. Table 3.9. HTTP responses HTTP code Reponse body 200 - OK WatchEvent schema 401 - Unauthorized Empty 3.2.5. /api/v1/namespaces/{namespace}/podtemplates/{name} Table 3.10. Global path parameters Parameter Type Description name string name of the PodTemplate HTTP method DELETE Description delete a PodTemplate Table 3.11. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed Table 3.12. HTTP responses HTTP code Reponse body 200 - OK PodTemplate schema 202 - Accepted PodTemplate schema 401 - Unauthorized Empty HTTP method GET Description read the specified PodTemplate Table 3.13. HTTP responses HTTP code Reponse body 200 - OK PodTemplate schema 401 - Unauthorized Empty HTTP method PATCH Description partially update the specified PodTemplate Table 3.14. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 3.15. HTTP responses HTTP code Reponse body 200 - OK PodTemplate schema 201 - Created PodTemplate schema 401 - Unauthorized Empty HTTP method PUT Description replace the specified PodTemplate Table 3.16. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. 
- Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 3.17. Body parameters Parameter Type Description body PodTemplate schema Table 3.18. HTTP responses HTTP code Reponse body 200 - OK PodTemplate schema 201 - Created PodTemplate schema 401 - Unauthorized Empty 3.2.6. /api/v1/watch/namespaces/{namespace}/podtemplates/{name} Table 3.19. Global path parameters Parameter Type Description name string name of the PodTemplate HTTP method GET Description watch changes to an object of kind PodTemplate. deprecated: use the 'watch' parameter with a list operation instead, filtered to a single item with the 'fieldSelector' parameter. Table 3.20. HTTP responses HTTP code Reponse body 200 - OK WatchEvent schema 401 - Unauthorized Empty
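As a concrete complement to the schema and endpoint tables above, the following is a minimal sketch of a PodTemplate manifest together with the oc commands that exercise the create and list endpoints. The names used here (example-template, the web container, the image reference, and the my-templates namespace) are illustrative placeholders, not values taken from this reference.

apiVersion: v1
kind: PodTemplate
metadata:
  name: example-template
template:
  metadata:
    labels:
      app: example
  spec:
    containers:
    - name: web
      image: registry.example.com/example/web:1.0

# Create the PodTemplate (maps to POST /api/v1/namespaces/{namespace}/podtemplates)
oc apply -f podtemplate.yaml -n my-templates

# List PodTemplates in the namespace (maps to GET /api/v1/namespaces/{namespace}/podtemplates)
oc get podtemplates -n my-templates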
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.17/html/template_apis/podtemplate-v1
8.2. Memory Tuning Tips
8.2. Memory Tuning Tips To optimize memory performance in a virtualized environment, consider the following: Do not allocate more resources to a guest than it will use. If possible, assign a guest to a single NUMA node, provided that sufficient resources are available on that NUMA node. For more information on using NUMA, see Chapter 9, NUMA . When increasing the amount of memory a guest virtual machine can use while the guest is running, also referred to as hot plugging , the memory needs to be brought online manually on the guest by one of the following methods: Create a custom udev rule Create a file with a name that ends in the .rules suffix in the /etc/udev/rules.d/ directory: Add the memory onlining rule to the created file: Reload the udev rules: Alternatively, bring inactive memory online manually after each hot plug
[ "touch /etc/udev/rules.d/ rulename .rules", "echo 'SUBSYSTEM==\"memory\", ACTION==\"add\", ATTR{state}==\"offline\", ATTR{state}=\"online\"' > /etc/udev/rules.d/ rulename .rules", "udevadm control --reload", "for mblock in /sys/devices/system/memory/memory*; do echo online > USDmblock/state; done" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/virtualization_tuning_and_optimization_guide/sect-virtualization_tuning_optimization_guide-memory-general_tips
Chapter 5. Machine phases and lifecycle
Chapter 5. Machine phases and lifecycle Machines move through a lifecycle that has several defined phases. Understanding the machine lifecycle and its phases can help you verify whether a procedure is complete or troubleshoot undesired behavior. In OpenShift Container Platform, the machine lifecycle is consistent across all supported cloud providers. 5.1. Machine phases As a machine moves through its lifecycle, it passes through different phases. Each phase is a basic representation of the state of the machine. Provisioning There is a request to provision a new machine. The machine does not yet exist and does not have an instance, a provider ID, or an address. Provisioned The machine exists and has a provider ID or an address. The cloud provider has created an instance for the machine. The machine has not yet become a node and the status.nodeRef section of the machine object is not yet populated. Running The machine exists and has a provider ID or address. Ignition has run successfully and the cluster machine approver has approved a certificate signing request (CSR). The machine has become a node and the status.nodeRef section of the machine object contains node details. Deleting There is a request to delete the machine. The machine object has a DeletionTimestamp field that indicates the time of the deletion request. Failed There is an unrecoverable problem with the machine. This can happen, for example, if the cloud provider deletes the instance for the machine. 5.2. The machine lifecycle The lifecycle begins with the request to provision a machine and continues until the machine no longer exists. The machine lifecycle proceeds in the following order. Interruptions due to errors or lifecycle hooks are not included in this overview. There is a request to provision a new machine for one of the following reasons: A cluster administrator scales a machine set such that it requires additional machines. An autoscaling policy scales machine set such that it requires additional machines. A machine that is managed by a machine set fails or is deleted and the machine set creates a replacement to maintain the required number of machines. The machine enters the Provisioning phase. The infrastructure provider creates an instance for the machine. The machine has a provider ID or address and enters the Provisioned phase. The Ignition configuration file is processed. The kubelet issues a certificate signing request (CSR). The cluster machine approver approves the CSR. The machine becomes a node and enters the Running phase. An existing machine is slated for deletion for one of the following reasons: A user with cluster-admin permissions uses the oc delete machine command. The machine gets a machine.openshift.io/delete-machine annotation. The machine set that manages the machine marks it for deletion to reduce the replica count as part of reconciliation. The cluster autoscaler identifies a node that is unnecessary to meet the deployment needs of the cluster. A machine health check is configured to replace an unhealthy machine. The machine enters the Deleting phase, in which it is marked for deletion but is still present in the API. The machine controller removes the instance from the infrastructure provider. The machine controller deletes the Node object. 5.3. Determining the phase of a machine You can find the phase of a machine by using the OpenShift CLI ( oc ) or by using the web console. You can use this information to verify whether a procedure is complete or to troubleshoot undesired behavior. 5.3.1. 
Determining the phase of a machine by using the CLI You can find the phase of a machine by using the OpenShift CLI ( oc ). Prerequisites You have access to an OpenShift Container Platform cluster using an account with cluster-admin permissions. You have installed the oc CLI. Procedure List the machines on the cluster by running the following command: USD oc get machine -n openshift-machine-api Example output NAME PHASE TYPE REGION ZONE AGE mycluster-5kbsp-master-0 Running m6i.xlarge us-west-1 us-west-1a 4h55m mycluster-5kbsp-master-1 Running m6i.xlarge us-west-1 us-west-1b 4h55m mycluster-5kbsp-master-2 Running m6i.xlarge us-west-1 us-west-1a 4h55m mycluster-5kbsp-worker-us-west-1a-fmx8t Running m6i.xlarge us-west-1 us-west-1a 4h51m mycluster-5kbsp-worker-us-west-1a-m889l Running m6i.xlarge us-west-1 us-west-1a 4h51m mycluster-5kbsp-worker-us-west-1b-c8qzm Running m6i.xlarge us-west-1 us-west-1b 4h51m The PHASE column of the output contains the phase of each machine. 5.3.2. Determining the phase of a machine by using the web console You can find the phase of a machine by using the OpenShift Container Platform web console. Prerequisites You have access to an OpenShift Container Platform cluster using an account with cluster-admin permissions. Procedure Log in to the web console as a user with the cluster-admin role. Navigate to Compute Machines . On the Machines page, select the name of the machine that you want to find the phase of. On the Machine details page, select the YAML tab. In the YAML block, find the value of the status.phase field. Example YAML snippet apiVersion: machine.openshift.io/v1beta1 kind: Machine metadata: name: mycluster-5kbsp-worker-us-west-1a-fmx8t # ... status: phase: Running 1 1 In this example, the phase is Running . 5.4. Additional resources Lifecycle hooks for the machine deletion phase
[ "oc get machine -n openshift-machine-api", "NAME PHASE TYPE REGION ZONE AGE mycluster-5kbsp-master-0 Running m6i.xlarge us-west-1 us-west-1a 4h55m mycluster-5kbsp-master-1 Running m6i.xlarge us-west-1 us-west-1b 4h55m mycluster-5kbsp-master-2 Running m6i.xlarge us-west-1 us-west-1a 4h55m mycluster-5kbsp-worker-us-west-1a-fmx8t Running m6i.xlarge us-west-1 us-west-1a 4h51m mycluster-5kbsp-worker-us-west-1a-m889l Running m6i.xlarge us-west-1 us-west-1a 4h51m mycluster-5kbsp-worker-us-west-1b-c8qzm Running m6i.xlarge us-west-1 us-west-1b 4h51m", "apiVersion: machine.openshift.io/v1beta1 kind: Machine metadata: name: mycluster-5kbsp-worker-us-west-1a-fmx8t status: phase: Running 1" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.12/html/machine_management/machine-phases-lifecycle
Chapter 4. AMQ Streams Operators
Chapter 4. AMQ Streams Operators AMQ Streams supports Kafka using Operators to deploy and manage the components and dependencies of Kafka to OpenShift. Operators are a method of packaging, deploying, and managing an OpenShift application. AMQ Streams Operators extend OpenShift functionality, automating common and complex tasks related to a Kafka deployment. By implementing knowledge of Kafka operations in code, Kafka administration tasks are simplified and require less manual intervention. Operators AMQ Streams provides Operators for managing a Kafka cluster running within an OpenShift cluster. Cluster Operator Deploys and manages Apache Kafka clusters, Kafka Connect, Kafka MirrorMaker, Kafka Bridge, Kafka Exporter, and the Entity Operator Entity Operator Comprises the Topic Operator and User Operator Topic Operator Manages Kafka topics User Operator Manages Kafka users The Cluster Operator can deploy the Topic Operator and User Operator as part of an Entity Operator configuration at the same time as a Kafka cluster. Operators within the AMQ Streams architecture 4.1. Cluster Operator AMQ Streams uses the Cluster Operator to deploy and manage clusters for: Kafka (including ZooKeeper, Entity Operator, Kafka Exporter, and Cruise Control) Kafka Connect Kafka MirrorMaker Kafka Bridge Custom resources are used to deploy the clusters. For example, to deploy a Kafka cluster: A Kafka resource with the cluster configuration is created within the OpenShift cluster. The Cluster Operator deploys a corresponding Kafka cluster, based on what is declared in the Kafka resource. The Cluster Operator can also deploy (through configuration of the Kafka resource): A Topic Operator to provide operator-style topic management through KafkaTopic custom resources A User Operator to provide operator-style user management through KafkaUser custom resources The Topic Operator and User Operator function within the Entity Operator on deployment. Example architecture for the Cluster Operator 4.2. Topic Operator The Topic Operator provides a way of managing topics in a Kafka cluster through OpenShift resources. Example architecture for the Topic Operator The role of the Topic Operator is to keep a set of KafkaTopic OpenShift resources describing Kafka topics in-sync with corresponding Kafka topics. Specifically, if a KafkaTopic is: Created, the Topic Operator creates the topic Deleted, the Topic Operator deletes the topic Changed, the Topic Operator updates the topic Working in the other direction, if a topic is: Created within the Kafka cluster, the Operator creates a KafkaTopic Deleted from the Kafka cluster, the Operator deletes the KafkaTopic Changed in the Kafka cluster, the Operator updates the KafkaTopic This allows you to declare a KafkaTopic as part of your application's deployment and the Topic Operator will take care of creating the topic for you. Your application just needs to deal with producing or consuming from the necessary topics. If the topic is reconfigured or reassigned to different Kafka nodes, the KafkaTopic will always be up to date. 4.3. User Operator The User Operator manages Kafka users for a Kafka cluster by watching for KafkaUser resources that describe Kafka users, and ensuring that they are configured properly in the Kafka cluster. 
For example, if a KafkaUser is: Created, the User Operator creates the user it describes Deleted, the User Operator deletes the user it describes Changed, the User Operator updates the user it describes Unlike the Topic Operator, the User Operator does not sync any changes from the Kafka cluster with the OpenShift resources. Kafka topics can be created by applications directly in Kafka, but it is not expected that the users will be managed directly in the Kafka cluster in parallel with the User Operator. The User Operator allows you to declare a KafkaUser resource as part of your application's deployment. You can specify the authentication and authorization mechanism for the user. You can also configure user quotas that control usage of Kafka resources to ensure, for example, that a user does not monopolize access to a broker. When the user is created, the user credentials are created in a Secret . Your application needs to use the user and its credentials for authentication and to produce or consume messages. In addition to managing credentials for authentication, the User Operator also manages authorization rules by including a description of the user's access rights in the KafkaUser declaration.
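As an illustration of the declarative model described above, the following is a minimal sketch of a KafkaTopic resource that the Topic Operator would reconcile into a Kafka topic. The cluster name my-cluster, the topic name my-topic, and the exact apiVersion are assumptions; the API version in particular varies between AMQ Streams releases. A KafkaUser resource is declared in the same way, with the same strimzi.io/cluster label and a spec describing authentication, authorization, and quotas.

apiVersion: kafka.strimzi.io/v1beta1
kind: KafkaTopic
metadata:
  name: my-topic
  labels:
    strimzi.io/cluster: my-cluster   # tells the Topic Operator which Kafka cluster owns this topic
spec:
  partitions: 3
  replicas: 3
  config:
    retention.ms: 604800000          # retain messages for 7 days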
null
https://docs.redhat.com/en/documentation/red_hat_amq/2020.q4/html/amq_streams_on_openshift_overview/overview-components_str
Chapter 15. Mail Servers
Chapter 15. Mail Servers Red Hat Enterprise Linux offers many advanced applications to serve and access email. This chapter describes modern email protocols in use today, and some of the programs designed to send and receive email. 15.1. Email Protocols Today, email is delivered using a client/server architecture. An email message is created using a mail client program. This program then sends the message to a server. The server then forwards the message to the recipient's email server, where the message is then supplied to the recipient's email client. To enable this process, a variety of standard network protocols allow different machines, often running different operating systems and using different email programs, to send and receive email. The protocols discussed below are the most commonly used in the transfer of email. 15.1.1. Mail Transport Protocols Mail delivery from a client application to the server, and from an originating server to the destination server, is handled by the Simple Mail Transfer Protocol ( SMTP ). 15.1.1.1. SMTP The primary purpose of SMTP is to transfer email between mail servers. However, it is critical for email clients as well. To send email, the client sends the message to an outgoing mail server, which in turn contacts the destination mail server for delivery. Additional intermediate SMTP servers may be included in this chain; this concept is called mail relaying. For this reason, it is necessary to specify an SMTP server when configuring an email client. Under Red Hat Enterprise Linux, a user can configure an SMTP server on the local machine to handle mail delivery. However, it is also possible to configure remote SMTP servers for outgoing mail. One important point to make about the SMTP protocol is that it does not require authentication. This allows anyone on the Internet to send email to anyone else or even to large groups of people. It is this characteristic of SMTP that makes junk email or spam possible. Imposing relay restrictions prevents random users on the Internet from sending email through your SMTP server to other servers on the Internet. Servers that do not impose such restrictions are called open relay servers. Red Hat Enterprise Linux 7 provides the Postfix and Sendmail SMTP programs. 15.1.2. Mail Access Protocols There are two primary protocols used by email client applications to retrieve email from mail servers: the Post Office Protocol ( POP ) and the Internet Message Access Protocol ( IMAP ). 15.1.2.1. POP The default POP server under Red Hat Enterprise Linux is Dovecot and is provided by the dovecot package. Note To install Dovecot , run the following command: For more information on installing packages with Yum, see Section 9.2.4, "Installing Packages" . When using a POP server, email messages are downloaded by email client applications. By default, most POP email clients are automatically configured to delete the message on the email server after it has been successfully transferred; however, this setting can usually be changed. POP is fully compatible with important Internet messaging standards, such as Multipurpose Internet Mail Extensions ( MIME ), which allow for email attachments. POP works best for users who have one system on which to read email. It also works well for users who do not have a persistent connection to the Internet or the network containing the mail server. Unfortunately for those with slow network connections, POP requires client programs, upon authentication, to download the entire content of each message.
This can take a long time if any messages have large attachments. The most current version of the standard POP protocol is POP3 . There are, however, a variety of lesser-used POP protocol variants: APOP - POP3 with MD5 authentication. An encoded hash of the user's password is sent from the email client to the server rather than sending an unencrypted password. KPOP - POP3 with Kerberos authentication. RPOP - POP3 with RPOP authentication. This uses a per-user ID, similar to a password, to authenticate POP requests. However, this ID is not encrypted, so RPOP is no more secure than standard POP . To improve security, you can use Secure Socket Layer ( SSL ) encryption for client authentication and data transfer sessions. To enable SSL encryption, use: The pop3s service The stunnel application The starttls command For more information on securing email communication, see Section 15.5.1, "Securing Communication" . 15.1.2.2. IMAP The default IMAP server under Red Hat Enterprise Linux is Dovecot and is provided by the dovecot package. See Section 15.1.2.1, "POP" for information on how to install Dovecot . When using an IMAP mail server, email messages remain on the server where users can read or delete them. IMAP also allows client applications to create, rename, or delete mail directories on the server to organize and store email. IMAP is particularly useful for users who access their email using multiple machines. The protocol is also convenient for users connecting to the mail server via a slow connection, because only the email header information is downloaded for messages until opened, saving bandwidth. The user also has the ability to delete messages without viewing or downloading them. For convenience, IMAP client applications are capable of caching copies of messages locally, so the user can browse previously read messages when not directly connected to the IMAP server. IMAP , like POP , is fully compatible with important Internet messaging standards, such as MIME, which allow for email attachments. For added security, it is possible to use SSL encryption for client authentication and data transfer sessions. This can be enabled by using the imaps service, or by using the stunnel program. The pop3s service The stunnel application The starttls command For more information on securing email communication, see Section 15.5.1, "Securing Communication" . Other free, as well as commercial, IMAP clients and servers are available, many of which extend the IMAP protocol and provide additional functionality. 15.1.2.3. Dovecot The imap-login and pop3-login processes which implement the IMAP and POP3 protocols are spawned by the master dovecot daemon included in the dovecot package. The use of IMAP and POP is configured through the /etc/dovecot/dovecot.conf configuration file; by default dovecot runs IMAP and POP3 together with their secure versions using SSL . To configure dovecot to use POP , complete the following steps: Edit the /etc/dovecot/dovecot.conf configuration file to make sure the protocols variable is uncommented (remove the hash sign ( # ) at the beginning of the line) and contains the pop3 argument. For example: When the protocols variable is left commented out, dovecot will use the default values as described above. Make the change operational for the current session by running the following command as root : Make the change operational after the reboot by running the command: Note Please note that dovecot only reports that it started the IMAP server, but also starts the POP3 server. 
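As a minimal sketch of the steps above, assuming a default Dovecot installation on Red Hat Enterprise Linux 7 (the exact protocol list in your configuration file may differ), the relevant line in /etc/dovecot/dovecot.conf and the commands used to apply the change could look like this:

protocols = imap pop3 lmtp

~]# systemctl restart dovecot
~]# systemctl enable dovecot

The restart applies the change to the current session, while enabling the service makes it start automatically after a reboot.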
Unlike SMTP , both IMAP and POP3 require connecting clients to authenticate using a user name and password. By default, passwords for both protocols are passed over the network unencrypted. To configure SSL on dovecot : Edit the /etc/dovecot/conf.d/10-ssl.conf configuration to make sure the ssl_protocols variable is uncommented and contains the !SSLv2 !SSLv3 arguments: These values ensure that dovecot avoids SSL versions 2 and also 3, which are both known to be insecure. This is due to the vulnerability described in POODLE: SSLv3 vulnerability (CVE-2014-3566) . See Resolution for POODLE SSL 3.0 vulnerability (CVE-2014-3566) in Postfix and Dovecot for details. Make sure that /etc/dovecot/conf.d/10-ssl.conf contains the following option: Edit the /etc/pki/dovecot/dovecot-openssl.cnf configuration file as you prefer. However, in a typical installation, this file does not require modification. Rename, move or delete the files /etc/pki/dovecot/certs/dovecot.pem and /etc/pki/dovecot/private/dovecot.pem . Execute the /usr/libexec/dovecot/mkcert.sh script which creates the dovecot self signed certificates. These certificates are copied in the /etc/pki/dovecot/certs and /etc/pki/dovecot/private directories. To implement the changes, restart dovecot by issuing the following command as root : More details on dovecot can be found online at http://www.dovecot.org . 15.2. Email Program Classifications In general, all email applications fall into at least one of three classifications. Each classification plays a specific role in the process of moving and managing email messages. While most users are only aware of the specific email program they use to receive and send messages, each one is important for ensuring that email arrives at the correct destination. 15.2.1. Mail Transport Agent A Mail Transport Agent ( MTA ) transports email messages between hosts using SMTP . A message may involve several MTAs as it moves to its intended destination. While the delivery of messages between machines may seem rather straightforward, the entire process of deciding if a particular MTA can or should accept a message for delivery is quite complicated. In addition, due to problems from spam, use of a particular MTA is usually restricted by the MTA's configuration or the access configuration for the network on which the MTA resides. Some email client programs, can act as an MTA when sending an email. However, such email client programs do not have the role of a true MTA, because they can only send outbound messages to an MTA they are authorized to use, but they cannot directly deliver the message to the intended recipient's email server. This functionality is useful if host running the application does not have its own MTA. Since Red Hat Enterprise Linux offers two MTAs, Postfix and Sendmail , email client programs are often not required to act as an MTA. Red Hat Enterprise Linux also includes a special purpose MTA called Fetchmail . For more information on Postfix, Sendmail, and Fetchmail, see Section 15.3, "Mail Transport Agents" . 15.2.2. Mail Delivery Agent A Mail Delivery Agent ( MDA ) is invoked by the MTA to file incoming email in the proper user's mailbox. In many cases, the MDA is actually a Local Delivery Agent ( LDA ), such as mail or Procmail. Any program that actually handles a message for delivery to the point where it can be read by an email client application can be considered an MDA. 
For this reason, some MTAs (such as Sendmail and Postfix) can fill the role of an MDA when they append new email messages to a local user's mail spool file. In general, MDAs do not transport messages between systems nor do they provide a user interface; MDAs distribute and sort messages on the local machine for an email client application to access. 15.2.3. Mail User Agent A Mail User Agent ( MUA ) is synonymous with an email client application. MUA is a program that, at a minimum, allows a user to read and compose email messages. MUAs can handle these tasks: Retrieving messages via the POP or IMAP protocols Setting up mailboxes to store messages. Sending outbound messages to an MTA. MUAs may be graphical, such as Thunderbird , Evolution , or have simple text-based interfaces, such as mail or Mutt . 15.3. Mail Transport Agents Red Hat Enterprise Linux 7 offers two primary MTAs: Postfix and Sendmail. Postfix is configured as the default MTA and Sendmail is considered deprecated. If required to switch the default MTA to Sendmail, you can either uninstall Postfix or use the following command as root to switch to Sendmail: You can also use the following command to enable the desired service: Similarly, to disable the service, type the following at a shell prompt: For more information on how to manage system services in Red Hat Enterprise Linux 7, see Chapter 10, Managing Services with systemd . 15.3.1. Postfix Originally developed at IBM by security expert and programmer Wietse Venema, Postfix is a Sendmail-compatible MTA that is designed to be secure, fast, and easy to configure. To improve security, Postfix uses a modular design, where small processes with limited privileges are launched by a master daemon. The smaller, less privileged processes perform very specific tasks related to the various stages of mail delivery and run in a changed root environment to limit the effects of attacks. Configuring Postfix to accept network connections from hosts other than the local computer takes only a few minor changes in its configuration file. Yet for those with more complex needs, Postfix provides a variety of configuration options, as well as third party add-ons that make it a very versatile and full-featured MTA. The configuration files for Postfix are human readable and support upward of 250 directives. Unlike Sendmail, no macro processing is required for changes to take effect and the majority of the most commonly used options are described in the heavily commented files. 15.3.1.1. The Default Postfix Installation The Postfix executable is postfix . This daemon launches all related processes needed to handle mail delivery. Postfix stores its configuration files in the /etc/postfix/ directory. The following is a list of the more commonly used files: access - Used for access control, this file specifies which hosts are allowed to connect to Postfix. main.cf - The global Postfix configuration file. The majority of configuration options are specified in this file. master.cf - Specifies how Postfix interacts with various processes to accomplish mail delivery. transport - Maps email addresses to relay hosts. The aliases file can be found in the /etc directory. This file is shared between Postfix and Sendmail. It is a configurable list required by the mail protocol that describes user ID aliases. Important The default /etc/postfix/main.cf file does not allow Postfix to accept network connections from a host other than the local computer. 
For instructions on configuring Postfix as a server for other clients, see Section 15.3.1.3, "Basic Postfix Configuration" . Restart the postfix service after changing any options in the configuration files under the /etc/postfix/ directory in order for those changes to take effect. To do so, run the following command as root : 15.3.1.2. Upgrading From a Previous Release The following settings in Red Hat Enterprise Linux 7 are different from previous releases: disable_vrfy_command = no - This is disabled by default, which is different from the default for Sendmail. If changed to yes , it can prevent certain email address harvesting methods. allow_percent_hack = yes - This is enabled by default. It allows removing % characters in email addresses. The percent hack is an old workaround that allowed sender-controlled routing of email messages. DNS and mail routing are now much more reliable, but Postfix continues to support the hack. To turn off percent rewriting, set allow_percent_hack to no . smtpd_helo_required = no - This is disabled by default, as it is in Sendmail, because it can prevent some applications from sending mail. It can be changed to yes to require clients to send the HELO or EHLO commands before attempting to send the MAIL FROM or ETRN commands. 15.3.1.3. Basic Postfix Configuration By default, Postfix does not accept network connections from any host other than the local host. Perform the following steps as root to enable mail delivery for other hosts on the network: Edit the /etc/postfix/main.cf file with a text editor, such as vi . Uncomment the mydomain line by removing the hash sign ( # ), and replace domain.tld with the domain the mail server is servicing, such as example.com . Uncomment the myorigin = $mydomain line. Uncomment the myhostname line, and replace host.domain.tld with the host name for the machine. Uncomment the mydestination = $myhostname, localhost.$mydomain line. Uncomment the mynetworks line, and replace 168.100.189.0/28 with a valid network setting for hosts that can connect to the server. Uncomment the inet_interfaces = all line. Comment out the inet_interfaces = localhost line. Restart the postfix service. Once these steps are complete, the host accepts outside email for delivery. Postfix has a large assortment of configuration options. One of the best ways to learn how to configure Postfix is to read the comments within the /etc/postfix/main.cf configuration file. Additional resources, including information about Postfix configuration, SpamAssassin integration, or detailed descriptions of the /etc/postfix/main.cf parameters, are available online at http://www.postfix.org/ . Important Due to the vulnerability described in POODLE: SSLv3 vulnerability (CVE-2014-3566) , Red Hat recommends disabling SSL and using only TLSv1.1 or TLSv1.2 . See Resolution for POODLE SSL 3.0 vulnerability (CVE-2014-3566) in Postfix and Dovecot for details. 15.3.1.4. Using Postfix with LDAP Postfix can use an LDAP directory as a source for various lookup tables (for example, aliases , virtual , canonical , and so on). This allows LDAP to store hierarchical user information and Postfix to only be given the result of LDAP queries when needed. By not storing this information locally, administrators can easily maintain it. 15.3.1.4.1. The /etc/aliases lookup example The following is a basic example for using LDAP to look up the /etc/aliases file.
Make sure your /etc/postfix/main.cf file contains the following: Create a /etc/postfix/ldap-aliases.cf file if you do not have one already and make sure it contains the following: where ldap.example.com , example , and com are parameters that need to be replaced with the specification of an existing, available LDAP server. Note The /etc/postfix/ldap-aliases.cf file can specify various parameters, including parameters that enable LDAP SSL and STARTTLS . For more information, see the ldap_table(5) man page. For more information on LDAP , see OpenLDAP in the System-Level Authentication Guide . 15.3.2. Sendmail Sendmail's core purpose, like other MTAs, is to safely transfer email between hosts, usually using the SMTP protocol. Note that Sendmail is considered deprecated and administrators are encouraged to use Postfix when possible. See Section 15.3.1, "Postfix" for more information. 15.3.2.1. Purpose and Limitations It is important to be aware of what Sendmail is and what it can do, as opposed to what it is not. In these days of monolithic applications that fulfill multiple roles, Sendmail may seem like the only application needed to run an email server within an organization. Technically, this is true, as Sendmail can spool mail to each user's directory and deliver outbound mail for users. However, most users actually require much more than simple email delivery. Users usually want to interact with their email using an MUA that uses POP or IMAP to download their messages to their local machine. Or, they may prefer a Web interface to gain access to their mailbox. These other applications can work in conjunction with Sendmail, but they actually exist for different reasons and can operate separately from one another. It is beyond the scope of this section to go into all that Sendmail should or could be configured to do. With literally hundreds of different options and rule sets, entire volumes have been dedicated to helping explain everything that can be done and how to fix things that go wrong. See Section 15.7, "Additional Resources" for a list of Sendmail resources. This section reviews the files installed with Sendmail by default and reviews basic configuration changes, including how to stop unwanted email (spam) and how to extend Sendmail with the Lightweight Directory Access Protocol (LDAP) . 15.3.2.2. The Default Sendmail Installation In order to use Sendmail, first ensure the sendmail package is installed on your system by running, as root : In order to configure Sendmail, ensure the sendmail-cf package is installed on your system by running, as root : For more information on installing packages with Yum, see Section 9.2.4, "Installing Packages" . Before using Sendmail, the default MTA has to be switched from Postfix. For more information on how to switch the default MTA, refer to Section 15.3, "Mail Transport Agents" . The Sendmail executable is sendmail . The Sendmail configuration file is located at /etc/mail/sendmail.cf . Avoid editing the sendmail.cf file directly. To make configuration changes to Sendmail, edit the /etc/mail/sendmail.mc file, back up the original /etc/mail/sendmail.cf file, and restart the sendmail service. As a part of the restart, the sendmail.cf file and all binary representations of the databases are rebuilt: More information on configuring Sendmail can be found in Section 15.3.2.3, "Common Sendmail Configuration Changes" .
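For illustration, the package installation and service restart described above are typically performed with commands similar to the following; the package and service names are the usual defaults on Red Hat Enterprise Linux 7, but verify them on your system:

~]# yum install sendmail
~]# yum install sendmail-cf
~]# systemctl restart sendmail

Restarting the service in this way triggers the regeneration of the sendmail.cf file and the associated database files.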
Various Sendmail configuration files are installed in the /etc/mail/ directory, including: access - Specifies which systems can use Sendmail for outbound email. domaintable - Specifies domain name mapping. local-host-names - Specifies aliases for the host. mailertable - Specifies instructions that override routing for particular domains. virtusertable - Specifies a domain-specific form of aliasing, allowing multiple virtual domains to be hosted on one machine. Several configuration files in the /etc/mail/ directory, such as access , domaintable , mailertable and virtusertable , must store their information in database files before Sendmail can use any configuration changes. To include any changes made to these configurations in their database files, run the following command: 15.3.2.3. Common Sendmail Configuration Changes When altering the Sendmail configuration file, it is best not to edit an existing file, but to generate an entirely new /etc/mail/sendmail.cf file. Warning Before replacing or making any changes to the sendmail.cf file, create a backup copy. To add the desired functionality to Sendmail, edit the /etc/mail/sendmail.mc file as root . Once you are finished, restart the sendmail service and, if the m4 package is installed, the m4 macro processor will automatically generate a new sendmail.cf configuration file: Important The default sendmail.cf file does not allow Sendmail to accept network connections from any host other than the local computer. To configure Sendmail as a server for other clients, edit the /etc/mail/sendmail.mc file, and either change the address specified in the Addr= option of the DAEMON_OPTIONS directive from 127.0.0.1 to the IP address of an active network device or comment out the DAEMON_OPTIONS directive altogether by placing dnl at the beginning of the line. When finished, regenerate /etc/mail/sendmail.cf by restarting the service: The default configuration in Red Hat Enterprise Linux works for most SMTP -only sites. Consult the /usr/share/sendmail-cf/README file before editing any files in the directories under the /usr/share/sendmail-cf/ directory, as they can affect the future configuration of the /etc/mail/sendmail.cf file. 15.3.2.4. Masquerading One common Sendmail configuration is to have a single machine act as a mail gateway for all machines on the network. For example, a company may want to have a machine called mail.example.com that handles all of their email and assigns a consistent return address to all outgoing mail. In this situation, the Sendmail server must masquerade the machine names on the company network so that their return address is [email protected] instead of [email protected] . To do this, add the following lines to /etc/mail/sendmail.mc : After generating a new sendmail.cf file from the changed configuration in sendmail.mc , restart the sendmail service by running the following command: Note that administrators of mail servers, DNS and DHCP servers, as well as any provisioning applications, should agree on the host name format used in an organization. See the Red Hat Enterprise Linux 7 Networking Guide for more information on recommended naming practices.
It even blocks many of the more usual spamming methods by default. The main anti-spam features available in Sendmail are header checks , relaying denial (default from version 8.9), access database and sender information checks . For example, forwarding of SMTP messages, also called relaying, has been disabled by default since Sendmail version 8.9. Before this change occurred, Sendmail directed the mail host ( x.edu ) to accept messages from one party ( y.com ) and send them to a different party ( z.net ). Now, however, Sendmail must be configured to permit any domain to relay mail through the server. To configure relay domains, edit the /etc/mail/relay-domains file and restart Sendmail. However, servers on the Internet can also send spam messages. In these instances, Sendmail's access control features available through the /etc/mail/access file can be used to prevent connections from unwanted hosts. The following example illustrates how this file can be used to both block and specifically allow access to the Sendmail server: This example shows that any email sent from badspammer.com is blocked with a 550 RFC-821 compliant error code, with a message sent back. Emails sent from the tux.badspammer.com sub-domain are accepted. The last line shows that any email sent from the 10.0.*.* network can be relayed through the mail server. Because the /etc/mail/access.db file is a database, use the following command to update any changes: The above examples only represent a small part of what Sendmail can do in terms of allowing or blocking access. See the /usr/share/sendmail-cf/README file for more information and examples. Since Sendmail calls the Procmail MDA when delivering mail, it is also possible to use a spam filtering program, such as SpamAssassin, to identify and file spam for users. See Section 15.4.2.6, "Spam Filters" for more information about using SpamAssassin. 15.3.2.6. Using Sendmail with LDAP Using LDAP is a very quick and powerful way to find specific information about a particular user from a much larger group. For example, an LDAP server can be used to look up a particular email address from a common corporate directory by the user's last name. In this kind of implementation, LDAP is largely separate from Sendmail, with LDAP storing the hierarchical user information and Sendmail only being given the result of LDAP queries in pre-addressed email messages. However, Sendmail supports a much greater integration with LDAP , where it uses LDAP to replace separately maintained files, such as /etc/aliases and /etc/mail/virtusertables , on different mail servers that work together to support a medium- to enterprise-level organization. In short, LDAP abstracts the mail routing level from Sendmail and its separate configuration files to a powerful LDAP cluster that can be leveraged by many different applications. The current version of Sendmail contains support for LDAP . To extend the Sendmail server using LDAP , first get an LDAP server, such as OpenLDAP , running and properly configured. Then edit the /etc/mail/sendmail.mc file to include the following: Note This is only for a very basic configuration of Sendmail with LDAP . The configuration can differ greatly from this depending on the implementation of LDAP , especially when configuring several Sendmail machines to use a common LDAP server. Consult /usr/share/sendmail-cf/README for detailed LDAP routing configuration instructions and examples.
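As a sketch, the sendmail.mc additions for basic LDAP routing referred to above generally take a form similar to the following; the domain is a placeholder and the exact FEATURE arguments depend on your LDAP layout, so consult the README mentioned above for the authoritative syntax:

LDAPROUTE_DOMAIN(`yourdomain.com')dnl
FEATURE(`ldap_routing')dnl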
Next, recreate the /etc/mail/sendmail.cf file by running the m4 macro processor and again restarting Sendmail. See Section 15.3.2.3, "Common Sendmail Configuration Changes" for instructions. For more information on LDAP , see OpenLDAP in the System-Level Authentication Guide . 15.3.3. Fetchmail Fetchmail is an MTA which retrieves email from remote servers and delivers it to the local MTA. Many users appreciate the ability to separate the process of downloading their messages located on a remote server from the process of reading and organizing their email in an MUA. Designed with the needs of dial-up users in mind, Fetchmail connects and quickly downloads all of the email messages to the mail spool file using any number of protocols, including POP3 and IMAP . It can even forward email messages to an SMTP server, if necessary. Note In order to use Fetchmail , first ensure the fetchmail package is installed on your system by running, as root : For more information on installing packages with Yum, see Section 9.2.4, "Installing Packages" . Fetchmail is configured for each user through the use of a .fetchmailrc file in the user's home directory. If it does not already exist, create the .fetchmailrc file in your home directory. Using preferences in the .fetchmailrc file, Fetchmail checks for email on a remote server and downloads it. It then delivers it to port 25 on the local machine, using the local MTA to place the email in the correct user's spool file. If Procmail is available, it is launched to filter the email and place it in a mailbox so that it can be read by an MUA. 15.3.3.1. Fetchmail Configuration Options Although it is possible to pass all necessary options on the command line to check for email on a remote server when executing Fetchmail, using a .fetchmailrc file is much easier. Place any desired configuration options in the .fetchmailrc file for those options to be used each time the fetchmail command is issued. It is possible to override these at the time Fetchmail is run by specifying that option on the command line. A user's .fetchmailrc file contains three classes of configuration options: global options - Gives Fetchmail instructions that control the operation of the program or provide settings for every connection that checks for email. server options - Specifies necessary information about the server being polled, such as the host name, as well as preferences for specific email servers, such as the port to check or number of seconds to wait before timing out. These options affect every user using that server. user options - Contains information, such as user name and password, necessary to authenticate and check for email using a specified email server. Global options appear at the top of the .fetchmailrc file, followed by one or more server options, each of which designates a different email server that Fetchmail should check. User options follow server options for each user account checking that email server. Like server options, multiple user options may be specified for use with a particular server as well as to check multiple email accounts on the same server. Server options are called into service in the .fetchmailrc file by the use of a special option verb, poll or skip , that precedes any of the server information. The poll action tells Fetchmail to use this server option when it is run, which checks for email using the specified user options.
Any server options after a skip action, however, are not checked unless this server's host name is specified when Fetchmail is invoked. The skip option is useful when testing configurations in the .fetchmailrc file because it only checks skipped servers when specifically invoked, and does not affect any currently working configurations. The following is an example of a .fetchmailrc file: In this example, the global options specify that the user is sent email as a last resort ( postmaster option) and all email errors are sent to the postmaster instead of the sender ( bouncemail option). The set action tells Fetchmail that this line contains a global option. Then, two email servers are specified, one set to check using POP3 , the other for trying various protocols to find one that works. Two users are checked using the second server option, but all email found for any user is sent to user1 's mail spool. This allows multiple mailboxes to be checked on multiple servers, while appearing in a single MUA inbox. Each user's specific information begins with the user action. Note Users are not required to place their password in the .fetchmailrc file. Omitting the with password ' password ' section causes Fetchmail to ask for a password when it is launched. Fetchmail has numerous global, server, and local options. Many of these options are rarely used or only apply to very specific situations. The fetchmail man page explains each option in detail, but the most common ones are listed in the following three sections. 15.3.3.2. Global Options Each global option should be placed on a single line after a set action. daemon seconds - Specifies daemon-mode, where Fetchmail stays in the background. Replace seconds with the number of seconds Fetchmail is to wait before polling the server. postmaster - Specifies a local user to send mail to in case of delivery problems. syslog - Specifies the log file for errors and status messages. By default, this is /var/log/maillog . 15.3.3.3. Server Options Server options must be placed on their own line in .fetchmailrc after a poll or skip action. auth auth-type - Replace auth-type with the type of authentication to be used. By default, password authentication is used, but some protocols support other types of authentication, including kerberos_v5 , kerberos_v4 , and ssh . If the any authentication type is used, Fetchmail first tries methods that do not require a password, then methods that mask the password, and finally attempts to send the password unencrypted to authenticate to the server. interval number - Polls the specified server every number of times that it checks for email on all configured servers. This option is generally used for email servers where the user rarely receives messages. port port-number - Replace port-number with the port number. This value overrides the default port number for the specified protocol. proto protocol - Replace protocol with the protocol, such as pop3 or imap , to use when checking for messages on the server. timeout seconds - Replace seconds with the number of seconds of server inactivity after which Fetchmail gives up on a connection attempt. If this value is not set, a default of 300 seconds is used. 15.3.3.4. User Options User options may be placed on their own lines beneath a server option or on the same line as the server option. In either case, the defined options must follow the user option (defined below). 
fetchall - Orders Fetchmail to download all messages in the queue, including messages that have already been viewed. By default, Fetchmail only pulls down new messages. fetchlimit number - Replace number with the number of messages to be retrieved before stopping. flush - Deletes all previously viewed messages in the queue before retrieving new messages. limit max-number-bytes - Replace max-number-bytes with the maximum size in bytes that messages are allowed to be when retrieved by Fetchmail. This option is useful with slow network links, when a large message takes too long to download. password ' password ' - Replace password with the user's password. preconnect " command " - Replace command with a command to be executed before retrieving messages for the user. postconnect " command " - Replace command with a command to be executed after retrieving messages for the user. ssl - Activates SSL encryption. At the time of writing, the default action is to use the best available from SSL2 , SSL3 , SSL23 , TLS1 , TLS1.1 and TLS1.2 . Note that SSL2 is considered obsolete and due to the POODLE: SSLv3 vulnerability (CVE-2014-3566) , SSLv3 should not be used. However there is no way to force the use of TLS1 or newer, therefore ensure the mail server being connected to is configured not to use SSLv2 and SSLv3 . Use stunnel where the server cannot be configured not to use SSLv2 and SSLv3 . sslproto - Defines allowed SSL or TLS protocols. Possible values are SSL2 , SSL3 , SSL23 , and TLS1 . The default value, if sslproto is omitted, unset, or set to an invalid value, is SSL23 . The default action is to use the best from SSLv2 , SSLv3 , TLSv1 , TLS1.1 and TLS1.2 . Note that setting any other value for SSL or TLS will disable all the other protocols. Due to the POODLE: SSLv3 vulnerability (CVE-2014-3566) , it is recommend to omit this option, or set it to SSLv23 , and configure the corresponding mail server not to use SSLv2 and SSLv3 . Use stunnel where the server cannot be configured not to use SSLv2 and SSLv3 . user " username " - Replace username with the user name used by Fetchmail to retrieve messages. This option must precede all other user options. 15.3.3.5. Fetchmail Command Options Most Fetchmail options used on the command line when executing the fetchmail command mirror the .fetchmailrc configuration options. In this way, Fetchmail may be used with or without a configuration file. These options are not used on the command line by most users because it is easier to leave them in the .fetchmailrc file. There may be times when it is desirable to run the fetchmail command with other options for a particular purpose. It is possible to issue command options to temporarily override a .fetchmailrc setting that is causing an error, as any options specified at the command line override configuration file options. 15.3.3.6. Informational or Debugging Options Certain options used after the fetchmail command can supply important information. --configdump - Displays every possible option based on information from .fetchmailrc and Fetchmail defaults. No email is retrieved for any users when using this option. -s - Executes Fetchmail in silent mode, preventing any messages, other than errors, from appearing after the fetchmail command. -v - Executes Fetchmail in verbose mode, displaying every communication between Fetchmail and remote email servers. 
-V - Displays detailed version information, lists its global options, and shows settings to be used with each user, including the email protocol and authentication method. No email is retrieved for any users when using this option. 15.3.3.7. Special Options These options are occasionally useful for overriding defaults often found in the .fetchmailrc file. -a - Fetchmail downloads all messages from the remote email server, whether new or previously viewed. By default, Fetchmail only downloads new messages. -k - Fetchmail leaves the messages on the remote email server after downloading them. This option overrides the default behavior of deleting messages after downloading them. -l max-number-bytes - Fetchmail does not download any messages over a particular size and leaves them on the remote email server. --quit - Quits the Fetchmail daemon process. More commands and .fetchmailrc options can be found in the fetchmail man page. 15.3.4. Mail Transport Agent (MTA) Configuration A Mail Transport Agent (MTA) is essential for sending email. A Mail User Agent (MUA), such as Evolution or Mutt , is used to read and compose email. When a user sends an email from an MUA, the message is handed off to the MTA, which sends the message through a series of MTAs until it reaches its destination. Even if a user does not plan to send email from the system, some automated tasks or system programs might use the mail command to send email containing log messages to the root user of the local system. Red Hat Enterprise Linux 7 provides two MTAs: Postfix and Sendmail. If both are installed, Postfix is the default MTA. 15.4. Mail Delivery Agents Red Hat Enterprise Linux includes two primary MDAs, Procmail and the mail utility. Both applications are considered LDAs and both move email from the MTA's spool file into the user's mailbox. However, Procmail provides a robust filtering system. This section details only Procmail. For information on the mail command, consult its man page ( man mail ). Procmail delivers and filters email as it is placed in the mail spool file of the local host. It is powerful, gentle on system resources, and widely used. Procmail can play a critical role in delivering email to be read by email client applications. Procmail can be invoked in several different ways. Whenever an MTA places an email into the mail spool file, Procmail is launched. Procmail then filters and files the email for the MUA and quits. Alternatively, the MUA can be configured to execute Procmail any time a message is received so that messages are moved into their correct mailboxes. By default, the presence of /etc/procmailrc or of a ~/.procmailrc file (also called an rc file) in the user's home directory invokes Procmail whenever an MTA receives a new message. By default, no system-wide rc files exist in the /etc directory and no .procmailrc files exist in any user's home directory. Therefore, to use Procmail, each user must construct a .procmailrc file with specific environment variables and rules. Whether Procmail acts upon an email message depends upon whether the message matches a specified set of conditions or recipes in the rc file. If a message matches a recipe, then the email is placed in a specified file, is deleted, or is otherwise processed. When Procmail starts, it reads the email message and separates the body from the header information. Next, Procmail looks for a /etc/procmailrc file and rc files in the /etc/procmailrcs/ directory for default, system-wide, Procmail environmental variables and recipes.
Procmail then searches for a .procmailrc file in the user's home directory. Many users also create additional rc files for Procmail that are referred to within the .procmailrc file in their home directory. 15.4.1. Procmail Configuration The Procmail configuration file contains important environmental variables. These variables specify things such as which messages to sort and what to do with the messages that do not match any recipes. These environmental variables usually appear at the beginning of the ~/.procmailrc file in the following format: In this example, env-variable is the name of the variable and value defines the variable. There are many environment variables not used by most Procmail users and many of the more important environment variables are already defined by a default value. Most of the time, the following variables are used: DEFAULT - Sets the default mailbox where messages that do not match any recipes are placed. The default DEFAULT value is the same as $ORGMAIL . INCLUDERC - Specifies additional rc files containing more recipes for messages to be checked against. This breaks up the Procmail recipe lists into individual files that fulfill different roles, such as blocking spam and managing email lists, that can then be turned off or on by using comment characters in the user's ~/.procmailrc file. For example, lines in a user's ~/.procmailrc file may look like this: To turn off Procmail filtering of email lists but leave spam control in place, comment out the first INCLUDERC line with a hash sign ( # ). Note that it uses paths relative to the current directory. LOCKSLEEP - Sets the amount of time, in seconds, between attempts by Procmail to use a particular lockfile. The default is 8 seconds. LOCKTIMEOUT - Sets the amount of time, in seconds, that must pass after a lockfile was last modified before Procmail assumes that the lockfile is old and can be deleted. The default is 1024 seconds. LOGFILE - The file to which any Procmail information or error messages are written. MAILDIR - Sets the current working directory for Procmail. If set, all other Procmail paths are relative to this directory. ORGMAIL - Specifies the original mailbox, or another place to put the messages if they cannot be placed in the default or recipe-required location. By default, a value of /var/spool/mail/$LOGNAME is used. SUSPEND - Sets the amount of time, in seconds, that Procmail pauses if a necessary resource, such as swap space, is not available. SWITCHRC - Allows a user to specify an external file containing additional Procmail recipes, much like the INCLUDERC option, except that recipe checking is actually stopped on the referring configuration file and only the recipes on the SWITCHRC -specified file are used. VERBOSE - Causes Procmail to log more information. This option is useful for debugging. Other important environmental variables are pulled from the shell, such as LOGNAME , the login name; HOME , the location of the home directory; and SHELL , the default shell. A comprehensive explanation of all environment variables, and their default values, is available in the procmailrc man page. 15.4.2. Procmail Recipes New users often find the construction of recipes the most difficult part of learning to use Procmail. This difficulty is often attributed to recipes matching messages by using regular expressions, which are used to specify qualifications for string matching. However, regular expressions are not very difficult to construct and even less difficult to understand when read.
Additionally, the consistency of the way Procmail recipes are written, regardless of regular expressions, makes it easy to learn by example. To see example Procmail recipes, see Section 15.4.2.5, "Recipe Examples" . Procmail recipes take the following form: The first two characters in a Procmail recipe are a colon and a zero. Various flags can be placed after the zero to control how Procmail processes the recipe. A colon after the flags section specifies that a lockfile is created for this message. If a lockfile is created, the name can be specified by replacing lockfile-name . A recipe can contain several conditions to match against the message. If it has no conditions, every message matches the recipe. Regular expressions are placed in some conditions to facilitate message matching. If multiple conditions are used, they must all match for the action to be performed. Conditions are checked based on the flags set in the recipe's first line. Optional special characters placed after the asterisk character ( * ) can further control the condition. The action-to-perform argument specifies the action taken when the message matches one of the conditions. There can only be one action per recipe. In many cases, the name of a mailbox is used here to direct matching messages into that file, effectively sorting the email. Special action characters may also be used before the action is specified. See Section 15.4.2.4, "Special Conditions and Actions" for more information. 15.4.2.1. Delivering vs. Non-Delivering Recipes The action used if the recipe matches a particular message determines whether it is considered a delivering or non-delivering recipe. A delivering recipe contains an action that writes the message to a file, sends the message to another program, or forwards the message to another email address. A non-delivering recipe covers any other actions, such as a nesting block . A nesting block is a set of actions, contained in braces { } , that are performed on messages which match the recipe's conditions. Nesting blocks can be nested inside one another, providing greater control for identifying and performing actions on messages. When messages match a delivering recipe, Procmail performs the specified action and stops comparing the message against any other recipes. Messages that match non-delivering recipes continue to be compared against other recipes. 15.4.2.2. Flags Flags are essential to determine how or if a recipe's conditions are compared to a message. The egrep utility is used internally for matching of the conditions. The following flags are commonly used: A - Specifies that this recipe is only used if the recipe without an A or a flag also matched this message. a - Specifies that this recipe is only used if the recipe with an A or a flag also matched this message and was successfully completed. B - Parses the body of the message and looks for matching conditions. b - Uses the body in any resulting action, such as writing the message to a file or forwarding it. This is the default behavior. c - Generates a carbon copy of the email. This is useful with delivering recipes, since the required action can be performed on the message and a copy of the message can continue being processed in the rc files. D - Makes the egrep comparison case-sensitive. By default, the comparison process is not case-sensitive. E - While similar to the A flag, the conditions in the recipe are only compared to the message if the immediately preceding recipe without an E flag did not match. 
This is comparable to an else action. e - The recipe is compared to the message only if the action specified in the immediately preceding recipe fails. f - Uses the pipe as a filter. H - Parses the header of the message and looks for matching conditions. This is the default behavior. h - Uses the header in a resulting action. This is the default behavior. w - Tells Procmail to wait for the specified filter or program to finish, and reports whether or not it was successful before considering the message filtered. W - Is identical to w except that "Program failure" messages are suppressed. For a detailed list of additional flags, see the procmailrc man page. 15.4.2.3. Specifying a Local Lockfile Lockfiles are very useful with Procmail to ensure that more than one process does not try to alter a message simultaneously. Specify a local lockfile by placing a colon ( : ) after any flags on a recipe's first line. This creates a local lockfile based on the destination file name plus whatever has been set in the LOCKEXT global environment variable. Alternatively, specify the name of the local lockfile to be used with this recipe after the colon. 15.4.2.4. Special Conditions and Actions Special characters used before Procmail recipe conditions and actions change the way they are interpreted. The following characters may be used after the asterisk character ( * ) at the beginning of a recipe's condition line: ! - In the condition line, this character inverts the condition, causing a match to occur only if the condition does not match the message. < - Checks if the message is under a specified number of bytes. > - Checks if the message is over a specified number of bytes. The following characters are used to perform special actions: ! - In the action line, this character tells Procmail to forward the message to the specified email addresses. USD - Refers to a variable set earlier in the rc file. This is often used to set a common mailbox that is referred to by various recipes. | - Starts a specified program to process the message. { and } - Constructs a nesting block, used to contain additional recipes to apply to matching messages. If no special character is used at the beginning of the action line, Procmail assumes that the action line is specifying the mailbox in which to write the message. 15.4.2.5. Recipe Examples Procmail is an extremely flexible program, but as a result of this flexibility, composing Procmail recipes from scratch can be difficult for new users. The best way to develop the skills to build Procmail recipe conditions stems from a strong understanding of regular expressions combined with looking at many examples built by others. A thorough explanation of regular expressions is beyond the scope of this section. The structure of Procmail recipes and useful sample Procmail recipes can be found at various places on the Internet. The proper use and adaptation of regular expressions can be derived by viewing these recipe examples. In addition, introductory information about basic regular expression rules can be found in the grep(1) man page. The following simple examples demonstrate the basic structure of Procmail recipes and can provide the foundation for more intricate constructions. A basic recipe may not even contain conditions, as is illustrated in the following example: The first line specifies that a local lockfile is to be created but does not specify a name, so Procmail uses the destination file name and appends the value specified in the LOCKEXT environment variable. 
No condition is specified, so every message matches this recipe and is placed in the single spool file called new-mail.spool , located within the directory specified by the MAILDIR environment variable. An MUA can then view messages in this file. A basic recipe, such as this, can be placed at the end of all rc files to direct messages to a default location. The following example matched messages from a specific email address and throws them away. With this example, any messages sent by [email protected] are sent to the /dev/null device, deleting them. Warning Be certain that rules are working as intended before sending messages to /dev/null for permanent deletion. If a recipe inadvertently catches unintended messages, and those messages disappear, it becomes difficult to troubleshoot the rule. A better solution is to point the recipe's action to a special mailbox, which can be checked from time to time to look for false positives. Once satisfied that no messages are accidentally being matched, delete the mailbox and direct the action to send the messages to /dev/null . The following recipe grabs email sent from a particular mailing list and places it in a specified folder. Any messages sent from the [email protected] mailing list are placed in the tuxlug mailbox automatically for the MUA. Note that the condition in this example matches the message if it has the mailing list's email address on the From , Cc , or To lines. Consult the many Procmail online resources available in Section 15.7, "Additional Resources" for more detailed and powerful recipes. 15.4.2.6. Spam Filters Because it is called by Sendmail, Postfix, and Fetchmail upon receiving new emails, Procmail can be used as a powerful tool for combating spam. This is particularly true when Procmail is used in conjunction with SpamAssassin. When used together, these two applications can quickly identify spam emails, and sort or destroy them. SpamAssassin uses header analysis, text analysis, blacklists, a spam-tracking database, and self-learning Bayesian spam analysis to quickly and accurately identify and tag spam. Note In order to use SpamAssassin , first ensure the spamassassin package is installed on your system by running, as root : For more information on installing packages with Yum, see Section 9.2.4, "Installing Packages" . The easiest way for a local user to use SpamAssassin is to place the following line near the top of the ~/.procmailrc file: The /etc/mail/spamassassin/spamassassin-default.rc contains a simple Procmail rule that activates SpamAssassin for all incoming email. If an email is determined to be spam, it is tagged in the header as such and the title is prepended with the following pattern: The message body of the email is also prepended with a running tally of what elements caused it to be diagnosed as spam. To file email tagged as spam, a rule similar to the following can be used: This rule files all email tagged in the header as spam into a mailbox called spam . Since SpamAssassin is a Perl script, it may be necessary on busy servers to use the binary SpamAssassin daemon ( spamd ) and the client application ( spamc ). Configuring SpamAssassin this way, however, requires root access to the host. To start the spamd daemon, type the following command: To start the SpamAssassin daemon when the system is booted, run: See Chapter 10, Managing Services with systemd for more information about starting and stopping services. 
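On a systemd-based system, starting spamd and enabling it at boot are typically done with the following commands, run as root ; the service name spamassassin is the usual default, but confirm it on your system:

~]# systemctl start spamassassin
~]# systemctl enable spamassassin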
To configure Procmail to use the SpamAssassin client application instead of the Perl script, place the following line near the top of the ~/.procmailrc file. For a system-wide configuration, place it in /etc/procmailrc : 15.5. Mail User Agents Red Hat Enterprise Linux offers a variety of email programs, both graphical email client programs, such as Evolution , and text-based email programs such as mutt . The remainder of this section focuses on securing communication between a client and a server. 15.5.1. Securing Communication MUAs included with Red Hat Enterprise Linux, such as Thunderbird , Evolution and Mutt offer SSL-encrypted email sessions. Like any other service that flows over a network unencrypted, important email information, such as user names, passwords, and entire messages, may be intercepted and viewed by users on the network. Additionally, since the standard POP and IMAP protocols pass authentication information unencrypted, it is possible for an attacker to gain access to user accounts by collecting user names and passwords as they are passed over the network. 15.5.1.1. Secure Email Clients Most Linux MUAs designed to check email on remote servers support SSL encryption. To use SSL when retrieving email, it must be enabled on both the email client and the server. SSL is easy to enable on the client-side, often done with the click of a button in the MUA's configuration window or via an option in the MUA's configuration file. Secure IMAP and POP have known port numbers (993 and 995, respectively) that the MUA uses to authenticate and download messages. 15.5.1.2. Securing Email Client Communications Offering SSL encryption to IMAP and POP users on the email server is a simple matter. First, create an SSL certificate. This can be done in two ways: by applying to a Certificate Authority ( CA ) for an SSL certificate or by creating a self-signed certificate. Warning Self-signed certificates should be used for testing purposes only. Any server used in a production environment should use an SSL certificate signed by a CA. To create a self-signed SSL certificate for IMAP or POP , change to the /etc/pki/dovecot/ directory, edit the certificate parameters in the /etc/pki/dovecot/dovecot-openssl.cnf configuration file as you prefer, and type the following commands, as root : Once finished, make sure you have the following configurations in your /etc/dovecot/conf.d/10-ssl.conf file: Issue the following command to restart the dovecot daemon: Alternatively, the stunnel command can be used as an encryption wrapper around the standard, non-secure connections to IMAP or POP services. The stunnel utility uses external OpenSSL libraries included with Red Hat Enterprise Linux to provide strong cryptography and to protect the network connections. It is recommended to apply to a CA to obtain an SSL certificate, but it is also possible to create a self-signed certificate. See Using stunnel in the Red Hat Enterprise Linux 7 Security Guide for instructions on how to install stunnel and create its basic configuration. To configure stunnel as a wrapper for IMAPS and POP3S , add the following lines to the /etc/stunnel/stunnel.conf configuration file: The Security Guide also explains how to start and stop stunnel . Once you start it, it is possible to use an IMAP or a POP email client and connect to the email server using SSL encryption. 15.6. Configuring Mail Server with Antispam and Antivirus Once your email delivery works, incoming emails may contain unsolicited messages also known as spam. 
These messages can also contain harmful viruses and malware, posing a security risk and potential production loss on your systems. To avoid these risks, you can filter the incoming messages and check them for viruses by using an antispam and antivirus solution. 15.6.1. Configuring Spam Filtering for Mail Transport Agent or Mail Delivery Agent You can filter spam in a Mail Transport Agent (MTA), Mail Delivery Agent (MDA), or Mail User Agent (MUA). This chapter describes spam filtering in MTAs and MDAs. 15.6.1.1. Configuring Spam Filtering in a Mail Transport Agent Red Hat Enterprise Linux 7 offers two primary MTAs: Postfix and Sendmail. For details on how to install and configure an MTA, see Section 15.3, "Mail Transport Agents" . Stopping spam on the MTA side is possible with Sendmail, which has several anti-spam features: header checks , relaying denial , access database and sender information checks . For more information, see Section 15.3.2.5, "Stopping Spam" . Moreover, both Postfix and Sendmail can work with third-party mail filters (milters) to filter spam and viruses in the mail-processing chain. In the case of Postfix, the support for milters is included directly in the postfix package. In the case of Sendmail, you need to install the sendmail-milter package to be able to use milters. 15.6.1.2. Configuring Spam Filtering in a Mail Delivery Agent Red Hat Enterprise Linux includes two primary MDAs, Procmail and the mail utility. See Section 15.2.2, "Mail Delivery Agent" for more information. To stop spam in an MDA, users of Procmail can install third-party software named SpamAssassin, available in the spamassassin package. SpamAssassin is a spam detection system that uses a variety of methods to identify spam in incoming mail. For further information on SpamAssassin installation, configuration and deployment, see Section 15.4.2.6, "Spam Filters" or the How can I configure Spamassassin to filter all the incoming mail on my server? Red Hat Knowledgebase article. For additional information on SpamAssassin, see the SpamAssassin project website . Warning Note that SpamAssassin is third-party software, and Red Hat does not support its use. The spamassassin package is available only through the Extra Packages for Enterprise Linux (EPEL) repository. To learn more about using the EPEL repository, see Section 15.6.3, "Using the EPEL Repository to install Antispam and Antivirus Software" . To learn more about how Red Hat handles third-party software and what level of support for it Red Hat provides, see How does Red Hat Global Support Services handle third-party software, drivers, and/or uncertified hardware/hypervisors or guest operating systems? Red Hat Knowledgebase article.
To learn more about how Red Hat handles the third party software and what level of support for it Red Hat provides, see How does Red Hat Global Support Services handle third-party software, drivers, and/or uncertified hardware/hypervisors or guest operating systems? Red Hat Knowledgebase article. Once you have enabled the EPEL repository, install ClamAV by running the following command as the root user: 15.6.3. Using the EPEL Repository to install Antispam and Antivirus Software EPEL is a Fedora Special Interest Group that creates, maintains, and manages a high quality set of additional packages for Red Hat Enterprise Linux. For more information, see the Fedora EPEL website . To use the EPEL repository, download the latest version of the epel-release package for Red Hat Enterprise Linux 7 . You can also run the following command as the root user: When using the EPEL repository for the first time, you need to authenticate with a public GPG key. For more information, see Fedora Package Signing Keys . 15.7. Additional Resources The following is a list of additional documentation about email applications. 15.7.1. Installed Documentation Information on configuring Sendmail is included with the sendmail and sendmail-cf packages. /usr/share/sendmail-cf/README - Contains information on the m4 macro processor, file locations for Sendmail, supported mailers, how to access enhanced features, and more. In addition, the sendmail and aliases man pages contain helpful information covering various Sendmail options and the proper configuration of the Sendmail /etc/mail/aliases file. /usr/share/doc/postfix- version-number / - Contains a large amount of information on how to configure Postfix. Replace version-number with the version number of Postfix. /usr/share/doc/fetchmail- version-number / - Contains a full list of Fetchmail features in the FEATURES file and an introductory FAQ document. Replace version-number with the version number of Fetchmail. /usr/share/doc/procmail- version-number / - Contains a README file that provides an overview of Procmail, a FEATURES file that explores every program feature, and an FAQ file with answers to many common configuration questions. Replace version-number with the version number of Procmail. When learning how Procmail works and creating new recipes, the following Procmail man pages are invaluable: procmail - Provides an overview of how Procmail works and the steps involved with filtering email. procmailrc - Explains the rc file format used to construct recipes. procmailex - Gives a number of useful, real-world examples of Procmail recipes. procmailsc - Explains the weighted scoring technique used by Procmail to match a particular recipe to a message. /usr/share/doc/spamassassin- version-number / - Contains a large amount of information pertaining to SpamAssassin. Replace version-number with the version number of the spamassassin package. 15.7.2. Online Documentation How to configure postfix with TLS? - A Red Hat Knowledgebase article that describes configuring postfix to use TLS. How to configure a Sendmail Smart Host - A Red Hat Knowledgebase solution that describes configuring a sendmail Smart Host. http://www.sendmail.org/ - Offers a thorough technical breakdown of Sendmail features, documentation and configuration examples. http://www.sendmail.com/ - Contains news, interviews and articles concerning Sendmail, including an expanded view of the many options available. 
http://www.postfix.org/ - The Postfix project home page contains a wealth of information about Postfix. The mailing list is a particularly good place to look for information. http://www.fetchmail.info/fetchmail-FAQ.html - A thorough FAQ about Fetchmail. http://www.spamassassin.org/ - The official site of the SpamAssassin project. 15.7.3. Related Books Sendmail Milters: A Guide for Fighting Spam by Bryan Costales and Marcia Flynt; Addison-Wesley - A good Sendmail guide that can help you customize your mail filters. Sendmail by Bryan Costales with Eric Allman et al.; O'Reilly & Associates - A good Sendmail reference written with the assistance of the original creator of Delivermail and Sendmail. Removing the Spam: Email Processing and Filtering by Geoff Mulligan; Addison-Wesley Publishing Company - A volume that looks at various methods used by email administrators using established tools, such as Sendmail and Procmail, to manage spam problems. Internet Email Protocols: A Developer's Guide by Kevin Johnson; Addison-Wesley Publishing Company - Provides a very thorough review of major email protocols and the security they provide. Managing IMAP by Dianna Mullet and Kevin Mullet; O'Reilly & Associates - Details the steps required to configure an IMAP server.
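As a quick practical check related to Section 15.5.1.2, once SSL is enabled in Dovecot or wrapped with stunnel , you can confirm that the secure IMAP and POP ports actually present a certificate. The following commands are a minimal sketch that assumes the server is reachable under the placeholder host name mail.example.com ; openssl s_client prints the certificate chain and then leaves you at an interactive prompt where the usual IMAP or POP commands can be issued:
openssl s_client -connect mail.example.com:993
openssl s_client -connect mail.example.com:995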
[ "~]# yum install dovecot", "protocols = imap pop3 lmtp", "~]# systemctl restart dovecot", "~]# systemctl enable dovecot Created symlink from /etc/systemd/system/multi-user.target.wants/dovecot.service to /usr/lib/systemd/system/dovecot.service.", "ssl_protocols = !SSLv2 !SSLv3", "ssl=required", "~]# systemctl restart dovecot", "~]# alternatives --config mta", "~]# systemctl enable service", "~]# systemctl disable service", "~]# systemctl restart postfix", "alias_maps = hash:/etc/aliases, ldap:/etc/postfix/ldap-aliases.cf", "server_host = ldap.example.com search_base = dc= example , dc= com", "~]# yum install sendmail", "~]# yum install sendmail-cf", "systemctl restart sendmail", "systemctl restart sendmail", "~]# systemctl restart sendmail", "~]# systemctl restart sendmail", "FEATURE(always_add_domain)dnl FEATURE(masquerade_entire_domain)dnl FEATURE(masquerade_envelope)dnl FEATURE(allmasquerade)dnl MASQUERADE_DOMAIN(`example.com.')dnl MASQUERADE_AS(`example.com')dnl", "systemctl restart sendmail", "~]# systemctl restart sendmail", "badspammer.com ERROR:550 \"Go away and do not spam us anymore\" tux.badspammer.com OK 10.0 RELAY", "systemctl restart sendmail", "LDAPROUTE_DOMAIN(' yourdomain.com ')dnl FEATURE('ldap_routing')dnl", "~]# yum install fetchmail", "set postmaster \"user1\" set bouncemail poll pop.domain.com proto pop3 user 'user1' there with password 'secret' is user1 here poll mail.domain2.com user 'user5' there with password 'secret2' is user1 here user 'user7' there with password 'secret3' is user1 here", "env-variable =\" value \"", "MAILDIR=USDHOME/Msgs INCLUDERC=USDMAILDIR/lists.rc INCLUDERC=USDMAILDIR/spam.rc", ":0 flags : lockfile-name * condition_1_special-condition-character condition_1_regular_expression * condition_2_special-condition-character condition-2_regular_expression * condition_N_special-condition-character condition-N_regular_expression special-action-character action-to-perform", ":0: new-mail.spool", ":0 * ^From: [email protected] /dev/null", ":0: * ^(From|Cc|To).*tux-lug tuxlug", "~]# yum install spamassassin", "INCLUDERC=/etc/mail/spamassassin/spamassassin-default.rc", "*****SPAM*****", ":0 Hw * ^X-Spam-Status: Yes spam", "~]# systemctl start spamassassin", "systemctl enable spamassassin.service", "INCLUDERC=/etc/mail/spamassassin/spamassassin-spamc.rc", "dovecot]# rm -f certs/dovecot.pem private/dovecot.pem dovecot]# /usr/libexec/dovecot/mkcert.sh", "ssl_cert = </etc/pki/dovecot/certs/dovecot.pem ssl_key = </etc/pki/dovecot/private/dovecot.pem", "~]# systemctl restart dovecot", "[pop3s] accept = 995 connect = 110 [imaps] accept = 993 connect = 143", "~]# yum install clamav clamav-data clamav-server clamav-update", "~]# yum install https://dl.fedoraproject.org/pub/epel/epel-release-latest-7.noarch.rpmzu" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/system_administrators_guide/ch-mail_servers
Chapter 15. command
Chapter 15. command This chapter describes the commands under the command command. 15.1. command list List recognized commands by group Usage: Table 15.1. Command arguments Value Summary -h, --help Show this help message and exit --group <group-keyword> Show commands filtered by a command group, for example: identity, volume, compute, image, network and other keywords Table 15.2. Output formatter options Value Summary -f {csv,json,table,value,yaml}, --format {csv,json,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated --sort-column SORT_COLUMN Specify the column(s) to sort the data (columns specified first have a priority, non-existing columns are ignored), can be repeated Table 15.3. CSV formatter options Value Summary --quote {all,minimal,none,nonnumeric} When to include quotes, defaults to nonnumeric Table 15.4. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 15.5. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. You can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. Implied if --max-width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show.
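For example, to list only the Compute-related commands and print the result as JSON, you can combine the --group and --format options described above (the group keyword compute is one of the examples listed for --group ):
openstack command list --group compute -f json
Only --group changes which commands are returned; the formatter options control how the listing is rendered.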
[ "openstack command list [-h] [-f {csv,json,table,value,yaml}] [-c COLUMN] [--quote {all,minimal,none,nonnumeric}] [--noindent] [--max-width <integer>] [--fit-width] [--print-empty] [--sort-column SORT_COLUMN] [--group <group-keyword>]" ]
https://docs.redhat.com/en/documentation/red_hat_openstack_platform/16.2/html/command_line_interface_reference/command
20.4. volume_key References
20.4. volume_key References More information on volume_key can be found: in the readme file located at /usr/share/doc/volume_key-*/README on volume_key 's manpage using man volume_key online at http://fedoraproject.org/wiki/Disk_encryption_key_escrow_use_cases
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/storage_administration_guide/volume_key-documentation
Chapter 5. Downloading the test plan from Red Hat Certification Portal
Chapter 5. Downloading the test plan from Red Hat Certification Portal Procedure Log in to the Red Hat Certification portal . Search for the case number related to your product certification, and copy it. Click Cases and enter the product case number. Optional: To list the components that are tested during the test run, click Test Plans . Click Download Test Plan . Next steps If you plan to use Cockpit to run the tests, see Configuring the systems and running tests by using Cockpit . If you plan to use CLI to run the tests, see Configuring the systems and running tests by using CLI .
null
https://docs.redhat.com/en/documentation/red_hat_hardware_certification/2025/html/red_hat_hardware_certification_test_suite_user_guide/proc_downloading-the-test-plan-from-rhcert-connect_hw-test-suite-setting-test-environment
Chapter 5. Storing Data Grid Server credentials in keystores
Chapter 5. Storing Data Grid Server credentials in keystores External services require credentials to authenticate with Data Grid Server. To protect sensitive text strings such as passwords, add them to a credential keystore rather than directly in Data Grid Server configuration files. You can then configure Data Grid Server to decrypt passwords for establishing connections with services such as databases or LDAP directories. Important Plain-text passwords in USDRHDG_HOME/server/conf are unencrypted. Any user account with read access to the host filesystem can view plain-text passwords. While credential keystores are password-protected and store encrypted passwords, any user account with write access to the host filesystem can tamper with the keystore itself. To completely secure Data Grid Server credentials, you should grant read-write access only to user accounts that can configure and run Data Grid Server. 5.1. Setting up credential keystores Create keystores that encrypt credentials for Data Grid Server access. A credential keystore contains at least one alias that is associated with an encrypted password. After you create a keystore, you specify the alias in a connection configuration such as a database connection pool. Data Grid Server then decrypts the password for that alias from the keystore when the service attempts authentication. You can create as many credential keystores with as many aliases as required. Note As a security best practice, keystores should be readable only by the user who runs the process for Data Grid Server. Procedure Open a terminal in USDRHDG_HOME . Create a keystore and add credentials to it with the credentials command. Tip By default, keystores are of type PKCS12. Run help credentials for details on changing keystore defaults. The following example shows how to create a keystore that contains an alias of "dbpassword" for the password "changeme". When you create a keystore you also specify a password to access the keystore with the -p argument. Linux Microsoft Windows Check that the alias is added to the keystore. Open your Data Grid Server configuration for editing. Configure Data Grid to use the credential keystore. Add a credential-stores section to the security configuration. Specify the name and location of the credential keystore. Specify the password to access the credential keystore with the clear-text-credential configuration. Note Instead of adding a clear-text password for the credential keystore to your Data Grid Server configuration, you can use an external command or a masked password for additional security. You can also use a password in one credential store as the master password for another credential store. Reference the credential keystore in configuration that Data Grid Server uses to connect with an external system such as a datasource or LDAP server. Add a credential-reference section. Specify the name of the credential keystore with the store attribute. Specify the password alias with the alias attribute. Tip Attributes in the credential-reference configuration are optional. store is required only if you have multiple keystores. alias is required only if the keystore contains multiple password aliases. Save the changes to your configuration. 5.2. Securing passwords for credential keystores Data Grid Server requires a password to access credential keystores. 
You can add that password to Data Grid Server configuration in clear text or, as an added layer of security, you can use an external command for the password or you can mask the password. Prerequisites Set up a credential keystore for Data Grid Server. Procedure Do one of the following: Use the credentials mask command to obscure the password, for example: Masked passwords use Password Based Encryption (PBE) and must be in the following format in your Data Grid Server configuration: <MASKED_VALUE;SALT;ITERATION>. Use an external command that provides the password as standard output. An external command can be any executable, such as a shell script or binary, that uses java.lang.Runtime#exec(java.lang.String) . If the command requires parameters, provide them as a space-separated list of strings. 5.3. Credential keystore configuration You can add credential keystores to Data Grid Server configuration and use clear-text passwords, masked passwords, or external commands that supply passwords. Credential keystore with a clear text password XML <server xmlns="urn:infinispan:server:15.0"> <security> <credential-stores> <credential-store name="credentials" path="credentials.pfx"> <clear-text-credential clear-text="secret1234!"/> </credential-store> </credential-stores> </security> </server> JSON { "server": { "security": { "credential-stores": [{ "name": "credentials", "path": "credentials.pfx", "clear-text-credential": { "clear-text": "secret1234!" } }] } } } YAML server: security: credentialStores: - name: credentials path: credentials.pfx clearTextCredential: clearText: "secret1234!" Credential keystore with a masked password XML <server xmlns="urn:infinispan:server:15.0"> <security> <credential-stores> <credential-store name="credentials" path="credentials.pfx"> <masked-credential masked="1oTMDZ5JQj6DVepJviXMnX;pepper99;100"/> </credential-store> </credential-stores> </security> </server> JSON { "server": { "security": { "credential-stores": [{ "name": "credentials", "path": "credentials.pfx", "masked-credential": { "masked": "1oTMDZ5JQj6DVepJviXMnX;pepper99;100" } }] } } } YAML server: security: credentialStores: - name: credentials path: credentials.pfx maskedCredential: masked: "1oTMDZ5JQj6DVepJviXMnX;pepper99;100" External command passwords XML <server xmlns="urn:infinispan:server:15.0"> <security> <credential-stores> <credential-store name="credentials" path="credentials.pfx"> <command-credential command="/path/to/executable.sh arg1 arg2"/> </credential-store> </credential-stores> </security> </server> JSON { "server": { "security": { "credential-stores": [{ "name": "credentials", "path": "credentials.pfx", "command-credential": { "command": "/path/to/executable.sh arg1 arg2" } }] } } } YAML server: security: credentialStores: - name: credentials path: credentials.pfx commandCredential: command: "/path/to/executable.sh arg1 arg2" 5.4. Credential keystore references After you add credential keystores to Data Grid Server you can reference them in connection configurations. Datasource connections XML <server xmlns="urn:infinispan:server:15.0"> <security> <credential-stores> <credential-store name="credentials" path="credentials.pfx"> <clear-text-credential clear-text="secret1234!"/> </credential-store> </credential-stores> </security> <data-sources> <data-source name="postgres" jndi-name="jdbc/postgres"> <!-- Specifies the database username in the connection factory. 
--> <connection-factory driver="org.postgresql.Driver" username="dbuser" url="USD{org.infinispan.server.test.postgres.jdbcUrl}"> <!-- Specifies the credential keystore that contains an encrypted password and the alias for it. --> <credential-reference store="credentials" alias="dbpassword"/> </connection-factory> <connection-pool max-size="10" min-size="1" background-validation="1000" idle-removal="1" initial-size="1" leak-detection="10000"/> </data-source> </data-sources> </server> JSON { "server": { "security": { "credential-stores": [{ "name": "credentials", "path": "credentials.pfx", "clear-text-credential": { "clear-text": "secret1234!" } }], "data-sources": [{ "name": "postgres", "jndi-name": "jdbc/postgres", "connection-factory": { "driver": "org.postgresql.Driver", "username": "dbuser", "url": "USD{org.infinispan.server.test.postgres.jdbcUrl}", "credential-reference": { "store": "credentials", "alias": "dbpassword" } } }] } } } YAML server: security: credentialStores: - name: credentials path: credentials.pfx clearTextCredential: clearText: "secret1234!" dataSources: - name: postgres jndiName: jdbc/postgres connectionFactory: driver: org.postgresql.Driver username: dbuser url: 'USD{org.infinispan.server.test.postgres.jdbcUrl}' credentialReference: store: credentials alias: dbpassword LDAP connections XML <server xmlns="urn:infinispan:server:15.0"> <security> <credential-stores> <credential-store name="credentials" path="credentials.pfx"> <clear-text-credential clear-text="secret1234!"/> </credential-store> </credential-stores> <security-realms> <security-realm name="default"> <!-- Specifies the LDAP principal in the connection factory. --> <ldap-realm name="ldap" url="ldap://my-ldap-server:10389" principal="uid=admin,ou=People,dc=infinispan,dc=org"> <!-- Specifies the credential keystore that contains an encrypted password and the alias for it. --> <credential-reference store="credentials" alias="ldappassword"/> </ldap-realm> </security-realm> </security-realms> </security> </server> JSON { "server": { "security": { "credential-stores": [{ "name": "credentials", "path": "credentials.pfx", "clear-text-credential": { "clear-text": "secret1234!" } }], "security-realms": [{ "name": "default", "ldap-realm": { "name": "ldap", "url": "ldap://my-ldap-server:10389", "principal": "uid=admin,ou=People,dc=infinispan,dc=org", "credential-reference": { "store": "credentials", "alias": "ldappassword" } } }] } } } YAML server: security: credentialStores: - name: credentials path: credentials.pfx clearTextCredential: clearText: "secret1234!" securityRealms: - name: "default" ldapRealm: name: ldap url: 'ldap://my-ldap-server:10389' principal: 'uid=admin,ou=People,dc=infinispan,dc=org' credentialReference: store: credentials alias: ldappassword
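For reference, the command-credential examples above call a placeholder script, /path/to/executable.sh . A minimal sketch of such an external command is shown below; it assumes the keystore password is kept in a file that is readable only by the user account that runs Data Grid Server, and both the script name and the file path are illustrative rather than part of the product:
#!/bin/sh
# Print the credential on standard output; Data Grid Server reads it from there.
cat /etc/dg-secrets/keystore-password
Any executable that writes the password to standard output and exits successfully can fill the same role.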
[ "bin/cli.sh credentials add dbpassword -c changeme -p \"secret1234!\"", "bin\\cli.bat credentials add dbpassword -c changeme -p \"secret1234!\"", "bin/cli.sh credentials ls -p \"secret1234!\" dbpassword", "bin/cli.sh credentials mask -i 100 -s pepper99 \"secret1234!\"", "<server xmlns=\"urn:infinispan:server:15.0\"> <security> <credential-stores> <credential-store name=\"credentials\" path=\"credentials.pfx\"> <clear-text-credential clear-text=\"secret1234!\"/> </credential-store> </credential-stores> </security> </server>", "{ \"server\": { \"security\": { \"credential-stores\": [{ \"name\": \"credentials\", \"path\": \"credentials.pfx\", \"clear-text-credential\": { \"clear-text\": \"secret1234!\" } }] } } }", "server: security: credentialStores: - name: credentials path: credentials.pfx clearTextCredential: clearText: \"secret1234!\"", "<server xmlns=\"urn:infinispan:server:15.0\"> <security> <credential-stores> <credential-store name=\"credentials\" path=\"credentials.pfx\"> <masked-credential masked=\"1oTMDZ5JQj6DVepJviXMnX;pepper99;100\"/> </credential-store> </credential-stores> </security> </server>", "{ \"server\": { \"security\": { \"credential-stores\": [{ \"name\": \"credentials\", \"path\": \"credentials.pfx\", \"masked-credential\": { \"masked\": \"1oTMDZ5JQj6DVepJviXMnX;pepper99;100\" } }] } } }", "server: security: credentialStores: - name: credentials path: credentials.pfx maskedCredential: masked: \"1oTMDZ5JQj6DVepJviXMnX;pepper99;100\"", "<server xmlns=\"urn:infinispan:server:15.0\"> <security> <credential-stores> <credential-store name=\"credentials\" path=\"credentials.pfx\"> <command-credential command=\"/path/to/executable.sh arg1 arg2\"/> </credential-store> </credential-stores> </security> </server>", "{ \"server\": { \"security\": { \"credential-stores\": [{ \"name\": \"credentials\", \"path\": \"credentials.pfx\", \"command-credential\": { \"command\": \"/path/to/executable.sh arg1 arg2\" } }] } } }", "server: security: credentialStores: - name: credentials path: credentials.pfx commandCredential: command: \"/path/to/executable.sh arg1 arg2\"", "<server xmlns=\"urn:infinispan:server:15.0\"> <security> <credential-stores> <credential-store name=\"credentials\" path=\"credentials.pfx\"> <clear-text-credential clear-text=\"secret1234!\"/> </credential-store> </credential-stores> </security> <data-sources> <data-source name=\"postgres\" jndi-name=\"jdbc/postgres\"> <!-- Specifies the database username in the connection factory. --> <connection-factory driver=\"org.postgresql.Driver\" username=\"dbuser\" url=\"USD{org.infinispan.server.test.postgres.jdbcUrl}\"> <!-- Specifies the credential keystore that contains an encrypted password and the alias for it. 
--> <credential-reference store=\"credentials\" alias=\"dbpassword\"/> </connection-factory> <connection-pool max-size=\"10\" min-size=\"1\" background-validation=\"1000\" idle-removal=\"1\" initial-size=\"1\" leak-detection=\"10000\"/> </data-source> </data-sources> </server>", "{ \"server\": { \"security\": { \"credential-stores\": [{ \"name\": \"credentials\", \"path\": \"credentials.pfx\", \"clear-text-credential\": { \"clear-text\": \"secret1234!\" } }], \"data-sources\": [{ \"name\": \"postgres\", \"jndi-name\": \"jdbc/postgres\", \"connection-factory\": { \"driver\": \"org.postgresql.Driver\", \"username\": \"dbuser\", \"url\": \"USD{org.infinispan.server.test.postgres.jdbcUrl}\", \"credential-reference\": { \"store\": \"credentials\", \"alias\": \"dbpassword\" } } }] } } }", "server: security: credentialStores: - name: credentials path: credentials.pfx clearTextCredential: clearText: \"secret1234!\" dataSources: - name: postgres jndiName: jdbc/postgres connectionFactory: driver: org.postgresql.Driver username: dbuser url: 'USD{org.infinispan.server.test.postgres.jdbcUrl}' credentialReference: store: credentials alias: dbpassword", "<server xmlns=\"urn:infinispan:server:15.0\"> <security> <credential-stores> <credential-store name=\"credentials\" path=\"credentials.pfx\"> <clear-text-credential clear-text=\"secret1234!\"/> </credential-store> </credential-stores> <security-realms> <security-realm name=\"default\"> <!-- Specifies the LDAP principal in the connection factory. --> <ldap-realm name=\"ldap\" url=\"ldap://my-ldap-server:10389\" principal=\"uid=admin,ou=People,dc=infinispan,dc=org\"> <!-- Specifies the credential keystore that contains an encrypted password and the alias for it. --> <credential-reference store=\"credentials\" alias=\"ldappassword\"/> </ldap-realm> </security-realm> </security-realms> </security> </server>", "{ \"server\": { \"security\": { \"credential-stores\": [{ \"name\": \"credentials\", \"path\": \"credentials.pfx\", \"clear-text-credential\": { \"clear-text\": \"secret1234!\" } }], \"security-realms\": [{ \"name\": \"default\", \"ldap-realm\": { \"name\": \"ldap\", \"url\": \"ldap://my-ldap-server:10389\", \"principal\": \"uid=admin,ou=People,dc=infinispan,dc=org\", \"credential-reference\": { \"store\": \"credentials\", \"alias\": \"ldappassword\" } } }] } } }", "server: security: credentialStores: - name: credentials path: credentials.pfx clearTextCredential: clearText: \"secret1234!\" securityRealms: - name: \"default\" ldapRealm: name: ldap url: 'ldap://my-ldap-server:10389' principal: 'uid=admin,ou=People,dc=infinispan,dc=org' credentialReference: store: credentials alias: ldappassword" ]
https://docs.redhat.com/en/documentation/red_hat_data_grid/8.5/html/data_grid_security_guide/credential-keystores
Chapter 12. Configuring manual node reboot to define KernelArgs
Chapter 12. Configuring manual node reboot to define KernelArgs Overcloud nodes are automatically rebooted when the overcloud deployment includes setting the KernelArgs for the first time. Rebooting nodes can be an issue for existing workloads if you are adding KernelArgs to a deployment that is already in production. You can disable the automatic rebooting of nodes when updating a deployment, and instead perform node reboots manually after each overcloud deployment. Note If you disable automatic reboot and then add new Compute nodes to your deployment, the new nodes will not be rebooted during their initial provisioning. This might cause deployment errors because the configuration of KernelArgs is applied only after a reboot. 12.1. Configuring manual node reboot to define KernelArgs You can disable the automatic rebooting of nodes when you configure KernelArgs for the first time, and instead reboot the nodes manually. Procedure Log in to the undercloud as the stack user. Source the stackrc file: Enable the KernelArgsDeferReboot role parameter in a custom environment file, for example, kernelargs_manual_reboot.yaml : Add your custom environment file to the stack with your other environment files and deploy the overcloud: Retrieve a list of your Compute nodes to identify the host name of the node that you want to reboot: Disable the Compute service on the Compute node you want to reboot, to prevent the Compute scheduler from assigning new instances to the node: Replace <node> with the host name of the node you want to disable the Compute service on. Retrieve a list of the instances hosted on the Compute node that you want to migrate: Migrate the instances to another Compute node. For information on migrating instances, see Migrating virtual machine instances between Compute nodes . Log in to the node that you want to reboot. Reboot the node: Wait until the node boots. Re-enable the Compute node: Check that the Compute node is enabled:
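Optionally, after the node has rebooted and before you re-enable the Compute service, you can confirm that the arguments you defined in KernelArgs are active. A quick check, assuming you can log in to the rebooted node, is to inspect the kernel command line:
cat /proc/cmdline
The output should contain the kernel arguments from your deployment configuration.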
[ "[stack@director ~]USD source ~/stackrc", "parameter_defaults: <Role>Parameters: KernelArgsDeferReboot: True", "(undercloud)USD openstack overcloud deploy --templates -e [your environment files] -e /home/stack/templates/kernelargs_manual_reboot.yaml", "(undercloud)USD source ~/overcloudrc (overcloud)USD openstack compute service list", "(overcloud)USD openstack compute service set <node> nova-compute --disable", "(overcloud)USD openstack server list --host <node_UUID> --all-projects", "[tripleo-admin@overcloud-compute-0 ~]USD sudo reboot", "(overcloud)USD openstack compute service set <node_UUID> nova-compute --enable", "(overcloud)USD openstack compute service list" ]
https://docs.redhat.com/en/documentation/red_hat_openstack_platform/17.1/html/configuring_the_compute_service_for_instance_creation/assembly_configuring-manual-node-reboot-to-define-kernelargs_kernelargs-manual-reboot
Chapter 4. New features
Chapter 4. New features This part describes new features and major enhancements introduced in Red Hat Enterprise Linux 8.6. 4.1. Installer and image creation Image Builder supports customized file system partition on LVM With this enhancement, if you have more than one partition, you can create images with a customized file system partition on LVM and resize those partitions at runtime. For that, you can specify a customized filesystem configuration in your blueprint and then create images with the desired disk layout. The default filesystem layout remains unchanged - if you use plain images without file system customization, the root partition is resized by cloud-init . (JIRA:RHELPLAN-102505) 4.2. RHEL for Edge RHEL for Edge now supports Greenboot built-in health checks by default With this update, RHEL for Edge Greenboot now includes built-in health checks with watchdog feature to ensure that the hardware does not hang or freeze while rebooting. With that, you can benefit from the following features: It makes it simple for watchdogs hardware users to adopt the built-in health checks A set of default health checks that provide value for built-in OS components The watchdog is now present as default presets, which makes it easy to enable or disable this feature Ability to create custom health checks based on the already available health checks. ( BZ#2083036 ) RHEL 8 rebased to rpm-ostree v2022.2 RHEL 8 is distributed with the rpm-ostree version v2022.2, which provides multiple bug fixes and enhancements. Notable changes include: Kernel arguments can now be updated in an idempotent way, by using the new --append-if-missing and --delete-if-present kargs flags. The Count Me feature from YUM is now fully disabled by default in all repo queries and will only be triggered by the corresponding rpm-ostree-countme.timer and rpm-ostree-countme.service units. See countme . The post-processing logic can now process the user.ima IMA extended attribute. When an xattr extended attribute is found, the system automatically translates it to security.ima in the final OSTree package content. The treefile file has a new repo-packages field. You can use it to pin a set of packages to a specific repository. Ability to use modularity on the compose and client side. Container images are now used as a compose target and also as an upgrade source. ( BZ#2032594 ) 4.3. Subscription management Merged system purpose commands under subscription-manager syspurpose Previously, there were multiple subscription-manager modules ( addons , role , service-level , and usage ) for setting attributes related to system purpose. These modules have been moved under the new subscription-manager syspurpose module. The original subscription-manager modules ( addons , role , service-level , and usage ) are now deprecated. Additionally, the package ( python3-syspurpose ) that provides the syspurpose command line tool has been deprecated in RHEL 8.6. All the capabilities of this package are covered by the new subscription-manager syspurpose module. This update provides a consistent way to view, set, and update all system purpose attributes using a single command of subscription-manager; this replaces all the existing system purpose commands with their equivalent versions available as a new subcommand. For example, subscription-manager role --set SystemRole becomes subscription-manager syspurpose role --set SystemRole and so on. 
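A short illustration of the unified syntax, following the same pattern for the other attributes (the values shown are examples only):
subscription-manager syspurpose role --set "Red Hat Enterprise Linux Server"
subscription-manager syspurpose usage --set "Production"
subscription-manager syspurpose service-level --set "Standard"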
For complete information about the new commands, options, and other attributes, see the SYSPURPOSE OPTIONS section in the subscription-manager man page. ( BZ#2000883 ) 4.4. Software management The modulesync command is now available to replace certain workflows in RHEL 8 In Red Hat Enterprise Linux 8, modular packages cannot be installed without modular metadata. Previously, you could use the yum command to download packages, and then use the createrepo_c command to redistribute those packages. This enhancement introduces the modulesync command to ensure the presence of modular metadata, which ensures package installability. This command downloads rpm packages from modules and creates a repository with modular metadata in a working directory. ( BZ#1868047 ) A new --path CLI option is added to RPM With this update, you can query packages by a file that is currently not installed using a new --path CLI option. This option is similar to the existing --file option, but matches packages solely based on the provided path. Note that the file at that path does not need to exist on disk. The --path CLI option can be useful when a user excludes all documentation files at install time by using the --nodocs option with yum . In this case, by using the --path option, you can display the owning package of such an excluded file, whereas the --file option will not display the package because the requested file does not exist. ( BZ#1940895 ) 4.5. Shells and command-line tools The lsvpd package rebased to version 1.7.13 The lsvpd package has been rebased to version 1.7.13. Notable bug fixes and enhancements include: Added support for SCSI location code. Fixed length of absolute path getDevTreePath in sysfstreecollector . (BZ#1993557) The net-snmp-cert gencert tool now uses the SHA512 encryption algorithm instead of SHA1 In order to increase security, the net-snmp-cert gencert tool has been updated to generate certificates using SHA512 encryption algorithm by default. ( BZ#1908331 ) The dnn and text modules are available in the opencv package The dnn module containing Deep Neural Networks for image classification inference and the text module for scene text detection and recognition are now available in the opencv package. ( BZ#2007780 ) The powerpc-utils package rebased to version 1.3.9 The powerpc-utils package has been upgraded to version 1.3.9. Notable bug fixes, and enhancements include: Increased log size to 1MB in drmgr . Fixed checking HCNID array size at boot time. Implemented autoconnect-slaves on HNV connections in hcnmgr . Improved the HNV bond list connections in hcnmgr . Uses hexdump from util-linux instead of xxd from vim in hcnmgr . The hcn-init.service starts together with NetworkManager. Fixed OF to logical FC lookup for multipath in ofpathname . Fixed OF to logical lookup with partitions in ofpathname . Fixed bootlist for multipath devices with more than 5 paths. Introduced lparnumascore command to detect the NUMA affinity score for the running LPAR. Added the -x option in lpartstat to enhance security. Fixed ofpathname race with udev rename in hcnmgr . Fixed qrydev in HNV, and removed lsdevinfo . (BZ#2028690) The powerpc-utils package now supports vNIC as a backup device The powerpc-utils package now supports Virtual Network Interface cards (vNIC) as a backup vdevice for Hybrid Network Virtualization (HNV). (BZ#2022225) The opencryptoki package rebased to version 3.17.0 The opencryptoki package has been rebased to version 3.17.0. 
Notable bug fixes and enhancements include: The p11sak tool offers a new function of listing keys. Added support for OpenSSL 3.0 . Added support for event notifications. Added SW fallbacks in ICA tokens. The WebSphere Application Server no longer fails to start with the hardware crypto adapter enabled. The opencryptoki.module was removed, and the p11-kit list-modules command no longer causes error messages. (BZ#1984993) Certain network interfaces and IP addresses can be excluded when creating a rescue image You can use the EXCLUDE_IP_ADDRESSES variable to ignore certain IP addresses, and the EXCLUDE_NETWORK_INTERFACES variable to ignore certain network interfaces when creating a rescue image. On servers with floating addresses, you need to stop the ReaR rescue environment from configuring floating addresses that are moved to a fail-over server until the original server is recovered. Otherwise, a conflict with the fail-over server would occur and cause a consequent disruption of the services running on the fail-over server. To prevent conflicts, you can perform the following actions in the ReaR configuration file /etc/rear/local.conf : exclude the IP addresses in the ReaR by providing the EXCLUDE_IP_ADDRESSES variable as a bash array of addresses. For example: EXCLUDE_IP_ADDRESSES=( 192.0.2.27 192.0.2.10 ) , exclude the network interfaces in the ReaR by providing the EXCLUDE_NETWORK_INTERFACES variable as a bash array of interfaces. For example: EXCLUDE_NETWORK_INTERFACES=( eno1d1 ) . ( BZ#2035939 ) 4.6. Infrastructure services New bind9.16 package version 9.16.23 introduced A new bind9.16 package version 9.16.23 has been introduced as an alternative to bind component version 9.11.36. Notable enhancements include: Introduced new Key and Signing Policy feature in DNSSEC. Introduced the QNAME minimisation to improve privacy. Introduced the validate-except feature to Permanent. Negative Trust Anchors to temporarily disable DNSSEC validation. Refactored the response policy zones (RPZ). Introduced new naming conventions for zone types: primary and secondary zone types are used as synonyms to master and slave . Introduced a supplementary YAML output mode of dig , mdig , and delv commands. The filter-aaaa functionality was moved into separate filter-a and filter-aaaa plugins. Introduced a new zone type mirror support ( RFC 8806 ). Removed features: The dnssec-enabled option has been removed, DNSSEC is enabled by default, and the dnssec-enabled keywords are no longer accepted. The lwresd lightweight resolver daemon, and liblwres lightweight resolver library have been removed. (BZ#1873486) CUPS is available as a container image The Common Unix Printing System (CUPS) is now available as a container image, and you can deploy it from the Red Hat Container Catalog. (BZ#1913715) The bind component rebased to version 9.11.36 The bind component has been updated to version 9.11.36. Notable bug fixes and enhancements include: Improved the lame-ttl option to be more secure. A multiple threads bug affecting RBTDB instances no longer results in assertion failure in free_rbtdb() . Updated implementation of the ZONEMD RR type to match RFC 8976. The maximum supported number of NSEC3 iterations has been reduced to 150. Records with more iterations are treated as insecure. An invalid direction field in a LOC record no longer results in a failure. ( BZ#2013993 ) CUPS driverless printing is available in CUPS Web UI CUPS driverless printing, based on the IPP Everywhere model, is available in the CUPS Web UI. 
In addition to the lpadmin command used in the CLI, you can create an IPP Everywhere queue in the CUPS Web UI to print to network printers without special software. ( BZ#2032965 ) 4.7. Security The pcsc-lite packages rebased to 1.9.5 The pcsc-lite packages have been rebased to upstream version 1.9.5. This update provides new enhancements and bug fixes, most notably: The pcscd daemon no longer automatically exits after inactivity when started manually. The pcsc-spy utility now supports Python 3 and a new --thread option. Performance of the SCardEndTransaction() function has been improved. The poll() function replaced the select() function, which allows file descriptor numbers higher than FD_SETSIZE . Many memory leaks and concurrency problems have been fixed. ( BZ#2014641 ) Crypto policies support diffie-hellman-group14-sha256 You can now use the diffie-hellman-group14-sha256 key exchange (KEX) algorithm for the libssh library in RHEL system-wide cryptographic policies. This update also provides parity with OpenSSH, which also supports this KEX algorithm. With this update, libssh has diffie-hellman-group14-sha256 enabled by default, but you can disable it by using a custom crypto policy. ( BZ#2023744 ) OpenSSH servers now support drop-in configuration files The sshd_config file supports the Include directive, which means you can include configuration files in another directory. This makes it easier to apply system-specific configurations on OpenSSH servers by using automation tools such as Ansible Engine. It is also more consistent with the capabilities of the ssh_config file. In addition, drop-in configuration files also make it easier to organize different configuration files for different uses, such as filter incoming connections. (BZ#1926103) sshd_config:ClientAliveCountMax=0 disables connection termination Setting the SSHD configuration option ClientAliveCountMax to 0 now disables connection termination. This aligns the behavior of this option with the upstream. As a consequence, OpenSSH no longer disconnects idle SSH users when it reaches the timeout configured by the ClientAliveInterval option. ( BZ#2015828 ) libssh rebased to 0.9.6 The libssh package has been rebased to upstream version 0.9.6. This version provides bug fixes and enhancements, most notably: Support for multiple identity files. The files are processed from the bottom to the top as listed in the ~/.ssh/config file. Parsing of sub-second times in SFTP is fixed. A regression of the ssh_channel_poll_timeout() function returning SSH_AGAIN unexpectedly is now fixed. A possible heap-buffer overflow after key re-exchange is fixed. A handshake bug when AEAD cipher is matched but there is no HMAC overlap is fixed. Several memory leaks on error paths are fixed. ( BZ#1896651 ) Libreswan rebased to 4.5 Libreswan has been rebased to upstream version 4.5. This version provides many bug fixes and enhancements, most notably: Support of Internet Key Exchange version 2 (IKEv2) for Labeled IPsec. Support for childless initiation of Internet Key Exchange (IKE) Security Association (SA). (BZ#2017352) New option to verify SELinux module checksums With the newly added --checksum option to the semodule command, you can verify the versions of installed SELinux policy modules. Because Common Intermediate Language (CIL) does not store module name and module version in the module itself, there previously was no simple way to verify that the installed module is the same version as the module which was supposed to be installed. 
With the new command semodule -l --checksum , you receive a SHA256 hash of the specified module and can compare it with the checksum of the original file, which is faster than reinstalling modules. Example of use: ( BZ#1731501 ) OpenSCAP can read local files OpenSCAP can now consume local files instead of remote SCAP source data stream components. Previously, you could not perform a complete evaluation of SCAP source data streams containing remote components on systems that have no internet access. On these systems, OpenSCAP could not evaluate some of the rules in these data streams because the remote components needed to be downloaded from the internet. With this update, you can download and copy the remote SCAP source data stream components to the target system before performing the OpenSCAP scan and provide them to OpenSCAP by using the --local-files option with the oscap command. ( BZ#1970529 ) SSG now scans and remediates rules for home directories and interactive users OVAL content to check and remediate all existing rules related to home directories used by interactive users was added to the SCAP Security Guide (SSG) suite. Many benchmarks require verification of properties and content usually found within home directories of interactive users. Because the existence and the number of interactive users in a system may vary, there was previously no robust solution to cover this gap using the OVAL language. This update adds OVAL checks and remediations that detect local interactive users in a system and their respective home directories. As a result, SSG can safely check and remediate all related benchmark requirements. ( BZ#1884687 ) SCAP rules now have a warning message to configure Audit log buffer for large systems The SCAP rule xccdf_org.ssgproject.content_rule_audit_basic_configuration now displays a performance warning that suggests users of large systems where the Audit log buffer configured by this rule might be too small and can override the custom value. The warning also describes the process to configure a larger Audit log buffer. With this enhancement, users of large systems can stay compliant and have their Audit log buffer set correctly. ( BZ#1993826 ) SSG now supports the /etc/security/faillock.conf file This enhancement adds support for the /etc/security/faillock.conf file in SCAP Security Guide (SSG). With this update, SSG can assess and remediate the /etc/security/faillock.conf file for definition of pam_faillock settings. The authselect tool is also used to enable the pam_faillock module while ensuring the integrity of pam files. As a result, the assessment and remediation of the pam_faillock module is aligned with the latest versions and best practices. ( BZ#1956972 ) SCAP Security Guide rebased to 0.1.60 The SCAP Security Guide (SSG) packages have been rebased to upstream version 0.1.60. This version provides various enhancements and bug fixes, most notably: Rules hardening the PAM stack now use authselect as the configuration tool. Tailoring files that define profiles which represent the differences between DISA STIG automated SCAP content and SCAP automated content (delta tailoring) are now supported. The rule xccdf_org.ssgproject.content_enable_fips_mode now checks only whether the FIPS mode has been enabled properly. It does not guarantee that system components have undergone FIPS certification. 
( BZ#2014485 ) DISA STIG profile supports Red Hat Virtualization 4.4 The DISA STIG for Red Hat Enterprise Linux 8 profile version V1R5 has been enhanced to support Red Hat Virtualization 4.4. This profile aligns with the RHEL 8 Security Technical Implementation Guide (STIG) manual benchmark provided by the Defense Information Systems Agency (DISA). However, some configurations are not applied on hosts where Red Hat Virtualization (RHV) is installed because they prevent Red Hat Virtualization from installing and working properly. When the STIG profile is applied on a Red Hat Virtualization Host (RHVH), on a self-hosted install (RHELH), or on a host with RHV Manager installed, the following rules result in 'notapplicable': package_gss_proxy_removed package_krb5-workstation_removed package_tuned_removed sshd_disable_root_login sudo_remove_nopasswd sysctl_net_ipv4_ip_forward xwindows_remove_packages Warning Automatic remediation might render the system non-functional. Run the remediation in a test environment first. ( BZ#2021802 ) OpenSCAP rebased to 1.3.6 The OpenSCAP packages have been rebased to upstream version 1.3.6. This version provides various bug fixes and enhancements, most notably: You can provide local copies of remote SCAP source data stream components by using the --local-files option. OpenSCAP accepts multiple --rule arguments to select multiple rules on the command line. OpenSCAP allows skipping evaluation of some rules using the --skip-rule option. You can restrict memory consumed by OpenSCAP probes by using the OSCAP_PROBE_MEMORY_USAGE_RATIO environment variable. OpenSCAP now supports the OSBuild Blueprint as a remediation type. ( BZ#2041781 ) clevis-systemd no longer depends on nc With this enhancement, the clevis-systemd package no longer depends on the nc package. The dependency did not work correctly when used with Extra Packages for Enterprise Linux (EPEL). ( BZ#1949289 ) audit rebased to 3.0.7 The audit packages have been upgraded to version 3.0.7 which introduces many enhancements and bug fixes. Most notably: Added sudoers to Audit base rules. Added the --eoe-timeout option to the ausearch command and its analogous eoe_timeout option to auditd.conf file that specifies the value for end of event timeout, which impacts how ausearch parses co-located events. Introduced a fix for the 'audisp-remote' plugin that used 100% of CPU capacity when the remote location was not available. ( BZ#1939406 ) Audit now provides options for specifying the end of the event timeout With this release, the ausearch tool supports the --eoe-timeout option, and the auditd.conf file contains the end_of_event_timeout option. You can use these options to specify the end of the event timeout to avoid problems with parsing co-located events. The default value for the end of the event timeout is set to two seconds. ( BZ#1921658 ) Adding sudoers to Audit base rules With this enhancement, the /etc/sudoers and the etc/sudoers.d/ directories are added to Audit base rules such as the Payment Card Industry Data Security Standard (PCI DSS) and the Operating Systems Protection Profile (OSPP). This increases the security by monitoring configuration changes in privileged areas such as sudoers . (BZ#1927884) Rsyslog includes the mmfields module for higher-performance operations and CEF Rsyslog now includes the rsyslog-mmfields subpackage which provides the mmfields module. 
This is an alternative to using the property replacer field extraction, but in contrast to the property replacer, all fields are extracted at once and stored inside the structured data part. As a result, you can use mmfields particularly for processing field-based log formats, for example Common Event Format (CEF), and if you need a large number of fields or reuse specific fields. In these cases, mmfields has better performance than existing Rsyslog features. ( BZ#1947907 ) libcap rebased to version 2.48 The libcap packages have been upgraded to upstream version 2.48, which provides a number of bug fixes and enhancements over the version, most notably: Helper library for POSIX semantic system calls ( libpsx ) Support for overriding system call functions IAB abstraction for capability sets Additional capsh testing features ( BZ#2032813 ) fapolicyd rebased to 1.1 The fapolicyd packages have been upgraded to the upstream version 1.1, which contains many improvements and bug fixes. Most notable changes include the following: The /etc/fapolicyd/rules.d/ directory for files containing allow and deny execution rules replaces the /etc/fapolicyd/fapolicyd.rules file. The fagenrules script now merges all component rule files in this directory to the /etc/fapolicyd/compiled.rules file. See the new fagenrules(8) man page for more details. In addition to the /etc/fapolicyd/fapolicyd.trust file for marking files outside of the RPM database as trusted, you can now use the new /etc/fapolicyd/trust.d directory, which supports separating a list of trusted files into more files. You can also add an entry for a file by using the fapolicyd-cli -f subcommand with the --trust-file directive to these files. See the fapolicyd-cli(1) and fapolicyd.trust(13) man pages for more information. The fapolicyd trust database now supports white spaces in file names. fapolicyd now stores the correct path to an executable file when it adds the file to the trust database. ( BZ#1939379 ) libseccomp rebased to 2.5.2 The libseccomp packages have been rebased to upstream version 2.5.2. This version provides bug fixes and enhancements, most notably: Updated the syscall table for Linux to version v5.14-rc7 . Added the get_notify_fd() function to the Python bindings to get the notification file descriptor. Consolidated multiplexed syscall handling for all architectures into one location. Added multiplexed syscall support to the PowerPC (PPC) and MIPS architectures. Changed the meaning of the SECCOMP_IOCTL_NOTIF_ID_VALID operation within the kernel. Changed the libseccomp file descriptor notification logic to support the kernel's and new usage of SECCOMP_IOCTL_NOTIF_ID_VALID . ( BZ#2019893 ) 4.8. Networking CleanUpModulesOnExit firewalld global configuration option is now available Previously, when restarting or otherwise shutting down firewalld , firewalld recursively unloaded kernel modules. As a result, other packages attempting to use these modules or dependent modules would fail. With this upgrade, users can set the CleanUpModulesOnExit option to no to stop firewalld from unloading these kernel modules. (BZ#1980206) Restoring large nftables sets requires less memory With this enhancement, the nftables framework requires significantly less memory when you restore large sets. The algorithm which prepares the netlink message has been improved, and, as a result, restoring a set can use up to 40% less memory. 
( BZ#2047821 ) The nmstate API now supports OVS-DPDK This enhancement adds the schema for the Open vSwitch (OVS) Data Plane Development Kit (DPDK) to the nmstate API. As a result, you can use nmstate to configure OVS devices with DPDK ports. ( BZ#2003976 ) The nmstate API now supports VLAN and QoS ID in SR-IOV virtual functions This update enhances the nmstate API with support for local area network (VLAN) and quality of service (QoS) in single root I/O virtualization (SR-IOV) virtual functions. As a result, you can use nmstate to configure these features. ( BZ#2004006 ) NetworkManager rebased to version 1.36.0 The NetworkManager packages have been upgraded to upstream version 1.36.0, which provides a number of enhancements and bug fixes over the version: The handling of layer 3 configurations has been reworked to improve the stability, performance, and memory usage. NetworkManager now supports the rd.znet_ifnames kernel command line option on the IBM Z platform. The blackhole , unreachable , and prohibit route types have been added. NetworkManager now ignores routes managed by routing services. The Wi-Fi Protected Access version 3 (WPA3) network security has been improved by enabling the hash-to-element (H2E) method when generating simultaneous authentication of equals (SAE) password elements. The service now correctly handles replies from DHCP servers that send duplicate address or mask options. You can now turn off MAC aging on bridges. NetworkManager no longer listens for netlink events for traffic control objects, such as qdiscs and filters . Network bonds now support setting a queue ID for bond ports. For further information about notable changes, read the upstream release notes: NetworkManager 1.36.0 NetworkManager 1.34.0 ( BZ#1996617 ) The hostapd package has been added to RHEL 8.6 With this release, RHEL provides the hostapd package. However, Red Hat supports hostapd only to set up a RHEL host as an 802.1X authenticator in Ethernet networks. Other scenarios, such as Wi-Fi access points or authenticators in Wi-Fi networks, are not supported. For details about configuring RHEL as an 802.1X authenticator with a FreeRADIUS back end, see Setting up an 802.1x network authentication service for LAN clients using hostapd with FreeRADIUS backend . (BZ#2016946) NetworkManager now supports setting the number of receiving queues ( rx_queue ) on OVS-DPDK interfaces With this enhancement, you can use NetworkManager to configure the n_rxq setting of Open vSwitch (OVS) Data Plane Development Kit (DPDK) interfaces. Use the ovs-dpdk.n-rxq attribute in NetworkManager to set the number of receiving queues on OVS-DPDK interfaces. For example, to configure 2 receiving queues in OVS interface named ovs-iface0 , enter: ( BZ#2001563 ) The nftables framework now supports nft set elements with attached counters Previously, in the netfilter framework, nftables set counters were not supported. The nftables framework is configurable by the nft tool. The kernel allows this tool to count the network packets from a given source address with a statement add @myset {ip saddr counter} . In this update, you can count packets that match a specific criteria with a dynamic set and elements with attached counters. (BZ#1983635) The nispor packages are now fully supported The nispor packages, previously available as a Technology Preview, are now fully supported. This enhancement adds support for NetStateFilter to use the kernel filter on network routes and interfaces. 
With this release, the nispor packages single Root Input and Output Virtualization (SR-IOV) interfaces can query SR-IOV Virtual Function (SR-IOV VF) information per (VF), support new bonding options: lacp_active , arp_missed_max , and ns_ip6_target . (BZ#1848817) 4.9. Kernel Kernel version in RHEL 8.6 Red Hat Enterprise Linux 8.6 is distributed with the kernel version 4.18.0-372. See also Important changes to external kernel parameters and Device Drivers . ( BZ#1839151 ) Extended Berkeley Packet Filter for RHEL 8.6 The Extended Berkeley Packet Filter (eBPF) is an in-kernel virtual machine that allows code execution in the kernel space, in the restricted sandbox environment with access to a limited set of functions. The virtual machine executes a special assembly-like code. The eBPF bytecode first loads to the kernel, followed by its verification, code translation to the native machine code with just-in-time compilation, and then the virtual machine executes the code. Red Hat ships numerous components that utilize the eBPF virtual machine. Each component is in a different development phase, and thus not all components are currently fully supported. In RHEL 8.6, the following eBPF components are supported: The BPF Compiler Collection (BCC) tools package, which provides tools for I/O analysis, networking, and monitoring of Linux operating systems using eBPF . The BCC library which allows the development of tools similar to those provided in the BCC tools package. The eBPF for Traffic Control (tc) feature, which enables programmable packet processing inside the kernel network data path. The bpftrace tracing language The eXpress Data Path (XDP) feature, which provides access to received packets before the kernel networking stack processes them, is supported under specific conditions. For more information see, XDP is conditionally supported and Overview of networking eBPF features in RHEL . The libbpf package, which is crucial for bpf related applications like bpftrace and bpf/xdp development. The xdp-tools package, which contains userspace support utilities for the XDP feature, is now supported on the AMD and Intel 64-bit architectures. This includes the libxdp library, the xdp-loader utility for loading XDP programs, the xdp-filter example program for packet filtering, and the xdpdump utility for capturing packets from a network interface with XDP enabled. Note that all other eBPF components are available as Technology Preview, unless a specific component is indicated as supported. The following notable eBPF components are currently available as Technology Preview: The AF_XDP socket for connecting the eXpress Data Path (XDP) path to user space For more information regarding the Technology Preview components, see eBPF available as a Technology Preview . ( BZ#1780124 ) Red Hat, by default, enables eBPF in all RHEL versions for privileged users only Extended Berkeley Packet Filter ( eBPF ) is a complex technology which allows users to execute custom code inside the Linux kernel. Due to its nature, the eBPF code needs to pass through the verifier and other security mechanisms. There were Common Vulnerabilities and Exposures (CVE) instances, where bugs in this code could be misused for unauthorized operations. To mitigate this risk, Red Hat by default enabled eBPF in all RHEL versions for privileged users only. It is possible to enable eBPF for unprivileged users by using the kernel.command-line parameter unprivileged_bpf_disabled=0 . 
However, note that: Applying unprivileged_bpf_disabled=0 disqualifies your kernel from Red Hat support and opens your system to security risks. Red Hat urges you to treat processes with the CAP_BPF capability as if the capability was equal to CAP_SYS_ADMIN . Setting unprivileged_bpf_disabled=0 will not be sufficient to execute many BPF programs by unprivileged users as loading of most BPF program types requires additional capabilities (typically CAP_SYS_ADMIN or CAP_PERFMON ). For information on how to apply kernel command-line parameters, see Configuring kernel command-line parameters . (BZ#2089409) The osnoise and timerlat tracers were added in RHEL 8 The osnoise tracer measures operating system noise. That is, the interruptions of applications by the OS and hardware interrupts. It also provides a set of tracepoints to help find the source of the OS noise. The timerlat tracer measures the wakeup latencies and helps to identify the causes of such latencies of real-time (RT) threads. In RT computing, latency is absolutely crucial and even a minimal delay can be detrimental. The osnoise and timerlat tracers enable you to investigate and find causes of OS interference with applications and wakeup delay of RT threads. (BZ#1979382) The strace utility can now display mismatches between the actual SELinux contexts and the definitions extracted from the SELinux context database An existing --secontext option of strace has been extended with the mismatch parameter. This parameter enables to print the expected context along with the actual one upon mismatch only. The output is separated by double exclamation marks ( !! ), first the actual context, then the expected one. In the examples below, the full,mismatch parameters print the expected full context along with the actual one because the user part of the contexts mismatches. However, when using a solitary mismatch , it only checks the type part of the context. The expected context is not printed because the type part of the contexts matches. SELinux context mismatches often cause access control issues associated with SELinux. The mismatches printed in the system call traces can significantly expedite the checks of SELinux context correctness. The system call traces can also explain specific kernel behavior with respect to access control checks. ( BZ#2038992 , BZ#2038810 ) The --cyclictest-threshold option has been added to the rteval utility With this enhancement, the --cyclictest-threshold=USEC option has been added to the rteval test suite. Using this option you can specify a threshold value. The rteval test run ends immediately if any latency measurements exceed this threshold value. When latency expectations are not met, the run aborts with a failure status. ( BZ#2012285 ) 4.10. File systems and storage RHEL 8.6 is compatible with RHEL 9 XFS images With this update, RHEL 8.6 is now able to use RHEL 9 XFS images. RHEL 9 XFS guest images must have bigtime and inode btree counters ( inobtcount ) on-disk capabilities allowed in order to mount the guest image with RHEL 8.6. Note that file systems created with bigtime and inobtcount features are not compatible with versions earlier than RHEL 8.6. (BZ#2022903, BZ#2024201 ) Options in Samba utilities have been renamed and removed for a consistent user experience The Samba utilities have been improved to provide a consistent command-line interface. These improvements include renamed and removed options. 
Therefore, to avoid problems after the update, review your scripts that use Samba utilities, and update them, if necessary. Samba 4.15 introduces the following changes to the Samba utilities: Previously, Samba command-line utilities silently ignored unknown options. To prevent unexpected behavior, the utilities now consistently reject unknown options. Several command-line options now have a corresponding smb.conf variable to control their default value. See the man pages of the utilities to identify if a command-line option has an smb.conf variable name. By default, Samba utilities now log to standard error ( stderr ). Use the --debug-stdout option to change this behavior. The --client-protection=off|sign|encrypt option has been added to the common parser. The following options have been renamed in all utilities: --kerberos to --use-kerberos=required|desired|off --krb5-ccache to --use-krb5-ccache= CCACHE --scope to --netbios-scope= SCOPE --use-ccache to --use-winbind-ccache The following options have been removed from all utilities: -e and --encrypt -C removed from --use-winbind-ccache -i removed from --netbios-scope -S and --signing To avoid duplicate options, certain options have been removed or renamed from the following utilities: ndrdump : -l is no longer available for --load-dso net : -l is no longer available for --long sharesec : -V is no longer available for --viewsddl smbcquotas : --user has been renamed to --quota-user nmbd : --log-stdout has been renamed to --debug-stdout smbd : --log-stdout has been renamed to --debug-stdout winbindd : --log-stdout has been renamed to --debug-stdout ( BZ#2062117 ) Compiler barrier changed to static inline function compiler_barrier to avoid name conflict with function pointers This enhancement provides additional features and a patch for a potential data corruption bug. The compiler barrier is now set to a static inline function compiler_barrier . No name conflict occurs with the hardware store barrier, when implementing hardware fencing for non-temporal memcpy variants, while using a function pointer. As a result, RHEL 8.6 now includes pmdk version 1.11.1. (BZ#2009889) 4.11. High availability and clusters The pcmk_delay_base parameter may now take different values for different nodes When configuring a fence device, you now can specify different values for different nodes with the pcmk_delay_base parameter . This allows a single fence device to be used in a two-node cluster, with a different delay for each node. This helps prevent a situation where each node attempts to fence the other node at the same time. To specify different values for different nodes, you map the host names to the delay value for that node using a similar syntax to pcmk_host_map. For example, node1:0;node2:10s would use no delay when fencing node1 and a 10-second delay when fencing node2. ( BZ#1082146 ) Specifying automatic removal of location constraint following resource move When you execute the pcs resource move command, this adds a constraint to the resource to prevent it from running on the node on which it is currently running. A new --autodelete option for the pcs resource move command, previously available as a Technology Preview, is now fully supported. When you specify this option, the location constraint that the command creates is automatically removed once the resource has been moved. 
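For illustration, a minimal sketch of moving a resource with the new option; the resource name my-resource and the node name node2 are placeholders for your own configuration:
# move the resource to node2 and let pcs remove the temporary location constraint after the move completes
pcs resource move my-resource node2 --autodelete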
(BZ#1990784) Detailed Pacemaker status display for internal errors If Pacemaker can not execute a resource or fence agent for some reason, for example the agent is not installed or there has been an internal timeout, the Pacemaker status displays now show a detailed exit reason for the internal error. (BZ#1470834) Support for special characters inside pcmk_host_map values The pcmk_host_map property now supports special characters inside pcmk_host_map values using a backslash (\) in front of the value. For example, you can specify pcmk_host_map="node3:plug\ 1" to include a space in the host alias. ( BZ#1376538 ) pcs suppport for OCF Resource Agent API 1.1 standard The pcs command-line interface now supports OCF 1.1 resource and STONITH agents. An OCF 1.1 agent's metadata must comply with the OCF 1.1 schema. If an OCF 1.1 agent's metadata does not comply with the OCF 1.1 schema, pcs considers the agent invalid and will not create or update a resource of the agent unless the --force option is specified. The pcsd Web UI and pcs commands for listing agents omit OCF 1.1 agents with invalid metadata from the listing. An OCF agent that declares that it implements any OCF version other than 1.1, or does not declare a version at all, is validated against the OCF 1.0 schema. Validation issues are reported as warnings, but for those agents it is not necessary to specify the --force option when creating or updating a resource of the agent. ( BZ#1936833 ) New fencing agent for OpenShift The fence_kubevirt fencing agent is now available for use with RHEL High Availability on Red Hat OpenShift Virtualization. For information on the fence_kubevirt agent, see the fence_kubevirt (8) man page. ( BZ#1977588 ) 4.12. Dynamic programming languages, web and database servers A new module stream: php:8.0 RHEL 8.6 adds PHP 8.0 , which provides a number of bug fixes and enhancements over version 7.4 Notable enhancements include: New named arguments are order-independent and self-documented, and enable you to specify only required parameters. New attributes enable you to use structured metadata with PHP's native syntax. New union types enable you to use native union type declarations that are validated at runtime instead of PHPDoc annotations for a combination of types. Internal functions now more consistently raise an Error exception instead of warnings if parameter validation fails. The Just-In-Time compilation has improved the performance. The Xdebug debugging and productivity extension for PHP has been updated to version 3. This version introduces major changes in functionality and configuration compared to Xdebug 2 . To install the php:8.0 module stream, use: If you want to upgrade from the php:7.4 stream, see Switching to a later stream . For details regarding PHP usage on RHEL 8, see Using the PHP scripting language . (BZ#1978356, BZ#2027285) A new module stream: perl:5.32 RHEL 8.6 introduces Perl 5.32 , which provides a number of bug fixes and enhancements over Perl 5.30 distributed in RHEL 8.3. Notable enhancement include: Perl now supports unicode version 13.0. The qr qoute-like operator has been enhanced. The POSIX::mblen() , mbtowc , and wctomb functions now work on shift state locales and are thread-safe on C99 and above compilers when executed on a platform that has locale thread-safety; the length parameters are now optional. The new experimental isa infix operator tests whether a given object is an instance of a given class or a class derived from it. Alpha assertions are no longer experimental. 
Script runs are no longer experimental. Feature checks are now faster. Perl can now dump compiled patterns before optimization. To upgrade from an earlier perl module stream, see Switching to a later stream . ( BZ#2021471 ) A new package: nginx-mod-devel A new nginx-mod-devel package has been added to the nginx:1.20 module stream. The package provides all necessary files, including RPM macros and nginx source code, for building external dynamic modules for nginx . ( BZ#1991787 ) MariaDB Galera now includes an upstream version of the garbd systemd service and a wrapper script MariaDB 10.3 and MariaDB 10.5 in RHEL 8 include a Red Hat version of garbd systemd service and a wrapper script for the galera package in the /usr/lib/systemd/system/garbd.service and /usr/sbin/garbd-wrapper files, respectively. In addition to the Red Hat version of these files, RHEL 8 now also provides an upstream version. The upstream files are located at /usr/share/doc/galera/garb-systemd and /usr/share/doc/galera/garbd.service . RHEL 9 provides only the upstream version of these files, located at /usr/lib/systemd/system/garbd.service and /usr/sbin/garb-systemd . ( BZ#2042306 , BZ#2042298 , BZ#2050543 , BZ#2050546 ) 4.13. Compilers and development tools New command for capturing glibc optimization data The new ld.so --list-diagnostics command captures data that influences glibc optimization decisions, such as IFUNC selection and glibc-hwcaps configuration, in a single machine-readable file. ( BZ#2023420 ) glibc string functions are now optimized for Fujitsu A64FX With this update, glibc string functions exhibit increased throughput and reduced latency on A64FX CPUs. (BZ#1929928) New UTF-8 locale en_US@ampm with 12-hour clock With this update, you can now use a new UTF-8 locale en_US@ampm with a 12-hour clock. This new locale can be combined with other locales by using the LC_TIME environment variable. ( BZ#2000374 ) New location for libffi 's self-modifying code With this update, libffi 's self-modifying code takes advantage of a feature in the RHEL 8 kernel to create a suitable file independent of any file system. As a result, libffi 's self-modifying code no longer depends on making part of the filesystem insecure. ( BZ#1875340 ) Updated GCC Toolset 11 GCC Toolset 11 is a compiler toolset that provides recent versions of development tools. It is available as an Application Stream in the form of a Software Collection in the AppStream repository. Notable changes introduced with RHEL 8.6 include: The GCC compiler has been updated to version 11.2.1. annobin has been updated to version 10.23. The following tools and versions are provided by GCC Toolset 11: Tool Version GCC 11.2.1 GDB 10.2 Valgrind 3.17.0 SystemTap 4.5 Dyninst 11.0.0 binutils 2.36.1 elfutils 0.185 dwz 0.14 make 4.3 strace 5.13 ltrace 0.7.91 annobin 10.23 To install GCC Toolset 11, run the following command as root: To run a tool from GCC Toolset 11: To run a shell session where tool versions from GCC Toolset 11 override system versions of these tools: For more information about usage, see Using GCC Toolset . The GCC Toolset 11 components are available in the two container images: rhel8/gcc-toolset-11-toolchain , which includes the GCC compiler, the GDB debugger, and the make automation tool. rhel8/gcc-toolset-11-perftools , which includes the performance monitoring tools, such as SystemTap and Valgrind. To pull a container image, run the following command as root: Note that only the GCC Toolset 11 container images are now supported.
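For example, pulling the toolchain image named above might look as follows; this sketch assumes the host is registered and can authenticate to the Red Hat container registry:
# log in to the registry, then pull the GCC Toolset 11 toolchain container image
podman login registry.redhat.io
podman pull registry.redhat.io/rhel8/gcc-toolset-11-toolchain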
Container images of earlier GCC Toolset versions are deprecated. For details regarding the container images, see Using the GCC Toolset container images . ( BZ#1996862 ) GDB disassembler now supports the new arch14 instructions With this update, GDB is able to disassemble new arch14 instructions. (BZ#2012818) LLVM Toolset rebased to version 13.0.1 LLVM Toolset has been upgraded to version 13.0.1. Notable changes include: Clang now supports guaranteed tail calls with statement attributes [[clang::musttail]] in C++ and __attribute__((musttail)) in C. Clang now supports the -Wreserved-identifier warning, which warns developers when using reserved identifiers in their code. Clang's -Wshadow flag now also checks for shadowed structured bindings. Clang's -Wextra now also implies Wnull-pointer-subtraction . (BZ#2001133) Rust Toolset rebased to 1.58.1 The Rust Toolset has been rebased to version 1.58.1. Notable changes include: The Rust compiler now supports the 2021 edition of the language, featuring disjoint capture in closure, IntoIterator for arrays, a new Cargo feature resolver, and more. Added Cargo support for new custom profiles. Cargo deduplicates compiler errors. Added new open range patterns. Added captured identifiers in format strings. For further information, see: Rust 1.55 Rust 1.56 Rust 1.57 Rust 1.58 (BZ#2002883) Go Toolset rebased to version 1.17.7 Go Toolset has been upgraded to version 1.17.7. Notable changes include: Added an option to convert slices to array pointers. Added support for //go:build lines. Improvements to function call performance on amd64. Function arguments are formatted more clearly in stack traces. Functions containing closures can be inlined. Reduced resource consumption in x509 certificate parsing. (BZ#2014088) pcp rebased to 5.3.5 The pcp package has been rebased to version 5.3.5. Notable changes include: Added new pmieconf(1) rules for CPU and disk saturation. Improved stability and scalability of pmproxy(1) service. Improved service latency and robustness of pmlogger(1) service. Added new performance metrics related to electrical power. Added new features in the pcp-htop(1) utility. Added new features in the pcp-atop(1) utility. Updated Nvidia GPU metrics. Added new Linux kernel KVM and networking metrics. Added a new MongoDB metrics agent. Added a new sockets metrics agent and pcp-ss(1) utility. Disabled pmcd(1) and pmproxy(1) Avahi service advertising by default. ( BZ#1991763 ) The grafana package rebased to version 7.5.11 The grafana package has been rebased to version 7.5.11. Notable changes include: Added a new prepare time series transformation for backward compatibility of panels that do not support the new data frame format. ( BZ#1993214 ) grafana-pcp rebased to 3.2.0 The grafana-pcp package has been rebased to version 3.2.0. Notable changes include: Added a new MS SQL server dashboard for PCP Redis. Added visibility of empty histogram buckets in the PCP Vector eBPF/BCC Overview dashboard. Fixed a bug where the metric() function of PCP Redis did not return all metric names. ( BZ#1993149 ) js-d3-flame-graph rebased to 4.0.7 The js-d3-flame-graph package has been rebased to version 4.0.7. Notable changes include: Added new blue and green color scheme. Added functionality to display flame graph context. ( BZ#1993194 ) Power consumption metrics now available in PCP The new pmda-denki Performance Metrics Domain Agent (PMDA) reports metrics related to power consumption. 
Specifically, it reports: Consumption metrics based on Running Average Power Limit (RAPL) readings, available on recent Intel CPUs Consumption metrics based on battery discharge, available on systems which have a battery (BZ#1629455) A new module: log4j:2 A new log4j:2 module is now available in the AppStream repository. This module contains Apache Log4j 2 , which is a Java logging utility and a library enabling you to output log statements to a variety of output targets. Log4j 2 provides significant improvements over Log4j 1 . Notably, Log4j 2 introduces enhancements to the Logback framework and fixes some inherent problems in the Logback architecture. To install the log4j:2 module stream, use: (BZ#1937468) 4.14. Identity Management ansible-freeipa is now available in the AppStream repository with all dependencies Previously in RHEL 8, before installing the ansible-freeipa package, you first had to enable the Ansible repository and install the ansible package. In RHEL 8.6 and RHEL 9, you can install ansible-freeipa without any preliminary steps. Installing ansible-freeipa automatically installs the ansible-core package, a more basic version of ansible , as a dependency. Both ansible-freeipa and ansible-core are available in the rhel-9-for-x86_64-appstream-rpms repository. ansible-freeipa in RHEL 8.6 and RHEL 9 contains all the modules that it contained in RHEL 8. (JIRA:RHELPLAN-100359) IdM now supports the automountlocation , automountmap , and automountkey Ansible modules With this update, the ansible-freeipa package contains the ipaautomountlocation , ipaautomountmap , and ipaautomountkey modules. You can use these modules to configure directories to be mounted automatically for IdM users logged in to IdM clients in an IdM location. Note that currently, only direct maps are supported. (JIRA:RHELPLAN-79161) The support for managing subID ranges is available in the shadow-utils Previously, shadow-utils configured the subID ranges automatically from the /etc/subuid and /etc/subgid files. With this update, the configuration of subID ranges is available in the /etc/nsswitch.conf file by setting a value in the subid field. For more information, see man subuid and man subgid . Also, with this update, an SSSD implementation of the shadow-utils plugin is available, which provides the subID ranges from the IPA server. To use this functionality, add the subid: sss value to the /etc/nsswitch.conf file. This solution might be useful in the containerized environment to facilitate rootless containers. Note that in case the /etc/nsswitch.conf file is configured by the authselect tool, you must follow the procedures described in the authselect documentation. When it is not the case, you can modify the /etc/nsswitch.conf file manually. (JIRA:RHELPLAN-103579) An alternative to the traditional RHEL ansible-freeipa repository: Ansible Automation Hub With this update, you can download ansible-freeipa modules from the Ansible Automation Hub (AAH) instead of downloading them from the standard RHEL repository. By using AAH, you can benefit from the faster updates of the ansible-freeipa modules available in this repository. In AAH, ansible-freeipa roles and modules are distributed in the collection format. Note that you need an Ansible Automation Platform (AAP) subscription to access the content on the AAH portal. You also need ansible version 2.9 or later. The redhat.rhel_idm collection has the same content as the traditional ansible-freeipa package. 
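As an illustration, installing the collection from Ansible Automation Hub could look like the following; this assumes Automation Hub is already configured as a Galaxy server with a valid offline token:
# install the IdM collection in the collection format
ansible-galaxy collection install redhat.rhel_idm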
However, the collection format uses a fully qualified collection name (FQCN) that consists of a namespace and the collection name. For example, the redhat.rhel_idm.ipadnsconfig module corresponds to the ipadnsconfig module in ansible-freeipa provided by a RHEL repository. The combination of a namespace and a collection name ensures that the objects are unique and can be shared without any conflicts. (JIRA:RHELPLAN-103147) ansible-freeipa modules can now be executed remotely on IdM clients Previously, ansible-freeipa modules could only be executed on IdM servers. This required your Ansible administrator to have SSH access to your IdM server, causing a potential security threat. With this update, you can execute ansible-freeipa modules remotely on systems that are IdM clients. As a result, you can manage IdM configuration and entities in a more secure way. To execute ansible-freeipa modules on an IdM client, choose one of the following options: Set the hosts variable of the playbook to an IdM client host. Add the ipa_context: client line to the playbook task that uses the ansible-freeipa module. You can set the ipa_context variable to client on an IdM server, too. However, the server context usually provides better performance. If ipa_context is not set, ansible-freeipa checks if it is running on a server or a client, and sets the context accordingly. Note that executing an ansible-freeipa module with context set to server on an IdM client host raises an error of missing libraries . (JIRA:RHELPLAN-103146) The ipadnsconfig module now requires action: member to exclude a global forwarder With this update, excluding global forwarders in Identity Management (IdM) by using the ansible-freeipa ipadnsconfig module requires using the action: member option in addition to the state: absent option. If you only use state: absent in your playbook without also using action: member , the playbook fails. Consequently, to remove all global forwarders, you must specify all of them individually in the playbook. In contrast, the state: present option does not require action: member . ( BZ#2046325 ) Identity Management now supports SHA384withRSA signing by default With this update, the Certificate Authority (CA) in IdM supports the SHA-384 With RSA Encryption signing algorithm. SHA384withRSA is compliant with the Federal Information Processing Standard (FIPS). ( BZ#1731484 ) SSSD default SSH hashing value is now consistent with the OpenSSH setting The default value of ssh_hash_known_hosts has been changed to false. It is now consistent with the OpenSSH setting, which does not hash host names by default. However, if you need to continue to hash host names, add ssh_hash_known_hosts = True to the [ssh] section of the /etc/sssd/sssd.conf configuration file. ( BZ#2015070 ) samba rebased to version 4.15.5 The samba packages have been upgraded to upstream version 4.15.5, which provides bug fixes and enhancements over the version: Options in Samba utilities have been renamed and removed for a consistent user experience Server multi-channel support is now enabled by default. The SMB2_22 , SMB2_24 , and SMB3_10 dialects, which were only used by Windows technical previews, have been removed. Back up the database files before starting Samba. When the smbd , nmbd , or winbind services start, Samba automatically updates its tdb database files. Note that Red Hat does not support downgrading tdb database files. After updating Samba, verify the /etc/samba/smb.conf file using the testparm utility. 
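A minimal sketch of that pre-update and post-update routine follows; the tdb paths shown are the usual defaults and may differ on your system:
# back up the Samba tdb database files before starting the update
tar -czf /root/samba-tdb-backup.tar.gz /var/lib/samba/*.tdb /var/lib/samba/private/*.tdb
# after the update, verify the configuration
testparm /etc/samba/smb.conf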
For further information about notable changes, read the upstream release notes before updating. ( BZ#2013596 ) Directory Server rebased to version 1.4.3.28 The 389-ds-base packages have been upgraded to upstream version 1.4.3, which provides a number of bug fixes and enhancements over the version: A potential deadlock in replicas has been fixed. The server no longer terminates unexpectedly when the dnaInterval is set to 0 . The performance of connection handling has been improved. Improved performance of targetfilter in access control instructions (ACI). ( BZ#2016014 ) Directory Server now stores memory-mapped files of databases on a tmpfs file system In Directory Server, the nsslapd-db-home-directory parameter defines the location of memory-mapped files of databases. This enhancement changes the default value of the parameter from /var/lib/dirsrv/slapd- instance_name /db/ to /dev/shm/ . As a result, with the internal databases stored on a tmpfs file system, the performance of Directory Server increases. ( BZ#1780842 ) 4.15. Desktop Security classification banners at login and in the desktop session You can now configure classification banners to state the overall security classification level of the system. This is useful for deployments where the user must be aware of the security classification level of the system that they are logged into. The classification banners can appear in the following contexts, depending on your configuration: Within the running session On the lock screen On the login screen The classification banners can take the form of either a notification that you can dismiss, or a permanent banner. For more information, see Displaying the system security classification . ( BZ#1751336 ) 4.16. Graphics infrastructures Intel Alder Lake-P GPUs are now supported This release adds support for the Intel Alder Lake-P CPU microarchitecture with integrated graphics. This includes Intel UHD Graphics and Intel Xe integrated GPUs found with the following CPU models: Intel Core i7-1280P Intel Core i7-1270P Intel Core i7-1260P Intel Core i5-1250P Intel Core i5-1240P Intel Core i3-1220P Support for Alder Lake-P graphics is disabled by default. To enable it, add the following option to the kernel command line: Replace PCI_ID with either the PCI device ID of your Intel GPU, or with the * character to enable support for all alpha-quality hardware that uses the i915 driver. (BZ#1964761) 4.17. The web console Smart card authentication for sudo and SSH from the web console Previously, it was not possible to use smart card authentication to obtain sudo privileges or use SSH in the web console. With this update, Identity Management users can use a smart card to gain sudo privileges or to connect to a different host with SSH. Note It is only possible to use one smart card to authenticate and gain sudo privileges. Using a separate smart card for sudo is not supported. (JIRA:RHELPLAN-95126) RHEL web console provides Insights registration by default With this update, when you use the Red Hat Enterprise Linux web console to register a RHEL system, the Connect this system to Red Hat Insights. check box is checked by default. If you do not want to connect to the Insights service, uncheck the box. ( BZ#2049441 ) Cockpit now supports using an existing TLS certificate With this enhancement, the certificate does not have strict file permission requirements any more (such as root:cockpit-ws 0640 ), and thus it can be shared with other services. (JIRA:RHELPLAN-103855) 4.18. 
Red Hat Enterprise Linux system roles The Firewall RHEL system role has been added in RHEL 8 The rhel-system-roles.firewall RHEL system role was added to the rhel-system-roles package. As a result, administrators can automate their firewall settings for managed nodes. (BZ#1854988) Full Support for HA Cluster RHEL system role The High Availability Cluster (HA Cluster) role, previously available as a Technology Preview, is now fully supported. The following notable configurations are available: Configuring fence devices, resources, resource groups, and resource clones including meta attributes and resource operations Configuring resource location constraints, resource colocation constraints, resource order constraints, and resource ticket constraints Configuring cluster properties Configuring cluster nodes, custom cluster names and node names Configuring multi-link clusters Configuring whether clusters start automatically on boot Running the role removes any configuration not supported by the role or not specified when running the role. The HA Cluster system role does not currently support SBD. ( BZ#1893743 ) The Networking system role now supports OWE Opportunistic Wireless Encryption (OWE) is a mode of opportunistic security for Wi-Fi networks that provides encryption of the wireless medium but no authentication, such as public hot spots. OWE uses encryption between Wi-Fi clients and access points, protecting them from sniffing attacks. With this enhancement, the Networking RHEL system role supports OWE. As a result, administrators can now use the Networking system role to configure connections to Wi-Fi networks which use OWE. ( BZ#1993379 ) The Networking system role now supports SAE In Wi-Fi protected access version 3 (WPA3) networks, the simultaneous authentication of equals (SAE) method ensures that the encryption key is not transmitted. With this enhancement, the Networking RHEL system role supports SAE. As a result, administrators can now use the Networking system role to configure connections to Wi-Fi networks, which use WPA-SAE. ( BZ#1993311 ) The Cockpit RHEL system role is now supported With this enhancement, you can install and configure the web console in your system. Consequently, you can manage web console in an automated manner. ( BZ#2021661 ) Add support for raid_level for LVM volumes The Storage RHEL system role can now specify the raid_level parameter for LVM volumes. As a result, LVM volumes can be grouped into RAIDs using the lvmraid feature. ( BZ#2016514 ) The NBDE client system role supports systems with static IP addresses Previously, restarting a system with a static IP address and configured with the NBDE client system role would change the system's IP address. With this change, systems with static IP addresses are supported by the NBDE client system role, and their IP addresses do not change after a reboot. ( BZ#1985022 ) Support for cached volumes is available in the Storage system role Storage RHEL system role can now create and manage cached LVM logical volumes. LVM cache can be used to improve performance of slower logical volumes by temporarily storing subsets of an LV's data on a smaller, faster device, for example an SSD. ( BZ#2016511 ) Support to add Elasticsearch username and password for authentication from rsyslog This update adds the Elasticsearch username and password parameters to the logging system role, to enable the rsyslog to authenticate to Elasticsearch using username and password. 
( BZ#2010327 ) Ansible Core support for the RHEL system roles As of RHEL 8.6 GA release, Ansible Core is provided, with a limited scope of support, to enable RHEL supported automation use cases. Ansible Core replaces Ansible Engine which was previously provided in a separate repository. Ansible Core is available in the AppStream repository for RHEL. For more details on the supported use cases, see Scope of support for the Ansible Core package included in the RHEL 9 and RHEL 8.6 and later AppStream repositories . Users must manually migrate their systems from Ansible Engine to Ansible Core. For details on that, see Using Ansible in RHEL 8.6 and later . ( BZ#2012316 ) The network RHEL system role now supports both named and numeric routing tables in static routes. This update adds support for both the named and numeric routing tables in static routes, which is a prerequisite for supporting the policy routing (for example, source routing). The users can define policy routing rules later to instruct the system which table to use to determine the correct route. As a result, after the user specifies the table attribute in the route , the system can add routes into the routing table. ( BZ#2031521 ) The Certificate role consistently uses "Ansible_managed" comment in its hook scripts With this enhancement, the Certificate role generates pre-scripts and post-scripts to support providers, to which the role inserts the "Ansible managed" comment using the Ansible standard "ansible_managed" variable: /etc/certmonger/pre-scripts/script_name.sh /etc/certmonger/post-scripts/script_name.sh The comment indicates that the script files should not be directly edited because the Certificate role can overwrite the file. As a result, the configuration files contain a declaration stating that the configuration files are managed by Ansible. ( BZ#2054364 ) The Terminal session recording system role uses the "Ansible managed" comment in its managed configuration files The Terminal session recording role generates 2 configuration files: /etc/sssd/conf.d/sssd-session-recording.conf /etc/tlog/tlog-rec-session.conf With this update, the Terminal session recording role inserts the Ansible managed comment into the configuration files, using the standard Ansible variable ansible_managed . The comment indicates that the configuration files should not be directly edited because the Terminal session recording role can overwrite the file. As a result, the configuration files contain a declaration stating that the configuration files are managed by Ansible. ( BZ#2054363 ) Microsoft SQL system role now supports customized repository for disconnected or Satellite subscriptions Previously, users in disconnected environments that needed to pull packages from a custom server or Satellite users that needed to point to Satellite or Capsule had no support from Microsoft SQL Role . This update fixes it, by enabling users to provide a customized URL to use for RPM key, client and server mssql repositories. If no URL is provided, the mssql role uses the official Microsoft servers to download RPMs. ( BZ#2038256 ) The Microsoft SQL system role consistently uses "Ansible_managed" comment in its managed configuration files The mssql role generates the following configuration file: /var/opt/mssql/mssql.conf With this update, the Microsoft SQL role inserts the "Ansible managed" comment to the configuration files, using the Ansible standard ansible_managed variable. 
The comment indicates that the configuration files should not be directly edited because the mssql role can overwrite the file. As a result, the configuration files contain a declaration stating that the configuration files are managed by Ansible. ( BZ#2057651 ) Support to all bonding options added to the Networking system role This update provides support to all bonding options to the Networking RHEL system role. Consequently, it enables you to flexibly control the network transmission over the bonded interface. As a result, you can control the network transmission over the bonded interface by specifying several options to that interface. ( BZ#2008931 ) NetworkManager supports specifying a network card using its PCI address Previously, during setting a connection profile, NetworkManager was only allowed to specify a network card using either its name or MAC address. In this case, the device name is not stable and the MAC address requires inventory to maintain record of used MAC addresses. Now, you can specify a network card based on its PCI address in a connection profile. (BZ#1695634) A new option auto_gateway controls the default route behavior Previously, the DEFROUTE parameter was not configurable with configuration files but only manually configurable by naming every route. This update adds a new auto_gateway option in the ip configuration section for connections, with which you can control the default route behavior. You can configure auto_gateway in the following ways: If set to true , default gateway settings apply to a default route. If set to false , the default route is removed. If unspecified, the network role uses the default behavior of the selected network_provider . ( BZ#1897565 ) The VPN role consistently uses Ansible_managed comment in its managed configuration files The VPN role generates the following configuration file: /etc/ipsec.d/mesh.conf /etc/ipsec.d/policies/clear /etc/ipsec.d/policies/private /etc/ipsec.d/policies/private-or-clear With this update, the VPN role inserts the Ansible managed comment to the configuration files, using the Ansible standard ansible_managed variable. The comment indicates that the configuration files should not be directly edited because the VPN role can overwrite the file. As a result, the configuration files contain a declaration stating that the configuration files are managed by Ansible. ( BZ#2054365 ) New source parameter in the Firewall system role You can now use the source parameter of the Firewall system role to add or remove sources in the firewall configuration. ( BZ#1932678 ) The Networking system role now uses the 'Ansible managed' comment in its managed configuration files When using the initscripts provider, the Networking system role now generates commented ifcfg files in the /etc/sysconfig/network-scripts directory. The Networking role inserts the Ansible managed comment using the Ansible standard ansible_managed variable. The comment declares that an ifcfg file is managed by Ansible, and indicates that the ifcfg file should not be edited directly as the Networking role will overwrite the file. The Ansible managed comment is added when the provider is initscripts . When using the Networking role with the nm (NetworkManager) provider, the ifcfg file is managed by NetworkManager and not by the Networking role. ( BZ#2057656 ) The Firewall system role now supports setting the firewall default zone You can now set a default firewall zone in the Firewall system role. 
Zones represent a concept to manage incoming traffic more transparently. The zones are connected to networking interfaces or assigned a range of source addresses. Firewall rules for each zone are managed independently enabling the administrator to define complex firewall settings and apply them to the traffic. This feature allows setting the default zone used as the default zone to assign interfaces to, same as firewall-cmd --set-default-zone zone-name . ( BZ#2022458 ) The Metrics system role now generates files with the proper ansible_managed comment in the header Previously, the Metrics role did not add an ansible_managed header comment to files generated by the role. With this fix, the Metrics role adds the ansible_managed header comment to files it generates, and as a result, users can easily identify files generated by the Metrics role. ( BZ#2057645 ) The Postfix system role now generates files with the proper ansible_managed comment in the header Previously, the Postfix role did not add an ansible_managed header comment to files generated by the role. With this fix, the Postfix role adds the ansible_managed header comment to files it generates, and as a result, users can easily identify files generated by the Postfix role. ( BZ#2057661 ) 4.19. Virtualization Mediated devices are now supported by virtualization CLIs on IBM Z Using virt-install or virt-xml , you can now attach mediated devices to your virtual machines (VMs), such as vfio-ap and vfio-ccw. This for example enables more flexible management of DASD storage devices and cryptographic coprocessors on IBM Z hosts. In addition, using virt-install , you can create a VM that uses an existing DASD mediated device as its primary disk. For instructions to do so, see the Configuring and Managing Virtualization in RHEL 8 guide. (BZ#1995125) Virtualization support for Intel Atom P59 series processors With this update, virtualization on RHEL 8 adds support for the Intel Atom P59 series processors, formerly known as Snow Ridge. As a result, virtual machines hosted on RHEL 8 can now use the Snowridge CPU model and utilise new features that the processors provide. (BZ#1662007) ESXi hypervisor and SEV-ES is now fully supported You can now enable the AMD Secure Encrypted Virtualization-Encrypted State (SEV-ES) to secure RHEL virtual machines (VMs) on VMware's ESXi hypervisor, versions 7.0.2 and later. This feature was previously introduced in RHEL 8.4 as a Technology Preview. It is now fully supported. (BZ#1904496) Windows 11 and Windows Server 2022 guests are supported RHEL 8 now supports using Windows 11 and Windows Server 2022 as the guest operating systems on KVM virtual machines. (BZ#2036863, BZ#2004162) 4.20. RHEL in cloud environments RHEL 8 virtual machines are now supported on certain ARM64 hosts on Azure Virtual machines that use RHEL 8.6 or later as the guest operating system are now supported on Microsoft Azure hypervisors running on Ampere Altra ARM-based processors. (BZ#1949614) New SSH module for cloud-init With this update, an SSH module has been added to the cloud-init utility, which automatically generates host keys during instance creation. Note that with this change, the default cloud-init configuration has been updated. Therefore, if you had a local modification, make sure the /etc/cloud/cloud.cfg contains "ssh_genkeytypes: ['rsa', 'ecdsa', 'ed25519']" line. Otherwise, cloud-init creates an image which fails to start the sshd service. 
If this occurs, do the following to work around the problem: Make sure the /etc/cloud/cloud.cfg file contains the following line: Check whether /etc/ssh/ssh_host_* files exist in the instance. If the /etc/ssh/ssh_host_* files do not exist, use the following command to generate host keys: Restart the sshd service: (BZ#2115791) cloud-init supports user data on Microsoft Azure The --user-data option has been introduced for the cloud-init utility. Using this option, you can pass scripts and metadata from the Azure Instance Metadata Service (IMDS) when setting up a RHEL 8 virtual machine on Azure. (BZ#2023940) cloud-init supports the VMware GuestInfo datasource With this update, the cloud-init utility is able to read the datasource for VMware guestinfo data. As a result, using cloud-init to set up RHEL 8 virtual machines on VMware vSphere is now more efficient and reliable. (BZ#2026587) 4.21. Supportability A new package: rig RHEL 8 introduces the rig package, which provides the rig system monitoring and event handling utility. The rig utility is designed to assist system administrators and support engineers in diagnostic data collection for issues that are seemingly random in their occurrence, or occur at inopportune times for human intervention. (BZ#1888705) sos report now offers an estimate mode run This sos report update adds the --estimate-only option with which you can approximate the disk space required for collecting an sos report from a RHEL server. Running the sos report --estimate-only command: executes a dry run of sos report mimics all plugins consecutively and estimates their disk size. Note that the final disk space estimation is very approximate. Therefore, it is recommended to double the estimated value. (BZ#1873185) Red Hat Support Tool now uses Hydra APIs The Red Hat Support Tool has moved from the deprecated Strata APIs to the new Hydra APIs. This has no impact on functionality. However, if you have configured the firewall to allow only the Strata API /rs/ path explicitly, update it to /support/ to ensure the firewall works correctly. In addition, due to this change, you can now download files greater than 5 GB when using the Red Hat Support Tool . ( BZ#2018194 ) Red Hat Support Tool now supports Red Hat Secure FTP When using Red Hat Support Tool , you can now upload files to the case by the Red Hat Secure FTP . Red Hat Secure FTP is a more secure replacement of the deprecated Dropbox utility that Red Hat Support Tool used to support in its earlier versions. ( BZ#2018195 ) Red Hat Support Tool now supports S3 APIs The Red Hat Support Tool now uses S3 APIs to upload files to the Red Hat Technical Support case. As a result, users can upload a file greater than 1 GB to the case directly. (BZ#1767195) 4.22. Containers container-tools:4.0 stable stream is now available The container-tools:4.0 stable module stream, which contains the Podman, Buildah, Skopeo, and runc tools is now available. This update provides bug fixes and enhancements over the version. For instructions on how to upgrade from an earlier stream, see Switching to a later stream . (JIRA:RHELPLAN-100175) The NFS storage is now available You can now use the NFS file system as a backend storage for containers and images if your file system has xattr support. (JIRA:RHELPLAN-75169) The container-tools:rhel8 module has been updated The container-tools:rhel8 module, which contains the Podman, Buildah, Skopeo, crun, and runc tools is now available. This update provides a list of bug fixes and enhancements over the version. 
Notable changes include: Due to the changes in the network stack, containers created by Podman v3 and earlier will not be usable in v4.0 The native overlay file system is usable as a rootless user Support for NFS storage within a container Downgrading to earlier versions of Podman is not supported unless all containers are destroyed and recreated Podman tool has been upgraded to version 4.0, for further information about notable changes, see the upstream release notes . (JIRA:RHELPLAN-100174) Universal Base Images are now available on Docker Hub Previously, Universal Base Images were only available from the Red Hat container catalog. With this enhancement, Universal Base Images are also available from Docker Hub as a Verified Publisher image . (JIRA:RHELPLAN-101137) A podman container image is now available The registry.redhat.io/rhel8/podman container image, previously available as a Technology Preview, is now fully supported. The registry.redhat.io/rhel8/podman container image is a containerized implementation of the podman package. The podman tool manages containers and images, volumes mounted into those containers, and pods made of groups of containers. (JIRA:RHELPLAN-57941) Podman now supports auto-building and auto-running pods using a YAML file The podman play kube command automatically builds and runs multiple pods with multiple containers in the pods using a YAML file. (JIRA:RHELPLAN-108830) Podman now has ability to source subUID and subGID ranges from IdM The subUID and subGID ranges can now be managed by IdM. Instead of deploying the same /etc/subuid and /etc/subgid files onto every host, you can now define range in a single central storage. You have to modify the /etc/nsswitch.conf file and add sss to the services map line: services: files sss . For more details, see Managing subID ranges manually in IdM documentation. (JIRA:RHELPLAN-101133) The openssl container image is now available The openssl image provides an openssl command-line tool for using the various functions of the OpenSSL crypto library. Using the OpenSSL library, you can generate private keys, create certificate signing requests (CSRs), and display certificate information. The openssl container image is available in these repositories: registry.redhat.io/rhel8/openssl registry.access.redhat.com/ubi8/openssl (JIRA:RHELPLAN-101138) Netavark network stack is now available The new network stack available starting with Podman 4.1.1-7 consists of two tools, the Netavark network setup tool and the Aardvark DNS server. The Netavark stack, previously available as a Technology Preview, is with the release of the RHBA-2022:7127 advisory fully supported. This network stack has the following capabilities: Configuration of container networks using the JSON configuration file Creating, managing, and removing network interfaces, including bridge and MACVLAN interfaces Configuring firewall settings, such as network address translation (NAT) and port mapping rules IPv4 and IPv6 Improved capability for containers in multiple networks Container DNS resolution using the aardvark-dns project Note You have to use the same version of Netavark stack and the Aardvark authoritative DNS server. (JIRA:RHELPLAN-137623) Podman now supports the --health-on-failure option With the release of the RHBA-2022:7127 advisory. the podman run and podman create commands now support the --health-on-failure option to determine the actions to be performed when the status of a container becomes unhealthy. 
The --health-on-failure option supports four actions: none : Take no action, this is the default action. kill : Kill the container. restart : Restart the container. stop : Stop the container. Note Do not combine the restart action with the --restart option. When running inside of a systemd unit, consider using the kill or stop action instead to make use of systemd's restart policy. ( BZ#2130912 )
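For illustration, a hedged example of the new option; the <image> reference and the health check command are placeholders, not part of the original note:
# restart the container automatically when its health check starts failing
podman run -d --name web --health-cmd "curl -f http://localhost:8080/ || exit 1" --health-on-failure=restart <image>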
[ "semodule -l --checksum | grep localmodule localmodule sha256:db002f64ddfa3983257b42b54da7b182c9b2e476f47880ae3494f9099e1a42bd /usr/libexec/selinux/hll/pp localmodule.pp | sha256sum db002f64ddfa3983257b42b54da7b182c9b2e476f47880ae3494f9099e1a42bd -", "nmcli connection modify ovs-iface0 ovs-dpdk.nrxq 2", "[...] strace --secontext=full,mismatch -e statx stat /home/user/file statx(AT_FDCWD, \"/home/user/file\" [system_u:object_r:user_home_t:s0!!unconfined_u:object_r:user_home_t:s0], strace --secontext=mismatch -e statx stat /home/user/file statx(AT_FDCWD, \"/home/user/file\" [user_home_t:s0],", "yum module install php:8.0", "yum install gcc-toolset-11", "scl enable gcc-toolset-11 tool", "scl enable gcc-toolset-11 bash", "podman pull registry.redhat.io/<image_name>", "yum module install log4j:2", "i915.force_probe= PCI_ID", "ssh_genkeytypes: ['rsa', 'ecdsa', 'ed25519']", "cloud-init single --name cc_ssh", "systemctl restart sshd" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/8.6_release_notes/new-features
Chapter 14. Scaling clusters by adding or removing brokers
Chapter 14. Scaling clusters by adding or removing brokers Scaling Kafka clusters by adding brokers can increase the performance and reliability of the cluster. Adding more brokers increases available resources, allowing the cluster to handle larger workloads and process more messages. It can also improve fault tolerance by providing more replicas and backups. Conversely, removing underutilized brokers can reduce resource consumption and improve efficiency. Scaling must be done carefully to avoid disruption or data loss. By redistributing partitions across all brokers in the cluster, the resource utilization of each broker is reduced, which can increase the overall throughput of the cluster. Note To increase the throughput of a Kafka topic, you can increase the number of partitions for that topic. This allows the load of the topic to be shared between different brokers in the cluster. However, if every broker is constrained by a specific resource (such as I/O), adding more partitions will not increase the throughput. In this case, you need to add more brokers to the cluster. Adding brokers when running a multi-node Kafka cluster affects the number of brokers in the cluster that act as replicas. The actual replication factor for topics is determined by settings for the default.replication.factor and min.insync.replicas , and the number of available brokers. For example, a replication factor of 3 means that each partition of a topic is replicated across three brokers, ensuring fault tolerance in the event of a broker failure. Example replica configuration default.replication.factor = 3 min.insync.replicas = 2 When you add or remove brokers, Kafka does not automatically reassign partitions. The best way to do this is using Cruise Control. You can use Cruise Control's add_broker and remove_broker modes when scaling a cluster up or down. Use the add_broker mode after scaling up a Kafka cluster to move partition replicas from existing brokers to the newly added brokers. Use the remove_broker mode before scaling down a Kafka cluster to move partition replicas off the brokers that are going to be removed. 14.1. Scaling controller clusters dynamically Dynamic controller quorums support scaling without requiring system downtime. Dynamic scaling is useful not only for adding or removing controllers, but supports the following: Replacing controllers because of hardware failure Migrating clusters onto new machines Moving nodes from dedicated controller roles to combined roles or vice versa A dynamic quorum is specified in the controller configuration using the controller.quorum.bootstrap.servers property to list host:port endpoints for each controller. Only one controller can be added or removed from the cluster at a time, so complex quorum changes are implemented as a series of single changes. New controllers join as observers , replicating the metadata log but not counting towards the quorum. When caught up with the active controller, the new controller is eligible to join the quorum. When removing controllers, it's recommended that they are first shutdown to avoid unnecessary leader elections. If the removed controller is the active one, it will step down from the quorum only after the new quorum is confirmed. However, it will not include itself when calculating the last commit position in the __cluster_metadata log. In a dynamic quorum, the active Kraft version is at 1 or above for all cluster nodes. 
Find the active KRaft version using the kafka-features.sh tool: ./bin/kafka-features.sh --bootstrap-controller localhost:9093 describe | grep kraft.version In this example output, the active version ( FinalizedVersionLevel ) in the Kafka cluster is 1: Feature: kraft.version SupportedMinVersion: 0 SupportedMaxVersion: 1 FinalizedVersionLevel: 1 Epoch: 5 If the kraft.version property shows an active version level of 0 or is absent, you are using a static quorum. If it is 1 or above, you are using a dynamic quorum. Note It's possible to configure a static quorum, but it is not a recommended approach as it requires downtime when scaling. 14.2. Adding new controllers To add a new controller to an existing dynamic controller quorum in Kafka, create a new controller, monitor its replication status, and then integrate it into the cluster. Prerequisites Streams for Apache Kafka is installed on the host , and the configuration files and tools are available. This procedure uses the kafka-storage.sh , kafka-server-start.sh and kafka-metadata-quorum.sh tools. Administrative access to the controller nodes. Procedure Configure a new controller node using the controller.properties file. At a minimum, the new controller requires the following configuration: A unique node ID Listener name used by the controller quorum A quorum of controllers Example controller configuration process.roles=controller node.id=1 listeners=CONTROLLER://0.0.0.0:9092 controller.listener.names=CONTROLLER listener.security.protocol.map=CONTROLLER:PLAINTEXT controller.quorum.bootstrap.servers=localhost:9090, localhost:9091, localhost:9092 The controller.quorum.bootstrap.servers configuration includes the host and port of the new controller and each other controller already present in the cluster. Update controller.quorum.bootstrap.servers in the configuration of each node in the cluster with the host and port of the new controller. Set the log directory ID for the new controller: ./bin/kafka-storage.sh format --cluster-id <cluster_id> --config server.properties --no-initial-controllers By using the no-initial-controllers option, the controller is initialized without it joining the controller quorum. Start the controller node ./bin/kafka-server-start.sh ./config/kraft/controller.properties Monitor the replication progress of the new controller: ./bin/kafka-metadata-quorum.sh --bootstrap-server localhost:9092 --replication Wait until the new controller has caught up with the active controller before proceeding. Add the new controller to the controller quorum: ./bin/kafka-metadata-quorum.sh --command-config controller.properties --bootstrap-controller localhost:9092 add-controller 14.3. Removing controllers To remove a controller from an existing dynamic controller quorum in Kafka, use the kafka-metadata-quorum.sh tool. Prerequisites Streams for Apache Kafka is installed on the host , and the configuration files and tools are available. This procedure uses the kafka-server-stop.sh and kafka-metadata-quorum.sh tools. Administrative access to the controller nodes. Procedure Stop the controller node ./bin/kafka-server-stop.sh Locate the ID of the controller and its directory ID to be able to remove it from the controller quorum. You can find this information in the meta.properties file of the metadata log . 
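For example, reading both values from the controller's metadata log directory might look like this; <metadata_log_dir> stands for the directory configured in metadata.log.dir or log.dirs and is not a fixed path:
# print the node ID and directory ID recorded for this controller
grep -E '^(node\.id|directory\.id)=' <metadata_log_dir>/meta.properties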
Remove the controller from the controller quorum: ./bin/kafka-metadata-quorum.sh --bootstrap-controller localhost:9092 remove-controller --controller-id <id> --controller-directory-id <directory_id> Update controller.quorum.bootstrap.servers in the configuration of each node in the cluster to remove the host and port of the controller removed from the controller quorum. 14.4. Unregistering nodes after scale-down operations After removing a node from a Kafka cluster, use the kafka-cluster.sh script to unregister the node from the cluster metadata. Failing to unregister removed nodes leads to stale metadata, which causes operational issues. Prerequisites Before unregistering a node, ensure the following tasks are completed: Reassign the partitions from the node you plan to remove to the remaining brokers using the Cruise Control remove-nodes operation. Update the cluster configuration, if necessary, to adjust the replication factor for topics ( default.replication.factor ) and the minimum required number of in-sync replica acknowledgements ( min.insync.replicas ). Stop the Kafka broker service on the node and remove the node from the cluster. Procedure Unregister the removed node from the cluster: ./bin/kafka-cluster.sh unregister \ --bootstrap-server <broker_host>:<port> \ --id <node_id_number> Verify the current state of the cluster by describing the topics: ./bin/kafka-topics.sh \ --bootstrap-server <broker_host>:<port> \ --describe
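As an additional check after unregistering a node, you can list only the partitions that are still under-replicated; an empty result indicates that the reassignment completed cleanly. This is a sketch that reuses the placeholder bootstrap address from the steps above:
./bin/kafka-topics.sh \
--bootstrap-server <broker_host>:<port> \
--describe --under-replicated-partitions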
[ "default.replication.factor = 3 min.insync.replicas = 2", "./bin/kafka-features.sh --bootstrap-controller localhost:9093 describe | grep kraft.version", "Feature: kraft.version SupportedMinVersion: 0 SupportedMaxVersion: 1 FinalizedVersionLevel: 1 Epoch: 5", "process.roles=controller node.id=1 listeners=CONTROLLER://0.0.0.0:9092 controller.listener.names=CONTROLLER listener.security.protocol.map=CONTROLLER:PLAINTEXT controller.quorum.bootstrap.servers=localhost:9090, localhost:9091, localhost:9092", "./bin/kafka-storage.sh format --cluster-id <cluster_id> --config server.properties --no-initial-controllers", "./bin/kafka-server-start.sh ./config/kraft/controller.properties", "./bin/kafka-metadata-quorum.sh --bootstrap-server localhost:9092 --replication", "./bin/kafka-metadata-quorum.sh --command-config controller.properties --bootstrap-controller localhost:9092 add-controller", "./bin/kafka-server-stop.sh", "./bin/kafka-metadata-quorum.sh --bootstrap-controller localhost:9092 remove-controller --controller-id <id> --controller-directory-id <directory_id>", "./bin/kafka-cluster.sh unregister --bootstrap-server <broker_host>:<port> --id <node_id_number>", "./bin/kafka-topics.sh --bootstrap-server <broker_host>:<port> --describe" ]
https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.9/html/using_streams_for_apache_kafka_on_rhel_in_kraft_mode/con-scaling-kafka-clusters-str
Chapter 7. MachineConfigPool [machineconfiguration.openshift.io/v1]
Chapter 7. MachineConfigPool [machineconfiguration.openshift.io/v1] Description MachineConfigPool describes a pool of MachineConfigs. Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). Type object Required spec 7.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata spec object MachineConfigPoolSpec is the spec for MachineConfigPool resource. status object MachineConfigPoolStatus is the status for MachineConfigPool resource. 7.1.1. .spec Description MachineConfigPoolSpec is the spec for MachineConfigPool resource. Type object Property Type Description configuration object The targeted MachineConfig object for the machine config pool. machineConfigSelector object machineConfigSelector specifies a label selector for MachineConfigs. Refer https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/ on how label and selectors work. maxUnavailable integer-or-string maxUnavailable defines either an integer number or percentage of nodes in the pool that can go Unavailable during an update. This includes nodes Unavailable for any reason, including user initiated cordons, failing nodes, etc. The default value is 1. A value larger than 1 will mean multiple nodes going unavailable during the update, which may affect your workload stress on the remaining nodes. You cannot set this value to 0 to stop updates (it will default back to 1); to stop updates, use the 'paused' property instead. Drain will respect Pod Disruption Budgets (PDBs) such as etcd quorum guards, even if maxUnavailable is greater than one. nodeSelector object nodeSelector specifies a label selector for Machines paused boolean paused specifies whether or not changes to this machine config pool should be stopped. This includes generating new desiredMachineConfig and update of machines. 7.1.2. .spec.configuration Description The targeted MachineConfig object for the machine config pool. Type object Property Type Description apiVersion string API version of the referent. fieldPath string If referring to a piece of an object instead of an entire object, this string should contain a valid JSON/Go field access statement, such as desiredState.manifest.containers[2]. For example, if the object reference is to a container within a pod, this would take on a value like: "spec.containers{name}" (where "name" refers to the name of the container that triggered the event) or if no container name is specified "spec.containers[2]" (container with index 2 in this pod). This syntax is chosen only to have some well-defined way of referencing a part of an object. TODO: this design is not final and this field is subject to change in the future. kind string Kind of the referent. 
More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names namespace string Namespace of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/namespaces/ resourceVersion string Specific resourceVersion to which this reference is made, if any. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#concurrency-control-and-consistency source array source is the list of MachineConfig objects that were used to generate the single MachineConfig object specified in content . source[] object ObjectReference contains enough information to let you inspect or modify the referred object. --- New uses of this type are discouraged because of difficulty describing its usage when embedded in APIs. 1. Ignored fields. It includes many fields which are not generally honored. For instance, ResourceVersion and FieldPath are both very rarely valid in actual usage. 2. Invalid usage help. It is impossible to add specific help for individual usage. In most embedded usages, there are particular restrictions like, "must refer only to types A and B" or "UID not honored" or "name must be restricted". Those cannot be well described when embedded. 3. Inconsistent validation. Because the usages are different, the validation rules are different by usage, which makes it hard for users to predict what will happen. 4. The fields are both imprecise and overly precise. Kind is not a precise mapping to a URL. This can produce ambiguity during interpretation and require a REST mapping. In most cases, the dependency is on the group,resource tuple and the version of the actual struct is irrelevant. 5. We cannot easily change it. Because this type is embedded in many locations, updates to this type will affect numerous schemas. Don't make new APIs embed an underspecified API type they do not control. Instead of using this type, create a locally provided and used type that is well-focused on your reference. For example, ServiceReferences for admission registration: https://github.com/kubernetes/api/blob/release-1.17/admissionregistration/v1/types.go#L533 . uid string UID of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#uids 7.1.3. .spec.configuration.source Description source is the list of MachineConfig objects that were used to generate the single MachineConfig object specified in content . Type array 7.1.4. .spec.configuration.source[] Description ObjectReference contains enough information to let you inspect or modify the referred object. --- New uses of this type are discouraged because of difficulty describing its usage when embedded in APIs. 1. Ignored fields. It includes many fields which are not generally honored. For instance, ResourceVersion and FieldPath are both very rarely valid in actual usage. 2. Invalid usage help. It is impossible to add specific help for individual usage. In most embedded usages, there are particular restrictions like, "must refer only to types A and B" or "UID not honored" or "name must be restricted". Those cannot be well described when embedded. 3. Inconsistent validation. Because the usages are different, the validation rules are different by usage, which makes it hard for users to predict what will happen. 4. The fields are both imprecise and overly precise. 
Kind is not a precise mapping to a URL. This can produce ambiguity during interpretation and require a REST mapping. In most cases, the dependency is on the group,resource tuple and the version of the actual struct is irrelevant. 5. We cannot easily change it. Because this type is embedded in many locations, updates to this type will affect numerous schemas. Don't make new APIs embed an underspecified API type they do not control. Instead of using this type, create a locally provided and used type that is well-focused on your reference. For example, ServiceReferences for admission registration: https://github.com/kubernetes/api/blob/release-1.17/admissionregistration/v1/types.go#L533 . Type object Property Type Description apiVersion string API version of the referent. fieldPath string If referring to a piece of an object instead of an entire object, this string should contain a valid JSON/Go field access statement, such as desiredState.manifest.containers[2]. For example, if the object reference is to a container within a pod, this would take on a value like: "spec.containers{name}" (where "name" refers to the name of the container that triggered the event) or if no container name is specified "spec.containers[2]" (container with index 2 in this pod). This syntax is chosen only to have some well-defined way of referencing a part of an object. TODO: this design is not final and this field is subject to change in the future. kind string Kind of the referent. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names namespace string Namespace of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/namespaces/ resourceVersion string Specific resourceVersion to which this reference is made, if any. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#concurrency-control-and-consistency uid string UID of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#uids 7.1.5. .spec.machineConfigSelector Description machineConfigSelector specifies a label selector for MachineConfigs. Refer https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/ on how label and selectors work. Type object Property Type Description matchExpressions array matchExpressions is a list of label selector requirements. The requirements are ANDed. matchExpressions[] object A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. matchLabels object (string) matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed. 7.1.6. .spec.machineConfigSelector.matchExpressions Description matchExpressions is a list of label selector requirements. The requirements are ANDed. Type array 7.1.7. .spec.machineConfigSelector.matchExpressions[] Description A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. Type object Required key operator Property Type Description key string key is the label key that the selector applies to. operator string operator represents a key's relationship to a set of values. 
Valid operators are In, NotIn, Exists and DoesNotExist. values array (string) values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch. 7.1.8. .spec.nodeSelector Description nodeSelector specifies a label selector for Machines Type object Property Type Description matchExpressions array matchExpressions is a list of label selector requirements. The requirements are ANDed. matchExpressions[] object A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. matchLabels object (string) matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed. 7.1.9. .spec.nodeSelector.matchExpressions Description matchExpressions is a list of label selector requirements. The requirements are ANDed. Type array 7.1.10. .spec.nodeSelector.matchExpressions[] Description A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. Type object Required key operator Property Type Description key string key is the label key that the selector applies to. operator string operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist. values array (string) values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch. 7.1.11. .status Description MachineConfigPoolStatus is the status for MachineConfigPool resource. Type object Property Type Description certExpirys array certExpirys keeps track of important certificate expiration data certExpirys[] object ceryExpiry contains the bundle name and the expiry date conditions array conditions represents the latest available observations of current state. conditions[] object MachineConfigPoolCondition contains condition information for an MachineConfigPool. configuration object configuration represents the current MachineConfig object for the machine config pool. degradedMachineCount integer degradedMachineCount represents the total number of machines marked degraded (or unreconcilable). A node is marked degraded if applying a configuration failed.. machineCount integer machineCount represents the total number of machines in the machine config pool. observedGeneration integer observedGeneration represents the generation observed by the controller. readyMachineCount integer readyMachineCount represents the total number of ready machines targeted by the pool. unavailableMachineCount integer unavailableMachineCount represents the total number of unavailable (non-ready) machines targeted by the pool. A node is marked unavailable if it is in updating state or NodeReady condition is false. updatedMachineCount integer updatedMachineCount represents the total number of machines targeted by the pool that have the CurrentMachineConfig as their config. 7.1.12. .status.certExpirys Description certExpirys keeps track of important certificate expiration data Type array 7.1.13. 
.status.certExpirys[] Description ceryExpiry contains the bundle name and the expiry date Type object Required bundle subject Property Type Description bundle string bundle is the name of the bundle in which the subject certificate resides expiry string expiry is the date after which the certificate will no longer be valid subject string subject is the subject of the certificate 7.1.14. .status.conditions Description conditions represents the latest available observations of current state. Type array 7.1.15. .status.conditions[] Description MachineConfigPoolCondition contains condition information for an MachineConfigPool. Type object Property Type Description lastTransitionTime `` lastTransitionTime is the timestamp corresponding to the last status change of this condition. message string message is a human readable description of the details of the last transition, complementing reason. reason string reason is a brief machine readable explanation for the condition's last transition. status string status of the condition, one of ('True', 'False', 'Unknown'). type string type of the condition, currently ('Done', 'Updating', 'Failed'). 7.1.16. .status.configuration Description configuration represents the current MachineConfig object for the machine config pool. Type object Property Type Description apiVersion string API version of the referent. fieldPath string If referring to a piece of an object instead of an entire object, this string should contain a valid JSON/Go field access statement, such as desiredState.manifest.containers[2]. For example, if the object reference is to a container within a pod, this would take on a value like: "spec.containers{name}" (where "name" refers to the name of the container that triggered the event) or if no container name is specified "spec.containers[2]" (container with index 2 in this pod). This syntax is chosen only to have some well-defined way of referencing a part of an object. TODO: this design is not final and this field is subject to change in the future. kind string Kind of the referent. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names namespace string Namespace of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/namespaces/ resourceVersion string Specific resourceVersion to which this reference is made, if any. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#concurrency-control-and-consistency source array source is the list of MachineConfig objects that were used to generate the single MachineConfig object specified in content . source[] object ObjectReference contains enough information to let you inspect or modify the referred object. --- New uses of this type are discouraged because of difficulty describing its usage when embedded in APIs. 1. Ignored fields. It includes many fields which are not generally honored. For instance, ResourceVersion and FieldPath are both very rarely valid in actual usage. 2. Invalid usage help. It is impossible to add specific help for individual usage. In most embedded usages, there are particular restrictions like, "must refer only to types A and B" or "UID not honored" or "name must be restricted". Those cannot be well described when embedded. 3. Inconsistent validation. 
Because the usages are different, the validation rules are different by usage, which makes it hard for users to predict what will happen. 4. The fields are both imprecise and overly precise. Kind is not a precise mapping to a URL. This can produce ambiguity during interpretation and require a REST mapping. In most cases, the dependency is on the group,resource tuple and the version of the actual struct is irrelevant. 5. We cannot easily change it. Because this type is embedded in many locations, updates to this type will affect numerous schemas. Don't make new APIs embed an underspecified API type they do not control. Instead of using this type, create a locally provided and used type that is well-focused on your reference. For example, ServiceReferences for admission registration: https://github.com/kubernetes/api/blob/release-1.17/admissionregistration/v1/types.go#L533 . uid string UID of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#uids 7.1.17. .status.configuration.source Description source is the list of MachineConfig objects that were used to generate the single MachineConfig object specified in content . Type array 7.1.18. .status.configuration.source[] Description ObjectReference contains enough information to let you inspect or modify the referred object. --- New uses of this type are discouraged because of difficulty describing its usage when embedded in APIs. 1. Ignored fields. It includes many fields which are not generally honored. For instance, ResourceVersion and FieldPath are both very rarely valid in actual usage. 2. Invalid usage help. It is impossible to add specific help for individual usage. In most embedded usages, there are particular restrictions like, "must refer only to types A and B" or "UID not honored" or "name must be restricted". Those cannot be well described when embedded. 3. Inconsistent validation. Because the usages are different, the validation rules are different by usage, which makes it hard for users to predict what will happen. 4. The fields are both imprecise and overly precise. Kind is not a precise mapping to a URL. This can produce ambiguity during interpretation and require a REST mapping. In most cases, the dependency is on the group,resource tuple and the version of the actual struct is irrelevant. 5. We cannot easily change it. Because this type is embedded in many locations, updates to this type will affect numerous schemas. Don't make new APIs embed an underspecified API type they do not control. Instead of using this type, create a locally provided and used type that is well-focused on your reference. For example, ServiceReferences for admission registration: https://github.com/kubernetes/api/blob/release-1.17/admissionregistration/v1/types.go#L533 . Type object Property Type Description apiVersion string API version of the referent. fieldPath string If referring to a piece of an object instead of an entire object, this string should contain a valid JSON/Go field access statement, such as desiredState.manifest.containers[2]. For example, if the object reference is to a container within a pod, this would take on a value like: "spec.containers{name}" (where "name" refers to the name of the container that triggered the event) or if no container name is specified "spec.containers[2]" (container with index 2 in this pod). This syntax is chosen only to have some well-defined way of referencing a part of an object. 
TODO: this design is not final and this field is subject to change in the future. kind string Kind of the referent. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names namespace string Namespace of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/namespaces/ resourceVersion string Specific resourceVersion to which this reference is made, if any. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#concurrency-control-and-consistency uid string UID of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#uids 7.2. API endpoints The following API endpoints are available: /apis/machineconfiguration.openshift.io/v1/machineconfigpools DELETE : delete collection of MachineConfigPool GET : list objects of kind MachineConfigPool POST : create a MachineConfigPool /apis/machineconfiguration.openshift.io/v1/machineconfigpools/{name} DELETE : delete a MachineConfigPool GET : read the specified MachineConfigPool PATCH : partially update the specified MachineConfigPool PUT : replace the specified MachineConfigPool /apis/machineconfiguration.openshift.io/v1/machineconfigpools/{name}/status GET : read status of the specified MachineConfigPool PATCH : partially update status of the specified MachineConfigPool PUT : replace status of the specified MachineConfigPool 7.2.1. /apis/machineconfiguration.openshift.io/v1/machineconfigpools HTTP method DELETE Description delete collection of MachineConfigPool Table 7.1. HTTP responses HTTP code Reponse body 200 - OK Status schema 401 - Unauthorized Empty HTTP method GET Description list objects of kind MachineConfigPool Table 7.2. HTTP responses HTTP code Reponse body 200 - OK MachineConfigPoolList schema 401 - Unauthorized Empty HTTP method POST Description create a MachineConfigPool Table 7.3. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 7.4. Body parameters Parameter Type Description body MachineConfigPool schema Table 7.5. 
HTTP responses HTTP code Reponse body 200 - OK MachineConfigPool schema 201 - Created MachineConfigPool schema 202 - Accepted MachineConfigPool schema 401 - Unauthorized Empty 7.2.2. /apis/machineconfiguration.openshift.io/v1/machineconfigpools/{name} Table 7.6. Global path parameters Parameter Type Description name string name of the MachineConfigPool HTTP method DELETE Description delete a MachineConfigPool Table 7.7. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed Table 7.8. HTTP responses HTTP code Reponse body 200 - OK Status schema 202 - Accepted Status schema 401 - Unauthorized Empty HTTP method GET Description read the specified MachineConfigPool Table 7.9. HTTP responses HTTP code Reponse body 200 - OK MachineConfigPool schema 401 - Unauthorized Empty HTTP method PATCH Description partially update the specified MachineConfigPool Table 7.10. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 7.11. HTTP responses HTTP code Reponse body 200 - OK MachineConfigPool schema 401 - Unauthorized Empty HTTP method PUT Description replace the specified MachineConfigPool Table 7.12. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. 
The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 7.13. Body parameters Parameter Type Description body MachineConfigPool schema Table 7.14. HTTP responses HTTP code Reponse body 200 - OK MachineConfigPool schema 201 - Created MachineConfigPool schema 401 - Unauthorized Empty 7.2.3. /apis/machineconfiguration.openshift.io/v1/machineconfigpools/{name}/status Table 7.15. Global path parameters Parameter Type Description name string name of the MachineConfigPool HTTP method GET Description read status of the specified MachineConfigPool Table 7.16. HTTP responses HTTP code Reponse body 200 - OK MachineConfigPool schema 401 - Unauthorized Empty HTTP method PATCH Description partially update status of the specified MachineConfigPool Table 7.17. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 7.18. HTTP responses HTTP code Reponse body 200 - OK MachineConfigPool schema 401 - Unauthorized Empty HTTP method PUT Description replace status of the specified MachineConfigPool Table 7.19. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. 
The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 7.20. Body parameters Parameter Type Description body MachineConfigPool schema Table 7.21. HTTP responses HTTP code Reponse body 200 - OK MachineConfigPool schema 201 - Created MachineConfigPool schema 401 - Unauthorized Empty
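For a quick illustration of working with this resource from the command line, you can list the pools in a cluster and toggle the paused field described in the spec above. The pool name worker is only an example; substitute a pool that exists in your cluster:
oc get machineconfigpool
oc patch machineconfigpool worker --type merge --patch '{"spec":{"paused":true}}'
Setting paused back to false resumes the rollout of machine configuration changes to the pool.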
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.16/html/machine_apis/machineconfigpool-machineconfiguration-openshift-io-v1
Chapter 26. Configuring a virtual domain as a resource
Chapter 26. Configuring a virtual domain as a resource You can configure a virtual domain that is managed by the libvirt virtualization framework as a cluster resource with the pcs resource create command, specifying VirtualDomain as the resource type. When configuring a virtual domain as a resource, take the following considerations into account: A virtual domain should be stopped before you configure it as a cluster resource. Once a virtual domain is a cluster resource, it should not be started, stopped, or migrated except through the cluster tools. Do not configure a virtual domain that you have configured as a cluster resource to start when its host boots. All nodes allowed to run a virtual domain must have access to the necessary configuration files and storage devices for that virtual domain. If you want the cluster to manage services within the virtual domain itself, you can configure the virtual domain as a guest node. 26.1. Virtual domain resource options The following table describes the resource options you can configure for a VirtualDomain resource. Table 26.1. Resource Options for Virtual Domain Resources Field Default Description config (required) Absolute path to the libvirt configuration file for this virtual domain. hypervisor System dependent Hypervisor URI to connect to. You can determine the system's default URI by running the virsh --quiet uri command. force_stop 0 Always forcefully shut down ("destroy") the domain on stop. The default behavior is to resort to a forceful shutdown only after a graceful shutdown attempt has failed. You should set this to true only if your virtual domain (or your virtualization back end) does not support graceful shutdown. migration_transport System dependent Transport used to connect to the remote hypervisor while migrating. If this parameter is omitted, the resource will use libvirt 's default transport to connect to the remote hypervisor. migration_network_suffix Use a dedicated migration network. The migration URI is composed by adding this parameter's value to the end of the node name. If the node name is a fully qualified domain name (FQDN), insert the suffix immediately prior to the first period (.) in the FQDN. Ensure that this composed host name is locally resolvable and the associated IP address is reachable through the favored network. monitor_scripts To additionally monitor services within the virtual domain, add this parameter with a list of scripts to monitor. Note : When monitor scripts are used, the start and migrate_from operations will complete only when all monitor scripts have completed successfully. Be sure to set the timeout of these operations to accommodate this delay. autoset_utilization_cpu true If set to true , the agent will detect the number of domainU 's vCPU s from virsh , and put it into the CPU utilization of the resource when the monitor is executed. autoset_utilization_hv_memory true If set to true , the agent will detect the amount of Max memory from virsh , and put it into the hv_memory utilization of the resource when the monitor is executed. migrateport random highport This port will be used in the qemu migrate URI. If unset, the port will be a random highport. snapshot Path to the snapshot directory where the virtual machine image will be stored. When this parameter is set, the virtual machine's RAM state will be saved to a file in the snapshot directory when stopped. If on start a state file is present for the domain, the domain will be restored to the same state it was in right before it stopped last.
This option is incompatible with the force_stop option. In addition to the VirtualDomain resource options, you can configure the allow-migrate metadata option to allow live migration of the resource to another node. When this option is set to true , the resource can be migrated without loss of state. When this option is set to false , which is the default state, the virtual domain will be shut down on the first node and then restarted on the second node when it is moved from one node to the other. 26.2. Creating the virtual domain resource The following procedure creates a VirtualDomain resource in a cluster for a virtual machine you have previously created. Procedure To create the VirtualDomain resource agent for the management of the virtual machine, Pacemaker requires the virtual machine's xml configuration file to be dumped to a file on disk. For example, if you created a virtual machine named guest1 , dump the xml file to a file somewhere on one of the cluster nodes that will be allowed to run the guest. You can use a file name of your choosing; this example uses /etc/pacemaker/guest1.xml . Copy the virtual machine's xml configuration file to all of the other cluster nodes that will be allowed to run the guest, in the same location on each node. Ensure that all of the nodes allowed to run the virtual domain have access to the necessary storage devices for that virtual domain. Separately test that the virtual domain can start and stop on each node that will run the virtual domain. If it is running, shut down the guest node. Pacemaker will start the node when it is configured in the cluster. The virtual machine should not be configured to start automatically when the host boots. Configure the VirtualDomain resource with the pcs resource create command. For example, the following command configures a VirtualDomain resource named VM . Since the allow-migrate option is set to true , a pcs resource move VM nodeX command is performed as a live migration. In this example, migration_transport is set to ssh . Note that for SSH migration to work properly, key-based (passwordless) SSH login must work between the nodes.
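After creating the resource, you can confirm its settings and trigger a live migration with the following commands. The node name node2 is only an example; use the name of a cluster node that is allowed to run the guest:
pcs resource config VM
pcs resource move VM node2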
[ "virsh dumpxml guest1 > /etc/pacemaker/guest1.xml", "pcs resource create VM VirtualDomain config=/etc/pacemaker/guest1.xml migration_transport=ssh meta allow-migrate=true" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/configuring_and_managing_high_availability_clusters/assembly_configuring-virtual-domain-as-a-resource-configuring-and-managing-high-availability-clusters
21.3. Methods
21.3. Methods 21.3.1. Creating a Role Creation of a role requires values for name , administrative and a list of initial permits . Example 21.2. Creating a role 21.3.2. Updating a Role The name , description and administrative elements are updatable post-creation. Example 21.3. Updating a role 21.3.3. Removing a Role Removal of a role requires a DELETE request. Example 21.4. Removing a role
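For illustration, the create request shown in Example 21.2 can be sent with curl. The user name, password, and engine host below are placeholders, and the --insecure option is used only to skip CA verification in a test environment:
curl --insecure --user 'admin@internal:password' --request POST --header 'Content-Type: application/xml' --data '<role><name>Finance Role</name><administrative>true</administrative><permits><permit id="1"/></permits></role>' 'https://engine.example.com/ovirt-engine/api/roles'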
[ "POST /ovirt-engine/api/roles HTTP/1.1 Accept: application/xml Content-type: application/xml <role> <name>Finance Role</name> <administrative>true</administrative> <permits> <permit id=\"1\"/> </permits> </role>", "PUT /ovirt-engine/api/roles/8de42ad7-f307-408b-80e8-9d28b85adfd7 HTTP/1.1 Accept: application/xml Content-type: application/xml <role> <name>Engineering Role</name> <description>Standard users in the Engineering Role</description> <administrative>false</administrative> </role>", "DELETE /ovirt-engine/api/roles/8de42ad7-f307-408b-80e8-9d28b85adfd7 HTTP/1.1 HTTP/1.1 204 No Content" ]
https://docs.redhat.com/en/documentation/red_hat_virtualization/4.3/html/version_3_rest_api_guide/sect-methods1
Chapter 3. Logging 5.6
Chapter 3. Logging 5.6 3.1. Logging 5.6 Release Notes Note The logging subsystem for Red Hat OpenShift is provided as an installable component, with a distinct release cycle from the core OpenShift Container Platform. The Red Hat OpenShift Container Platform Life Cycle Policy outlines release compatibility. Note The stable channel only provides updates to the most recent release of logging. To continue receiving updates for prior releases, you must change your subscription channel to stable-X where X is the version of logging you have installed. 3.1.1. Logging 5.6.11 This release includes OpenShift Logging Bug Fix Release 5.6.11 . 3.1.1.1. Bug fixes Before this update, the LokiStack gateway cached authorized requests very broadly. As a result, this caused wrong authorization results. With this update, LokiStack gateway caches on a more fine-grained basis which resolves this issue. ( LOG-4435 ) 3.1.1.2. CVEs CVE-2023-3899 CVE-2023-32360 CVE-2023-34969 3.1.2. Logging 5.6.8 This release includes OpenShift Logging Bug Fix Release 5.6.8 . 3.1.2.1. Bug fixes Before this update, the vector collector terminated unexpectedly when input match label values contained a / character within the ClusterLogForwarder . This update resolves the issue by quoting the match label, enabling the collector to start and collect logs. ( LOG-4091 ) Before this update, when viewing logs within the OpenShift Container Platform web console, clicking the more data available option loaded more log entries only the first time it was clicked. With this update, more entries are loaded with each click. ( OU-187 ) Before this update, when viewing logs within the OpenShift Container Platform web console, clicking the streaming option would only display the streaming logs message without showing the actual logs. With this update, both the message and the log stream are displayed correctly. ( OU-189 ) Before this update, the Loki Operator reset errors in a way that made identifying configuration problems difficult to troubleshoot. With this update, errors persist until the configuration error is resolved. ( LOG-4158 ) Before this update, clusters with more than 8,000 namespaces caused Elasticsearch to reject queries because the list of namespaces was larger than the http.max_header_size setting. With this update, the default value for header size has been increased, resolving the issue. ( LOG-4278 ) 3.1.2.2. CVEs CVE-2020-24736 CVE-2022-48281 CVE-2023-1667 CVE-2023-2283 CVE-2023-24329 CVE-2023-26604 CVE-2023-28466 3.1.3. Logging 5.6.7 This release includes OpenShift Logging Bug Fix Release 5.6.7 . 3.1.3.1. Bug fixes Before this update, the LokiStack gateway returned label values for namespaces without applying the access rights of a user. With this update, the LokiStack gateway applies permissions to label value requests, resolving the issue. ( LOG-3728 ) Before this update, the time field of log messages did not parse as structured.time by default in Fluentd when the messages included a timestamp. With this update, parsed log messages will include a structured.time field if the output destination supports it. ( LOG-4090 ) Before this update, the LokiStack route configuration caused queries running longer than 30 seconds to time out. With this update, the LokiStack global and per-tenant queryTimeout settings affect the route timeout settings, resolving the issue. ( LOG-4130 ) Before this update, LokiStack CRs with values defined for tenant limits but not global limits caused the Loki Operator to crash. 
With this update, the Operator is able to process LokiStack CRs with only tenant limits defined, resolving the issue. ( LOG-4199 ) Before this update, the OpenShift Container Platform web console generated errors after an upgrade due to cached files of the prior version retained by the web browser. With this update, these files are no longer cached, resolving the issue. ( LOG-4099 ) Before this update, Vector generated certificate errors when forwarding to the default Loki instance. With this update, logs can be forwarded without errors to Loki by using Vector. ( LOG-4184 ) Before this update, the Cluster Logging Operator API required a certificate to be provided by a secret when the tls.insecureSkipVerify option was set to true . With this update, the Cluster Logging Operator API no longer requires a certificate to be provided by a secret in such cases. The following configuration has been added to the Operator's CR: tls.verify_certificate = false tls.verify_hostname = false ( LOG-4146 ) 3.1.3.2. CVEs CVE-2021-26341 CVE-2021-33655 CVE-2021-33656 CVE-2022-1462 CVE-2022-1679 CVE-2022-1789 CVE-2022-2196 CVE-2022-2663 CVE-2022-3028 CVE-2022-3239 CVE-2022-3522 CVE-2022-3524 CVE-2022-3564 CVE-2022-3566 CVE-2022-3567 CVE-2022-3619 CVE-2022-3623 CVE-2022-3625 CVE-2022-3627 CVE-2022-3628 CVE-2022-3707 CVE-2022-3970 CVE-2022-4129 CVE-2022-20141 CVE-2022-25147 CVE-2022-25265 CVE-2022-30594 CVE-2022-36227 CVE-2022-39188 CVE-2022-39189 CVE-2022-41218 CVE-2022-41674 CVE-2022-42703 CVE-2022-42720 CVE-2022-42721 CVE-2022-42722 CVE-2022-43750 CVE-2022-47929 CVE-2023-0394 CVE-2023-0461 CVE-2023-1195 CVE-2023-1582 CVE-2023-2491 CVE-2023-22490 CVE-2023-23454 CVE-2023-23946 CVE-2023-25652 CVE-2023-25815 CVE-2023-27535 CVE-2023-29007 3.1.4. Logging 5.6.6 This release includes OpenShift Logging Bug Fix Release 5.6.6 . 3.1.4.1. Bug fixes Before this update, dropping of messages occurred when configuring the ClusterLogForwarder custom resource to write to a Kafka output topic that matched a key in the payload due to an error. With this update, the issue is resolved by prefixing Fluentd's buffer name with an underscore. ( LOG-3458 ) Before this update, premature closure of watches occurred in Fluentd when inodes were reused and there were multiple entries with the same inode. With this update, the issue of premature closure of watches in the Fluentd position file is resolved. ( LOG-3629 ) Before this update, the detection of JavaScript client multi-line exceptions by Fluentd failed, resulting in printing them as multiple lines. With this update, exceptions are output as a single line, resolving the issue.( LOG-3761 ) Before this update, direct upgrades from the Red Hat Openshift Logging Operator version 4.6 to version 5.6 were allowed, resulting in functionality issues. With this update, upgrades must be within two versions, resolving the issue. ( LOG-3837 ) Before this update, metrics were not displayed for Splunk or Google Logging outputs. With this update, the issue is resolved by sending metrics for HTTP endpoints.( LOG-3932 ) Before this update, when the ClusterLogForwarder custom resource was deleted, collector pods remained running. With this update, collector pods do not run when log forwarding is not enabled. ( LOG-4030 ) Before this update, a time range could not be selected in the OpenShift Container Platform web console by clicking and dragging over the logs histogram. With this update, clicking and dragging can be used to successfully select a time range. 
( LOG-4101 ) Before this update, Fluentd hash values for watch files were generated using the paths to log files, resulting in a non unique hash upon log rotation. With this update, hash values for watch files are created with inode numbers, resolving the issue. ( LOG-3633 ) Before this update, clicking on the Show Resources link in the OpenShift Container Platform web console did not produce any effect. With this update, the issue is resolved by fixing the functionality of the Show Resources link to toggle the display of resources for each log entry. ( LOG-4118 ) 3.1.4.2. CVEs CVE-2023-21930 CVE-2023-21937 CVE-2023-21938 CVE-2023-21939 CVE-2023-21954 CVE-2023-21967 CVE-2023-21968 CVE-2023-28617 3.1.5. Logging 5.6.5 This release includes OpenShift Logging Bug Fix Release 5.6.5 . 3.1.5.1. Bug fixes Before this update, the template definitions prevented Elasticsearch from indexing some labels and namespace_labels, causing issues with data ingestion. With this update, the fix replaces dots and slashes in labels to ensure proper ingestion, effectively resolving the issue. ( LOG-3419 ) Before this update, if the Logs page of the OpenShift Web Console failed to connect to the LokiStack, a generic error message was displayed, providing no additional context or troubleshooting suggestions. With this update, the error message has been enhanced to include more specific details and recommendations for troubleshooting. ( LOG-3750 ) Before this update, time range formats were not validated, leading to errors selecting a custom date range. With this update, time formats are now validated, enabling users to select a valid range. If an invalid time range format is selected, an error message is displayed to the user. ( LOG-3583 ) Before this update, when searching logs in Loki, even if the length of an expression did not exceed 5120 characters, the query would fail in many cases. With this update, query authorization label matchers have been optimized, resolving the issue. ( LOG-3480 ) Before this update, the Loki Operator failed to produce a memberlist configuration that was sufficient for locating all the components when using a memberlist for private IPs. With this update, the fix ensures that the generated configuration includes the advertised port, allowing for successful lookup of all components. ( LOG-4008 ) 3.1.5.2. CVEs CVE-2022-4269 CVE-2022-4378 CVE-2023-0266 CVE-2023-0361 CVE-2023-0386 CVE-2023-27539 CVE-2023-28120 3.1.6. Logging 5.6.4 This release includes OpenShift Logging Bug Fix Release 5.6.4 . 3.1.6.1. Bug fixes Before this update, when LokiStack was deployed as the log store, the logs generated by Loki pods were collected and sent to LokiStack. With this update, the logs generated by Loki are excluded from collection and will not be stored. ( LOG-3280 ) Before this update, when the query editor on the Logs page of the OpenShift Web Console was empty, the drop-down menus did not populate. With this update, if an empty query is attempted, an error message is displayed and the drop-down menus now populate as expected. ( LOG-3454 ) Before this update, when the tls.insecureSkipVerify option was set to true , the Cluster Logging Operator would generate incorrect configuration. As a result, the operator would fail to send data to Elasticsearch when attempting to skip certificate validation. With this update, the Cluster Logging Operator generates the correct TLS configuration even when tls.insecureSkipVerify is enabled. 
As a result, data can be sent successfully to Elasticsearch even when attempting to skip certificate validation. ( LOG-3475 ) Before this update, when structured parsing was enabled and messages were forwarded to multiple destinations, they were not deep copied. This resulted in some of the received logs including the structured message, while others did not. With this update, the configuration generation has been modified to deep copy messages before JSON parsing. As a result, all received messages now have structured messages included, even when they are forwarded to multiple destinations. ( LOG-3640 ) Before this update, if the collection field contained {} it could result in the Operator crashing. With this update, the Operator will ignore this value, allowing the operator to continue running smoothly without interruption. ( LOG-3733 ) Before this update, the nodeSelector attribute for the Gateway component of LokiStack did not have any effect. With this update, the nodeSelector attribute functions as expected. ( LOG-3783 ) Before this update, the static LokiStack memberlist configuration relied solely on private IP networks. As a result, when the OpenShift Container Platform cluster pod network was configured with a public IP range, the LokiStack pods would crashloop. With this update, the LokiStack administrator now has the option to use the pod network for the memberlist configuration. This resolves the issue and prevents the LokiStack pods from entering a crashloop state when the OpenShift Container Platform cluster pod network is configured with a public IP range. ( LOG-3814 ) Before this update, if the tls.insecureSkipVerify field was set to true , the Cluster Logging Operator would generate an incorrect configuration. As a result, the Operator would fail to send data to Elasticsearch when attempting to skip certificate validation. With this update, the Operator generates the correct TLS configuration even when tls.insecureSkipVerify is enabled. As a result, data can be sent successfully to Elasticsearch even when attempting to skip certificate validation. ( LOG-3838 ) Before this update, if the Cluster Logging Operator (CLO) was installed without the Elasticsearch Operator, the CLO pod would continuously display an error message related to the deletion of Elasticsearch. With this update, the CLO now performs additional checks before displaying any error messages. As a result, error messages related to Elasticsearch deletion are no longer displayed in the absence of the Elasticsearch Operator.( LOG-3763 ) 3.1.6.2. CVEs CVE-2022-4304 CVE-2022-4450 CVE-2023-0215 CVE-2023-0286 CVE-2023-0767 CVE-2023-23916 3.1.7. Logging 5.6.3 This release includes OpenShift Logging Bug Fix Release 5.6.3 . 3.1.7.1. Bug fixes Before this update, the operator stored gateway tenant secret information in a config map. With this update, the operator stores this information in a secret. ( LOG-3717 ) Before this update, the Fluentd collector did not capture OAuth login events stored in /var/log/auth-server/audit.log . With this update, Fluentd captures these OAuth login events, resolving the issue. ( LOG-3729 ) 3.1.7.2. CVEs CVE-2020-10735 CVE-2021-28861 CVE-2022-2873 CVE-2022-4415 CVE-2022-40897 CVE-2022-41222 CVE-2022-43945 CVE-2022-45061 CVE-2022-48303 3.1.8. Logging 5.6.2 This release includes OpenShift Logging Bug Fix Release 5.6.2 . 3.1.8.1. Bug fixes Before this update, the collector did not set level fields correctly based on priority for systemd logs. 
With this update, level fields are set correctly. ( LOG-3429 ) Before this update, the Operator incorrectly generated incompatibility warnings on OpenShift Container Platform 4.12 or later. With this update, the Operator max OpenShift Container Platform version value has been corrected, resolving the issue. ( LOG-3584 ) Before this update, creating a ClusterLogForwarder custom resource (CR) with an output value of default did not generate any errors. With this update, an error warning that this value is invalid generates appropriately. ( LOG-3437 ) Before this update, when the ClusterLogForwarder custom resource (CR) had multiple pipelines configured with one output set as default , the collector pods restarted. With this update, the logic for output validation has been corrected, resolving the issue. ( LOG-3559 ) Before this update, collector pods restarted after being created. With this update, the deployed collector does not restart on its own. ( LOG-3608 ) Before this update, patch releases removed versions of the Operators from the catalog. This made installing the old versions impossible. This update changes bundle configurations so that releases of the same minor version stay in the catalog. ( LOG-3635 ) 3.1.8.2. CVEs CVE-2022-23521 CVE-2022-40303 CVE-2022-40304 CVE-2022-41903 CVE-2022-47629 CVE-2023-21835 CVE-2023-21843 3.1.9. Logging 5.6.1 This release includes OpenShift Logging Bug Fix Release 5.6.1 . 3.1.9.1. Bug fixes Before this update, the compactor would report TLS certificate errors from communications with the querier when retention was active. With this update, the compactor and querier no longer communicate erroneously over HTTP. ( LOG-3494 ) Before this update, the Loki Operator would not retry setting the status of the LokiStack CR, which caused stale status information. With this update, the Operator retries status information updates on conflict. ( LOG-3496 ) Before this update, the Loki Operator Webhook server caused TLS errors when the kube-apiserver-operator Operator checked the webhook validity. With this update, the Loki Operator Webhook PKI is managed by the Operator Lifecycle Manager (OLM), resolving the issue. ( LOG-3510 ) Before this update, the LokiStack Gateway Labels Enforcer generated parsing errors for valid LogQL queries when using combined label filters with boolean expressions. With this update, the LokiStack LogQL implementation supports label filters with boolean expression and resolves the issue. ( LOG-3441 ), ( LOG-3397 ) Before this update, records written to Elasticsearch would fail if multiple label keys had the same prefix and some keys included dots. With this update, underscores replace dots in label keys, resolving the issue. ( LOG-3463 ) Before this update, the Red Hat OpenShift Logging Operator was not available for OpenShift Container Platform 4.10 clusters because of an incompatibility between OpenShift Container Platform console and the logging-view-plugin. With this update, the plugin is properly integrated with the OpenShift Container Platform 4.10 admin console. ( LOG-3447 ) Before this update the reconciliation of the ClusterLogForwarder custom resource would incorrectly report a degraded status of pipelines that reference the default logstore. With this update, the pipeline validates properly.( LOG-3477 ) 3.1.9.2. CVEs CVE-2021-46848 CVE-2022-3821 CVE-2022-35737 CVE-2022-42010 CVE-2022-42011 CVE-2022-42012 CVE-2022-42898 CVE-2022-43680 CVE-2021-35065 CVE-2022-46175 3.1.10. 
Logging 5.6.0 This release includes OpenShift Logging Release 5.6 . 3.1.10.1. Deprecation notice In logging version 5.6, Fluentd is deprecated and is planned to be removed in a future release. Red Hat will provide bug fixes and support for this feature during the current release lifecycle, but this feature will no longer receive enhancements and will be removed. As an alternative to Fluentd, you can use Vector instead. 3.1.10.2. Enhancements With this update, Logging is compliant with OpenShift Container Platform cluster-wide cryptographic policies. ( LOG-895 ) With this update, you can declare per-tenant, per-stream, and global retention policies through the LokiStack custom resource, ordered by priority. ( LOG-2695 ) With this update, Splunk is an available output option for log forwarding. ( LOG-2913 ) With this update, Vector replaces Fluentd as the default Collector. ( LOG-2222 ) With this update, the Developer role can access the per-project workload logs they are assigned to within the Log Console Plugin on clusters running OpenShift Container Platform 4.11 and higher. ( LOG-3388 ) With this update, logs from any source contain a field openshift.cluster_id , the unique identifier of the cluster in which the Operator is deployed. You can view the clusterID value with the command below. ( LOG-2715 ) USD oc get clusterversion/version -o jsonpath='{.spec.clusterID}{"\n"}' 3.1.10.3. Known Issues Before this update, Elasticsearch would reject logs if multiple label keys had the same prefix and some keys included the . character. This update fixes the limitation of Elasticsearch by replacing . in the label keys with _ . As a workaround for this issue, remove the labels that cause errors, or add a namespace to the label. ( LOG-3463 ) 3.1.10.4. Bug fixes Before this update, if you deleted the Kibana Custom Resource, the OpenShift Container Platform web console continued displaying a link to Kibana. With this update, removing the Kibana Custom Resource also removes that link. ( LOG-2993 ) Before this update, a user was not able to view the application logs of namespaces they have access to. With this update, the Loki Operator automatically creates a cluster role and cluster role binding allowing users to read application logs. ( LOG-3072 ) Before this update, the Operator removed any custom outputs defined in the ClusterLogForwarder custom resource when using LokiStack as the default log storage. With this update, the Operator merges custom outputs with the default outputs when processing the ClusterLogForwarder custom resource. ( LOG-3090 ) Before this update, the CA key was used as the volume name for mounting the CA into Loki, causing error states when the CA Key included non-conforming characters, such as dots. With this update, the volume name is standardized to an internal string, which resolves the issue. ( LOG-3331 ) Before this update, a default value set within the LokiStack Custom Resource Definition caused an inability to create a LokiStack instance without a ReplicationFactor of 1 . With this update, the Operator sets the actual value for the size used. ( LOG-3296 ) Before this update, Vector parsed the message field when JSON parsing was enabled without also defining structuredTypeKey or structuredTypeName values. With this update, a value is required for either structuredTypeKey or structuredTypeName when writing structured logs to Elasticsearch. ( LOG-3195 ) Before this update, the secret creation component of the Elasticsearch Operator modified internal secrets constantly.
With this update, the existing secret is properly handled. ( LOG-3161 ) Before this update, the Operator could enter a loop of removing and recreating the collector daemonset while the Elasticsearch or Kibana deployments changed their status. With this update, a fix in the status handling of the Operator resolves the issue. ( LOG-3157 ) Before this update, Kibana had a fixed 24h OAuth cookie expiration time, which resulted in 401 errors in Kibana whenever the accessTokenInactivityTimeout field was set to a value lower than 24h . With this update, Kibana's OAuth cookie expiration time synchronizes to the accessTokenInactivityTimeout , with a default value of 24h . ( LOG-3129 ) Before this update, the Operator's general pattern for reconciling resources was to try and create before attempting to get or update, which would lead to constant HTTP 409 responses after creation. With this update, Operators first attempt to retrieve an object and only create or update it if it is either missing or not as specified. ( LOG-2919 ) Before this update, the .level and .structure.level fields in Fluentd could contain different values. With this update, the values are the same for each field. ( LOG-2819 ) Before this update, the Operator did not wait for the population of the trusted CA bundle and deployed the collector a second time once the bundle updated. With this update, the Operator waits briefly to see if the bundle has been populated before it continues the collector deployment. ( LOG-2789 ) Before this update, logging telemetry info appeared twice when reviewing metrics. With this update, logging telemetry info displays as expected. ( LOG-2315 ) Before this update, Fluentd pod logs contained a warning message after enabling the JSON parsing addition. With this update, that warning message does not appear. ( LOG-1806 ) Before this update, the must-gather script did not complete because oc needs a folder with write permission to build its cache. With this update, oc has write permissions to a folder, and the must-gather script completes successfully. ( LOG-3446 ) Before this update, the log collector SCC could be superseded by other SCCs on the cluster, rendering the collector unusable. This update sets the priority of the log collector SCC so that it takes precedence over the others. ( LOG-3235 ) Before this update, Vector was missing the field sequence , which was added to fluentd as a way to deal with a lack of actual nanosecond precision. With this update, the field openshift.sequence has been added to the event logs. ( LOG-3106 ) 3.1.10.5. CVEs CVE-2020-36518 CVE-2021-46848 CVE-2022-2879 CVE-2022-2880 CVE-2022-27664 CVE-2022-32190 CVE-2022-35737 CVE-2022-37601 CVE-2022-41715 CVE-2022-42003 CVE-2022-42004 CVE-2022-42010 CVE-2022-42011 CVE-2022-42012 CVE-2022-42898 CVE-2022-43680 3.2. Getting started with logging 5.6 This overview of the logging deployment process is provided for ease of reference. It is not a substitute for full documentation. For new installations, Vector and LokiStack are recommended. Note As of logging version 5.5, you have the option of choosing from Fluentd or Vector collector implementations, and Elasticsearch or LokiStack as log stores. Documentation for logging is in the process of being updated to reflect these underlying component changes. Note The logging subsystem for Red Hat OpenShift is provided as an installable component, with a distinct release cycle from the core OpenShift Container Platform.
The Red Hat OpenShift Container Platform Life Cycle Policy outlines release compatibility. Prerequisites LogStore preference: Elasticsearch or LokiStack Collector implementation preference: Fluentd or Vector Credentials for your log forwarding outputs Note As of logging version 5.4.3 the Elasticsearch Operator is deprecated and is planned to be removed in a future release. Red Hat will provide bug fixes and support for this feature during the current release lifecycle, but this feature will no longer receive enhancements and will be removed. As an alternative to using the Elasticsearch Operator to manage the default log storage, you can use the Loki Operator. Install the Operator for the logstore you'd like to use. For Elasticsearch , install the OpenShift Elasticsearch Operator . For LokiStack , install the Loki Operator . Create a LokiStack custom resource (CR) instance. Install the Red Hat OpenShift Logging Operator . Create a ClusterLogging custom resource (CR) instance. Select your Collector Implementation. Note As of logging version 5.6 Fluentd is deprecated and is planned to be removed in a future release. Red Hat will provide bug fixes and support for this feature during the current release lifecycle, but this feature will no longer receive enhancements and will be removed. As an alternative to Fluentd, you can use Vector instead. Create a ClusterLogForwarder custom resource (CR) instance. Create a secret for the selected output pipeline. 3.3. Understanding logging The logging subsystem consists of these logical components: Collector - Reads container log data from each node and forwards log data to configured outputs. Store - Stores log data for analysis; the default output for the forwarder. Visualization - Graphical interface for searching, querying, and viewing stored logs. These components are managed by Operators and Custom Resource (CR) YAML files. The logging subsystem for Red Hat OpenShift collects container logs and node logs. These are categorized into types: application - Container logs generated by non-infrastructure containers. infrastructure - Container logs from namespaces kube-* and openshift-\* , and node logs from journald . audit - Logs from auditd , kube-apiserver , openshift-apiserver , and ovn if enabled. The logging collector is a daemonset that deploys pods to each OpenShift Container Platform node. System and infrastructure logs are generated by journald log messages from the operating system, the container runtime, and OpenShift Container Platform. Container logs are generated by containers running in pods running on the cluster. Each container generates a separate log stream. The collector collects the logs from these sources and forwards them internally or externally as configured in the ClusterLogForwarder custom resource. 3.4. Administering your logging deployment 3.4.1. Deploying Red Hat OpenShift Logging Operator using the web console You can use the OpenShift Container Platform web console to deploy the Red Hat OpenShift Logging Operator. Prerequisites The logging subsystem for Red Hat OpenShift is provided as an installable component, with a distinct release cycle from the core OpenShift Container Platform. The Red Hat OpenShift Container Platform Life Cycle Policy outlines release compatibility. Procedure To deploy the Red Hat OpenShift Logging Operator using the OpenShift Container Platform web console: Install the Red Hat OpenShift Logging Operator: In the OpenShift Container Platform web console, click Operators OperatorHub . 
Type Logging in the Filter by keyword field. Choose Red Hat OpenShift Logging from the list of available Operators, and click Install . Select stable or stable-5.y as the Update Channel . Note The stable channel only provides updates to the most recent release of logging. To continue receiving updates for prior releases, you must change your subscription channel to stable-X where X is the version of logging you have installed. Ensure that A specific namespace on the cluster is selected under Installation Mode . Ensure that Operator recommended namespace is openshift-logging under Installed Namespace . Select Enable Operator recommended cluster monitoring on this Namespace . Select an option for Update approval . The Automatic option allows Operator Lifecycle Manager (OLM) to automatically update the Operator when a new version is available. The Manual option requires a user with appropriate credentials to approve the Operator update. Select Enable or Disable for the Console plugin. Click Install . Verify that the Red Hat OpenShift Logging Operator is installed by switching to the Operators Installed Operators page. Ensure that Red Hat OpenShift Logging is listed in the openshift-logging project with a Status of Succeeded . Create a ClusterLogging instance. Note The form view of the web console does not include all available options. The YAML view is recommended for completing your setup. In the collection section, select a Collector Implementation. Note As of logging version 5.6 Fluentd is deprecated and is planned to be removed in a future release. Red Hat will provide bug fixes and support for this feature during the current release lifecycle, but this feature will no longer receive enhancements and will be removed. As an alternative to Fluentd, you can use Vector instead. In the logStore section, select a type. Note As of logging version 5.4.3 the Elasticsearch Operator is deprecated and is planned to be removed in a future release. Red Hat will provide bug fixes and support for this feature during the current release lifecycle, but this feature will no longer receive enhancements and will be removed. As an alternative to using the Elasticsearch Operator to manage the default log storage, you can use the Loki Operator. Click Create . 3.4.2. Deploying the Loki Operator using the web console You can use the OpenShift Container Platform web console to install the Loki Operator. Prerequisites Supported Log Store (AWS S3, Google Cloud Storage, Azure, Swift, Minio, OpenShift Data Foundation) Procedure To install the Loki Operator using the OpenShift Container Platform web console: In the OpenShift Container Platform web console, click Operators OperatorHub . Type Loki in the Filter by keyword field. Choose Loki Operator from the list of available Operators, and click Install . Select stable or stable-5.y as the Update Channel . Note The stable channel only provides updates to the most recent release of logging. To continue receiving updates for prior releases, you must change your subscription channel to stable-X where X is the version of logging you have installed. Ensure that All namespaces on the cluster is selected under Installation Mode . Ensure that openshift-operators-redhat is selected under Installed Namespace . Select Enable Operator recommended cluster monitoring on this Namespace . This option sets the openshift.io/cluster-monitoring: "true" label in the Namespace object. 
You must select this option to ensure that cluster monitoring scrapes the openshift-operators-redhat namespace. Select an option for Update approval . The Automatic option allows Operator Lifecycle Manager (OLM) to automatically update the Operator when a new version is available. The Manual option requires a user with appropriate credentials to approve the Operator update. Click Install . Verify that the Loki Operator is installed by switching to the Operators Installed Operators page. Ensure that Loki Operator is listed with Status as Succeeded in all the projects. Create a Secret YAML file that uses the access_key_id and access_key_secret fields to specify your credentials and bucketnames , endpoint , and region to define the object storage location. AWS is used in the following example: apiVersion: v1 kind: Secret metadata: name: logging-loki-s3 namespace: openshift-logging stringData: access_key_id: AKIAIOSFODNN7EXAMPLE access_key_secret: wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY bucketnames: s3-bucket-name endpoint: https://s3.eu-central-1.amazonaws.com region: eu-central-1 Select Create instance under LokiStack on the Details tab. Then select YAML view . Paste in the following template, substituting values where appropriate. apiVersion: loki.grafana.com/v1 kind: LokiStack metadata: name: logging-loki 1 namespace: openshift-logging spec: size: 1x.small 2 storage: schemas: - version: v12 effectiveDate: '2022-06-01' secret: name: logging-loki-s3 3 type: s3 4 storageClassName: <storage_class_name> 5 tenants: mode: openshift-logging 1 Name should be logging-loki . 2 Select your Loki deployment size. 3 Define the secret used for your log storage. 4 Define corresponding storage type. 5 Enter the name of an existing storage class for temporary storage. For best performance, specify a storage class that allocates block storage. Available storage classes for your cluster can be listed using oc get storageclasses . Apply the configuration: oc apply -f logging-loki.yaml Create or edit a ClusterLogging CR: apiVersion: logging.openshift.io/v1 kind: ClusterLogging metadata: name: instance namespace: openshift-logging spec: managementState: Managed logStore: type: lokistack lokistack: name: logging-loki collection: type: vector Apply the configuration: oc apply -f cr-lokistack.yaml 3.4.3. Installing from OperatorHub using the CLI Instead of using the OpenShift Container Platform web console, you can install an Operator from OperatorHub using the CLI. Use the oc command to create or update a Subscription object. Prerequisites Access to an OpenShift Container Platform cluster using an account with cluster-admin permissions. Install the oc command to your local system. Procedure View the list of Operators available to the cluster from OperatorHub: USD oc get packagemanifests -n openshift-marketplace Example output NAME CATALOG AGE 3scale-operator Red Hat Operators 91m advanced-cluster-management Red Hat Operators 91m amq7-cert-manager Red Hat Operators 91m ... couchbase-enterprise-certified Certified Operators 91m crunchy-postgres-operator Certified Operators 91m mongodb-enterprise Certified Operators 91m ... etcd Community Operators 91m jaeger Community Operators 91m kubefed Community Operators 91m ... Note the catalog for your desired Operator.
Inspect your desired Operator to verify its supported install modes and available channels: USD oc describe packagemanifests <operator_name> -n openshift-marketplace An Operator group, defined by an OperatorGroup object, selects target namespaces in which to generate required RBAC access for all Operators in the same namespace as the Operator group. The namespace to which you subscribe the Operator must have an Operator group that matches the install mode of the Operator, either the AllNamespaces or SingleNamespace mode. If the Operator you intend to install uses the AllNamespaces , then the openshift-operators namespace already has an appropriate Operator group in place. However, if the Operator uses the SingleNamespace mode and you do not already have an appropriate Operator group in place, you must create one. Note The web console version of this procedure handles the creation of the OperatorGroup and Subscription objects automatically behind the scenes for you when choosing SingleNamespace mode. Create an OperatorGroup object YAML file, for example operatorgroup.yaml : Example OperatorGroup object apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: <operatorgroup_name> namespace: <namespace> spec: targetNamespaces: - <namespace> Create the OperatorGroup object: USD oc apply -f operatorgroup.yaml Create a Subscription object YAML file to subscribe a namespace to an Operator, for example sub.yaml : Example Subscription object apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: <subscription_name> namespace: openshift-operators 1 spec: channel: <channel_name> 2 name: <operator_name> 3 source: redhat-operators 4 sourceNamespace: openshift-marketplace 5 config: env: 6 - name: ARGS value: "-v=10" envFrom: 7 - secretRef: name: license-secret volumes: 8 - name: <volume_name> configMap: name: <configmap_name> volumeMounts: 9 - mountPath: <directory_name> name: <volume_name> tolerations: 10 - operator: "Exists" resources: 11 requests: memory: "64Mi" cpu: "250m" limits: memory: "128Mi" cpu: "500m" nodeSelector: 12 foo: bar 1 For AllNamespaces install mode usage, specify the openshift-operators namespace. Otherwise, specify the relevant single namespace for SingleNamespace install mode usage. 2 Name of the channel to subscribe to. 3 Name of the Operator to subscribe to. 4 Name of the catalog source that provides the Operator. 5 Namespace of the catalog source. Use openshift-marketplace for the default OperatorHub catalog sources. 6 The env parameter defines a list of Environment Variables that must exist in all containers in the pod created by OLM. 7 The envFrom parameter defines a list of sources to populate Environment Variables in the container. 8 The volumes parameter defines a list of Volumes that must exist on the pod created by OLM. 9 The volumeMounts parameter defines a list of VolumeMounts that must exist in all containers in the pod created by OLM. If a volumeMount references a volume that does not exist, OLM fails to deploy the Operator. 10 The tolerations parameter defines a list of Tolerations for the pod created by OLM. 11 The resources parameter defines resource constraints for all the containers in the pod created by OLM. 12 The nodeSelector parameter defines a NodeSelector for the pod created by OLM. Create the Subscription object: USD oc apply -f sub.yaml At this point, OLM is now aware of the selected Operator. 
A cluster service version (CSV) for the Operator should appear in the target namespace, and APIs provided by the Operator should be available for creation. 3.4.4. Deleting Operators from a cluster using the web console Cluster administrators can delete installed Operators from a selected namespace by using the web console. Prerequisites Access to an OpenShift Container Platform cluster web console using an account with cluster-admin permissions. Procedure Navigate to the Operators Installed Operators page. Scroll or enter a keyword into the Filter by name field to find the Operator that you want to remove. Then, click on it. On the right side of the Operator Details page, select Uninstall Operator from the Actions list. An Uninstall Operator? dialog box is displayed. Select Uninstall to remove the Operator, Operator deployments, and pods. Following this action, the Operator stops running and no longer receives updates. Note This action does not remove resources managed by the Operator, including custom resource definitions (CRDs) and custom resources (CRs). Dashboards and navigation items enabled by the web console and off-cluster resources that continue to run might need manual clean up. To remove these after uninstalling the Operator, you might need to manually delete the Operator CRDs. 3.4.5. Deleting Operators from a cluster using the CLI Cluster administrators can delete installed Operators from a selected namespace by using the CLI. Prerequisites Access to an OpenShift Container Platform cluster using an account with cluster-admin permissions. oc command installed on workstation. Procedure Check the current version of the subscribed Operator (for example, jaeger ) in the currentCSV field: USD oc get subscription jaeger -n openshift-operators -o yaml | grep currentCSV Example output currentCSV: jaeger-operator.v1.8.2 Delete the subscription (for example, jaeger ): USD oc delete subscription jaeger -n openshift-operators Example output subscription.operators.coreos.com "jaeger" deleted Delete the CSV for the Operator in the target namespace using the currentCSV value from the step: USD oc delete clusterserviceversion jaeger-operator.v1.8.2 -n openshift-operators Example output clusterserviceversion.operators.coreos.com "jaeger-operator.v1.8.2" deleted 3.5. Logging References 3.5.1. Collector features Output Protocol Tested with Fluentd Vector Cloudwatch REST over HTTP(S) [✓] [✓] Elasticsearch v6 v6.8.1 [✓] [✓] Elasticsearch v7 v7.12.2, 7.17.7 [✓] [✓] Elasticsearch v8 v8.4.3 [✓] Fluent Forward Fluentd forward v1 Fluentd 1.14.6, Logstash 7.10.1 [✓] Google Cloud Logging [✓] HTTP HTTP 1.1 Fluentd 1.14.6, Vector 0.21 Kafka Kafka 0.11 Kafka 2.4.1, 2.7.0, 3.3.1 [✓] [✓] Loki REST over HTTP(S) Loki 2.3.0, 2.7 [✓] [✓] Splunk HEC v8.2.9, 9.0.0 [✓] Syslog RFC3164, RFC5424 Rsyslog 8.37.0-9.el7 [✓] Table 3.1. Log Sources Feature Fluentd Vector App container logs [✓] [✓] App-specific routing [✓] [✓] App-specific routing by namespace [✓] [✓] Infra container logs [✓] [✓] Infra journal logs [✓] [✓] Kube API audit logs [✓] [✓] OpenShift API audit logs [✓] [✓] Open Virtual Network (OVN) audit logs [✓] [✓] Table 3.2. Authorization and Authentication Feature Fluentd Vector Elasticsearch certificates [✓] [✓] Elasticsearch username / password [✓] [✓] Cloudwatch keys [✓] [✓] Cloudwatch STS [✓] [✓] Kafka certificates [✓] [✓] Kafka username / password [✓] [✓] Kafka SASL [✓] [✓] Loki bearer token [✓] [✓] Table 3.3. 
Normalizations and Transformations Feature Fluentd Vector Viaq data model - app [✓] [✓] Viaq data model - infra [✓] [✓] Viaq data model - infra(journal) [✓] [✓] Viaq data model - Linux audit [✓] [✓] Viaq data model - kube-apiserver audit [✓] [✓] Viaq data model - OpenShift API audit [✓] [✓] Viaq data model - OVN [✓] [✓] Loglevel Normalization [✓] [✓] JSON parsing [✓] [✓] Structured Index [✓] [✓] Multiline error detection [✓] Multicontainer / split indices [✓] [✓] Flatten labels [✓] [✓] CLF static labels [✓] [✓] Table 3.4. Tuning Feature Fluentd Vector Fluentd readlinelimit [✓] Fluentd buffer [✓] - chunklimitsize [✓] - totallimitsize [✓] - overflowaction [✓] - flushthreadcount [✓] - flushmode [✓] - flushinterval [✓] - retrywait [✓] - retrytype [✓] - retrymaxinterval [✓] - retrytimeout [✓] Table 3.5. Visibility Feature Fluentd Vector Metrics [✓] [✓] Dashboard [✓] [✓] Alerts [✓] Table 3.6. Miscellaneous Feature Fluentd Vector Global proxy support [✓] [✓] x86 support [✓] [✓] ARM support [✓] [✓] IBM Power support [✓] [✓] IBM Z support [✓] [✓] IPv6 support [✓] [✓] Log event buffering [✓] Disconnected Cluster [✓] [✓] Additional resources Vector Documentation 3.5.2. Logging 5.6 API reference 3.5.2.1. ClusterLogForwarder ClusterLogForwarder is an API to configure forwarding logs. You configure forwarding by specifying a list of pipelines , which forward from a set of named inputs to a set of named outputs. There are built-in input names for common log categories, and you can define custom inputs to do additional filtering. There is a built-in output name for the default openshift log store, but you can define your own outputs with a URL and other connection information to forward logs to other stores or processors, inside or outside the cluster. For more details see the documentation on the API fields. Property Type Description spec object Specification of the desired behavior of ClusterLogForwarder status object Status of the ClusterLogForwarder 3.5.2.1.1. .spec 3.5.2.1.1.1. Description ClusterLogForwarderSpec defines how logs should be forwarded to remote targets. 3.5.2.1.1.1.1. Type object Property Type Description inputs array (optional) Inputs are named filters for log messages to be forwarded. outputDefaults object (optional) DEPRECATED OutputDefaults specify forwarder config explicitly for the default store. outputs array (optional) Outputs are named destinations for log messages. pipelines array Pipelines forward the messages selected by a set of inputs to a set of outputs. 3.5.2.1.2. .spec.inputs[] 3.5.2.1.2.1. Description InputSpec defines a selector of log messages. 3.5.2.1.2.1.1. Type array Property Type Description application object (optional) Application, if present, enables named set of application logs that name string Name used to refer to the input of a pipeline . 3.5.2.1.3. .spec.inputs[].application 3.5.2.1.3.1. Description Application log selector. All conditions in the selector must be satisfied (logical AND) to select logs. 3.5.2.1.3.1.1. Type object Property Type Description namespaces array (optional) Namespaces from which to collect application logs. selector object (optional) Selector for logs from pods with matching labels. 3.5.2.1.4. .spec.inputs[].application.namespaces[] 3.5.2.1.4.1. Description 3.5.2.1.4.1.1. Type array 3.5.2.1.5. .spec.inputs[].application.selector 3.5.2.1.5.1. Description A label selector is a label query over a set of resources. 3.5.2.1.5.1.1. 
Type object Property Type Description matchLabels object (optional) matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels 3.5.2.1.6. .spec.inputs[].application.selector.matchLabels 3.5.2.1.6.1. Description 3.5.2.1.6.1.1. Type object 3.5.2.1.7. .spec.outputDefaults 3.5.2.1.7.1. Description 3.5.2.1.7.1.1. Type object Property Type Description elasticsearch object (optional) Elasticsearch OutputSpec default values 3.5.2.1.8. .spec.outputDefaults.elasticsearch 3.5.2.1.8.1. Description ElasticsearchStructuredSpec is spec related to structured log changes to determine the elasticsearch index 3.5.2.1.8.1.1. Type object Property Type Description enableStructuredContainerLogs bool (optional) EnableStructuredContainerLogs enables multi-container structured logs to allow structuredTypeKey string (optional) StructuredTypeKey specifies the metadata key to be used as name of elasticsearch index structuredTypeName string (optional) StructuredTypeName specifies the name of elasticsearch schema 3.5.2.1.9. .spec.outputs[] 3.5.2.1.9.1. Description Output defines a destination for log messages. 3.5.2.1.9.1.1. Type array Property Type Description syslog object (optional) fluentdForward object (optional) elasticsearch object (optional) kafka object (optional) cloudwatch object (optional) loki object (optional) googleCloudLogging object (optional) splunk object (optional) name string Name used to refer to the output from a pipeline . secret object (optional) Secret for authentication. tls object TLS contains settings for controlling options on TLS client connections. type string Type of output plugin. url string (optional) URL to send log records to. 3.5.2.1.10. .spec.outputs[].secret 3.5.2.1.10.1. Description OutputSecretSpec is a secret reference containing name only, no namespace. 3.5.2.1.10.1.1. Type object Property Type Description name string Name of a secret in the namespace configured for log forwarder secrets. 3.5.2.1.11. .spec.outputs[].tls 3.5.2.1.11.1. Description OutputTLSSpec contains options for TLS connections that are agnostic to the output type. 3.5.2.1.11.1.1. Type object Property Type Description insecureSkipVerify bool If InsecureSkipVerify is true, then the TLS client will be configured to ignore errors with certificates. 3.5.2.1.12. .spec.pipelines[] 3.5.2.1.12.1. Description PipelinesSpec link a set of inputs to a set of outputs. 3.5.2.1.12.1.1. Type array Property Type Description detectMultilineErrors bool (optional) DetectMultilineErrors enables multiline error detection of container logs inputRefs array InputRefs lists the names ( input.name ) of inputs to this pipeline. labels object (optional) Labels applied to log records passing through this pipeline. name string (optional) Name is optional, but must be unique in the pipelines list if provided. outputRefs array OutputRefs lists the names ( output.name ) of outputs from this pipeline. parse string (optional) Parse enables parsing of log entries into structured logs 3.5.2.1.13. .spec.pipelines[].inputRefs[] 3.5.2.1.13.1. Description 3.5.2.1.13.1.1. Type array 3.5.2.1.14. .spec.pipelines[].labels 3.5.2.1.14.1. Description 3.5.2.1.14.1.1. Type object 3.5.2.1.15. .spec.pipelines[].outputRefs[] 3.5.2.1.15.1. Description 3.5.2.1.15.1.1. Type array 3.5.2.1.16. .status 3.5.2.1.16.1. Description ClusterLogForwarderStatus defines the observed state of ClusterLogForwarder 3.5.2.1.16.1.1. Type object Property Type Description conditions object Conditions of the log forwarder. 
inputs Conditions Inputs maps input name to condition of the input. outputs Conditions Outputs maps output name to condition of the output. pipelines Conditions Pipelines maps pipeline name to condition of the pipeline. 3.5.2.1.17. .status.conditions 3.5.2.1.17.1. Description 3.5.2.1.17.1.1. Type object 3.5.2.1.18. .status.inputs 3.5.2.1.18.1. Description 3.5.2.1.18.1.1. Type Conditions 3.5.2.1.19. .status.outputs 3.5.2.1.19.1. Description 3.5.2.1.19.1.1. Type Conditions 3.5.2.1.20. .status.pipelines 3.5.2.1.20.1. Description 3.5.2.1.20.1.1. Type Conditions== ClusterLogging A Red Hat OpenShift Logging instance. ClusterLogging is the Schema for the clusterloggings API Property Type Description spec object Specification of the desired behavior of ClusterLogging status object Status defines the observed state of ClusterLogging 3.5.2.1.21. .spec 3.5.2.1.21.1. Description ClusterLoggingSpec defines the desired state of ClusterLogging 3.5.2.1.21.1.1. Type object Property Type Description collection object Specification of the Collection component for the cluster curation object (DEPRECATED) (optional) Deprecated. Specification of the Curation component for the cluster forwarder object (DEPRECATED) (optional) Deprecated. Specification for Forwarder component for the cluster logStore object (optional) Specification of the Log Storage component for the cluster managementState string (optional) Indicator if the resource is 'Managed' or 'Unmanaged' by the operator visualization object (optional) Specification of the Visualization component for the cluster 3.5.2.1.22. .spec.collection 3.5.2.1.22.1. Description This is the struct that will contain information pertinent to Log and event collection 3.5.2.1.22.1.1. Type object Property Type Description resources object (optional) The resource requirements for the collector nodeSelector object (optional) Define which Nodes the Pods are scheduled on. tolerations array (optional) Define the tolerations the Pods will accept fluentd object (optional) Fluentd represents the configuration for forwarders of type fluentd. logs object (DEPRECATED) (optional) Deprecated. Specification of Log Collection for the cluster type string (optional) The type of Log Collection to configure 3.5.2.1.23. .spec.collection.fluentd 3.5.2.1.23.1. Description FluentdForwarderSpec represents the configuration for forwarders of type fluentd. 3.5.2.1.23.1.1. Type object Property Type Description buffer object inFile object 3.5.2.1.24. .spec.collection.fluentd.buffer 3.5.2.1.24.1. Description FluentdBufferSpec represents a subset of fluentd buffer parameters to tune the buffer configuration for all fluentd outputs. It supports a subset of parameters to configure buffer and queue sizing, flush operations and retry flushing. For general parameters refer to: https://docs.fluentd.org/configuration/buffer-section#buffering-parameters For flush parameters refer to: https://docs.fluentd.org/configuration/buffer-section#flushing-parameters For retry parameters refer to: https://docs.fluentd.org/configuration/buffer-section#retries-parameters 3.5.2.1.24.1.1. Type object Property Type Description chunkLimitSize string (optional) ChunkLimitSize represents the maximum size of each chunk. Events will be flushInterval string (optional) FlushInterval represents the time duration to wait between two consecutive flush flushMode string (optional) FlushMode represents the mode of the flushing thread to write chunks. 
The mode flushThreadCount int (optional) FlushThreadCount reprents the number of threads used by the fluentd buffer overflowAction string (optional) OverflowAction represents the action for the fluentd buffer plugin to retryMaxInterval string (optional) RetryMaxInterval represents the maximum time interval for exponential backoff retryTimeout string (optional) RetryTimeout represents the maximum time interval to attempt retries before giving up retryType string (optional) RetryType represents the type of retrying flush operations. Flush operations can retryWait string (optional) RetryWait represents the time duration between two consecutive retries to flush totalLimitSize string (optional) TotalLimitSize represents the threshold of node space allowed per fluentd 3.5.2.1.25. .spec.collection.fluentd.inFile 3.5.2.1.25.1. Description FluentdInFileSpec represents a subset of fluentd in-tail plugin parameters to tune the configuration for all fluentd in-tail inputs. For general parameters refer to: https://docs.fluentd.org/input/tail#parameters 3.5.2.1.25.1.1. Type object Property Type Description readLinesLimit int (optional) ReadLinesLimit represents the number of lines to read with each I/O operation 3.5.2.1.26. .spec.collection.logs 3.5.2.1.26.1. Description 3.5.2.1.26.1.1. Type object Property Type Description fluentd object Specification of the Fluentd Log Collection component type string The type of Log Collection to configure 3.5.2.1.27. .spec.collection.logs.fluentd 3.5.2.1.27.1. Description CollectorSpec is spec to define scheduling and resources for a collector 3.5.2.1.27.1.1. Type object Property Type Description nodeSelector object (optional) Define which Nodes the Pods are scheduled on. resources object (optional) The resource requirements for the collector tolerations array (optional) Define the tolerations the Pods will accept 3.5.2.1.28. .spec.collection.logs.fluentd.nodeSelector 3.5.2.1.28.1. Description 3.5.2.1.28.1.1. Type object 3.5.2.1.29. .spec.collection.logs.fluentd.resources 3.5.2.1.29.1. Description 3.5.2.1.29.1.1. Type object Property Type Description limits object (optional) Limits describes the maximum amount of compute resources allowed. requests object (optional) Requests describes the minimum amount of compute resources required. 3.5.2.1.30. .spec.collection.logs.fluentd.resources.limits 3.5.2.1.30.1. Description 3.5.2.1.30.1.1. Type object 3.5.2.1.31. .spec.collection.logs.fluentd.resources.requests 3.5.2.1.31.1. Description 3.5.2.1.31.1.1. Type object 3.5.2.1.32. .spec.collection.logs.fluentd.tolerations[] 3.5.2.1.32.1. Description 3.5.2.1.32.1.1. Type array Property Type Description effect string (optional) Effect indicates the taint effect to match. Empty means match all taint effects. key string (optional) Key is the taint key that the toleration applies to. Empty means match all taint keys. operator string (optional) Operator represents a key's relationship to the value. tolerationSeconds int (optional) TolerationSeconds represents the period of time the toleration (which must be value string (optional) Value is the taint value the toleration matches to. 3.5.2.1.33. .spec.collection.logs.fluentd.tolerations[].tolerationSeconds 3.5.2.1.33.1. Description 3.5.2.1.33.1.1. Type int 3.5.2.1.34. .spec.curation 3.5.2.1.34.1. Description This is the struct that will contain information pertinent to Log curation (Curator) 3.5.2.1.34.1.1. 
Type object Property Type Description curator object The specification of curation to configure type string The kind of curation to configure 3.5.2.1.35. .spec.curation.curator 3.5.2.1.35.1. Description 3.5.2.1.35.1.1. Type object Property Type Description nodeSelector object Define which Nodes the Pods are scheduled on. resources object (optional) The resource requirements for Curator schedule string The cron schedule that the Curator job is run. Defaults to "30 3 * * *" tolerations array 3.5.2.1.36. .spec.curation.curator.nodeSelector 3.5.2.1.36.1. Description 3.5.2.1.36.1.1. Type object 3.5.2.1.37. .spec.curation.curator.resources 3.5.2.1.37.1. Description 3.5.2.1.37.1.1. Type object Property Type Description limits object (optional) Limits describes the maximum amount of compute resources allowed. requests object (optional) Requests describes the minimum amount of compute resources required. 3.5.2.1.38. .spec.curation.curator.resources.limits 3.5.2.1.38.1. Description 3.5.2.1.38.1.1. Type object 3.5.2.1.39. .spec.curation.curator.resources.requests 3.5.2.1.39.1. Description 3.5.2.1.39.1.1. Type object 3.5.2.1.40. .spec.curation.curator.tolerations[] 3.5.2.1.40.1. Description 3.5.2.1.40.1.1. Type array Property Type Description effect string (optional) Effect indicates the taint effect to match. Empty means match all taint effects. key string (optional) Key is the taint key that the toleration applies to. Empty means match all taint keys. operator string (optional) Operator represents a key's relationship to the value. tolerationSeconds int (optional) TolerationSeconds represents the period of time the toleration (which must be value string (optional) Value is the taint value the toleration matches to. 3.5.2.1.41. .spec.curation.curator.tolerations[].tolerationSeconds 3.5.2.1.41.1. Description 3.5.2.1.41.1.1. Type int 3.5.2.1.42. .spec.forwarder 3.5.2.1.42.1. Description ForwarderSpec contains global tuning parameters for specific forwarder implementations. This field is not required for general use, it allows performance tuning by users familiar with the underlying forwarder technology. Currently supported: fluentd . 3.5.2.1.42.1.1. Type object Property Type Description fluentd object 3.5.2.1.43. .spec.forwarder.fluentd 3.5.2.1.43.1. Description FluentdForwarderSpec represents the configuration for forwarders of type fluentd. 3.5.2.1.43.1.1. Type object Property Type Description buffer object inFile object 3.5.2.1.44. .spec.forwarder.fluentd.buffer 3.5.2.1.44.1. Description FluentdBufferSpec represents a subset of fluentd buffer parameters to tune the buffer configuration for all fluentd outputs. It supports a subset of parameters to configure buffer and queue sizing, flush operations and retry flushing. For general parameters refer to: https://docs.fluentd.org/configuration/buffer-section#buffering-parameters For flush parameters refer to: https://docs.fluentd.org/configuration/buffer-section#flushing-parameters For retry parameters refer to: https://docs.fluentd.org/configuration/buffer-section#retries-parameters 3.5.2.1.44.1.1. Type object Property Type Description chunkLimitSize string (optional) ChunkLimitSize represents the maximum size of each chunk. Events will be flushInterval string (optional) FlushInterval represents the time duration to wait between two consecutive flush flushMode string (optional) FlushMode represents the mode of the flushing thread to write chunks. 
The mode flushThreadCount int (optional) FlushThreadCount reprents the number of threads used by the fluentd buffer overflowAction string (optional) OverflowAction represents the action for the fluentd buffer plugin to retryMaxInterval string (optional) RetryMaxInterval represents the maximum time interval for exponential backoff retryTimeout string (optional) RetryTimeout represents the maximum time interval to attempt retries before giving up retryType string (optional) RetryType represents the type of retrying flush operations. Flush operations can retryWait string (optional) RetryWait represents the time duration between two consecutive retries to flush totalLimitSize string (optional) TotalLimitSize represents the threshold of node space allowed per fluentd 3.5.2.1.45. .spec.forwarder.fluentd.inFile 3.5.2.1.45.1. Description FluentdInFileSpec represents a subset of fluentd in-tail plugin parameters to tune the configuration for all fluentd in-tail inputs. For general parameters refer to: https://docs.fluentd.org/input/tail#parameters 3.5.2.1.45.1.1. Type object Property Type Description readLinesLimit int (optional) ReadLinesLimit represents the number of lines to read with each I/O operation 3.5.2.1.46. .spec.logStore 3.5.2.1.46.1. Description The LogStoreSpec contains information about how logs are stored. 3.5.2.1.46.1.1. Type object Property Type Description elasticsearch object Specification of the Elasticsearch Log Store component lokistack object LokiStack contains information about which LokiStack to use for log storage if Type is set to LogStoreTypeLokiStack. retentionPolicy object (optional) Retention policy defines the maximum age for an index after which it should be deleted type string The Type of Log Storage to configure. The operator currently supports either using ElasticSearch 3.5.2.1.47. .spec.logStore.elasticsearch 3.5.2.1.47.1. Description 3.5.2.1.47.1.1. Type object Property Type Description nodeCount int Number of nodes to deploy for Elasticsearch nodeSelector object Define which Nodes the Pods are scheduled on. proxy object Specification of the Elasticsearch Proxy component redundancyPolicy string (optional) resources object (optional) The resource requirements for Elasticsearch storage object (optional) The storage specification for Elasticsearch data nodes tolerations array 3.5.2.1.48. .spec.logStore.elasticsearch.nodeSelector 3.5.2.1.48.1. Description 3.5.2.1.48.1.1. Type object 3.5.2.1.49. .spec.logStore.elasticsearch.proxy 3.5.2.1.49.1. Description 3.5.2.1.49.1.1. Type object Property Type Description resources object 3.5.2.1.50. .spec.logStore.elasticsearch.proxy.resources 3.5.2.1.50.1. Description 3.5.2.1.50.1.1. Type object Property Type Description limits object (optional) Limits describes the maximum amount of compute resources allowed. requests object (optional) Requests describes the minimum amount of compute resources required. 3.5.2.1.51. .spec.logStore.elasticsearch.proxy.resources.limits 3.5.2.1.51.1. Description 3.5.2.1.51.1.1. Type object 3.5.2.1.52. .spec.logStore.elasticsearch.proxy.resources.requests 3.5.2.1.52.1. Description 3.5.2.1.52.1.1. Type object 3.5.2.1.53. .spec.logStore.elasticsearch.resources 3.5.2.1.53.1. Description 3.5.2.1.53.1.1. Type object Property Type Description limits object (optional) Limits describes the maximum amount of compute resources allowed. requests object (optional) Requests describes the minimum amount of compute resources required. 3.5.2.1.54. .spec.logStore.elasticsearch.resources.limits 3.5.2.1.54.1. 
Description 3.5.2.1.54.1.1. Type object 3.5.2.1.55. .spec.logStore.elasticsearch.resources.requests 3.5.2.1.55.1. Description 3.5.2.1.55.1.1. Type object 3.5.2.1.56. .spec.logStore.elasticsearch.storage 3.5.2.1.56.1. Description 3.5.2.1.56.1.1. Type object Property Type Description size object The max storage capacity for the node to provision. storageClassName string (optional) The name of the storage class to use with creating the node's PVC. 3.5.2.1.57. .spec.logStore.elasticsearch.storage.size 3.5.2.1.57.1. Description 3.5.2.1.57.1.1. Type object Property Type Description Format string Change Format at will. See the comment for Canonicalize for d object d is the quantity in inf.Dec form if d.Dec != nil i int i is the quantity in int64 scaled form, if d.Dec == nil s string s is the generated value of this quantity to avoid recalculation 3.5.2.1.58. .spec.logStore.elasticsearch.storage.size.d 3.5.2.1.58.1. Description 3.5.2.1.58.1.1. Type object Property Type Description Dec object 3.5.2.1.59. .spec.logStore.elasticsearch.storage.size.d.Dec 3.5.2.1.59.1. Description 3.5.2.1.59.1.1. Type object Property Type Description scale int unscaled object 3.5.2.1.60. .spec.logStore.elasticsearch.storage.size.d.Dec.unscaled 3.5.2.1.60.1. Description 3.5.2.1.60.1.1. Type object Property Type Description abs Word sign neg bool 3.5.2.1.61. .spec.logStore.elasticsearch.storage.size.d.Dec.unscaled.abs 3.5.2.1.61.1. Description 3.5.2.1.61.1.1. Type Word 3.5.2.1.62. .spec.logStore.elasticsearch.storage.size.i 3.5.2.1.62.1. Description 3.5.2.1.62.1.1. Type int Property Type Description scale int value int 3.5.2.1.63. .spec.logStore.elasticsearch.tolerations[] 3.5.2.1.63.1. Description 3.5.2.1.63.1.1. Type array Property Type Description effect string (optional) Effect indicates the taint effect to match. Empty means match all taint effects. key string (optional) Key is the taint key that the toleration applies to. Empty means match all taint keys. operator string (optional) Operator represents a key's relationship to the value. tolerationSeconds int (optional) TolerationSeconds represents the period of time the toleration (which must be value string (optional) Value is the taint value the toleration matches to. 3.5.2.1.64. .spec.logStore.elasticsearch.tolerations[].tolerationSeconds 3.5.2.1.64.1. Description 3.5.2.1.64.1.1. Type int 3.5.2.1.65. .spec.logStore.lokistack 3.5.2.1.65.1. Description LokiStackStoreSpec is used to set up cluster-logging to use a LokiStack as logging storage. It points to an existing LokiStack in the same namespace. 3.5.2.1.65.1.1. Type object Property Type Description name string Name of the LokiStack resource. 3.5.2.1.66. .spec.logStore.retentionPolicy 3.5.2.1.66.1. Description 3.5.2.1.66.1.1. Type object Property Type Description application object audit object infra object 3.5.2.1.67. .spec.logStore.retentionPolicy.application 3.5.2.1.67.1. Description 3.5.2.1.67.1.1. Type object Property Type Description diskThresholdPercent int (optional) The threshold percentage of ES disk usage that when reached, old indices should be deleted (e.g. 75) maxAge string (optional) namespaceSpec array (optional) The per namespace specification to delete documents older than a given minimum age pruneNamespacesInterval string (optional) How often to run a new prune-namespaces job 3.5.2.1.68. .spec.logStore.retentionPolicy.application.namespaceSpec[] 3.5.2.1.68.1. Description 3.5.2.1.68.1.1. 
Type array Property Type Description minAge string (optional) Delete the records matching the namespaces which are older than this MinAge (e.g. 1d) namespace string Target Namespace to delete logs older than MinAge (defaults to 7d) 3.5.2.1.69. .spec.logStore.retentionPolicy.audit 3.5.2.1.69.1. Description 3.5.2.1.69.1.1. Type object Property Type Description diskThresholdPercent int (optional) The threshold percentage of ES disk usage that when reached, old indices should be deleted (e.g. 75) maxAge string (optional) namespaceSpec array (optional) The per namespace specification to delete documents older than a given minimum age pruneNamespacesInterval string (optional) How often to run a new prune-namespaces job 3.5.2.1.70. .spec.logStore.retentionPolicy.audit.namespaceSpec[] 3.5.2.1.70.1. Description 3.5.2.1.70.1.1. Type array Property Type Description minAge string (optional) Delete the records matching the namespaces which are older than this MinAge (e.g. 1d) namespace string Target Namespace to delete logs older than MinAge (defaults to 7d) 3.5.2.1.71. .spec.logStore.retentionPolicy.infra 3.5.2.1.71.1. Description 3.5.2.1.71.1.1. Type object Property Type Description diskThresholdPercent int (optional) The threshold percentage of ES disk usage that when reached, old indices should be deleted (e.g. 75) maxAge string (optional) namespaceSpec array (optional) The per namespace specification to delete documents older than a given minimum age pruneNamespacesInterval string (optional) How often to run a new prune-namespaces job 3.5.2.1.72. .spec.logStore.retentionPolicy.infra.namespaceSpec[] 3.5.2.1.72.1. Description 3.5.2.1.72.1.1. Type array Property Type Description minAge string (optional) Delete the records matching the namespaces which are older than this MinAge (e.g. 1d) namespace string Target Namespace to delete logs older than MinAge (defaults to 7d) 3.5.2.1.73. .spec.visualization 3.5.2.1.73.1. Description This is the struct that will contain information pertinent to Log visualization (Kibana) 3.5.2.1.73.1.1. Type object Property Type Description kibana object Specification of the Kibana Visualization component type string The type of Visualization to configure 3.5.2.1.74. .spec.visualization.kibana 3.5.2.1.74.1. Description 3.5.2.1.74.1.1. Type object Property Type Description nodeSelector object Define which Nodes the Pods are scheduled on. proxy object Specification of the Kibana Proxy component replicas int Number of instances to deploy for a Kibana deployment resources object (optional) The resource requirements for Kibana tolerations array 3.5.2.1.75. .spec.visualization.kibana.nodeSelector 3.5.2.1.75.1. Description 3.5.2.1.75.1.1. Type object 3.5.2.1.76. .spec.visualization.kibana.proxy 3.5.2.1.76.1. Description 3.5.2.1.76.1.1. Type object Property Type Description resources object 3.5.2.1.77. .spec.visualization.kibana.proxy.resources 3.5.2.1.77.1. Description 3.5.2.1.77.1.1. Type object Property Type Description limits object (optional) Limits describes the maximum amount of compute resources allowed. requests object (optional) Requests describes the minimum amount of compute resources required. 3.5.2.1.78. .spec.visualization.kibana.proxy.resources.limits 3.5.2.1.78.1. Description 3.5.2.1.78.1.1. Type object 3.5.2.1.79. .spec.visualization.kibana.proxy.resources.requests 3.5.2.1.79.1. Description 3.5.2.1.79.1.1. Type object 3.5.2.1.80. .spec.visualization.kibana.replicas 3.5.2.1.80.1. Description 3.5.2.1.80.1.1. Type int 3.5.2.1.81. 
.spec.visualization.kibana.resources 3.5.2.1.81.1. Description 3.5.2.1.81.1.1. Type object Property Type Description limits object (optional) Limits describes the maximum amount of compute resources allowed. requests object (optional) Requests describes the minimum amount of compute resources required. 3.5.2.1.82. .spec.visualization.kibana.resources.limits 3.5.2.1.82.1. Description 3.5.2.1.82.1.1. Type object 3.5.2.1.83. .spec.visualization.kibana.resources.requests 3.5.2.1.83.1. Description 3.5.2.1.83.1.1. Type object 3.5.2.1.84. .spec.visualization.kibana.tolerations[] 3.5.2.1.84.1. Description 3.5.2.1.84.1.1. Type array Property Type Description effect string (optional) Effect indicates the taint effect to match. Empty means match all taint effects. key string (optional) Key is the taint key that the toleration applies to. Empty means match all taint keys. operator string (optional) Operator represents a key's relationship to the value. tolerationSeconds int (optional) TolerationSeconds represents the period of time the toleration (which must be value string (optional) Value is the taint value the toleration matches to. 3.5.2.1.85. .spec.visualization.kibana.tolerations[].tolerationSeconds 3.5.2.1.85.1. Description 3.5.2.1.85.1.1. Type int 3.5.2.1.86. .status 3.5.2.1.86.1. Description ClusterLoggingStatus defines the observed state of ClusterLogging 3.5.2.1.86.1.1. Type object Property Type Description collection object (optional) conditions object (optional) curation object (optional) logStore object (optional) visualization object (optional) 3.5.2.1.87. .status.collection 3.5.2.1.87.1. Description 3.5.2.1.87.1.1. Type object Property Type Description logs object (optional) 3.5.2.1.88. .status.collection.logs 3.5.2.1.88.1. Description 3.5.2.1.88.1.1. Type object Property Type Description fluentdStatus object (optional) 3.5.2.1.89. .status.collection.logs.fluentdStatus 3.5.2.1.89.1. Description 3.5.2.1.89.1.1. Type object Property Type Description clusterCondition object (optional) daemonSet string (optional) nodes object (optional) pods string (optional) 3.5.2.1.90. .status.collection.logs.fluentdStatus.clusterCondition 3.5.2.1.90.1. Description operator-sdk generate crds does not allow map-of-slice, must use a named type. 3.5.2.1.90.1.1. Type object 3.5.2.1.91. .status.collection.logs.fluentdStatus.nodes 3.5.2.1.91.1. Description 3.5.2.1.91.1.1. Type object 3.5.2.1.92. .status.conditions 3.5.2.1.92.1. Description 3.5.2.1.92.1.1. Type object 3.5.2.1.93. .status.curation 3.5.2.1.93.1. Description 3.5.2.1.93.1.1. Type object Property Type Description curatorStatus array (optional) 3.5.2.1.94. .status.curation.curatorStatus[] 3.5.2.1.94.1. Description 3.5.2.1.94.1.1. Type array Property Type Description clusterCondition object (optional) cronJobs string (optional) schedules string (optional) suspended bool (optional) 3.5.2.1.95. .status.curation.curatorStatus[].clusterCondition 3.5.2.1.95.1. Description operator-sdk generate crds does not allow map-of-slice, must use a named type. 3.5.2.1.95.1.1. Type object 3.5.2.1.96. .status.logStore 3.5.2.1.96.1. Description 3.5.2.1.96.1.1. Type object Property Type Description elasticsearchStatus array (optional) 3.5.2.1.97. .status.logStore.elasticsearchStatus[] 3.5.2.1.97.1. Description 3.5.2.1.97.1.1. 
Type array Property Type Description cluster object (optional) clusterConditions object (optional) clusterHealth string (optional) clusterName string (optional) deployments array (optional) nodeConditions object (optional) nodeCount int (optional) pods object (optional) replicaSets array (optional) shardAllocationEnabled string (optional) statefulSets array (optional) 3.5.2.1.98. .status.logStore.elasticsearchStatus[].cluster 3.5.2.1.98.1. Description 3.5.2.1.98.1.1. Type object Property Type Description activePrimaryShards int The number of Active Primary Shards for the Elasticsearch Cluster activeShards int The number of Active Shards for the Elasticsearch Cluster initializingShards int The number of Initializing Shards for the Elasticsearch Cluster numDataNodes int The number of Data Nodes for the Elasticsearch Cluster numNodes int The number of Nodes for the Elasticsearch Cluster pendingTasks int relocatingShards int The number of Relocating Shards for the Elasticsearch Cluster status string The current Status of the Elasticsearch Cluster unassignedShards int The number of Unassigned Shards for the Elasticsearch Cluster 3.5.2.1.99. .status.logStore.elasticsearchStatus[].clusterConditions 3.5.2.1.99.1. Description 3.5.2.1.99.1.1. Type object 3.5.2.1.100. .status.logStore.elasticsearchStatus[].deployments[] 3.5.2.1.100.1. Description 3.5.2.1.100.1.1. Type array 3.5.2.1.101. .status.logStore.elasticsearchStatus[].nodeConditions 3.5.2.1.101.1. Description 3.5.2.1.101.1.1. Type object 3.5.2.1.102. .status.logStore.elasticsearchStatus[].pods 3.5.2.1.102.1. Description 3.5.2.1.102.1.1. Type object 3.5.2.1.103. .status.logStore.elasticsearchStatus[].replicaSets[] 3.5.2.1.103.1. Description 3.5.2.1.103.1.1. Type array 3.5.2.1.104. .status.logStore.elasticsearchStatus[].statefulSets[] 3.5.2.1.104.1. Description 3.5.2.1.104.1.1. Type array 3.5.2.1.105. .status.visualization 3.5.2.1.105.1. Description 3.5.2.1.105.1.1. Type object Property Type Description kibanaStatus array (optional) 3.5.2.1.106. .status.visualization.kibanaStatus[] 3.5.2.1.106.1. Description 3.5.2.1.106.1.1. Type array Property Type Description clusterCondition object (optional) deployment string (optional) pods string (optional) The status for each of the Kibana pods for the Visualization component replicaSets array (optional) replicas int (optional) 3.5.2.1.107. .status.visualization.kibanaStatus[].clusterCondition 3.5.2.1.107.1. Description 3.5.2.1.107.1.1. Type object 3.5.2.1.108. .status.visualization.kibanaStatus[].replicaSets[] 3.5.2.1.108.1. Description 3.5.2.1.108.1.1. Type array
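As a companion to the ClusterLogForwarder field reference above, the following is a minimal sketch of a ClusterLogForwarder custom resource that wires one named input to one named output through a single pipeline. The namespace, Kafka URL, and secret name are placeholder values chosen for illustration; adjust them to match your environment and the output types you actually use.
apiVersion: logging.openshift.io/v1
kind: ClusterLogForwarder
metadata:
  name: instance
  namespace: openshift-logging
spec:
  inputs:
    - name: my-app-logs                  # custom input (placeholder name)
      application:
        namespaces:
          - my-project                   # collect application logs from this namespace only
  outputs:
    - name: my-kafka                     # custom output (placeholder name)
      type: kafka
      url: tls://kafka.example.com:9093/app-topic    # placeholder broker URL and topic
      secret:
        name: kafka-secret               # placeholder secret holding TLS credentials
  pipelines:
    - name: app-to-kafka
      inputRefs:
        - my-app-logs
      outputRefs:
        - my-kafka
      labels:
        environment: dev                 # optional labels applied to forwarded records
      parse: json                        # optional: parse entries into structured logs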
[ "tls.verify_certificate = false tls.verify_hostname = false", "oc get clusterversion/version -o jsonpath='{.spec.clusterID}{\"\\n\"}'", "apiVersion: v1 kind: Secret metadata: name: logging-loki-s3 namespace: openshift-logging stringData: access_key_id: AKIAIOSFODNN7EXAMPLE access_key_secret: wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY bucketnames: s3-bucket-name endpoint: https://s3.eu-central-1.amazonaws.com region: eu-central-1", "apiVersion: loki.grafana.com/v1 kind: LokiStack metadata: name: logging-loki 1 namespace: openshift-logging spec: size: 1x.small 2 storage: schemas: - version: v12 effectiveDate: '2022-06-01' secret: name: logging-loki-s3 3 type: s3 4 storageClassName: <storage_class_name> 5 tenants: mode: openshift-logging", "apply -f logging-loki.yaml", "apiVersion: logging.openshift.io/v1 kind: ClusterLogging metadata: name: instance namespace: openshift-logging spec: managementState: Managed logStore: type: lokistack lokistack: name: logging-loki collection: type: vector", "apply -f cr-lokistack.yaml", "oc get packagemanifests -n openshift-marketplace", "NAME CATALOG AGE 3scale-operator Red Hat Operators 91m advanced-cluster-management Red Hat Operators 91m amq7-cert-manager Red Hat Operators 91m couchbase-enterprise-certified Certified Operators 91m crunchy-postgres-operator Certified Operators 91m mongodb-enterprise Certified Operators 91m etcd Community Operators 91m jaeger Community Operators 91m kubefed Community Operators 91m", "oc describe packagemanifests <operator_name> -n openshift-marketplace", "apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: <operatorgroup_name> namespace: <namespace> spec: targetNamespaces: - <namespace>", "oc apply -f operatorgroup.yaml", "apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: <subscription_name> namespace: openshift-operators 1 spec: channel: <channel_name> 2 name: <operator_name> 3 source: redhat-operators 4 sourceNamespace: openshift-marketplace 5 config: env: 6 - name: ARGS value: \"-v=10\" envFrom: 7 - secretRef: name: license-secret volumes: 8 - name: <volume_name> configMap: name: <configmap_name> volumeMounts: 9 - mountPath: <directory_name> name: <volume_name> tolerations: 10 - operator: \"Exists\" resources: 11 requests: memory: \"64Mi\" cpu: \"250m\" limits: memory: \"128Mi\" cpu: \"500m\" nodeSelector: 12 foo: bar", "oc apply -f sub.yaml", "oc get subscription jaeger -n openshift-operators -o yaml | grep currentCSV", "currentCSV: jaeger-operator.v1.8.2", "oc delete subscription jaeger -n openshift-operators", "subscription.operators.coreos.com \"jaeger\" deleted", "oc delete clusterserviceversion jaeger-operator.v1.8.2 -n openshift-operators", "clusterserviceversion.operators.coreos.com \"jaeger-operator.v1.8.2\" deleted" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.10/html/logging/logging-5-6
Chapter 6. Monitoring and tuning Data Grid queries
Chapter 6. Monitoring and tuning Data Grid queries Data Grid exposes statistics for queries and provides attributes that you can adjust to improve query performance. 6.1. Getting query statistics Collect statistics to gather information about performance of your indexes and queries, including information such as the types of indexes, average time for queries to complete and the number of possible failures on indexing operations. Procedure Do one of the following: Invoke the getSearchStatistics() or getClusteredSearchStatistics() methods for embedded caches. Use GET requests to obtain statistics for remote caches from the REST API. Embedded caches // Statistics for the local cluster member SearchStatistics statistics = Search.getSearchStatistics(cache); // Consolidated statistics for the whole cluster CompletionStage<SearchStatisticsSnapshot> statistics = Search.getClusteredSearchStatistics(cache) Remote caches 6.2. Tuning query performance Use the following guidelines to help you improve the performance of indexing operations and queries. Checking index usage statistics Queries against partially indexed caches return slower results. For instance, if some fields in a schema are not annotated then the resulting index does not include those fields. Start tuning query performance by checking the time it takes for each type of query to run. If your queries seem to be slow, you should make sure that queries are using the indexes for caches and that all entities and field mappings are indexed. Adjusting the commit interval for indexes Indexing can degrade write throughput for Data Grid clusters. The commit-interval attribute defines the interval, in milliseconds, between which index changes that are buffered in memory are flushed to the index storage and a commit is performed. This operation is costly so you should avoid configuring an interval that is too small. The default is 1000 ms (1 second). Adjusting the refresh interval for queries The refresh-interval attribute defines the interval, in milliseconds, between which the index reader is refreshed. The default value is 0 , which returns data in queries as soon as it is written to a cache. A value greater than 0 results in some stale query results but substantially increases throughput, especially in write-heavy scenarios. If you do not need data returned in queries as soon as it is written, you should adjust the refresh interval to improve query performance.
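To illustrate the two attributes discussed above, the following cache configuration sketch sets both intervals explicitly. The cache name, the indexed entity, and the element names ( index-writer and index-reader ) reflect the declarative indexing schema used by recent Data Grid releases and should be checked against the configuration schema for your server version.
<distributed-cache name="books">
  <indexing storage="filesystem">
    <indexed-entities>
      <indexed-entity>book_sample.Book</indexed-entity>
    </indexed-entities>
    <!-- Flush buffered index changes every 2 seconds instead of the 1 second default -->
    <index-writer commit-interval="2000"/>
    <!-- Accept up to 1 second of staleness in query results to increase write throughput -->
    <index-reader refresh-interval="1000"/>
  </indexing>
</distributed-cache>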
[ "// Statistics for the local cluster member SearchStatistics statistics = Search.getSearchStatistics(cache); // Consolidated statistics for the whole cluster CompletionStage<SearchStatisticsSnapshot> statistics = Search.getClusteredSearchStatistics(cache)", "GET /rest/v2/caches/{cacheName}/search/stats" ]
https://docs.redhat.com/en/documentation/red_hat_data_grid/8.5/html/querying_data_grid_caches/query-monitoring-tuning
Chapter 5. Configuring secure connections
Chapter 5. Configuring secure connections Securing the connection between a Kafka cluster and a client application helps to ensure the confidentiality, integrity, and authenticity of the communication between the cluster and the client. To achieve a secure connection, you can introduce configuration related to authentication, encryption, and authorization: Authentication Use an authentication mechanism to verify the identity of a client application. Encryption Enable encryption of data in transit between the client and broker using SSL/TLS encryption. Authorization Control client access and operations allowed on Kafka brokers based on the authenticated identity of a client application. Authorization cannot be used without authentication. If authentication is not enabled, it's not possible to determine the identity of clients, and therefore, it's not possible to enforce authorization rules. This means that even if authorization rules are defined, they will not be enforced without authentication. In Streams for Apache Kafka, listeners are used to configure the network connections between the Kafka brokers and the clients. Listener configuration options determine how the brokers listen for incoming client connections and how secure access is managed. The exact configuration required depends on the authentication, encryption, and authorization mechanisms you have chosen. You configure your Kafka brokers and client applications to enable security features. The general outline to secure a client connection to a Kafka cluster is as follows: Install the Streams for Apache Kafka components, including the Kafka cluster. For TLS, generate TLS certificates for each broker and client application. Configure listeners in the broker configuration for secure connection. Configure the client application for secure connection. Configure your client application according to the mechanisms you are using to establish a secure and authenticated connection with the Kafka brokers. The authentication, encryption, and authorization used by a Kafka broker must match those used by a connecting client application. The client application and broker need to agree on the security protocols and configurations for secure communication to take place. For example, a Kafka client and the Kafka broker must use the same TLS versions and cipher suites. Note Mismatched security configurations between the client and broker can result in connection failures or potential security vulnerabilities. It's important to carefully configure and test both the broker and client application to ensure they are properly secured and able to communicate securely. 5.1. Setting up brokers for secure access Before you can configure client applications for secure access, you must first set up the brokers in your Kafka cluster to support the security mechanisms you want to use. To enable secure connections, you create listeners with the appropriate configuration for the security mechanisms. 5.1.1. Establishing a secure connection to a Kafka cluster running on RHEL When using Streams for Apache Kafka on RHEL, the general outline to secure a client connection to a Kafka cluster is as follows: Install the Streams for Apache Kafka components, including the Kafka cluster, on the RHEL server. For TLS, generate TLS certificates for all brokers in the Kafka cluster. Configure listeners in the broker configuration properties file. Configure authentication for your Kafka cluster listeners, such as TLS or SASL SCRAM-SHA-512. 
Configure authorization for all enabled listeners on the Kafka cluster, such as simple authorization. For TLS, generate TLS certificates for each client application. Create a config.properties file to specify the connection details and authentication credentials used by the client application. Start the Kafka client application and connect to the Kafka cluster. Use the properties defined in the config.properties file to connect to the Kafka broker. Verify that the client can successfully connect to the Kafka cluster and consume and produce messages securely. Additional resources For more information on setting up your brokers, see the following guides: Using Streams for Apache Kafka on RHEL in KRaft mode Using Streams for Apache Kafka on RHEL with ZooKeeper . 5.1.2. Configuring secure listeners for a Kafka cluster on RHEL Use a configuration properties file to configure listeners in Kafka. To configure a secure connection for Kafka brokers, you set the relevant properties for TLS, SASL, and other security-related configurations in this file. Here is an example configuration of a TLS listener specified in a server.properties configuration file for a Kafka broker, with a keystore and truststore in PKCS#12 format: Example listener configuration in server.properties listeners = listener_1://0.0.0.0:9093, listener_2://0.0.0.0:9094 listener.security.protocol.map = listener_1:SSL, listener_2:PLAINTEXT ssl.keystore.type = PKCS12 ssl.keystore.location = /path/to/keystore.p12 ssl.keystore.password = <password> ssl.truststore.type = PKCS12 ssl.truststore.location = /path/to/truststore.p12 ssl.truststore.password = <password> ssl.client.auth = required authorizer.class.name = kafka.security.auth.SimpleAclAuthorizer. super.users = User:superuser The listeners property specifies each listener name, and the IP address and port that the broker listens on. The protocol map tells the listener_1 listener to use the SSL protocol for clients that use TLS encryption. listener_2 provides PLAINTEXT connections for clients that do not use TLS encryption. The keystore contains the broker's private key and certificate. The truststore contains the trusted certificates used to verify the identity of the client application. The ssl.client.auth property enforces client authentication. The Kafka cluster uses simple authorization. The authorizer is set to SimpleAclAuthorizer . A single super user is defined for unconstrained access on all listeners. Streams for Apache Kafka supports the Kafka SimpleAclAuthorizer and custom authorizer plugins. If we prefix the configuration properties with listener.name.<name_of_listener> , the configuration is specific to that listener. This is just a sample configuration. Some configuration options are specific to the type of listener. If you are using OAuth 2.0 or Open Policy Agent (OPA), you must also configure access to the authorization server or OPA server in a specific listener. You can create listeners based on your specific requirements and environment. For more information on listener configuration, see the Apache Kafka documentation . Using ACLs to fine-tune access You can use Access Control Lists (ACLs) to fine-tune access to the Kafka cluster. To create and manage Access Control Lists (ACLs), use the kafka-acls.sh command line tool. The ACLs apply access rules to client applications. In the following example, the first ACL grants read and describe permissions for a specific topic named my-topic . 
The resource.patternType is set to literal , which means that the resource name must match exactly. The second ACL grants read permissions for a specific consumer group named my-group . The resource.patternType is set to prefix , which means that the resource name must match the prefix. Example ACL configuration bin/kafka-acls.sh --authorizer-properties zookeeper.connect=localhost:2181 --add \ --allow-principal User:my-user --operation Read --operation Describe --topic my-topic --resource-pattern-type literal \ --allow-principal User:my-user --operation Read --group my-group --resource-pattern-type prefixed 5.1.3. Establishing a secure connection to a Kafka cluster running on OpenShift When using Streams for Apache Kafka on OpenShift, the general outline to secure a client connection to a Kafka cluster is as follows: Use the Cluster Operator to deploy a Kafka cluster in your OpenShift environment. Use the Kafka custom resource to configure and install the cluster and create listeners. Configure authentication for the listeners, such as TLS or SASL SCRAM-SHA-512. The Cluster Operator creates a secret that contains a cluster CA certificate to verify the identity of the Kafka brokers. Configure authorization for all enabled listeners, such as simple authorization. Use the User Operator to create a Kafka user representing your client. Use the KafkaUser custom resource to configure and create the user. Configure authentication for your Kafka user (client) that matches the authentication mechanism of a listener. The User Operator creates a secret that contains a client certificate and private key for the client to use for authentication with the Kafka cluster. Configure authorization for your Kafka user (client) that matches the authorization mechanism of the listener. Authorization rules allow specific operations on the Kafka cluster. Create a config.properties file to specify the connection details and authentication credentials required by the client application to connect to the cluster. Start the Kafka client application and connect to the Kafka cluster. Use the properties defined in the config.properties file to connect to the Kafka broker. Verify that the client can successfully connect to the Kafka cluster and consume and produce messages securely. Additional resources For more information on setting up your brokers, see Configuring Streams for Apache Kafka on OpenShift . 5.1.4. Configuring secure listeners for a Kafka cluster on OpenShift When you deploy a Kafka custom resource with Streams for Apache Kafka, you add listener configuration to the Kafka spec . Use the listener configuration to secure connections in Kafka. To configure a secure connection for Kafka brokers, set the relevant properties for TLS, SASL, and other security-related configurations at the listener level. External listeners provide client access to a Kafka cluster from outside the OpenShift cluster. Streams for Apache Kafka creates listener services and bootstrap addresses to enable access to the Kafka cluster based on the configuration. For example, you can create external listeners that use the following connection mechanisms: Node ports loadbalancers Openshift routes Here is an example configuration of a nodeport listener for a Kafka resource: Example listener configuration in the Kafka resource apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka metadata: name: my-cluster spec: kafka: # ... 
listeners: - name: plaintext port: 9092 type: internal tls: false configuration: useServiceDnsDomain: true - name: tls port: 9093 type: internal tls: true authentication: type: tls - name: external port: 9094 type: route tls: true authentication: type: tls authorization: type: simple superUsers: - CN=superuser # ... The listeners property is configured with three listeners: plaintext , tls , and external . The external listener is of type nodeport , and it uses TLS for both encryption and authentication. When you create the Kafka cluster with the Cluster Operator, CA certificates are automatically generated. You add cluster CA to the truststore of your client application to verify the identity of the Kafka brokers. Alternatively, you can configure Streams for Apache Kafka to use your own certificates at the broker or listener level. Using certificates at the listener level might be required when client applications require different security configurations. Using certificates at the listener level also adds an additional layer of control and security. Tip Use configuration provider plugins to load configuration data to producer and consumer clients. The configuration Provider plugin loads configuration data from secrets or ConfigMaps. For example, you can tell the provider to automatically get certificates from Strimzi secrets. For more information, see the Streams for Apache Kafka documentation for running onOpenShift. The Kafka cluster uses simple authorization. The authorization property type is set to simple . A single super user is defined for unconstrained access on all listeners. Streams for Apache Kafka supports the Kafka SimpleAclAuthorizer and custom authorizer plugins. This is just a sample configuration. Some configuration options are specific to the type of listener. If you are using OAuth 2.0 or Open Policy Agent (OPA), you must also configure access to the authorization server or OPA server in a specific listener. You can create listeners based on your specific requirements and environment. For more information on listener configuration, see the GenericKafkaListener schema reference . Note When using a route type listener for client access to a Kafka cluster on OpenShift, the TLS passthrough feature is enabled. An OpenShift route is designed to work with the HTTP protocol, but it can also be used to proxy network traffic for other protocols, including the Kafka protocol used by Apache Kafka. The client establishes a connection to the route, and the route forwards the traffic to the broker running in the OpenShift cluster using the TLS Server Name Indication (SNI) extension to get the target hostname. The SNI extension allows the route to correctly identify the target broker for each connection. Using ACLs to fine-tune access You can use Access Control Lists (ACLs) to fine-tune access to the Kafka cluster. To add Access Control Lists (ACLs), you configure the KafkaUser custom resource. When you create a KafkaUser , Streams for Apache Kafka automatically manages the creation and updates the ACLs. The ACLs apply access rules to client applications. In the following example, the first ACL grants read and describe permissions for a specific topic named my-topic . The resource.patternType is set to literal , which means that the resource name must match exactly. The second ACL grants read permissions for a specific consumer group named my-group . The resource.patternType is set to prefix , which means that the resource name must match the prefix. 
Example ACL configuration in the KafkaUser resource apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaUser metadata: name: my-user labels: strimzi.io/cluster: my-cluster spec: # ... authorization: type: simple acls: - resource: type: topic name: my-topic patternType: literal operations: - Read - Describe - resource: type: group name: my-group patternType: prefix operations: - Read Note If you specify tls-external as an authentication option when configuring the Kafka user, you can use your own client certificates rather than those generated by the User Operator. 5.2. Setting up clients for secure access After you have set up listeners on your Kafka brokers to support secure connections, the step is to configure your client applications to use these listeners to communicate with the Kafka cluster. This involves providing the appropriate security settings for each client to authenticate with the cluster based on the security mechanisms configured on the listener. 5.2.1. Configuring security protocols Configure the security protocol used by your client application to match the protocol configured on a Kafka broker listener. For example, use SSL (Secure Sockets Layer) for TLS authentication or SASL_SSL for SASL (Simple Authentication and Security Layer over SSL) authentication with TLS encryption. Add a truststore and keystore to your client configuration that supports the authentication mechanism required to access the Kafka cluster. Truststore The truststore contains the public certificates of the trusted certificate authority (CA) that are used to verify the authenticity of a Kafka broker. When the client connects to a secure Kafka broker, it might need to verify the identity of the broker. Keystore The keystore contains the client's private key and its public certificate. When the client wants to authenticate itself to the broker, it presents its own certificate. If you are using TLS authentication, your Kafka client configuration requires a truststore and keystore to connect to a Kafka cluster. If you are using SASL SCRAM-SHA-512, authentication is performed through the exchange of username and password credentials, rather than digital certificates, so a keystore is not required. SCRAM-SHA-512 is a more lightweight mechanism, but it is not as secure as using certificate-based authentication. Note If you have your own certificate infrastructure in place and use certificates from a third-party CA, then the client's default truststore will likely already contain the public CA certificates and you do not need to add them to the client's truststore. The client automatically trusts the server's certificate if it is signed by one of the public CA certificates that is already included in the default truststore. You can create a config.properties file to specify the authentication credentials used by the client application. In the following example, the security.protocol is set to SSL to enable TLS authentication and encryption between the client and broker. The ssl.truststore.location and ssl.truststore.password properties specify the location and password of the truststore. The ssl.keystore.location and ssl.keystore.password properties specify the location and password of the keystore. The PKCS #12 (Public-Key Cryptography Standards #12) file format is used. You can also use the base64-encoded PEM (Privacy Enhanced Mail) format. 
Example client configuration properties for TLS authentication bootstrap.servers = my-cluster-kafka-bootstrap:9093 security.protocol = SSL ssl.truststore.location = /path/to/ca.p12 ssl.truststore.password = truststore-password ssl.keystore.location = /path/to/user.p12 ssl.keystore.password = keystore-password client.id = my-client In the following example, the security.protocol is set to SASL_SSL to enable SASL authentication with TLS encryption between the client and broker. If you only need authentication and not encryption, you can use the SASL protocol. The specified SASL mechanism for authentication is SCRAM-SHA-512 . Different authentication mechanisms can be used. sasl.jaas.config properties specify the authentication credentials. Example client configuration properties for SCRAM-SHA-512 authentication bootstrap.servers = my-cluster-kafka-bootstrap:9093 security.protocol = SASL_SSL sasl.mechanism = SCRAM-SHA-512 sasl.jaas.config = org.apache.kafka.common.security.scram.ScramLoginModule required \ username = "user" \ password = "secret"; ssl.truststore.location = path/to/truststore.p12 ssl.truststore.password = truststore_password ssl.truststore.type = PKCS12 client.id = my-client Note For applications that do not support PEM format, you can use a tool like OpenSSL to convert PEM files to PKCS #12 format. 5.2.2. Configuring permitted TLS versions and cipher suites You can incorporate SSL configuration and cipher suites to further secure TLS-based communication between your client application and a Kafka cluster. Specify the supported TLS versions and cipher suites in the configuration for the Kafka broker. You can also add the configuration to your clients if you wish to limit the TLS versions and cipher suites they use. The configuration on the client should only use protocols and cipher suites that are enabled on the brokers. In the following example, SSL is enabled using security.protocol for communication between Kafka brokers and client applications. You specify cipher suites as a comma-separated list. The ssl.cipher.suites property is a comma-separated list of cipher suites that the client is allowed to use. Example SSL configuration properties for Kafka brokers security.protocol: "SSL" ssl.enabled.protocols: "TLSv1.3", "TLSv1.2" ssl.protocol: "TLSv1.3" ssl.cipher.suites: "TLS_AES_256_GCM_SHA384" The ssl.enabled.protocols property specifies the available TLS versions that can be used for secure communication between the cluster and its clients. In this case, both TLSv1.3 and TLSv1.2 are enabled. The ssl.protocol property sets the default TLS version for all connections, and it must be chosen from the enabled protocols. By default, clients communicate using TLSv1.3 . If a client only supports TLSv1.2, it can still connect to the broker and communicate using that supported version. Similarly, if the configuration is on the client and the broker only supports TLSv1.2, the client uses the supported version. The cipher suites supported by Apache Kafka depend on the version of Kafka you are using and the underlying environment. Check for the latest supported cipher suites that provide the highest level of security. 5.2.3. Using Access Control Lists (ACLs) You do not have to configure anything explicitly for ACLS in your client application. The ACLs are enforced on the server side by the Kafka broker. When the client sends a request to the server to produce or consume data, the server checks the ACLs to determine if the client (user) is authorized to perform the requested operation. 
If the client is authorized, the request is processed; otherwise, the request is denied and an error is returned. However, the client must still be authenticated and using the appropriate security protocol to enable a secure connection with the Kafka cluster. If you are using Access Control Lists (ACLs) on your Kafka brokers, make sure that ACLs are properly set up to restrict client access to the topics and operations that you want to control. If you are using Open Policy Agent (OPA) policies to manage access, authorization rules are configured in the policies, so you won't need specify ACLs against the Kafka brokers. OAuth 2.0 gives some flexibility: you can use the OAuth 2.0 provider to manage ACLs; or use OAuth 2.0 and Kafka's simple authorization to manage the ACLs. Note ACLs apply to most types of requests and are not limited to produce and consume operations. For example, ACLS can be applied to read operations like describing topics or write operations like creating new topics. 5.2.4. Using OAuth 2.0 for token-based access Use the OAuth 2.0 open standard for authorization with Streams for Apache Kafka to enforce authorization controls through an OAuth 2.0 provider. OAuth 2.0 provides a secure way for applications to access user data stored in other systems. An authorization server can issue access tokens to client applications that grant access to a Kafka cluster. The following steps describe the general approach to set up and use OAuth 2.0 for token validation: Configure the authorization server with broker and client credentials, such as a client ID and secret. Obtain the OAuth 2.0 credentials from the authorization server. Configure listeners on the Kafka brokers with OAuth 2.0 credentials and to interact with the authorization server. Add the Oauth 2.0 dependency to the client library. Configure your Kafka client with OAuth 2.0 credentials and to interact with the authorization server.. Obtain an access token at runtime, which authenticates the client with the OAuth 2.0 provider. If you have a listener configured for OAuth 2.0 on your Kafka broker, you can set up your client application to use OAuth 2.0. In addition to the standard Kafka client configurations to access the Kafka cluster, you must include specific configurations for OAuth 2.0 authentication. You must also make sure that the authorization server you are using is accessible by the Kafka cluster and client application. Specify a SASL (Simple Authentication and Security Layer) security protocol and mechanism. In a production environment, the following settings are recommended: The SASL_SSL protocol for TLS encrypted connections. The OAUTHBEARER mechanism for credentials exchange using a bearer token A JAAS (Java Authentication and Authorization Service) module implements the SASL mechanism. The configuration for the mechanism depends on the authentication method you are using. For example, using credentials exchange you add an OAuth 2.0 access token endpoint, access token, client ID, and client secret. A client connects to the token endpoint (URL) of the authorization server to check if a token is still valid. You also need a truststore that contains the public key certificate of the authorization server for authenticated access. Example client configuration properties for OAauth 2.0 bootstrap.servers = my-cluster-kafka-bootstrap:9093 security.protocol = SASL_SSL sasl.mechanism = OAUTHBEARER # ... 
sasl.jaas.config = org.apache.kafka.common.security.oauthbearer.OAuthBearerLoginModule required \ oauth.token.endpoint.uri = "https://localhost:9443/oauth2/token" \ oauth.access.token = <access_token> \ oauth.client.id = "<client_id>" \ oauth.client.secret = "<client_secret>" \ oauth.ssl.truststore.location = "/<truststore_location>/oauth-truststore.p12" \ oauth.ssl.truststore.password = "<truststore_password>" \ oauth.ssl.truststore.type = "PKCS12" \ Additional resources For more information on setting up your brokers to use OAuth 2.0, see the following guides: Deploying and Upgrading Streams for Apache Kafka on OpenShift Using Streams for Apache Kafka on RHEL in KRaft mode Using Streams for Apache Kafka on RHEL with ZooKeeper 5.2.5. Using Open Policy Agent (OPA) access policies Use the Open Policy Agent (OPA) policy agent with Streams for Apache Kafka to evaluate requests to connect to your Kafka cluster against access policies. Open Policy Agent (OPA) is a policy engine that manages authorization policies. Policies centralize access control, and can be updated dynamically, without requiring changes to the client application. For example, you can create a policy that allows only certain users (clients) to produce and consume messages to a specific topic. Streams for Apache Kafka uses the Open Policy Agent plugin for Kafka authorization as the authorizer. The following steps describe the general approach to set up and use OPA: Set up an instance of the OPA server. Define policies that provide the authorization rules that govern access to the Kafka cluster. Create configuration for the Kafka brokers to accept OPA authorization and interact with the OPA server. Configure your Kafka client to provide the credentials for authorized access to the Kafka cluster. If you have a listener configured for OPA on your Kafka broker, you can set up your client application to use OPA. In the listener configuration, you specify a URL to connect to the OPA server and authorize your client application. In addition to the standard Kafka client configurations to access the Kafka cluster, you must add the credentials to authenticate with the Kafka broker. The broker checks if the client has the necessary authorization to perform a requested operation, by sending a request to the OPA server to evaluate the authorization policy. You don't need a truststore or keystore to secure communication as the policy engine enforces authorization policies. Example client configuration properties for OPA authorization bootstrap.servers = my-cluster-kafka-bootstrap:9093 security.protocol = SASL_SSL sasl.mechanism = SCRAM-SHA-512 sasl.jaas.config = org.apache.kafka.common.security.scram.ScramLoginModule required \ username = "user" \ password = "secret"; # ... Note Red Hat does not support the OPA server. Additional resources For more information on setting up your brokers to use OPA, see the following guides: Deploying and Upgrading Streams for Apache Kafka on OpenShift Using Streams for Apache Kafka on RHEL in KRaft mode Using Streams for Apache Kafka on RHEL with ZooKeeper 5.2.6. Using transactions when streaming messages By configuring transaction properties in your brokers and producer client application, you can ensure that messages are processed in a single transaction. Transactions add reliability and consistency to the streaming of messages. Transactions are always enabled on brokers. 
You can change the default configuration using the following properties: Example Kafka broker configuration properties for transactions transaction.state.log.replication.factor = 3 transaction.state.log.min.isr = 2 transaction.abort.timed.out.transaction.cleanup.interval.ms = 3600000 This is a typical configuration for a production environment, which creates 3 replicas for the internal __transaction_state topic. The \__transaction_state topic stores information about the transactions in progress. A minimum of 2 in-sync replicas are required for the transaction logs. The cleanup interval is the time between checks for timed-out transactions and a clean up the corresponding transaction logs. To add transaction properties to a client configuration, you set the following properties for producers and consumers. Example producer client configuration properties for transactions transactional.id = unique-transactional-id enable.idempotence = true max.in.flight.requests.per.connection = 5 acks = all retries=2147483647 transaction.timeout.ms = 30000 delivery.timeout = 25000 The transactional ID allows the Kafka broker to keep track of the transactions. It is a unique identifier for the producer and should be used with a specific set of partitions. If you need to perform transactions for multiple sets of partitions, you need to use a different transactional ID for each set. Idempotence is enabled to avoid the producer instance creating duplicate messages. With idempotence, messages are tracked using a producer ID and sequence number. When the broker receives the message, it checks the producer ID and sequence number. If a message with the same producer ID and sequence number has already been received, the broker discards the duplicate message. The maximum number of in-flight requests is set to 5 so that transactions are processed in the order they are sent. A partition can have up to 5 in-flight requests without compromising the ordering of messages. By setting acks to all , the producer waits for acknowledgments from all in-sync replicas of the topic partitions to which it is writing before considering the transaction as complete. This ensures that the messages are durably written (committed) to the Kafka cluster, and that they will not be lost even in the event of a broker failure. The transaction timeout specifies the maximum amount of time the client has to complete a transaction before it times out. The delivery timeout specifies the maximum amount of time the producer waits for a broker acknowledgement of message delivery before it times out. To ensure that messages are delivered within the transaction period, set the delivery timeout to be less than the transaction timeout. Consider network latency and message throughput, and allow for temporary failures, when specifying retries for the number of attempts to resend a failed message request. Example consumer client configuration properties for transactions group.id = my-group-id isolation.level = read_committed enable.auto.commit = false The read_committed isolation level specifies that the consumer only reads messages for a transaction that has completed successfully. The consumer does not process any messages that are part of an ongoing or failed transaction. This ensures that the consumer only reads messages that are part of a fully complete transaction. When using transactions to stream messages, it is important to set enable.auto.commit to false . 
If set to true , the consumer periodically commits offsets without consideration to transactions. This means that the consumer may commit messages before a transaction has fully completed. By setting enable.auto.commit to false , the consumer only reads and commits messages that have been fully written and committed to the topic as part of a transaction.
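The producer and consumer properties above come together in application code as follows. This Java sketch uses the standard Kafka producer API; the bootstrap address, topic name, and transactional ID are placeholder assumptions, and the security settings shown earlier in this chapter would be added to the same Properties object.
import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.KafkaException;
import org.apache.kafka.common.errors.AuthorizationException;
import org.apache.kafka.common.errors.OutOfOrderSequenceException;
import org.apache.kafka.common.errors.ProducerFencedException;
import org.apache.kafka.common.serialization.StringSerializer;

public class TransactionalSend {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "my-cluster-kafka-bootstrap:9093"); // placeholder address
        props.put("transactional.id", "unique-transactional-id");
        props.put("enable.idempotence", "true");
        props.put("acks", "all");
        props.put("key.serializer", StringSerializer.class.getName());
        props.put("value.serializer", StringSerializer.class.getName());

        KafkaProducer<String, String> producer = new KafkaProducer<>(props);
        producer.initTransactions(); // registers the transactional ID with the broker
        try {
            producer.beginTransaction();
            for (int i = 0; i < 10; i++) {
                producer.send(new ProducerRecord<>("my-topic", Integer.toString(i), "value-" + i));
            }
            // All 10 records become visible to read_committed consumers at the same time
            producer.commitTransaction();
        } catch (ProducerFencedException | OutOfOrderSequenceException | AuthorizationException e) {
            // Fatal errors: the producer cannot continue; it is closed in the finally block
        } catch (KafkaException e) {
            // Recoverable error: abort so that none of the records are exposed to read_committed consumers
            producer.abortTransaction();
        } finally {
            producer.close();
        }
    }
}
A consumer configured with isolation.level=read_committed then sees either all ten records or none of them, depending on whether the transaction committed.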
[ "listeners = listener_1://0.0.0.0:9093, listener_2://0.0.0.0:9094 listener.security.protocol.map = listener_1:SSL, listener_2:PLAINTEXT ssl.keystore.type = PKCS12 ssl.keystore.location = /path/to/keystore.p12 ssl.keystore.password = <password> ssl.truststore.type = PKCS12 ssl.truststore.location = /path/to/truststore.p12 ssl.truststore.password = <password> ssl.client.auth = required authorizer.class.name = kafka.security.auth.SimpleAclAuthorizer. super.users = User:superuser", "bin/kafka-acls.sh --authorizer-properties zookeeper.connect=localhost:2181 --add --allow-principal User:my-user --operation Read --operation Describe --topic my-topic --resource-pattern-type literal --allow-principal User:my-user --operation Read --group my-group --resource-pattern-type prefixed", "apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka metadata: name: my-cluster spec: kafka: # listeners: - name: plaintext port: 9092 type: internal tls: false configuration: useServiceDnsDomain: true - name: tls port: 9093 type: internal tls: true authentication: type: tls - name: external port: 9094 type: route tls: true authentication: type: tls authorization: type: simple superUsers: - CN=superuser #", "apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaUser metadata: name: my-user labels: strimzi.io/cluster: my-cluster spec: # authorization: type: simple acls: - resource: type: topic name: my-topic patternType: literal operations: - Read - Describe - resource: type: group name: my-group patternType: prefix operations: - Read", "bootstrap.servers = my-cluster-kafka-bootstrap:9093 security.protocol = SSL ssl.truststore.location = /path/to/ca.p12 ssl.truststore.password = truststore-password ssl.keystore.location = /path/to/user.p12 ssl.keystore.password = keystore-password client.id = my-client", "bootstrap.servers = my-cluster-kafka-bootstrap:9093 security.protocol = SASL_SSL sasl.mechanism = SCRAM-SHA-512 sasl.jaas.config = org.apache.kafka.common.security.scram.ScramLoginModule required username = \"user\" password = \"secret\"; ssl.truststore.location = path/to/truststore.p12 ssl.truststore.password = truststore_password ssl.truststore.type = PKCS12 client.id = my-client", "security.protocol: \"SSL\" ssl.enabled.protocols: \"TLSv1.3\", \"TLSv1.2\" ssl.protocol: \"TLSv1.3\" ssl.cipher.suites: \"TLS_AES_256_GCM_SHA384\"", "bootstrap.servers = my-cluster-kafka-bootstrap:9093 security.protocol = SASL_SSL sasl.mechanism = OAUTHBEARER sasl.jaas.config = org.apache.kafka.common.security.oauthbearer.OAuthBearerLoginModule required oauth.token.endpoint.uri = \"https://localhost:9443/oauth2/token\" oauth.access.token = <access_token> oauth.client.id = \"<client_id>\" oauth.client.secret = \"<client_secret>\" oauth.ssl.truststore.location = \"/<truststore_location>/oauth-truststore.p12\" oauth.ssl.truststore.password = \"<truststore_password>\" oauth.ssl.truststore.type = \"PKCS12\" \\", "bootstrap.servers = my-cluster-kafka-bootstrap:9093 security.protocol = SASL_SSL sasl.mechanism = SCRAM-SHA-512 sasl.jaas.config = org.apache.kafka.common.security.scram.ScramLoginModule required username = \"user\" password = \"secret\";", "transaction.state.log.replication.factor = 3 transaction.state.log.min.isr = 2 transaction.abort.timed.out.transaction.cleanup.interval.ms = 3600000", "transactional.id = unique-transactional-id enable.idempotence = true max.in.flight.requests.per.connection = 5 acks = all retries=2147483647 transaction.timeout.ms = 30000 delivery.timeout = 25000", "group.id = my-group-id isolation.level = read_committed 
enable.auto.commit = false" ]
https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.7/html/developing_kafka_client_applications/assembly-kafka-secure-config-str
39.3. Migrating an LDAP Server to Identity Management
39.3. Migrating an LDAP Server to Identity Management Important This is a general migration procedure, but it may not work in every environment. It is strongly recommended that you set up a test LDAP environment and test the migration process before attempting to migrate the real LDAP environment. To verify that the migration has been completed correctly: Create a test user on IdM using the ipa user-add command and compare the output of migrated users to the test user. Make sure that the migrated users contain the minimal set of attributes and object classes present on the test user. Compare the output of migrated users (seen on IdM) to the source users (seen on the original LDAP server). Make sure that imported attributes are not doubled and do have expected values. Install the IdM server, including any custom LDAP directory schema, on a different machine from the existing LDAP directory. Note Custom user or group schemas have limited support in IdM. They can cause problems during the migration because of incompatible object definitions. Disable the compat plug-in. This step is not necessary if the data provided by the compat tree is required during the migration. Restart the IdM Directory Server instance. Configure the IdM server to allow migration: Run the IdM migration script, ipa migrate-ds . At its most basic, this requires only the LDAP URL of the LDAP directory instance to migrate: Simply passing the LDAP URL migrates all of the directory data using common default settings. The user and group data can be selectively migrated by specifying other options, as covered in Section 39.2, "Examples for Using ipa migrate-ds " . If the compat plug-in was not disabled in the step, pass the --with-compat option to ipa migrate-ds . Once the information is exported, the script adds all required IdM object classes and attributes and converts DNs in attributes to match the IdM directory tree, if the naming context differs. For example: uid= user ,ou=people,dc=ldap,dc=example,dc=com is migrated to uid= user ,ou=people,dc=idm,dc=example,dc=com . Re-enable the compat plug-in, if it was disabled before the migration. Restart the IdM Directory Server instance. Disable the migration mode: Optional. Reconfigure non-SSSD clients to use Kerberos authentication ( pam_krb5 ) instead of LDAP authentication ( pam_ldap ). Use PAM_LDAP modules until all of the users have been migrated; then it is possible to use PAM_KRB5. For further information, see Configuring a Kerberos Client in the System-Level Authentication Guide . There are two ways for users to generate their hashed Kerberos password. Both migrate the users password without additional user interaction, as described in Section 39.1.2, "Planning Password Migration" . Using SSSD: Move clients that have SSSD installed from the LDAP back end to the IdM back end, and enroll them as clients with IdM. This downloads the required keys and certificates. On Red Hat Enterprise Linux clients, this can be done using the ipa-client-install command. For example: Using the IdM migration web page: Instruct users to log into IdM using the migration web page: To monitor the user migration process, query the existing LDAP directory to see which user accounts have a password but do not yet have a Kerberos principal key. Note Include the single quotes around the filter so that it is not interpreted by the shell. When the migration of all clients and users is complete, decommission the LDAP directory.
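As a convenience, the commands from this procedure can be strung together as in the following shell sketch. The LDAP URL is an example, and the --user-container and --group-container options are only needed when the source directory does not use the default containers; check ipa help migrate-ds for the options that apply to your deployment.
# On the IdM server, allow migration and disable the compat plug-in
ipa config-mod --enable-migration=TRUE
ipa-compat-manage disable
systemctl restart dirsrv.target

# Pull users and groups from the source LDAP server (example URL and containers)
ipa migrate-ds --user-container='ou=people' --group-container='ou=groups' ldap://ldap.example.com:389

# After the clients and users have migrated, restore the normal configuration
ipa-compat-manage enable
systemctl restart dirsrv.target
ipa config-mod --enable-migration=FALSE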
[ "ipa user-add TEST_USER", "ipa user-show --all TEST_USER", "ipa-compat-manage disable", "systemctl restart dirsrv.target", "ipa config-mod --enable-migration=TRUE", "ipa migrate-ds ldap://ldap.example.com:389", "ipa-compat-manage enable", "systemctl restart dirsrv.target", "ipa config-mod --enable-migration=FALSE", "ipa-client-install --enable-dns-update", "https:// ipaserver.example.com /ipa/migration", "[user@server ~]USD ldapsearch -LL -x -D 'cn=Directory Manager' -w secret -b 'cn=users,cn=accounts,dc=example,dc=com' '(&(!(krbprincipalkey=*))(userpassword=*))' uid" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/linux_domain_identity_authentication_and_policy_guide/mig-ldap-to-idm
20.17. Sound Devices
20.17. Sound Devices A virtual sound card can be attached to the host physical machine by using the sound element. ... <devices> <sound model='es1370'/> </devices> ... Figure 20.64. Virtual sound card The sound element has one mandatory attribute, model , which specifies what real sound device is emulated. Valid values are specific to the underlying hypervisor, though typical choices are 'es1370' , 'sb16' , 'ac97' , and 'ich6' . In addition, a sound element with the ich6 model can have an optional codec sub-element to attach various audio codecs to the audio device. If not specified, a default codec is attached to allow playback and recording. Valid values are 'duplex' (advertises a line-in and a line-out) and 'micro' (advertises a speaker and a microphone). ... <devices> <sound model='ich6'> <codec type='micro'/> </sound> </devices> ... Figure 20.65. Sound devices Each sound element has an optional sub-element <address> which can tie the device to a particular PCI slot, documented above.
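The following fragment is a sketch of the optional <address> sub-element mentioned above, pinning the emulated ich6 device to a specific PCI slot. The bus and slot numbers are arbitrary example values and must not collide with addresses already assigned to other devices in the guest.
...
<devices>
  <sound model='ich6'>
    <codec type='duplex'/>
    <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
  </sound>
</devices>
...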
[ "<devices> <sound model='es1370'/> </devices>", "<devices> <sound model='ich6'> <codec type='micro'/> <sound/> </devices>" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/virtualization_administration_guide/section-libvirt-dom-xml-sound-devices
Appendix A. Revision History
Appendix A. Revision History Revision History Revision 1.1.0- Tue Aug 06 2019 Eliane Pereira Revision 1.0.0- Thu Jul 11 2019 Eliane Pereira Composer has been split into its own guide and is now available. Revision 0.0-0 Sun Jun 2 2019 Eliane Pereira Preparing document for 7.7 GA publication.
null
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/image_builder_guide/appe-publican-image_builder-revision_history
5.3. Formatting and Mounting Bricks
5.3. Formatting and Mounting Bricks To create a Red Hat Gluster Storage volume, specify the bricks that comprise the volume. After creating the volume, the volume must be started before it can be mounted. 5.3.1. Creating Bricks Manually Important Red Hat supports formatting a Logical Volume using the XFS file system on the bricks. Red Hat supports heterogeneous subvolume sizes for distributed volumes (either pure distributed, distributed-replicated or distributed-dispersed). Red Hat does not support heterogeneous brick sizes for bricks of the same subvolume. For example, you can have a distributed-replicated 3x3 volume with 3 bricks of 10GiB, 3 bricks of 50GiB and 3 bricks of 100GiB as long as the 3 10GiB bricks belong to the same replicate and similarly the 3 50GiB and 100GiB bricks belong to the same replicate set. In this way you will have 1 subvolume of 10GiB, another of 50GiB and 100GiB. The distributed hash table balances the number of assigned files to each subvolume so that the subvolumes get filled proportionally to their size. 5.3.1.1. Creating a Thinly Provisioned Logical Volume Create a physical volume(PV) by using the pvcreate command. For example: Here, /dev/sdb is a storage device. Use the correct dataalignment option based on your device. For more information, see Section 19.2, "Brick Configuration" Note The device name and the alignment value will vary based on the device you are using. Create a Volume Group (VG) from the PV using the vgcreate command: For example: Create a thin-pool using the following commands: For example: Ensure you read Chapter 19, Tuning for Performance to select appropriate values for chunksize and poolmetadatasize . Create a thinly provisioned volume that uses the previously created pool by running the lvcreate command with the --virtualsize and --thin options: For example: It is recommended that only one LV should be created in a thin pool. Format bricks using the supported XFS configuration, mount the bricks, and verify the bricks are mounted correctly. To enhance the performance of Red Hat Gluster Storage, ensure you read Chapter 19, Tuning for Performance before formatting the bricks. Important Snapshots are not supported on bricks formatted with external log devices. Do not use -l logdev=device option with mkfs.xfs command for formatting the Red Hat Gluster Storage bricks. DEVICE is the created thin LV. The inode size is set to 512 bytes to accommodate for the extended attributes used by Red Hat Gluster Storage. Run # mkdir / mountpoint to create a directory to link the brick to. Add an entry in /etc/fstab : For example: Run mount / mountpoint to mount the brick. Run the df -h command to verify the brick is successfully mounted: If SElinux is enabled, then the SELinux labels that has to be set manually for the bricks created using the following commands: 5.3.2. Using Subdirectory as the Brick for Volume You can create an XFS file system, mount them and point them as bricks while creating a Red Hat Gluster Storage volume. If the mount point is unavailable, the data is directly written to the root file system in the unmounted directory. For example, the /rhgs directory is the mounted file system and is used as the brick for volume creation. However, for some reason, if the mount point is unavailable, any write continues to happen in the /rhgs directory, but now this is under root file system. To overcome this issue, you can perform the below procedure. During Red Hat Gluster Storage setup, create an XFS file system and mount it. 
After mounting, create a subdirectory and use this subdirectory as the brick for volume creation. Here, the XFS file system is mounted as /bricks . After the file system is available, create a directory called /rhgs/brick1 and use it for volume creation. Ensure that no more than one brick is created from a single mount. This approach has the following advantages: When the /rhgs file system is unavailable, there is no longer /rhgs/brick1 directory available in the system. Hence, there will be no data loss by writing to a different location. This does not require any additional file system for nesting. Perform the following to use subdirectories as bricks for creating a volume: Create the brick1 subdirectory in the mounted file system. Repeat the above steps on all nodes. Create the Red Hat Gluster Storage volume using the subdirectories as bricks. Start the Red Hat Gluster Storage volume. Verify the status of the volume. Note If multiple bricks are used from the same server, then ensure the bricks are mounted in the following format. For example: Create a distribute volume with 2 bricks from each server. For example: 5.3.3. Reusing a Brick from a Deleted Volume Bricks can be reused from deleted volumes, however some steps are required to make the brick reusable. Brick with a File System Suitable for Reformatting (Optimal Method) Run # mkfs.xfs -f -i size=512 device to reformat the brick to supported requirements, and make it available for immediate reuse in a new volume. Note All data will be erased when the brick is reformatted. File System on a Parent of a Brick Directory If the file system cannot be reformatted, remove the whole brick directory and create it again. 5.3.4. Cleaning An Unusable Brick If the file system associated with the brick cannot be reformatted, and the brick directory cannot be removed, perform the following steps: Delete all previously existing data in the brick, including the .glusterfs subdirectory. Run # setfattr -x trusted.glusterfs.volume-id brick and # setfattr -x trusted.gfid brick to remove the attributes from the root of the brick. Run # getfattr -d -m . brick to examine the attributes set on the volume. Take note of the attributes. Run # setfattr -x attribute brick to remove the attributes relating to the glusterFS file system. The trusted.glusterfs.dht attribute for a distributed volume is one such example of attributes that need to be removed.
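The cleanup steps for an unusable brick can be run as follows. This is a sketch only: /rhgs/brick1 is an example brick path, and the exact extended attributes to remove (for example trusted.glusterfs.dht ) depend on the volume type, so inspect the getfattr output before deleting anything.
# Remove old data, including the hidden .glusterfs directory (example brick path)
rm -rf /rhgs/brick1/.glusterfs /rhgs/brick1/*

# Strip the volume identity attributes from the brick root
setfattr -x trusted.glusterfs.volume-id /rhgs/brick1
setfattr -x trusted.gfid /rhgs/brick1

# List any remaining glusterfs attributes and remove each one
getfattr -d -m . /rhgs/brick1
setfattr -x trusted.glusterfs.dht /rhgs/brick1   # only if present in the getfattr output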
[ "pvcreate --dataalignment alignment_value device", "pvcreate --dataalignment 1280K /dev/sdb", "vgcreate --physicalextentsize alignment_value volgroup device", "vgcreate --physicalextentsize 1280K rhs_vg /dev/sdb", "lvcreate --thin volgroup / poolname --size pool_sz --chunksize chunk_sz --poolmetadatasize metadev_sz --zero n", "lvcreate --thin rhs_vg/rhs_pool --size 2T --chunksize 1280K --poolmetadatasize 16G --zero n", "lvcreate --virtualsize size --thin volgroup / poolname --name volname", "lvcreate --virtualsize 1G --thin rhs_vg/rhs_pool --name rhs_lv", "mkfs.xfs -f -i size=512 -n size=8192 -d su=128k,sw=10 device", "mkdir /rhgs", "/dev/volgroup/volname / mountpoint xfs rw,inode64,noatime,nouuid,x-systemd.device-timeout=10min 1 2", "/dev/rhs_vg/rhs_lv /rhgs xfs rw,inode64,noatime,nouuid,x-systemd.device-timeout=10min 1 2", "df -h /dev/rhs_vg/rhs_lv 16G 1.2G 15G 7% /rhgs", "semanage fcontext -a -t glusterd_brick_t /rhgs/brick1 restorecon -Rv /rhgs/brick1", "mkdir /rhgs/brick1", "gluster volume create distdata01 ad-rhs-srv1:/rhgs/brick1 ad-rhs-srv2:/rhgs/brick2", "gluster volume start distdata01", "gluster volume status distdata01", "df -h /dev/rhs_vg/rhs_lv1 16G 1.2G 15G 7% /rhgs1 /dev/rhs_vg/rhs_lv2 16G 1.2G 15G 7% /rhgs2", "gluster volume create test-volume server1:/rhgs1/brick1 server2:/rhgs1/brick1 server1:/rhgs2/brick2 server2:/rhgs2/brick2" ]
https://docs.redhat.com/en/documentation/red_hat_gluster_storage/3.5/html/administration_guide/Formatting_and_Mounting_Bricks
Chapter 4. Brokers
Chapter 4. Brokers 4.1. Brokers Brokers can be used in combination with triggers to deliver events from an event source to an event sink. Events are sent from an event source to a broker as an HTTP POST request. After events have entered the broker, they can be filtered by CloudEvent attributes using triggers, and sent as an HTTP POST request to an event sink. 4.2. Broker types Cluster administrators can set the default broker implementation for a cluster. When you create a broker, the default broker implementation is used, unless you provide set configurations in the Broker object. 4.2.1. Default broker implementation for development purposes Knative provides a default, channel-based broker implementation. This channel-based broker can be used for development and testing purposes, but does not provide adequate event delivery guarantees for production environments. The default broker is backed by the InMemoryChannel channel implementation by default. If you want to use Apache Kafka to reduce network hops, use the Knative broker implementation for Apache Kafka. Do not configure the channel-based broker to be backed by the KafkaChannel channel implementation. 4.2.2. Production-ready Knative broker implementation for Apache Kafka For production-ready Knative Eventing deployments, Red Hat recommends using the Knative broker implementation for Apache Kafka. The broker is an Apache Kafka native implementation of the Knative broker, which sends CloudEvents directly to the Kafka instance. The Knative broker has a native integration with Kafka for storing and routing events. This allows better integration with Kafka for the broker and trigger model over other broker types, and reduces network hops. Other benefits of the Knative broker implementation include: At-least-once delivery guarantees Ordered delivery of events, based on the CloudEvents partitioning extension Control plane high availability A horizontally scalable data plane The Knative broker implementation for Apache Kafka stores incoming CloudEvents as Kafka records, using the binary content mode. This means that all CloudEvent attributes and extensions are mapped as headers on the Kafka record, while the data spec of the CloudEvent corresponds to the value of the Kafka record. 4.3. Creating brokers Knative provides a default, channel-based broker implementation. This channel-based broker can be used for development and testing purposes, but does not provide adequate event delivery guarantees for production environments. If a cluster administrator has configured your OpenShift Serverless deployment to use Apache Kafka as the default broker type, creating a broker by using the default settings creates a Knative broker for Apache Kafka. If your OpenShift Serverless deployment is not configured to use the Knative broker for Apache Kafka as the default broker type, the channel-based broker is created when you use the default settings in the following procedures. 4.3.1. Creating a broker by using the Knative CLI Brokers can be used in combination with triggers to deliver events from an event source to an event sink. Using the Knative ( kn ) CLI to create brokers provides a more streamlined and intuitive user interface over modifying YAML files directly. You can use the kn broker create command to create a broker. Prerequisites The OpenShift Serverless Operator and Knative Eventing are installed on your OpenShift Container Platform cluster. You have installed the Knative ( kn ) CLI. 
You have created a project or have access to a project with the appropriate roles and permissions to create applications and other workloads in OpenShift Container Platform. Procedure Create a broker: USD kn broker create <broker_name> Verification Use the kn command to list all existing brokers: USD kn broker list Example output NAME URL AGE CONDITIONS READY REASON default http://broker-ingress.knative-eventing.svc.cluster.local/test/default 45s 5 OK / 5 True Optional: If you are using the OpenShift Container Platform web console, you can navigate to the Topology view in the Developer perspective, and observe that the broker exists: 4.3.2. Creating a broker by annotating a trigger Brokers can be used in combination with triggers to deliver events from an event source to an event sink. You can create a broker by adding the eventing.knative.dev/injection: enabled annotation to a Trigger object. Important If you create a broker by using the eventing.knative.dev/injection: enabled annotation, you cannot delete this broker without cluster administrator permissions. If you delete the broker without having a cluster administrator remove this annotation first, the broker is created again after deletion. Prerequisites The OpenShift Serverless Operator and Knative Eventing are installed on your OpenShift Container Platform cluster. Install the OpenShift CLI ( oc ). You have created a project or have access to a project with the appropriate roles and permissions to create applications and other workloads in OpenShift Container Platform. Procedure Create a Trigger object as a YAML file that has the eventing.knative.dev/injection: enabled annotation: apiVersion: eventing.knative.dev/v1 kind: Trigger metadata: annotations: eventing.knative.dev/injection: enabled name: <trigger_name> spec: broker: default subscriber: 1 ref: apiVersion: serving.knative.dev/v1 kind: Service name: <service_name> 1 Specify details about the event sink, or subscriber , that the trigger sends events to. Apply the Trigger YAML file: USD oc apply -f <filename> Verification You can verify that the broker has been created successfully by using the oc CLI, or by observing it in the Topology view in the web console. Enter the following oc command to get the broker: USD oc -n <namespace> get broker default Example output NAME READY REASON URL AGE default True http://broker-ingress.knative-eventing.svc.cluster.local/test/default 3m56s Optional: If you are using the OpenShift Container Platform web console, you can navigate to the Topology view in the Developer perspective, and observe that the broker exists: 4.3.3. Creating a broker by labeling a namespace Brokers can be used in combination with triggers to deliver events from an event source to an event sink. You can create the default broker automatically by labelling a namespace that you own or have write permissions for. Note Brokers created using this method are not removed if you remove the label. You must manually delete them. Prerequisites The OpenShift Serverless Operator and Knative Eventing are installed on your OpenShift Container Platform cluster. Install the OpenShift CLI ( oc ). You have created a project or have access to a project with the appropriate roles and permissions to create applications and other workloads in OpenShift Container Platform. You have cluster or dedicated administrator permissions if you are using Red Hat OpenShift Service on AWS or OpenShift Dedicated. 
Procedure Label a namespace with eventing.knative.dev/injection=enabled : USD oc label namespace <namespace> eventing.knative.dev/injection=enabled Verification You can verify that the broker has been created successfully by using the oc CLI, or by observing it in the Topology view in the web console. Use the oc command to get the broker: USD oc -n <namespace> get broker <broker_name> Example command USD oc -n default get broker default Example output NAME READY REASON URL AGE default True http://broker-ingress.knative-eventing.svc.cluster.local/test/default 3m56s Optional: If you are using the OpenShift Container Platform web console, you can navigate to the Topology view in the Developer perspective, and observe that the broker exists: 4.3.4. Deleting a broker that was created by injection If you create a broker by injection and later want to delete it, you must delete it manually. Brokers created by using a namespace label or trigger annotation are not deleted permanently if you remove the label or annotation. Prerequisites Install the OpenShift CLI ( oc ). Procedure Remove the eventing.knative.dev/injection=enabled label from the namespace: USD oc label namespace <namespace> eventing.knative.dev/injection- Removing the annotation prevents Knative from recreating the broker after you delete it. Delete the broker from the selected namespace: USD oc -n <namespace> delete broker <broker_name> Verification Use the oc command to get the broker: USD oc -n <namespace> get broker <broker_name> Example command USD oc -n default get broker default Example output No resources found. Error from server (NotFound): brokers.eventing.knative.dev "default" not found 4.3.5. Creating a broker by using the web console After Knative Eventing is installed on your cluster, you can create a broker by using the web console. Using the OpenShift Container Platform web console provides a streamlined and intuitive user interface to create a broker. Prerequisites You have logged in to the OpenShift Container Platform web console. The OpenShift Serverless Operator, Knative Serving and Knative Eventing are installed on the cluster. You have created a project or have access to a project with the appropriate roles and permissions to create applications and other workloads in OpenShift Container Platform. Procedure In the Developer perspective, navigate to +Add Broker . The Broker page is displayed. Optional. Update the Name of the broker. If you do not update the name, the generated broker is named default . Click Create . Verification You can verify that the broker was created by viewing broker components in the Topology page. In the Developer perspective, navigate to Topology . View the mt-broker-ingress , mt-broker-filter , and mt-broker-controller components. 4.3.6. Creating a broker by using the Administrator perspective Brokers can be used in combination with triggers to deliver events from an event source to an event sink. Events are sent from an event source to a broker as an HTTP POST request. After events have entered the broker, they can be filtered by CloudEvent attributes using triggers, and sent as an HTTP POST request to an event sink. Prerequisites The OpenShift Serverless Operator and Knative Eventing are installed on your OpenShift Container Platform cluster. You have logged in to the web console and are in the Administrator perspective. 
You have cluster administrator permissions on OpenShift Container Platform, or you have cluster or dedicated administrator permissions on Red Hat OpenShift Service on AWS or OpenShift Dedicated. Procedure In the Administrator perspective of the OpenShift Container Platform web console, navigate to Serverless Eventing . In the Create list, select Broker . You will be directed to the Create Broker page. Optional: Modify the YAML configuration for the broker. Click Create . 4.3.7. Next steps Configure event delivery parameters that are applied in cases where an event fails to be delivered to an event sink. 4.3.8. Additional resources Configuring the default broker class Triggers Connect a broker to a sink using the Developer perspective 4.4. Configuring the default broker backing channel If you are using a channel-based broker, you can set the default backing channel type for the broker to either InMemoryChannel or KafkaChannel . Prerequisites You have administrator permissions on OpenShift Container Platform. You have installed the OpenShift Serverless Operator and Knative Eventing on your cluster. You have installed the OpenShift CLI ( oc ). If you want to use Apache Kafka channels as the default backing channel type, you must also install the KnativeKafka CR on your cluster. Procedure Modify the KnativeEventing custom resource (CR) to add configuration details for the config-br-default-channel config map: apiVersion: operator.knative.dev/v1beta1 kind: KnativeEventing metadata: name: knative-eventing namespace: knative-eventing spec: config: 1 config-br-default-channel: channel-template-spec: | apiVersion: messaging.knative.dev/v1beta1 kind: KafkaChannel 2 spec: numPartitions: 6 3 replicationFactor: 3 4 1 In spec.config , you can specify the config maps that you want to add modified configurations for. 2 The default backing channel type configuration. In this example, the default channel implementation for the cluster is KafkaChannel . 3 The number of partitions for the Kafka channel that backs the broker. 4 The replication factor for the Kafka channel that backs the broker. Apply the updated KnativeEventing CR: USD oc apply -f <filename> 4.5. Configuring the default broker class You can use the config-br-defaults config map to specify default broker class settings for Knative Eventing. You can specify the default broker class for the entire cluster or for one or more namespaces. Currently the MTChannelBasedBroker and Kafka broker types are supported. Prerequisites You have administrator permissions on OpenShift Container Platform. You have installed the OpenShift Serverless Operator and Knative Eventing on your cluster. If you want to use the Knative broker for Apache Kafka as the default broker implementation, you must also install the KnativeKafka CR on your cluster. Procedure Modify the KnativeEventing custom resource to add configuration details for the config-br-defaults config map: apiVersion: operator.knative.dev/v1beta1 kind: KnativeEventing metadata: name: knative-eventing namespace: knative-eventing spec: defaultBrokerClass: Kafka 1 config: 2 config-br-defaults: 3 default-br-config: | clusterDefault: 4 brokerClass: Kafka apiVersion: v1 kind: ConfigMap name: kafka-broker-config 5 namespace: knative-eventing 6 namespaceDefaults: 7 my-namespace: brokerClass: MTChannelBasedBroker apiVersion: v1 kind: ConfigMap name: config-br-default-channel 8 namespace: knative-eventing 9 ... 1 The default broker class for Knative Eventing. 
2 In spec.config , you can specify the config maps that you want to add modified configurations for. 3 The config-br-defaults config map specifies the default settings for any broker that does not specify spec.config settings or a broker class. 4 The cluster-wide default broker class configuration. In this example, the default broker class implementation for the cluster is Kafka . 5 The kafka-broker-config config map specifies default settings for the Kafka broker. See "Configuring Knative broker for Apache Kafka settings" in the "Additional resources" section. 6 The namespace where the kafka-broker-config config map exists. 7 The namespace-scoped default broker class configuration. In this example, the default broker class implementation for the my-namespace namespace is MTChannelBasedBroker . You can specify default broker class implementations for multiple namespaces. 8 The config-br-default-channel config map specifies the default backing channel for the broker. See "Configuring the default broker backing channel" in the "Additional resources" section. 9 The namespace where the config-br-default-channel config map exists. Important Configuring a namespace-specific default overrides any cluster-wide settings. 4.6. Knative broker implementation for Apache Kafka For production-ready Knative Eventing deployments, Red Hat recommends using the Knative broker implementation for Apache Kafka. The broker is an Apache Kafka native implementation of the Knative broker, which sends CloudEvents directly to the Kafka instance. The Knative broker has a native integration with Kafka for storing and routing events. This allows better integration with Kafka for the broker and trigger model over other broker types, and reduces network hops. Other benefits of the Knative broker implementation include: At-least-once delivery guarantees Ordered delivery of events, based on the CloudEvents partitioning extension Control plane high availability A horizontally scalable data plane The Knative broker implementation for Apache Kafka stores incoming CloudEvents as Kafka records, using the binary content mode. This means that all CloudEvent attributes and extensions are mapped as headers on the Kafka record, while the data spec of the CloudEvent corresponds to the value of the Kafka record. 4.6.1. Creating an Apache Kafka broker when it is not configured as the default broker type If your OpenShift Serverless deployment is not configured to use Kafka broker as the default broker type, you can use one of the following procedures to create a Kafka-based broker. 4.6.1.1. Creating an Apache Kafka broker by using YAML Creating Knative resources by using YAML files uses a declarative API, which enables you to describe applications declaratively and in a reproducible manner. To create a Kafka broker by using YAML, you must create a YAML file that defines a Broker object, then apply it by using the oc apply command. Prerequisites The OpenShift Serverless Operator, Knative Eventing, and the KnativeKafka custom resource are installed on your OpenShift Container Platform cluster. You have created a project or have access to a project with the appropriate roles and permissions to create applications and other workloads in OpenShift Container Platform. You have installed the OpenShift CLI ( oc ). 
Procedure Create a Kafka-based broker as a YAML file: apiVersion: eventing.knative.dev/v1 kind: Broker metadata: annotations: eventing.knative.dev/broker.class: Kafka 1 name: example-kafka-broker spec: config: apiVersion: v1 kind: ConfigMap name: kafka-broker-config 2 namespace: knative-eventing 1 The broker class. If not specified, brokers use the default class as configured by cluster administrators. To use the Kafka broker, this value must be Kafka . 2 The default config map for Knative brokers for Apache Kafka. This config map is created when the Kafka broker functionality is enabled on the cluster by a cluster administrator. Apply the Kafka-based broker YAML file: USD oc apply -f <filename> 4.6.1.2. Creating an Apache Kafka broker that uses an externally managed Kafka topic If you want to use a Kafka broker without allowing it to create its own internal topic, you can use an externally managed Kafka topic instead. To do this, you must create a Kafka Broker object that uses the kafka.eventing.knative.dev/external.topic annotation. Prerequisites The OpenShift Serverless Operator, Knative Eventing, and the KnativeKafka custom resource are installed on your OpenShift Container Platform cluster. You have access to a Kafka instance such as Red Hat AMQ Streams , and have created a Kafka topic. You have created a project or have access to a project with the appropriate roles and permissions to create applications and other workloads in OpenShift Container Platform. You have installed the OpenShift CLI ( oc ). Procedure Create a Kafka-based broker as a YAML file: apiVersion: eventing.knative.dev/v1 kind: Broker metadata: annotations: eventing.knative.dev/broker.class: Kafka 1 kafka.eventing.knative.dev/external.topic: <topic_name> 2 ... 1 The broker class. If not specified, brokers use the default class as configured by cluster administrators. To use the Kafka broker, this value must be Kafka . 2 The name of the Kafka topic that you want to use. Apply the Kafka-based broker YAML file: USD oc apply -f <filename> 4.6.1.3. Knative Broker implementation for Apache Kafka with isolated data plane Important The Knative Broker implementation for Apache Kafka with isolated data plane is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . The Knative Broker implementation for Apache Kafka has 2 planes: Control plane Consists of controllers that talk to the Kubernetes API, watch for custom objects, and manage the data plane. Data plane The collection of components that listen for incoming events, talk to Apache Kafka, and send events to the event sinks. The Knative Broker implementation for Apache Kafka data plane is where events flow. The implementation consists of kafka-broker-receiver and kafka-broker-dispatcher deployments. When you configure a Broker class of Kafka , the Knative Broker implementation for Apache Kafka uses a shared data plane. This means that the kafka-broker-receiver and kafka-broker-dispatcher deployments in the knative-eventing namespace are used for all Apache Kafka Brokers in the cluster. 
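As an optional check that the shared data plane is in use, you can list these deployments; the following command is an illustrative sketch that assumes the deployments keep the default names described above: USD oc get deployments kafka-broker-receiver kafka-broker-dispatcher -n knative-eventing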
However, when you configure a Broker class of KafkaNamespaced , the Apache Kafka broker controller creates a new data plane for each namespace where a broker exists. This data plane is used by all KafkaNamespaced brokers in that namespace. This provides isolation between the data planes, so that the kafka-broker-receiver and kafka-broker-dispatcher deployments in the user namespace are only used for the broker in that namespace. Important As a consequence of having separate data planes, this security feature creates more deployments and uses more resources. Unless you have such isolation requirements, use a regular Broker with a class of Kafka . 4.6.1.4. Creating a Knative broker for Apache Kafka that uses an isolated data plane Important The Knative Broker implementation for Apache Kafka with isolated data plane is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . To create a KafkaNamespaced broker, you must set the eventing.knative.dev/broker.class annotation to KafkaNamespaced . Prerequisites The OpenShift Serverless Operator, Knative Eventing, and the KnativeKafka custom resource are installed on your OpenShift Container Platform cluster. You have access to an Apache Kafka instance, such as Red Hat AMQ Streams , and have created a Kafka topic. You have created a project, or have access to a project, with the appropriate roles and permissions to create applications and other workloads in OpenShift Container Platform. You have installed the OpenShift CLI ( oc ). Procedure Create an Apache Kafka-based broker by using a YAML file: apiVersion: eventing.knative.dev/v1 kind: Broker metadata: annotations: eventing.knative.dev/broker.class: KafkaNamespaced 1 name: default namespace: my-namespace 2 spec: config: apiVersion: v1 kind: ConfigMap name: my-config 3 ... 1 To use the Apache Kafka broker with isolated data planes, the broker class value must be KafkaNamespaced . 2 3 The referenced ConfigMap object my-config must be in the same namespace as the Broker object, in this case my-namespace . Apply the Apache Kafka-based broker YAML file: USD oc apply -f <filename> Important The ConfigMap object in spec.config must be in the same namespace as the Broker object: apiVersion: v1 kind: ConfigMap metadata: name: my-config namespace: my-namespace data: ... After the creation of the first Broker object with the KafkaNamespaced class, the kafka-broker-receiver and kafka-broker-dispatcher deployments are created in the namespace. Subsequently, all brokers with the KafkaNamespaced class in the same namespace will use the same data plane. If no brokers with the KafkaNamespaced class exist in the namespace, the data plane in the namespace is deleted. 4.6.2. Configuring Apache Kafka broker settings You can configure the replication factor, bootstrap servers, and the number of topic partitions for a Kafka broker, by creating a config map and referencing this config map in the Kafka Broker object. Knative Eventing supports the full set of topic config options that Kafka supports. 
To set these options, you must add a key to the ConfigMap with the default.topic.config. prefix. Prerequisites You have cluster or dedicated administrator permissions on OpenShift Container Platform. The OpenShift Serverless Operator, Knative Eventing, and the KnativeKafka custom resource (CR) are installed on your OpenShift Container Platform cluster. You have created a project or have access to a project that has the appropriate roles and permissions to create applications and other workloads in OpenShift Container Platform. You have installed the OpenShift CLI ( oc ). Procedure Modify the kafka-broker-config config map, or create your own config map that contains the following configuration: apiVersion: v1 kind: ConfigMap metadata: name: <config_map_name> 1 namespace: <namespace> 2 data: default.topic.partitions: <integer> 3 default.topic.replication.factor: <integer> 4 bootstrap.servers: <list_of_servers> 5 default.topic.config.<config_option>: <value> 6 1 The config map name. 2 The namespace where the config map exists. 3 The number of topic partitions for the Kafka broker. This controls how quickly events can be sent to the broker. A higher number of partitions requires greater compute resources. 4 The replication factor of topic messages. This protects against data loss. A higher replication factor requires greater compute resources and more storage. 5 A comma-separated list of bootstrap servers. This can be inside or outside of the OpenShift Container Platform cluster, and is a list of Kafka clusters that the broker receives events from and sends events to. 6 A topic config option. For more information, see the full set of possible options and values . Important The default.topic.replication.factor value must be less than or equal to the number of Kafka broker instances in your cluster. For example, if you only have one Kafka broker, the default.topic.replication.factor value should not be more than "1" . Example Kafka broker config map apiVersion: v1 kind: ConfigMap metadata: name: kafka-broker-config namespace: knative-eventing data: default.topic.partitions: "10" default.topic.replication.factor: "3" bootstrap.servers: "my-cluster-kafka-bootstrap.kafka:9092" default.topic.config.retention.ms: "3600" Apply the config map: USD oc apply -f <config_map_filename> Specify the config map for the Kafka Broker object: Example Broker object apiVersion: eventing.knative.dev/v1 kind: Broker metadata: name: <broker_name> 1 namespace: <namespace> 2 annotations: eventing.knative.dev/broker.class: Kafka 3 spec: config: apiVersion: v1 kind: ConfigMap name: <config_map_name> 4 namespace: <namespace> 5 ... 1 The broker name. 2 The namespace where the broker exists. 3 The broker class annotation. In this example, the broker is a Kafka broker that uses the class value Kafka . 4 The config map name. 5 The namespace where the config map exists. Apply the broker: USD oc apply -f <broker_filename> 4.6.3. Security configuration for the Knative broker implementation for Apache Kafka Kafka clusters are generally secured by using the TLS or SASL authentication methods. You can configure a Kafka broker or channel to work against a protected Red Hat AMQ Streams cluster by using TLS or SASL. Note Red Hat recommends that you enable both SASL and TLS together. 4.6.3.1. Configuring TLS authentication for Apache Kafka brokers Transport Layer Security (TLS) is used by Apache Kafka clients and servers to encrypt traffic between Knative and Kafka, as well as for authentication. 
TLS is the only supported method of traffic encryption for the Knative broker implementation for Apache Kafka. Prerequisites You have cluster or dedicated administrator permissions on OpenShift Container Platform. The OpenShift Serverless Operator, Knative Eventing, and the KnativeKafka CR are installed on your OpenShift Container Platform cluster. You have created a project or have access to a project with the appropriate roles and permissions to create applications and other workloads in OpenShift Container Platform. You have a Kafka cluster CA certificate stored as a .pem file. You have a Kafka cluster client certificate and a key stored as .pem files. Install the OpenShift CLI ( oc ). Procedure Create the certificate files as a secret in the knative-eventing namespace: USD oc create secret -n knative-eventing generic <secret_name> \ --from-literal=protocol=SSL \ --from-file=ca.crt=caroot.pem \ --from-file=user.crt=certificate.pem \ --from-file=user.key=key.pem Important Use the key names ca.crt , user.crt , and user.key . Do not change them. Edit the KnativeKafka CR and add a reference to your secret in the broker spec: apiVersion: operator.serverless.openshift.io/v1alpha1 kind: KnativeKafka metadata: namespace: knative-eventing name: knative-kafka spec: broker: enabled: true defaultConfig: authSecretName: <secret_name> ... 4.6.3.2. Configuring SASL authentication for Apache Kafka brokers Simple Authentication and Security Layer (SASL) is used by Apache Kafka for authentication. If you use SASL authentication on your cluster, users must provide credentials to Knative for communicating with the Kafka cluster; otherwise events cannot be produced or consumed. Prerequisites You have cluster or dedicated administrator permissions on OpenShift Container Platform. The OpenShift Serverless Operator, Knative Eventing, and the KnativeKafka CR are installed on your OpenShift Container Platform cluster. You have created a project or have access to a project with the appropriate roles and permissions to create applications and other workloads in OpenShift Container Platform. You have a username and password for a Kafka cluster. You have chosen the SASL mechanism to use, for example, PLAIN , SCRAM-SHA-256 , or SCRAM-SHA-512 . If TLS is enabled, you also need the ca.crt certificate file for the Kafka cluster. Install the OpenShift CLI ( oc ). Procedure Create the certificate files as a secret in the knative-eventing namespace: USD oc create secret -n knative-eventing generic <secret_name> \ --from-literal=protocol=SASL_SSL \ --from-literal=sasl.mechanism=<sasl_mechanism> \ --from-file=ca.crt=caroot.pem \ --from-literal=password="SecretPassword" \ --from-literal=user="my-sasl-user" Use the key names ca.crt , password , and sasl.mechanism . Do not change them. If you want to use SASL with public CA certificates, you must use the tls.enabled=true flag, rather than the ca.crt argument, when creating the secret. For example: USD oc create secret -n <namespace> generic <kafka_auth_secret> \ --from-literal=tls.enabled=true \ --from-literal=password="SecretPassword" \ --from-literal=saslType="SCRAM-SHA-512" \ --from-literal=user="my-sasl-user" Edit the KnativeKafka CR and add a reference to your secret in the broker spec: apiVersion: operator.serverless.openshift.io/v1alpha1 kind: KnativeKafka metadata: namespace: knative-eventing name: knative-kafka spec: broker: enabled: true defaultConfig: authSecretName: <secret_name> ... 4.6.4. 
Additional resources Red Hat AMQ Streams documentation TLS and SASL on Kafka 4.7. Managing brokers After you have created a broker, you can manage your broker by using Knative ( kn ) CLI commands, or by modifying it in the OpenShift Container Platform web console. 4.7.1. Managing brokers using the CLI The Knative ( kn ) CLI provides commands that can be used to describe and list existing brokers. 4.7.1.1. Listing existing brokers by using the Knative CLI Using the Knative ( kn ) CLI to list brokers provides a streamlined and intuitive user interface. You can use the kn broker list command to list existing brokers in your cluster by using the Knative CLI. Prerequisites The OpenShift Serverless Operator and Knative Eventing are installed on your OpenShift Container Platform cluster. You have installed the Knative ( kn ) CLI. Procedure List all existing brokers: USD kn broker list Example output NAME URL AGE CONDITIONS READY REASON default http://broker-ingress.knative-eventing.svc.cluster.local/test/default 45s 5 OK / 5 True 4.7.1.2. Describing an existing broker by using the Knative CLI Using the Knative ( kn ) CLI to describe brokers provides a streamlined and intuitive user interface. You can use the kn broker describe command to print information about existing brokers in your cluster by using the Knative CLI. Prerequisites The OpenShift Serverless Operator and Knative Eventing are installed on your OpenShift Container Platform cluster. You have installed the Knative ( kn ) CLI. Procedure Describe an existing broker: USD kn broker describe <broker_name> Example command using default broker USD kn broker describe default Example output Name: default Namespace: default Annotations: eventing.knative.dev/broker.class=MTChannelBasedBroker, eventing.knative.dev/creato ... Age: 22s Address: URL: http://broker-ingress.knative-eventing.svc.cluster.local/default/default Conditions: OK TYPE AGE REASON ++ Ready 22s ++ Addressable 22s ++ FilterReady 22s ++ IngressReady 22s ++ TriggerChannelReady 22s 4.7.2. Connect a broker to a sink using the Developer perspective You can connect a broker to an event sink in the OpenShift Container Platform Developer perspective by creating a trigger. Prerequisites The OpenShift Serverless Operator, Knative Serving, and Knative Eventing are installed on your OpenShift Container Platform cluster. You have logged in to the web console and are in the Developer perspective. You have created a project or have access to a project with the appropriate roles and permissions to create applications and other workloads in OpenShift Container Platform. You have created a sink, such as a Knative service or channel. You have created a broker. Procedure In the Topology view, point to the broker that you have created. An arrow appears. Drag the arrow to the sink that you want to connect to the broker. This action opens the Add Trigger dialog box. In the Add Trigger dialog box, enter a name for the trigger and click Add . Verification You can verify that the broker is connected to the sink by viewing the Topology page. In the Developer perspective, navigate to Topology . Click the line that connects the broker to the sink to see details about the trigger in the Details panel.
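For reference, connecting a broker to a sink in the Topology view creates a Trigger resource in your project. The following YAML is a minimal sketch of an equivalent trigger that you could apply with the oc CLI instead of using the web console; the placeholder values <trigger_name>, <broker_name>, and <service_name> stand in for the trigger name you enter in the dialog, the broker you created, and the Knative service that acts as the sink:
apiVersion: eventing.knative.dev/v1
kind: Trigger
metadata:
  name: <trigger_name>
spec:
  broker: <broker_name> # for example, default
  subscriber: # the event sink that receives events from the broker
    ref:
      apiVersion: serving.knative.dev/v1
      kind: Service
      name: <service_name>
Apply the trigger YAML file in the namespace that contains the broker and the sink: USD oc apply -f <filename>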
[ "kn broker create <broker_name>", "kn broker list", "NAME URL AGE CONDITIONS READY REASON default http://broker-ingress.knative-eventing.svc.cluster.local/test/default 45s 5 OK / 5 True", "apiVersion: eventing.knative.dev/v1 kind: Trigger metadata: annotations: eventing.knative.dev/injection: enabled name: <trigger_name> spec: broker: default subscriber: 1 ref: apiVersion: serving.knative.dev/v1 kind: Service name: <service_name>", "oc apply -f <filename>", "oc -n <namespace> get broker default", "NAME READY REASON URL AGE default True http://broker-ingress.knative-eventing.svc.cluster.local/test/default 3m56s", "oc label namespace <namespace> eventing.knative.dev/injection=enabled", "oc -n <namespace> get broker <broker_name>", "oc -n default get broker default", "NAME READY REASON URL AGE default True http://broker-ingress.knative-eventing.svc.cluster.local/test/default 3m56s", "oc label namespace <namespace> eventing.knative.dev/injection-", "oc -n <namespace> delete broker <broker_name>", "oc -n <namespace> get broker <broker_name>", "oc -n default get broker default", "No resources found. Error from server (NotFound): brokers.eventing.knative.dev \"default\" not found", "apiVersion: operator.knative.dev/v1beta1 kind: KnativeEventing metadata: name: knative-eventing namespace: knative-eventing spec: config: 1 config-br-default-channel: channel-template-spec: | apiVersion: messaging.knative.dev/v1beta1 kind: KafkaChannel 2 spec: numPartitions: 6 3 replicationFactor: 3 4", "oc apply -f <filename>", "apiVersion: operator.knative.dev/v1beta1 kind: KnativeEventing metadata: name: knative-eventing namespace: knative-eventing spec: defaultBrokerClass: Kafka 1 config: 2 config-br-defaults: 3 default-br-config: | clusterDefault: 4 brokerClass: Kafka apiVersion: v1 kind: ConfigMap name: kafka-broker-config 5 namespace: knative-eventing 6 namespaceDefaults: 7 my-namespace: brokerClass: MTChannelBasedBroker apiVersion: v1 kind: ConfigMap name: config-br-default-channel 8 namespace: knative-eventing 9", "apiVersion: eventing.knative.dev/v1 kind: Broker metadata: annotations: eventing.knative.dev/broker.class: Kafka 1 name: example-kafka-broker spec: config: apiVersion: v1 kind: ConfigMap name: kafka-broker-config 2 namespace: knative-eventing", "oc apply -f <filename>", "apiVersion: eventing.knative.dev/v1 kind: Broker metadata: annotations: eventing.knative.dev/broker.class: Kafka 1 kafka.eventing.knative.dev/external.topic: <topic_name> 2", "oc apply -f <filename>", "apiVersion: eventing.knative.dev/v1 kind: Broker metadata: annotations: eventing.knative.dev/broker.class: KafkaNamespaced 1 name: default namespace: my-namespace 2 spec: config: apiVersion: v1 kind: ConfigMap name: my-config 3", "oc apply -f <filename>", "apiVersion: v1 kind: ConfigMap metadata: name: my-config namespace: my-namespace data:", "apiVersion: v1 kind: ConfigMap metadata: name: <config_map_name> 1 namespace: <namespace> 2 data: default.topic.partitions: <integer> 3 default.topic.replication.factor: <integer> 4 bootstrap.servers: <list_of_servers> 5 default.topic.config.<config_option>: <value> 6", "apiVersion: v1 kind: ConfigMap metadata: name: kafka-broker-config namespace: knative-eventing data: default.topic.partitions: \"10\" default.topic.replication.factor: \"3\" bootstrap.servers: \"my-cluster-kafka-bootstrap.kafka:9092\" default.topic.config.retention.ms: \"3600\"", "oc apply -f <config_map_filename>", "apiVersion: eventing.knative.dev/v1 kind: Broker metadata: name: <broker_name> 1 namespace: <namespace> 2 
annotations: eventing.knative.dev/broker.class: Kafka 3 spec: config: apiVersion: v1 kind: ConfigMap name: <config_map_name> 4 namespace: <namespace> 5", "oc apply -f <broker_filename>", "oc create secret -n knative-eventing generic <secret_name> --from-literal=protocol=SSL --from-file=ca.crt=caroot.pem --from-file=user.crt=certificate.pem --from-file=user.key=key.pem", "apiVersion: operator.serverless.openshift.io/v1alpha1 kind: KnativeKafka metadata: namespace: knative-eventing name: knative-kafka spec: broker: enabled: true defaultConfig: authSecretName: <secret_name>", "oc create secret -n knative-eventing generic <secret_name> --from-literal=protocol=SASL_SSL --from-literal=sasl.mechanism=<sasl_mechanism> --from-file=ca.crt=caroot.pem --from-literal=password=\"SecretPassword\" --from-literal=user=\"my-sasl-user\"", "oc create secret -n <namespace> generic <kafka_auth_secret> --from-literal=tls.enabled=true --from-literal=password=\"SecretPassword\" --from-literal=saslType=\"SCRAM-SHA-512\" --from-literal=user=\"my-sasl-user\"", "apiVersion: operator.serverless.openshift.io/v1alpha1 kind: KnativeKafka metadata: namespace: knative-eventing name: knative-kafka spec: broker: enabled: true defaultConfig: authSecretName: <secret_name>", "kn broker list", "NAME URL AGE CONDITIONS READY REASON default http://broker-ingress.knative-eventing.svc.cluster.local/test/default 45s 5 OK / 5 True", "kn broker describe <broker_name>", "kn broker describe default", "Name: default Namespace: default Annotations: eventing.knative.dev/broker.class=MTChannelBasedBroker, eventing.knative.dev/creato Age: 22s Address: URL: http://broker-ingress.knative-eventing.svc.cluster.local/default/default Conditions: OK TYPE AGE REASON ++ Ready 22s ++ Addressable 22s ++ FilterReady 22s ++ IngressReady 22s ++ TriggerChannelReady 22s" ]
https://docs.redhat.com/en/documentation/red_hat_openshift_serverless/1.35/html/eventing/brokers