title | content | commands | url
---|---|---|---|
Chapter 2. Understanding ephemeral storage
|
Chapter 2. Understanding ephemeral storage 2.1. Overview In addition to persistent storage, pods and containers can require ephemeral or transient local storage for their operation. The lifetime of this ephemeral storage does not extend beyond the life of the individual pod, and this ephemeral storage cannot be shared across pods. Pods use ephemeral local storage for scratch space, caching, and logs. Issues related to the lack of local storage accounting and isolation include the following: Pods cannot detect how much local storage is available to them. Pods cannot request guaranteed local storage. Local storage is a best-effort resource. Pods can be evicted due to other pods filling the local storage, after which new pods are not admitted until sufficient storage is reclaimed. Unlike persistent volumes, ephemeral storage is unstructured and the space is shared between all pods running on a node, in addition to other uses by the system, the container runtime, and OpenShift Container Platform. The ephemeral storage framework allows pods to specify their transient local storage needs. It also allows OpenShift Container Platform to schedule pods where appropriate, and to protect the node against excessive use of local storage. While the ephemeral storage framework allows administrators and developers to better manage local storage, I/O throughput and latency are not directly affected. 2.2. Types of ephemeral storage Ephemeral local storage is always made available in the primary partition. There are two basic ways of creating the primary partition: root and runtime. Root This partition holds the kubelet root directory, /var/lib/kubelet/ by default, and the /var/log/ directory. This partition can be shared between user pods, the OS, and Kubernetes system daemons. This partition can be consumed by pods through EmptyDir volumes, container logs, image layers, and container-writable layers. Kubelet manages shared access and isolation of this partition. This partition is ephemeral, and applications cannot expect any performance SLAs, such as disk IOPS, from this partition. Runtime This is an optional partition that runtimes can use for overlay file systems. OpenShift Container Platform attempts to identify and provide shared access along with isolation to this partition. Container image layers and writable layers are stored here. If the runtime partition exists, the root partition does not hold any image layer or other writable storage. 2.3. Ephemeral storage management Cluster administrators can manage ephemeral storage within a project by setting quotas that define the limit ranges and number of requests for ephemeral storage across all pods in a non-terminal state. Developers can also set requests and limits on this compute resource at the pod and container level. You can manage local ephemeral storage by specifying requests and limits. Each container in a pod can specify the following: spec.containers[].resources.limits.ephemeral-storage spec.containers[].resources.requests.ephemeral-storage 2.3.1. Ephemeral storage limits and requests units Limits and requests for ephemeral storage are measured in byte quantities. You can express storage as a plain integer or as a fixed-point number using one of these suffixes: E, P, T, G, M, k. You can also use the power-of-two equivalents: Ei, Pi, Ti, Gi, Mi, Ki. For example, the following quantities all represent approximately the same value: 128974848, 129e6, 129M, and 123Mi. Important The suffixes for each byte quantity are case-sensitive. 
Be sure to use the correct case. Use the case-sensitive "M", as in "400M", to set the request at 400 megabytes. Use the case-sensitive "400Mi" to request 400 mebibytes. If you specify "400m" of ephemeral storage, the storage request is only 0.4 bytes. 2.3.2. Ephemeral storage requests and limits example The following example configuration file shows a pod with two containers: Each container requests 2GiB of local ephemeral storage. Each container has a limit of 4GiB of local ephemeral storage. At the pod level, the kubelet computes an overall pod storage limit by adding up the limits of all the containers in that pod. In this case, the total storage usage at the pod level is the sum of the disk usage from all containers plus the pod's emptyDir volumes. Therefore, the pod has a request of 4GiB of local ephemeral storage, and a limit of 8GiB of local ephemeral storage. Example ephemeral storage configuration with quotas and limits apiVersion: v1 kind: Pod metadata: name: frontend spec: containers: - name: app image: images.my-company.example/app:v4 resources: requests: ephemeral-storage: "2Gi" 1 limits: ephemeral-storage: "4Gi" 2 volumeMounts: - name: ephemeral mountPath: "/tmp" - name: log-aggregator image: images.my-company.example/log-aggregator:v6 resources: requests: ephemeral-storage: "2Gi" limits: ephemeral-storage: "4Gi" volumeMounts: - name: ephemeral mountPath: "/tmp" volumes: - name: ephemeral emptyDir: {} 1 Container request for local ephemeral storage. 2 Container limit for local ephemeral storage. 2.3.3. Ephemeral storage configuration affects pod scheduling and eviction The settings in the pod spec affect both how the scheduler makes a decision about scheduling pods and when the kubelet evicts pods. First, the scheduler ensures that the sum of the resource requests of the scheduled containers is less than the capacity of the node. In this case, the pod can be assigned to a node only if the node's available ephemeral storage (allocatable resource) is more than 4GiB. Second, at the container level, because the first container sets a resource limit, the kubelet eviction manager measures the disk usage of this container and evicts the pod if the storage usage of the container exceeds its limit (4GiB). The kubelet eviction manager also marks the pod for eviction if the total usage exceeds the overall pod storage limit (8GiB). For information about defining quotas for projects, see Quota setting per project . 2.4. Monitoring ephemeral storage You can use /bin/df as a tool to monitor ephemeral storage usage on the volume where ephemeral container data is located, which is /var/lib/kubelet and /var/lib/containers . If the cluster administrator places /var/lib/containers on a separate disk, the df command shows the available space for /var/lib/kubelet only. To show the human-readable values of used and available space in /var/lib , enter the following command: $ df -h /var/lib The output shows the ephemeral storage usage in /var/lib : Example output Filesystem Size Used Avail Use% Mounted on /dev/disk/by-partuuid/4cd1448a-01 69G 32G 34G 49% /
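As a minimal sketch of the project-level quotas mentioned in section 2.3, a cluster administrator could create a ResourceQuota for ephemeral storage from the command line. The quota name, project name, and values below are hypothetical and are not part of the original example:

```
$ oc create quota ephemeral-storage-quota \
    --hard=requests.ephemeral-storage=10Gi,limits.ephemeral-storage=20Gi \
    -n my-project   # caps total ephemeral-storage requests and limits across all pods in the project
```

With such a quota in place, every pod created in the project must declare ephemeral-storage requests and limits, and pods that would push the project totals past the quota are rejected at admission.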
|
[
"apiVersion: v1 kind: Pod metadata: name: frontend spec: containers: - name: app image: images.my-company.example/app:v4 resources: requests: ephemeral-storage: \"2Gi\" 1 limits: ephemeral-storage: \"4Gi\" 2 volumeMounts: - name: ephemeral mountPath: \"/tmp\" - name: log-aggregator image: images.my-company.example/log-aggregator:v6 resources: requests: ephemeral-storage: \"2Gi\" limits: ephemeral-storage: \"4Gi\" volumeMounts: - name: ephemeral mountPath: \"/tmp\" volumes: - name: ephemeral emptyDir: {}",
"df -h /var/lib",
"Filesystem Size Used Avail Use% Mounted on /dev/disk/by-partuuid/4cd1448a-01 69G 32G 34G 49% /"
] |
https://docs.redhat.com/en/documentation/openshift_container_platform/4.15/html/storage/understanding-ephemeral-storage
|
Installing on IBM Cloud (Classic)
|
Installing on IBM Cloud (Classic) OpenShift Container Platform 4.17 Installing OpenShift Container Platform IBM Cloud Bare Metal (Classic) Red Hat OpenShift Documentation Team
| null |
https://docs.redhat.com/en/documentation/openshift_container_platform/4.17/html/installing_on_ibm_cloud_classic/index
|
Chapter 19. KubeScheduler [operator.openshift.io/v1]
|
Chapter 19. KubeScheduler [operator.openshift.io/v1] Description KubeScheduler provides information to configure an operator to manage scheduler. Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). Type object Required spec 19.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata spec object spec is the specification of the desired behavior of the Kubernetes Scheduler status object status is the most recently observed status of the Kubernetes Scheduler 19.1.1. .spec Description spec is the specification of the desired behavior of the Kubernetes Scheduler Type object Property Type Description failedRevisionLimit integer failedRevisionLimit is the number of failed static pod installer revisions to keep on disk and in the api -1 = unlimited, 0 or unset = 5 (default) forceRedeploymentReason string forceRedeploymentReason can be used to force the redeployment of the operand by providing a unique string. This provides a mechanism to kick a previously failed deployment and provide a reason why you think it will work this time instead of failing again on the same config. logLevel string logLevel is an intent based logging for an overall component. It does not give fine grained control, but it is a simple way to manage coarse grained logging choices that operators have to interpret for their operands. Valid values are: "Normal", "Debug", "Trace", "TraceAll". Defaults to "Normal". managementState string managementState indicates whether and how the operator should manage the component observedConfig `` observedConfig holds a sparse config that controller has observed from the cluster state. It exists in spec because it is an input to the level for the operator operatorLogLevel string operatorLogLevel is an intent based logging for the operator itself. It does not give fine grained control, but it is a simple way to manage coarse grained logging choices that operators have to interpret for themselves. Valid values are: "Normal", "Debug", "Trace", "TraceAll". Defaults to "Normal". succeededRevisionLimit integer succeededRevisionLimit is the number of successful static pod installer revisions to keep on disk and in the api -1 = unlimited, 0 or unset = 5 (default) unsupportedConfigOverrides `` unsupportedConfigOverrides overrides the final configuration that was computed by the operator. Red Hat does not support the use of this field. Misuse of this field could lead to unexpected behavior or conflict with other configuration options. Seek guidance from the Red Hat support before using this field. Use of this property blocks cluster upgrades, it must be removed before upgrading your cluster. 19.1.2. 
.status Description status is the most recently observed status of the Kubernetes Scheduler Type object Property Type Description conditions array conditions is a list of conditions and their status conditions[] object OperatorCondition is just the standard condition fields. generations array generations are used to determine when an item needs to be reconciled or has changed in a way that needs a reaction. generations[] object GenerationStatus keeps track of the generation for a given resource so that decisions about forced updates can be made. latestAvailableRevision integer latestAvailableRevision is the deploymentID of the most recent deployment latestAvailableRevisionReason string latestAvailableRevisionReason describe the detailed reason for the most recent deployment nodeStatuses array nodeStatuses track the deployment values and errors across individual nodes nodeStatuses[] object NodeStatus provides information about the current state of a particular node managed by this operator. observedGeneration integer observedGeneration is the last generation change you've dealt with readyReplicas integer readyReplicas indicates how many replicas are ready and at the desired state version string version is the level this availability applies to 19.1.3. .status.conditions Description conditions is a list of conditions and their status Type array 19.1.4. .status.conditions[] Description OperatorCondition is just the standard condition fields. Type object Property Type Description lastTransitionTime string message string reason string status string type string 19.1.5. .status.generations Description generations are used to determine when an item needs to be reconciled or has changed in a way that needs a reaction. Type array 19.1.6. .status.generations[] Description GenerationStatus keeps track of the generation for a given resource so that decisions about forced updates can be made. Type object Property Type Description group string group is the group of the thing you're tracking hash string hash is an optional field set for resources without generation that are content sensitive like secrets and configmaps lastGeneration integer lastGeneration is the last generation of the workload controller involved name string name is the name of the thing you're tracking namespace string namespace is where the thing you're tracking is resource string resource is the resource type of the thing you're tracking 19.1.7. .status.nodeStatuses Description nodeStatuses track the deployment values and errors across individual nodes Type array 19.1.8. .status.nodeStatuses[] Description NodeStatus provides information about the current state of a particular node managed by this operator. Type object Property Type Description currentRevision integer currentRevision is the generation of the most recently successful deployment lastFailedCount integer lastFailedCount is how often the installer pod of the last failed revision failed. lastFailedReason string lastFailedReason is a machine readable failure reason string. lastFailedRevision integer lastFailedRevision is the generation of the deployment we tried and failed to deploy. lastFailedRevisionErrors array (string) lastFailedRevisionErrors is a list of human readable errors during the failed deployment referenced in lastFailedRevision. lastFailedTime string lastFailedTime is the time the last failed revision failed the last time. lastFallbackCount integer lastFallbackCount is how often a fallback to a revision happened. 
nodeName string nodeName is the name of the node targetRevision integer targetRevision is the generation of the deployment we're trying to apply 19.2. API endpoints The following API endpoints are available: /apis/operator.openshift.io/v1/kubeschedulers DELETE : delete collection of KubeScheduler GET : list objects of kind KubeScheduler POST : create a KubeScheduler /apis/operator.openshift.io/v1/kubeschedulers/{name} DELETE : delete a KubeScheduler GET : read the specified KubeScheduler PATCH : partially update the specified KubeScheduler PUT : replace the specified KubeScheduler /apis/operator.openshift.io/v1/kubeschedulers/{name}/status GET : read status of the specified KubeScheduler PATCH : partially update status of the specified KubeScheduler PUT : replace status of the specified KubeScheduler 19.2.1. /apis/operator.openshift.io/v1/kubeschedulers HTTP method DELETE Description delete collection of KubeScheduler Table 19.1. HTTP responses HTTP code Reponse body 200 - OK Status schema 401 - Unauthorized Empty HTTP method GET Description list objects of kind KubeScheduler Table 19.2. HTTP responses HTTP code Reponse body 200 - OK KubeSchedulerList schema 401 - Unauthorized Empty HTTP method POST Description create a KubeScheduler Table 19.3. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 19.4. Body parameters Parameter Type Description body KubeScheduler schema Table 19.5. HTTP responses HTTP code Reponse body 200 - OK KubeScheduler schema 201 - Created KubeScheduler schema 202 - Accepted KubeScheduler schema 401 - Unauthorized Empty 19.2.2. /apis/operator.openshift.io/v1/kubeschedulers/{name} Table 19.6. Global path parameters Parameter Type Description name string name of the KubeScheduler HTTP method DELETE Description delete a KubeScheduler Table 19.7. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed Table 19.8. HTTP responses HTTP code Reponse body 200 - OK Status schema 202 - Accepted Status schema 401 - Unauthorized Empty HTTP method GET Description read the specified KubeScheduler Table 19.9. 
HTTP responses HTTP code Reponse body 200 - OK KubeScheduler schema 401 - Unauthorized Empty HTTP method PATCH Description partially update the specified KubeScheduler Table 19.10. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 19.11. HTTP responses HTTP code Reponse body 200 - OK KubeScheduler schema 401 - Unauthorized Empty HTTP method PUT Description replace the specified KubeScheduler Table 19.12. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 19.13. Body parameters Parameter Type Description body KubeScheduler schema Table 19.14. HTTP responses HTTP code Reponse body 200 - OK KubeScheduler schema 201 - Created KubeScheduler schema 401 - Unauthorized Empty 19.2.3. /apis/operator.openshift.io/v1/kubeschedulers/{name}/status Table 19.15. Global path parameters Parameter Type Description name string name of the KubeScheduler HTTP method GET Description read status of the specified KubeScheduler Table 19.16. 
HTTP responses HTTP code Reponse body 200 - OK KubeScheduler schema 401 - Unauthorized Empty HTTP method PATCH Description partially update status of the specified KubeScheduler Table 19.17. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 19.18. HTTP responses HTTP code Reponse body 200 - OK KubeScheduler schema 401 - Unauthorized Empty HTTP method PUT Description replace status of the specified KubeScheduler Table 19.19. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 19.20. Body parameters Parameter Type Description body KubeScheduler schema Table 19.21. HTTP responses HTTP code Reponse body 200 - OK KubeScheduler schema 201 - Created KubeScheduler schema 401 - Unauthorized Empty
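As a brief illustration of working with this resource, the following sketch raises the operator log level described in the spec table and then reads it back. It assumes the conventional singleton resource named cluster and a session with cluster-admin privileges, neither of which is stated in the reference above:

```
# Patch the logLevel field of the KubeScheduler operator resource (merge patch)
$ oc patch kubescheduler cluster --type=merge -p '{"spec":{"logLevel":"Debug"}}'

# Confirm the new value
$ oc get kubescheduler cluster -o jsonpath='{.spec.logLevel}{"\n"}'
```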
| null |
https://docs.redhat.com/en/documentation/openshift_container_platform/4.15/html/operator_apis/kubescheduler-operator-openshift-io-v1
|
8.3.5. Anticipating Your Future Needs
|
8.3.5. Anticipating Your Future Needs Depending on your target and resources, many tools are available: tools for wireless networks, Novell networks, Windows systems, Linux systems, and more. Assessments may also include reviewing physical security, screening personnel, or assessing voice/PBX networks. Emerging concepts, such as war walking (scanning the perimeter of your enterprise's physical structures for wireless network vulnerabilities), are also worth investigating and, if needed, incorporating into your assessments. Imagination and exposure are the only limits when planning and conducting vulnerability assessments.
| null |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/security_guide/s2-vuln-tools-concept
|
Making open source more inclusive
|
Making open source more inclusive Red Hat is committed to replacing problematic language in our code, documentation, and web properties. We are beginning with these four terms: master, slave, blacklist, and whitelist. Because of the enormity of this endeavor, these changes will be implemented gradually over several upcoming releases. For more details, see our CTO Chris Wright's message .
| null |
https://docs.redhat.com/en/documentation/red_hat_build_of_openjdk/8/html/release_notes_for_red_hat_build_of_openjdk_8.0.422/making-open-source-more-inclusive
|
Chapter 1. Introduction
|
Chapter 1. Introduction This document describes how to configure Red Hat OpenStack Platform to use a Fujitsu ETERNUS Disk Storage System as a back end for the Block Storage service. The document covers how to define Fibre Channel and iSCSI back ends provided by an ETERNUS device on an overcloud deployment. This process involves defining both back ends as custom back ends for the Block Storage service. By default, Controller nodes contain the Block Storage service. Prerequisites You intend to use only Fujitsu ETERNUS Disk Storage System devices and drivers for Block Storage back ends. You can use the director installation user that you create with the overcloud deployment. For more information about creating the stack user, see Preparing the undercloud in the Director Installation and Usage guide. You have access to an Admin account on the ETERNUS device through the ETERNUS Web GUI or CLI. Red Hat supports using Fibre Channel or iSCSI interfaces, and the respective drivers and settings, with a Fujitsu ETERNUS device. Note For more information about defining a custom back end, see the Custom Block Storage Back End Deployment Guide .
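As a rough sketch of the deployment step that this process leads to, a custom back-end definition is typically passed to the overcloud deployment as an additional environment file; the file path and name below are hypothetical and stand in for whatever file you create for the ETERNUS back ends:

```
$ openstack overcloud deploy --templates \
    -e /home/stack/templates/eternus-cinder-backend.yaml   # hypothetical custom back-end environment file
```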
| null |
https://docs.redhat.com/en/documentation/red_hat_openstack_platform/16.0/html/fujitsu_eternus_back_end_guide/intro
|
Chapter 10. Improving user access security
|
Chapter 10. Improving user access security You can enable secure role-based access control (SRBAC) in Red Hat OpenStack Platform 17. The SRBAC model has three personas, based on three roles existing within the project scope. 10.1. SRBAC personas Personas are a combination of roles and the scope to which they belong. When you deploy Red Hat OpenStack Platform 17, you can assign any of the personas from the project scope. 10.1.1. Red Hat OpenStack Platform SRBAC roles Currently, three different roles are available within the project scope. admin The admin role includes all create, read, update, or delete operations on a resource or API. member The member role is allowed to create, read, update, and delete resources that are owned by the scope in which they are a member. reader The reader role is for read-only operations, regardless of the scope it is applied to. This role can view resources across the entirety of the scope to which it is applied. 10.1.2. Red Hat OpenStack Platform SRBAC scope The scope is the context in which operations are performed. Only the project scope is available in Red Hat OpenStack Platform 17. The project scope is a contained subset of APIs for isolated self-service resources within OpenStack. 10.1.3. Red Hat OpenStack Platform SRBAC personas Admin Because the project admin persona is the only administrative persona available, Red Hat OpenStack Platform 17 includes modified policies that grant the project admin persona the highest level of authorization. This persona includes create, read, update and delete operations on resources across projects, which includes adding and removing users and other projects. Note This persona is expected to change in scope with future development. This role implies all permissions granted to project members and project readers. Project member The project member persona is for users who are granted permission to consume resources within the project scope. This persona can create, list, update, and delete resources within the project to which they are assigned. This persona implies all permissions granted to project readers. Project reader The project reader persona is for users who are granted permission to view non-sensitive resources in the project. On projects, assign the reader role to end users who need to inspect or view resources, or to auditors, who only need to view project-specific resources within a single project for the purposes of an audit. The project-reader persona will not address all auditing use cases. Additional personas based on the system or domain scopes are in development and are not available for use. Note The Image service (glance) does not support SRBAC permissions for metadef APIs. The default policies in RHOSP 17.1 for Image service metadef APIs are for the admin only. 10.2. Activating secure role-based access control When you activate secure role-based access control, you activate a new set of policy files that define the scope of permissions assigned to users in your Red Hat OpenStack Platform environment. Prerequisites You have an installed Red Hat OpenStack Platform director environment. Procedure Include the enable-secure-rbac.yaml environment file in the deployment script when deploying Red Hat OpenStack Platform: 10.3. Assigning roles in an SRBAC environment With SRBAC on Red Hat OpenStack Platform, you can assign users to the role of admin , project-member , or project-reader . Prerequisites You have deployed Red Hat OpenStack Platform with secure role-based access control (SRBAC). 
Procedure Use the openstack role add command with the following syntax: Assign the admin role: Assign the member role: Assign the reader role: Replace <user> with the existing user to whom you want to assign the role. Replace <domain> with the domain to which the role applies. Replace <project> with the project for which the user is granted the role. Replace <project-domain> with the domain that the project is in.
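For example, a concrete invocation with the placeholders filled in might look like the following; the user, domain, and project names are hypothetical:

```
# Grant the member role to user "alice" on project "dev-project"
$ openstack role add --user alice --user-domain Default \
    --project dev-project --project-domain Default member
```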
|
[
"openstack overcloud deploy --templates ... -e /usr/share/openstack-tripleo-heat-templates/environments/enable-secure-rbac.yaml",
"openstack role add --user <user> --user-domain <domain> --project <project> --project-domain <project-domain> admin",
"openstack role add --user <user> --user-domain <domain> --project <project> --project-domain <project-domain> member",
"openstack role add --user <user> --user-domain <domain> --project <project> --project-domain <project-domain> reader"
] |
https://docs.redhat.com/en/documentation/red_hat_openstack_platform/17.1/html/hardening_red_hat_openstack_platform/assembly_improving-user-access-security_security_and_hardening
|
Chapter 23. Managing self-service rules using the IdM Web UI
|
Chapter 23. Managing self-service rules using the IdM Web UI Learn about self-service rules in Identity Management (IdM) and how to create and edit self-service access rules in the web interface (IdM Web UI). 23.1. Self-service access control in IdM Self-service access control rules define which operations an Identity Management (IdM) entity can perform on its IdM Directory Server entry: for example, IdM users have the ability to update their own passwords. This method of control allows an authenticated IdM entity to edit specific attributes within its LDAP entry, but does not allow add or delete operations on the entire entry. Warning Be careful when working with self-service access control rules: configuring access control rules improperly can inadvertently elevate an entity's privileges. 23.2. Creating self-service rules using the IdM Web UI Follow this procedure to create self-service access rules in IdM using the web interface (IdM Web UI). Prerequisites Administrator privileges for managing IdM or the User Administrator role. You are logged in to the IdM Web UI. For details, see Accessing the IdM Web UI in a web browser . Procedure Open the Role-Based Access Control submenu in the IPA Server tab and select Self Service Permissions . Click Add at the upper-right of the list of the self-service access rules: The Add Self Service Permission window opens. Enter the name of the new self-service rule in the Self-service name field. Spaces are allowed: Select the check boxes next to the attributes that you want users to be able to edit. Optional: If an attribute you want to provide access to is not listed, you can add a listing for it: Click the Add button. Enter the attribute name in the Attribute text field of the following Add Custom Attribute window. Click the OK button to add the attribute. Verify that the new attribute is selected. Click the Add button at the bottom of the form to save the new self-service rule. Alternatively, you can save and continue editing the self-service rule by clicking the Add and Edit button, or save and add further rules by clicking the Add and Add another button. 23.3. Editing self-service rules using the IdM Web UI Follow this procedure to edit self-service access rules in IdM using the web interface (IdM Web UI). Prerequisites Administrator privileges for managing IdM or the User Administrator role. You are logged in to the IdM Web UI. For details, see Accessing the IdM Web UI in a web browser . Procedure Open the Role-Based Access Control submenu in the IPA Server tab and select Self Service Permissions . Click the name of the self-service rule you want to modify. The edit page only allows you to edit the list of attributes that you want to add to or remove from the self-service rule. Select or deselect the appropriate check boxes. Click the Save button to save your changes to the self-service rule. 23.4. Deleting self-service rules using the IdM Web UI Follow this procedure to delete self-service access rules in IdM using the web interface (IdM Web UI). Prerequisites Administrator privileges for managing IdM or the User Administrator role. You are logged in to the IdM Web UI. For details, see Accessing the IdM Web UI in a web browser . Procedure Open the Role-Based Access Control submenu in the IPA Server tab and select Self Service Permissions . Select the check box next to the rule you want to delete, then click the Delete button on the right of the list. A dialog opens; click Delete to confirm.
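If you prefer the command line to the Web UI, an equivalent rule can typically be created with the ipa selfservice-add command; the rule name and attribute below are illustrative only and are not taken from the procedure above:

```
# Allow users to edit their own SSH public keys in their LDAP entry
$ ipa selfservice-add "Users can manage their own SSH public keys" \
    --permissions=write --attrs=ipasshpubkey
```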
| null |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/configuring_and_managing_identity_management/managing-self-service-rules-in-idm-using-the-idm-web-ui_configuring-and-managing-idm
|
Chapter 10. Hardware Enablement
|
Chapter 10. Hardware Enablement Broadcom 5880 smart card readers with the updated firmware are now supported This update includes the USB ID entries for the updated firmware version of the Broadcom 5880 smart card readers and Red Hat Enterprise Linux is now able to properly recognize and use these readers. Note that users with the Broadcom 5880 smart card readers using older firmware versions should update the firmware. See the Support section at www.dell.com for more information about the updating process. (BZ#1435668) fwupd now supports Synaptics MST hubs Red Hat Enterprise Linux 7.5 adds a plug-in for Synaptics MST hubs to the fwupd utility. This plug-in enables you to flash firmware and query firmware information for this device. (BZ#1420913) kernel-rt sources updated The kernel-rt sources have been upgraded to be based on the latest Red Hat Enterprise Linux kernel source tree, which provides a number of bug fixes and enhancements over the version. (BZ# 1462329 ) Improved RT throttling mechanism The current real-time throttling mechanism prevents the starvation of non-real-time tasks by CPU intensive real-time tasks. When a real-time run queue is throttled, it allows non-real-time tasks to run or if there are none, the CPU goes idle. To safely maximize CPU usage by decreasing the CPU idle time, the RT_RUNTIME_GREED scheduler feature has been implemented. When enabled, this feature checks if non-real-time tasks are starving before throttling the real-time task. As a result, the RT_RUNTIME_GREED scheduler option guarantees some run time on all CPUs for the non-real-time tasks, while keeping the real-time tasks running as much as possible. (BZ# 1401061 ) VMware Paravirtual RDMA Driver This enhancement update adds VMware Paravirtual RDMA Driver to Red Hat Enterprise Linux. This feature allows VMware users to deploy and use Red Hat Enterprise Linux-based VMs with PVRDMA devices. (BZ#1454965) opal-prd rebased to version 5.9 The opal-prd daemon, which handles hardware-specific recovery processes, has been rebased to version 5.9. This enhancement update includes the following important fixes and notable enhancements: flush after logging to stdio in debug mode fixes for memory leaks fix for opal-prd command line options fix for occ_reset call API comment regarding nanosleep ranges the pnor file is no longer passed while starting opal-prd on FSP system host, pnor access interface is disabled add support for runtime OCC load/start in ZZ Users of opal-prd are advised to upgrade to these updated packages, which fix these bugs and add these enhancements. (BZ#1456536) libreswan now supports NIC offloading This update of the libreswan packages introduces support for the network interface controller (NIC) offloading. Libreswan now automatically detects the NIC hardware offload support, and the nic-offload=auto|yes|no option has been added for manual setup of this feature. (BZ#1463062) Trusted Computing Group TPM 2.0 System API library and management utilities available The following packages, which handle the Trusted Computing Group's Trusted Platform Module (TPM) 2.0 hardware and which were previously available as a Technology Preview, are now fully supported: The tpm2-tss package adds the Intel implementation of the TPM 2.0 System API library. This library enables programs to interact with TPM 2.0 devices. The tpm2-tools package adds a set of utilities for management and utilization of TPM 2.0 devices from user space. 
(BZ#1463097, BZ#1463100) new packages: tpm2-abrmd This update adds the tpm2-abrmd packages to Red Hat Enterprise Linux 7. The tpm2-abrmd packages provide a system service that implements the Trusted Platform Module (TPM) 2.0 Access Broker (TAB) and Resource Manager (RM) specification from the Trusted Computing Group. (BZ#1492466)
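A minimal sketch of querying and applying firmware updates with fwupd, which is how the Synaptics MST hub support described above is typically exercised; device names and available updates will vary by system:

```
$ fwupdmgr refresh       # download current firmware metadata
$ fwupdmgr get-devices   # list devices recognized by fwupd, including MST hubs
$ fwupdmgr get-updates   # show available firmware updates for those devices
$ fwupdmgr update        # apply the available updates
```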
| null |
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/7.5_release_notes/new_features_hardware_enablement
|
Chapter 40. Endpoint Interface
|
Chapter 40. Endpoint Interface Abstract This chapter describes how to implement the Endpoint interface, which is an essential step in the implementation of an Apache Camel component. 40.1. The Endpoint Interface Overview An instance of org.apache.camel.Endpoint type encapsulates an endpoint URI, and it also serves as a factory for Consumer , Producer , and Exchange objects. There are three different approaches to implementing an endpoint: event-driven, scheduled poll, and polling. These endpoint implementation patterns complement the corresponding patterns for implementing a consumer - see Section 41.2, "Implementing the Consumer Interface" . Figure 40.1, "Endpoint Inheritance Hierarchy" shows the relevant Java interfaces and classes that make up the Endpoint inheritance hierarchy. Figure 40.1. Endpoint Inheritance Hierarchy The Endpoint interface Example 40.1, "Endpoint Interface" shows the definition of the org.apache.camel.Endpoint interface. Example 40.1. Endpoint Interface Endpoint methods The Endpoint interface defines the following methods: isSingleton() - Returns true if you want to ensure that each URI maps to a single endpoint within a CamelContext. When this property is true , multiple references to the identical URI within your routes always refer to a single endpoint instance. When this property is false , on the other hand, multiple references to the same URI within your routes refer to distinct endpoint instances. Each time you refer to the URI in a route, a new endpoint instance is created. getEndpointUri() - Returns the endpoint URI of this endpoint. getEndpointKey() - Used by org.apache.camel.spi.LifecycleStrategy when registering the endpoint. getCamelContext() - Returns a reference to the CamelContext instance to which this endpoint belongs. setCamelContext() - Sets the CamelContext instance to which this endpoint belongs. configureProperties() - Stores a copy of the parameter map that is used to inject parameters when creating a new Consumer instance. isLenientProperties() - Returns true to indicate that the URI is allowed to contain unknown parameters (that is, parameters that cannot be injected on the Endpoint or the Consumer class). Normally, this method should be implemented to return false . createExchange() - An overloaded method with the following variants: Exchange createExchange() - Creates a new exchange instance with a default exchange pattern setting. Exchange createExchange(ExchangePattern pattern) - Creates a new exchange instance with the specified exchange pattern. Exchange createExchange(Exchange exchange) - Converts the given exchange argument to the type of exchange needed for this endpoint. If the given exchange is not already of the correct type, this method copies it into a new instance of the correct type. A default implementation of this method is provided in the DefaultEndpoint class. createProducer() - Factory method used to create new Producer instances. createConsumer() - Factory method to create new event-driven consumer instances. The processor argument is a reference to the first processor in the route. createPollingConsumer() - Factory method to create new polling consumer instances. Endpoint singletons In order to avoid unnecessary overhead, it is a good idea to create a single endpoint instance for all endpoints that have the same URI (within a CamelContext). You can enforce this condition by implementing isSingleton() to return true . Note In this context, same URI means that two URIs are the same when compared using string equality. 
In principle, it is possible to have two URIs that are equivalent, though represented by different strings. In that case, the URIs would not be treated as the same. 40.2. Implementing the Endpoint Interface Alternative ways of implementing an endpoint The following alternative endpoint implementation patterns are supported: Event-driven endpoint implementation Scheduled poll endpoint implementation Polling endpoint implementation Event-driven endpoint implementation If your custom endpoint conforms to the event-driven pattern (see Section 38.1.3, "Consumer Patterns and Threading" ), it is implemented by extending the abstract class, org.apache.camel.impl.DefaultEndpoint , as shown in Example 40.2, "Implementing DefaultEndpoint" . Example 40.2. Implementing DefaultEndpoint 1 Implement an event-driven custom endpoint, CustomEndpoint , by extending the DefaultEndpoint class. 2 You must have at least one constructor that takes the endpoint URI, endpointUri , and the parent component reference, component , as arguments. 3 Implement the createProducer() factory method to create producer endpoints. 4 Implement the createConsumer() factory method to create event-driven consumer instances. 5 In general, it is not necessary to override the createExchange() methods. The implementations inherited from DefaultEndpoint create a DefaultExchange object by default, which can be used in any Apache Camel component. If you need to initialize some exchange properties in the DefaultExchange object, however, it is appropriate to override the createExchange() methods here in order to add the exchange property settings. Important Do not override the createPollingConsumer() method. The DefaultEndpoint class provides default implementations of the following methods, which you might find useful when writing your custom endpoint code: getEndpointUri() - Returns the endpoint URI. getCamelContext() - Returns a reference to the CamelContext . getComponent() - Returns a reference to the parent component. createPollingConsumer() - Creates a polling consumer. The created polling consumer's functionality is based on the event-driven consumer. If you override the event-driven consumer method, createConsumer() , you get a polling consumer implementation. createExchange(Exchange e) - Converts the given exchange object, e , to the type required for this endpoint. This method creates a new endpoint using the overridden createExchange() endpoints. This ensures that the method also works for custom exchange types. Scheduled poll endpoint implementation If your custom endpoint conforms to the scheduled poll pattern (see Section 38.1.3, "Consumer Patterns and Threading" ) it is implemented by inheriting from the abstract class, org.apache.camel.impl.ScheduledPollEndpoint , as shown in Example 40.3, "ScheduledPollEndpoint Implementation" . Example 40.3. ScheduledPollEndpoint Implementation 1 Implement a scheduled poll custom endpoint, CustomEndpoint , by extending the ScheduledPollEndpoint class. 2 You must to have at least one constructor that takes the endpoint URI, endpointUri , and the parent component reference, component , as arguments. 3 Implement the createProducer() factory method to create a producer endpoint. 4 Implement the createConsumer() factory method to create a scheduled poll consumer instance. 5 The configureConsumer() method, defined in the ScheduledPollEndpoint base class, is responsible for injecting consumer query options into the consumer. See the section called "Consumer parameter injection" . 
6 In general, it is not necessary to override the createExchange() methods. The implementations inherited from DefaultEndpoint create a DefaultExchange object by default, which can be used in any Apache Camel component. If you need to initialize some exchange properties in the DefaultExchange object, however, it is appropriate to override the createExchange() methods here in order to add the exchange property settings. Important Do not override the createPollingConsumer() method. Polling endpoint implementation If your custom endpoint conforms to the polling consumer pattern (see Section 38.1.3, "Consumer Patterns and Threading" ), it is implemented by inheriting from the abstract class, org.apache.camel.impl.DefaultPollingEndpoint , as shown in Example 40.4, "DefaultPollingEndpoint Implementation" . Example 40.4. DefaultPollingEndpoint Implementation Because this CustomEndpoint class is a polling endpoint, you must implement the createPollingConsumer() method instead of the createConsumer() method. The consumer instance returned from createPollingConsumer() must inherit from the PollingConsumer interface. For details of how to implement a polling consumer, see the section called "Polling consumer implementation" . Apart from the implementation of the createPollingConsumer() method, the steps for implementing a DefaultPollingEndpoint are similar to the steps for implementing a ScheduledPollEndpoint . See Example 40.3, "ScheduledPollEndpoint Implementation" for details. Implementing the BrowsableEndpoint interface If you want to expose the list of exchange instances that are pending in the current endpoint, you can implement the org.apache.camel.spi.BrowsableEndpoint interface, as shown in Example 40.5, "BrowsableEndpoint Interface" . It makes sense to implement this interface if the endpoint performs some sort of buffering of incoming events. For example, the Apache Camel SEDA endpoint implements the BrowsableEndpoint interface - see Example 40.6, "SedaEndpoint Implementation" . Example 40.5. BrowsableEndpoint Interface Example Example 40.6, "SedaEndpoint Implementation" shows a sample implementation of SedaEndpoint . The SEDA endpoint is an example of an event-driven endpoint . Incoming events are stored in a FIFO queue (an instance of java.util.concurrent.BlockingQueue ) and a SEDA consumer starts up a thread to read and process the events. The events themselves are represented by org.apache.camel.Exchange objects. Example 40.6. SedaEndpoint Implementation 1 The SedaEndpoint class follows the pattern for implementing an event-driven endpoint by extending the DefaultEndpoint class. The SedaEndpoint class also implements the BrowsableEndpoint interface, which provides access to the list of exchange objects in the queue. 2 Following the usual pattern for an event-driven consumer, SedaEndpoint defines a constructor that takes an endpoint argument, endpointUri , and a component reference argument, component . 3 Another constructor is provided, which delegates queue creation to the parent component instance. 4 The createProducer() factory method creates an instance of CollectionProducer , which is a producer implementation that adds events to the queue. 5 The createConsumer() factory method creates an instance of SedaConsumer , which is responsible for pulling events off the queue and processing them. 6 The getQueue() method returns a reference to the queue. 7 The isSingleton() method returns true , indicating that a single endpoint instance should be created for each unique URI string. 
8 The getExchanges() method implements the corresponding abstract method from BrowsableEndpoint.
|
[
"package org.apache.camel; public interface Endpoint { boolean isSingleton(); String getEndpointUri(); String getEndpointKey(); CamelContext getCamelContext(); void setCamelContext(CamelContext context); void configureProperties(Map options); boolean isLenientProperties(); Exchange createExchange(); Exchange createExchange(ExchangePattern pattern); Exchange createExchange(Exchange exchange); Producer createProducer() throws Exception; Consumer createConsumer(Processor processor) throws Exception; PollingConsumer createPollingConsumer() throws Exception; }",
"import java.util.Map; import java.util.concurrent.BlockingQueue; import org.apache.camel.Component; import org.apache.camel.Consumer; import org.apache.camel.Exchange; import org.apache.camel.Processor; import org.apache.camel.Producer; import org.apache.camel.impl.DefaultEndpoint; import org.apache.camel.impl.DefaultExchange; public class CustomEndpoint extends DefaultEndpoint { 1 public CustomEndpoint (String endpointUri, Component component) { 2 super(endpointUri, component); // Do any other initialization } public Producer createProducer() throws Exception { 3 return new CustomProducer (this); } public Consumer createConsumer(Processor processor) throws Exception { 4 return new CustomConsumer (this, processor); } public boolean isSingleton() { return true; } // Implement the following methods, only if you need to set exchange properties. // public Exchange createExchange() { 5 return this.createExchange(getExchangePattern()); } public Exchange createExchange(ExchangePattern pattern) { Exchange result = new DefaultExchange(getCamelContext(), pattern); // Set exchange properties return result; } }",
"import org.apache.camel.Consumer; import org.apache.camel.Processor; import org.apache.camel.Producer; import org.apache.camel.ExchangePattern; import org.apache.camel.Message; import org.apache.camel.impl.ScheduledPollEndpoint; public class CustomEndpoint extends ScheduledPollEndpoint { 1 protected CustomEndpoint (String endpointUri, CustomComponent component) { 2 super(endpointUri, component); // Do any other initialization } public Producer createProducer() throws Exception { 3 Producer result = new CustomProducer (this); return result; } public Consumer createConsumer(Processor processor) throws Exception { 4 Consumer result = new CustomConsumer (this, processor); configureConsumer(result); 5 return result; } public boolean isSingleton() { return true; } // Implement the following methods, only if you need to set exchange properties. // public Exchange createExchange() { 6 return this.createExchange(getExchangePattern()); } public Exchange createExchange(ExchangePattern pattern) { Exchange result = new DefaultExchange(getCamelContext(), pattern); // Set exchange properties return result; } }",
"import org.apache.camel.Consumer; import org.apache.camel.Processor; import org.apache.camel.Producer; import org.apache.camel.ExchangePattern; import org.apache.camel.Message; import org.apache.camel.impl.DefaultPollingEndpoint; public class CustomEndpoint extends DefaultPollingEndpoint { public PollingConsumer createPollingConsumer() throws Exception { PollingConsumer result = new CustomConsumer (this); configureConsumer(result); return result; } // Do NOT implement createConsumer(). It is already implemented in DefaultPollingEndpoint. }",
"package org.apache.camel.spi; import java.util.List; import org.apache.camel.Endpoint; import org.apache.camel.Exchange; public interface BrowsableEndpoint extends Endpoint { List<Exchange> getExchanges(); }",
"package org.apache.camel.component.seda; import java.util.ArrayList; import java.util.List; import java.util.Map; import java.util.concurrent.BlockingQueue; import org.apache.camel.Component; import org.apache.camel.Consumer; import org.apache.camel.Exchange; import org.apache.camel.Processor; import org.apache.camel.Producer; import org.apache.camel.impl.DefaultEndpoint; import org.apache.camel.spi.BrowsableEndpoint; public class SedaEndpoint extends DefaultEndpoint implements BrowsableEndpoint { 1 private BlockingQueue<Exchange> queue; public SedaEndpoint(String endpointUri, Component component, BlockingQueue<Exchange> queue) { 2 super(endpointUri, component); this.queue = queue; } public SedaEndpoint(String uri, SedaComponent component, Map parameters) { 3 this(uri, component, component.createQueue(uri, parameters)); } public Producer createProducer() throws Exception { 4 return new CollectionProducer(this, getQueue()); } public Consumer createConsumer(Processor processor) throws Exception { 5 return new SedaConsumer(this, processor); } public BlockingQueue<Exchange> getQueue() { 6 return queue; } public boolean isSingleton() { 7 return true; } public List<Exchange> getExchanges() { 8 return new ArrayList<Exchange> getQueue()); } }"
] |
https://docs.redhat.com/en/documentation/red_hat_fuse/7.13/html/apache_camel_development_guide/EndpointIntf
|
Chapter 8. Image tags overview
|
Chapter 8. Image tags overview An image tag refers to a label or identifier assigned to a specific version or variant of a container image. Container images are typically composed of multiple layers that represent different parts of the image. Image tags are used to differentiate between different versions of an image or to provide additional information about the image. Image tags have the following benefits: Versioning and Releases : Image tags allow you to denote different versions or releases of an application or software. For example, you might have an image tagged as v1.0 to represent the initial release and v1.1 for an updated version. This helps in maintaining a clear record of image versions. Rollbacks and Testing : If you encounter issues with a new image version, you can easily revert to a previous version by specifying its tag. This is helpful during debugging and testing phases. Development Environments : Image tags are beneficial when working with different environments. You might use a dev tag for a development version, qa for quality assurance testing, and prod for production, each with their respective features and configurations. Continuous Integration/Continuous Deployment (CI/CD) : CI/CD pipelines often utilize image tags to automate the deployment process. New code changes can trigger the creation of a new image with a specific tag, enabling seamless updates. Feature Branches : When multiple developers are working on different features or bug fixes, they can create distinct image tags for their changes. This helps in isolating and testing individual features. Customization : You can use image tags to customize images with different configurations, dependencies, or optimizations, while keeping track of each variant. Security and Patching : When security vulnerabilities are discovered, you can create patched versions of images with updated tags, ensuring that your systems are using the latest secure versions. Dockerfile Changes : If you modify the Dockerfile or build process, you can use image tags to differentiate between images built from the original and updated Dockerfiles. Overall, image tags provide a structured way to manage and organize container images, enabling efficient development, deployment, and maintenance workflows. 8.1. Viewing image tag information by using the UI Use the following procedure to view image tag information using the v2 UI. Prerequisites You have pushed an image tag to a repository. Procedure On the v2 UI, click Repositories . Click the name of a repository. Click the name of a tag. You are taken to the Details page of that tag. The page reveals the following information: Name Repository Digest Vulnerabilities Creation Modified Size Labels How to fetch the image tag Click Security Report to view the tag's vulnerabilities. You can expand an advisory column to open up CVE data. Click Packages to view the tag's packages. Click the name of the repository to return to the Tags page. 8.2. Viewing image tag information by using the API Use the following procedure to view image tag information by using the API. Prerequisites You have pushed an image tag to a Red Hat Quay repository. You have Created an OAuth access token . You have set BROWSER_API_CALLS_XHR_ONLY: false in your config.yaml file. Procedure To obtain tag information, you must use the GET /api/v1/repository/{repository} API endpoint and pass in the includeTags parameter. 
For example: USD curl -X GET \ -H "Authorization: Bearer <bearer_token>" \ -H "Accept: application/json" \ https://<quay-server.example.com>/api/v1/repository/<namespace>/<repository_name>?includeTags=true Example output {"namespace": "quayadmin", "name": "busybox", "kind": "image", "description": null, "is_public": false, "is_organization": false, "is_starred": false, "status_token": "d8f5e074-690a-46d7-83c8-8d4e3d3d0715", "trust_enabled": false, "tag_expiration_s": 1209600, "is_free_account": true, "state": "NORMAL", "tags": {"example": {"name": "example", "size": 2275314, "last_modified": "Tue, 14 May 2024 14:48:51 -0000", "manifest_digest": "sha256:57583a1b9c0a7509d3417387b4f43acf80d08cdcf5266ac87987be3f8f919d5d"}, "test": {"name": "test", "size": 2275314, "last_modified": "Tue, 14 May 2024 14:04:48 -0000", "manifest_digest": "sha256:57583a1b9c0a7509d3417387b4f43acf80d08cdcf5266ac87987be3f8f919d5d"}}, "can_write": true, "can_admin": true} Alternatively, you can use the GET /api/v1/repository/{repository}/tag/ endpoint. For example: USD curl -X GET \ -H "Authorization: Bearer <bearer_token>" \ -H "Accept: application/json" \ https://<quay-server.example.com>/api/v1/repository/<namespace>/<repository_name>/tag/ Example output {"tags": [{"name": "test-two", "reversion": true, "start_ts": 1718737153, "manifest_digest": "sha256:57583a1b9c0a7509d3417387b4f43acf80d08cdcf5266ac87987be3f8f919d5d", "is_manifest_list": false, "size": 2275314, "last_modified": "Tue, 18 Jun 2024 18:59:13 -0000"}, {"name": "test-two", "reversion": false, "start_ts": 1718737029, "end_ts": 1718737153, "manifest_digest": "sha256:0cd3dd6236e246b349e63f76ce5f150e7cd5dbf2f2f1f88dbd734430418dbaea", "is_manifest_list": false, "size": 2275317, "last_modified": "Tue, 18 Jun 2024 18:57:09 -0000", "expiration": "Tue, 18 Jun 2024 18:59:13 -0000"}, {"name": "test-two", "reversion": false, "start_ts": 1718737018, "end_ts": 1718737029, "manifest_digest": "sha256:0cd3dd6236e246b349e63f76ce5f150e7cd5dbf2f2f1f88dbd734430418dbaea", "is_manifest_list": false, "size": 2275317, "last_modified": "Tue, 18 Jun 2024 18:56:58 -0000", "expiration": "Tue, 18 Jun 2024 18:57:09 -0000"}, {"name": "sample_tag", "reversion": false, "start_ts": 1718736147, "manifest_digest": "sha256:57583a1b9c0a7509d3417387b4f43acf80d08cdcf5266ac87987be3f8f919d5d", "is_manifest_list": false, "size": 2275314, "last_modified": "Tue, 18 Jun 2024 18:42:27 -0000"}, {"name": "test-two", "reversion": false, "start_ts": 1717680780, "end_ts": 1718737018, "manifest_digest": "sha256:57583a1b9c0a7509d3417387b4f43acf80d08cdcf5266ac87987be3f8f919d5d", "is_manifest_list": false, "size": 2275314, "last_modified": "Thu, 06 Jun 2024 13:33:00 -0000", "expiration": "Tue, 18 Jun 2024 18:56:58 -0000"}, {"name": "tag-test", "reversion": false, "start_ts": 1717680378, "manifest_digest": "sha256:57583a1b9c0a7509d3417387b4f43acf80d08cdcf5266ac87987be3f8f919d5d", "is_manifest_list": false, "size": 2275314, "last_modified": "Thu, 06 Jun 2024 13:26:18 -0000"}, {"name": "example", "reversion": false, "start_ts": 1715698131, "manifest_digest": "sha256:57583a1b9c0a7509d3417387b4f43acf80d08cdcf5266ac87987be3f8f919d5d", "is_manifest_list": false, "size": 2275314, "last_modified": "Tue, 14 May 2024 14:48:51 -0000"}], "page": 1, "has_additional": false} 8.3. Adding a new image tag to an image by using the UI You can add a new tag to an image in Red Hat Quay. Procedure On the Red Hat Quay v2 UI dashboard, click Repositories in the navigation pane. Click the name of a repository that has image tags. 
Click the menu kebab, then click Add new tag . Enter a name for the tag, then click Create tag . The new tag is now listed on the Repository Tags page. 8.4. Adding a new tag to an image by using the API You can add a new tag, or restore an old one, to an image by using the API. Prerequisites You have Created an OAuth access token . You have set BROWSER_API_CALLS_XHR_ONLY: false in your config.yaml file. Procedure You can change which image a tag points to or create a new tag by using the PUT /api/v1/repository/{repository}/tag/{tag} command: USD curl -X PUT \ -H "Authorization: Bearer <bearer_token>" \ -H "Content-Type: application/json" \ --data '{ "manifest_digest": "<manifest_digest>" }' \ https://<quay-server.example.com>/api/v1/repository/<namespace>/<repository_name>/tag/<tag> Example output "Updated" You can restore a repository tag to its previous image by using the POST /api/v1/repository/{repository}/tag/{tag}/restore command. For example: USD curl -X POST \ -H "Authorization: Bearer <bearer_token>" \ -H "Content-Type: application/json" \ --data '{ "manifest_digest": <manifest_digest> }' \ quay-server.example.com/api/v1/repository/quayadmin/busybox/tag/test/restore Example output {} To see a list of tags after creating a new tag, you can use the GET /api/v1/repository/{repository}/tag/ command. For example: USD curl -X GET \ -H "Authorization: Bearer <bearer_token>" \ -H "Accept: application/json" \ https://<quay-server.example.com>/api/v1/repository/<namespace>/<repository_name>/tag Example output {"tags": [{"name": "test", "reversion": false, "start_ts": 1716324069, "manifest_digest": "sha256:57583a1b9c0a7509d3417387b4f43acf80d08cdcf5266ac87987be3f8f919d5d", "is_manifest_list": false, "size": 2275314, "last_modified": "Tue, 21 May 2024 20:41:09 -0000"}, {"name": "example", "reversion": false, "start_ts": 1715698131, "manifest_digest": "sha256:57583a1b9c0a7509d3417387b4f43acf80d08cdcf5266ac87987be3f8f919d5d", "is_manifest_list": false, "size": 2275314, "last_modified": "Tue, 14 May 2024 14:48:51 -0000"}, {"name": "example", "reversion": false, "start_ts": 1715697708, "end_ts": 1715698131, "manifest_digest": "sha256:57583a1b9c0a7509d3417387b4f43acf80d08cdcf5266ac87987be3f8f919d5d", "is_manifest_list": false, "size": 2275314, "last_modified": "Tue, 14 May 2024 14:41:48 -0000", "expiration": "Tue, 14 May 2024 14:48:51 -0000"}, {"name": "test", "reversion": false, "start_ts": 1715695488, "end_ts": 1716324069, "manifest_digest": "sha256:57583a1b9c0a7509d3417387b4f43acf80d08cdcf5266ac87987be3f8f919d5d", "is_manifest_list": false, "size": 2275314, "last_modified": "Tue, 14 May 2024 14:04:48 -0000", "expiration": "Tue, 21 May 2024 20:41:09 -0000"}, {"name": "test", "reversion": false, "start_ts": 1715631517, "end_ts": 1715695488, "manifest_digest": "sha256:57583a1b9c0a7509d3417387b4f43acf80d08cdcf5266ac87987be3f8f919d5d", "is_manifest_list": false, "size": 2275314, "last_modified": "Mon, 13 May 2024 20:18:37 -0000", "expiration": "Tue, 14 May 2024 14:04:48 -0000"}], "page": 1, "has_additional": false} 8.5. Adding and managing labels by using the UI Administrators can add and manage labels for tags by using the following procedure. Procedure On the v2 UI dashboard, click Repositories in the navigation pane. Click the name of a repository that has image tags. Click the menu kebab for an image and select Edit labels . In the Edit labels window, click Add new label . Enter a label for the image tag using the key=value format, for example, com.example.release-date=2023-11-14 . 
Note The following error is returned when failing to use the key=value format: Invalid label format, must be key value separated by = . Click the whitespace of the box to add the label. Optional. Add a second label. Click Save labels to save the label to the image tag. The following notification is returned: Created labels successfully . Optional. Click the same image tag's menu kebab, select Edit labels , and click X on the label to remove it; alternatively, you can edit the text. Click Save labels . The label is now removed or edited. 8.6. Adding and managing labels by using the API Red Hat Quay administrators can add and manage labels for tags with the API by using the following procedure. Prerequisites You have Created an OAuth access token . You have set BROWSER_API_CALLS_XHR_ONLY: false in your config.yaml file. Procedure Use the GET /api/v1/repository/{repository}/manifest/{manifestref} command to retrieve the details of a specific manifest in a repository: USD curl -X GET \ -H "Authorization: Bearer <bearer_token>" \ -H "Accept: application/json" \ https://<quay-server.example.com>/api/v1/repository/<repository>/manifest/<manifestref> Use the GET /api/v1/repository/{repository}/manifest/{manifestref}/labels command to retrieve a list of labels for a specific manifest: USD curl -X GET \ -H "Authorization: Bearer <bearer_token>" \ -H "Accept: application/json" \ https://<quay-server.example.com>/api/v1/repository/<repository>/manifest/<manifestref>/labels Example output {"labels": [{"id": "e9f717d2-c1dd-4626-802d-733a029d17ad", "key": "org.opencontainers.image.url", "value": "https://github.com/docker-library/busybox", "source_type": "manifest", "media_type": "text/plain"}, {"id": "2d34ec64-4051-43ad-ae06-d5f81003576a", "key": "org.opencontainers.image.version", "value": "1.36.1-glibc", "source_type": "manifest", "media_type": "text/plain"}]} Use the GET /api/v1/repository/{repository}/manifest/{manifestref}/labels/{labelid} command to obtain information about a specific label: USD curl -X GET \ -H "Authorization: Bearer <bearer_token>" \ -H "Accept: application/json" \ https://<quay-server.example.com>/api/v1/repository/<repository>/manifest/<manifestref>/labels/<label_id> Example output {"id": "e9f717d2-c1dd-4626-802d-733a029d17ad", "key": "org.opencontainers.image.url", "value": "https://github.com/docker-library/busybox", "source_type": "manifest", "media_type": "text/plain"} You can add an additional label to a manifest in a given repository with the POST /api/v1/repository/{repository}/manifest/{manifestref}/labels command. For example: USD curl -X POST \ -H "Authorization: Bearer <bearer_token>" \ -H "Content-Type: application/json" \ --data '{ "key": "<key>", "value": "<value>", "media_type": "<media_type>" }' \ https://<quay-server.example.com>/api/v1/repository/<repository>/manifest/<manifestref>/labels Example output {"label": {"id": "346593fd-18c8-49db-854f-4cb1fb76ff9c", "key": "example-key", "value": "example-value", "source_type": "api", "media_type": "text/plain"}} You can delete a label using the DELETE /api/v1/repository/{repository}/manifest/{manifestref}/labels/{labelid} command: USD curl -X DELETE \ -H "Authorization: Bearer <bearer_token>" \ https://<quay-server.example.com>/api/v1/repository/<repository>/manifest/<manifestref>/labels/<labelid> This command does not return output in the CLI. You can use one of the commands above to ensure that it was successfully removed. 8.7. 
Setting tag expirations Image tags can be set to expire from a Red Hat Quay repository at a chosen date and time using the tag expiration feature. This feature includes the following characteristics: When an image tag expires, it is deleted from the repository. If it is the last tag for a specific image, the image is also set to be deleted. Expiration is set on a per-tag basis. It is not set for a repository as a whole. After a tag is expired or deleted, it is not immediately removed from the registry. This is contingent upon the allotted time designated in the time machine feature, which defines when the tag is permanently deleted, or garbage collected. By default, this value is set at 14 days ; however, the administrator can adjust this time to one of multiple options. Up until the point that garbage collection occurs, tag changes can be reverted. The Red Hat Quay superuser has no special privilege related to deleting expired images from user repositories. There is no central mechanism for the superuser to gather information and act on user repositories. It is up to the owners of each repository to manage expiration and the deletion of their images. Tag expiration can be set up in one of two ways: By setting the quay.expires-after= label in the Dockerfile when the image is created. This sets a time to expire from when the image is built. By selecting an expiration date on the Red Hat Quay UI. For example: Setting tag expirations can help automate the cleanup of older or unused tags, helping to reduce storage space. 8.7.1. Setting tag expiration from a repository Procedure On the Red Hat Quay v2 UI dashboard, click Repositories in the navigation pane. Click the name of a repository that has image tags. Click the menu kebab for an image and select Change expiration . Optional. Alternatively, you can bulk add expiration dates by clicking the box of multiple tags, and then select Actions Set expiration . In the Change Tags Expiration window, set an expiration date, specifying the day of the week, month, day of the month, and year. For example, Wednesday, November 15, 2023 . Alternatively, you can click the calendar button and manually select the date. Set the time, for example, 2:30 PM . Click Change Expiration to confirm the date and time. The following notification is returned: Successfully set expiration for tag test to Nov 15, 2023, 2:26 PM . On the Red Hat Quay v2 UI Tags page, you can see when the tag is set to expire. For example: 8.7.2. Setting tag expiration from a Dockerfile You can add a label, for example, quay.expires-after=20h to an image tag by using the docker label command to cause the tag to automatically expire after the time that is indicated. The following values for hours, days, or weeks are accepted: 1h 2d 3w Expiration begins from the time that the image is pushed to the registry. Procedure Enter the following docker label command to add a label to the desired image tag. The label should be in the format quay.expires-after=20h to indicate that the tag should expire after 20 hours. Replace 20h with the desired expiration time. For example: USD docker label quay.expires-after=20h quay-server.example.com/quayadmin/<image>:<tag> 8.7.3. Setting tag expirations by using the API Image tags can be set to expire by using the API. Prerequisites You have Created an OAuth access token . You have set BROWSER_API_CALLS_XHR_ONLY: false in your config.yaml file. 
Procedure You can set when an image tag expires by using the PUT /api/v1/repository/{repository}/tag/{tag} command and passing in the expiration field: USD curl -X PUT \ -H "Authorization: Bearer <bearer_token>" \ -H "Content-Type: application/json" \ --data '{ "manifest_digest": "<manifest_digest>" }' \ https://<quay-server.example.com>/api/v1/repository/<namespace>/<repository_name>/tag/<tag> Example output "Updated" 8.8. Fetching an image by tag or digest Red Hat Quay offers multiple ways of pulling images using Docker and Podman clients. Procedure Navigate to the Tags page of a repository. Under Manifest , click the Fetch Tag icon. When the popup box appears, users are presented with the following options: Podman Pull (by tag) Docker Pull (by tag) Podman Pull (by digest) Docker Pull (by digest) Selecting any one of the four options returns a command for the respective client that allows users to pull the image. Click Copy Command to copy the command, which can be used on the command-line interface (CLI). For example: USD podman pull quay-server.example.com/quayadmin/busybox:test2 8.9. Viewing Red Hat Quay tag history by using the UI Red Hat Quay offers a comprehensive history of images and their respective image tags. Procedure On the Red Hat Quay v2 UI dashboard, click Repositories in the navigation pane. Click the name of a repository that has image tags. Click Tag History . On this page, you can perform the following actions: Search by tag name Select a date range View tag changes View tag modification dates and the time at which they were changed 8.10. Viewing Red Hat Quay tag history by using the API Red Hat Quay offers a comprehensive history of images and their respective image tags. Prerequisites You have Created an OAuth access token . You have set BROWSER_API_CALLS_XHR_ONLY: false in your config.yaml file. Procedure Enter the following command to view tag history by using the GET /api/v1/repository/{repository}/tag/ command and passing in one of the following queries: onlyActiveTags=<true/false> : Filters to only include active tags. page=<number> : Specifies the page number of results to retrieve. limit=<number> : Limits the number of results per page. specificTag=<tag_name> : Filters the tags to include only the tag with the specified name. USD curl -X GET \ -H "Authorization: Bearer <bearer_token>" \ -H "Accept: application/json" \ "https://<quay-server.example.com>/api/v1/repository/<namespace>/<repository>/tag/?onlyActiveTags=true&page=1&limit=10" Example output {"tags": [{"name": "test-two", "reversion": false, "start_ts": 1717680780, "manifest_digest": "sha256:57583a1b9c0a7509d3417387b4f43acf80d08cdcf5266ac87987be3f8f919d5d", "is_manifest_list": false, "size": 2275314, "last_modified": "Thu, 06 Jun 2024 13:33:00 -0000"}, {"name": "tag-test", "reversion": false, "start_ts": 1717680378, "manifest_digest": "sha256:57583a1b9c0a7509d3417387b4f43acf80d08cdcf5266ac87987be3f8f919d5d", "is_manifest_list": false, "size": 2275314, "last_modified": "Thu, 06 Jun 2024 13:26:18 -0000"}, {"name": "example", "reversion": false, "start_ts": 1715698131, "manifest_digest": "sha256:57583a1b9c0a7509d3417387b4f43acf80d08cdcf5266ac87987be3f8f919d5d", "is_manifest_list": false, "size": 2275314, "last_modified": "Tue, 14 May 2024 14:48:51 -0000"}], "page": 1, "has_additional": false} By using the specificTag=<tag_name> query, you can filter results for a specific tag. 
For example: USD curl -X GET -H "Authorization: Bearer <bearer_token>" -H "Accept: application/json" "<quay-server.example.com>/api/v1/repository/quayadmin/busybox/tag/?onlyActiveTags=true&page=1&limit=20&specificTag=test-two" Example output {"tags": [{"name": "test-two", "reversion": true, "start_ts": 1718737153, "manifest_digest": "sha256:57583a1b9c0a7509d3417387b4f43acf80d08cdcf5266ac87987be3f8f919d5d", "is_manifest_list": false, "size": 2275314, "last_modified": "Tue, 18 Jun 2024 18:59:13 -0000"}], "page": 1, "has_additional": false} 8.11. Deleting an image tag Deleting an image tag removes that specific version of the image from the registry. To delete an image tag, use the following procedure. Procedure On the Repositories page of the v2 UI, click the name of the image you want to delete, for example, quay/admin/busybox . Click the More Actions drop-down menu. Click Delete . Note If desired, you could click Make Public or Make Private . Type confirm in the box, and then click Delete . After deletion, you are returned to the Repositories page. Note Deleting an image tag can be reverted based on the amount of time allotted to the time machine feature. For more information, see "Reverting tag changes". 8.12. Deleting an image by using the API You can delete an old image tag by using the API. Prerequisites You have Created an OAuth access token . You have set BROWSER_API_CALLS_XHR_ONLY: false in your config.yaml file. Procedure You can delete an image tag by using the DELETE /api/v1/repository/{repository}/tag/{tag} command: USD curl -X DELETE \ -H "Authorization: Bearer <bearer_token>" \ https://<quay-server.example.com>/api/v1/repository/<namespace>/<repository_name>/tag/<tag> This command does not return output in the CLI. Continue on to the next step to return a list of tags. To see a list of tags after deleting a tag, you can use the GET /api/v1/repository/{repository}/tag/ command. 
For example: USD curl -X GET \ -H "Authorization: Bearer <bearer_token>" \ -H "Accept: application/json" \ https://<quay-server.example.com>/api/v1/repository/<namespace>/<repository_name>/tag Example output {"tags": [{"name": "test", "reversion": false, "start_ts": 1716324069, "manifest_digest": "sha256:57583a1b9c0a7509d3417387b4f43acf80d08cdcf5266ac87987be3f8f919d5d", "is_manifest_list": false, "size": 2275314, "last_modified": "Tue, 21 May 2024 20:41:09 -0000"}, {"name": "example", "reversion": false, "start_ts": 1715698131, "manifest_digest": "sha256:57583a1b9c0a7509d3417387b4f43acf80d08cdcf5266ac87987be3f8f919d5d", "is_manifest_list": false, "size": 2275314, "last_modified": "Tue, 14 May 2024 14:48:51 -0000"}, {"name": "example", "reversion": false, "start_ts": 1715697708, "end_ts": 1715698131, "manifest_digest": "sha256:57583a1b9c0a7509d3417387b4f43acf80d08cdcf5266ac87987be3f8f919d5d", "is_manifest_list": false, "size": 2275314, "last_modified": "Tue, 14 May 2024 14:41:48 -0000", "expiration": "Tue, 14 May 2024 14:48:51 -0000"}, {"name": "test", "reversion": false, "start_ts": 1715695488, "end_ts": 1716324069, "manifest_digest": "sha256:57583a1b9c0a7509d3417387b4f43acf80d08cdcf5266ac87987be3f8f919d5d", "is_manifest_list": false, "size": 2275314, "last_modified": "Tue, 14 May 2024 14:04:48 -0000", "expiration": "Tue, 21 May 2024 20:41:09 -0000"}, {"name": "test", "reversion": false, "start_ts": 1715631517, "end_ts": 1715695488, "manifest_digest": "sha256:57583a1b9c0a7509d3417387b4f43acf80d08cdcf5266ac87987be3f8f919d5d", "is_manifest_list": false, "size": 2275314, "last_modified": "Mon, 13 May 2024 20:18:37 -0000", "expiration": "Tue, 14 May 2024 14:04:48 -0000"}], "page": 1, "has_additional": false} 8.13. Reverting tag changes by using the UI Red Hat Quay offers a comprehensive time machine feature that allows older image tags to remain in the repository for set periods of time so that users can revert changes made to tags. This feature allows users to revert tag changes, like tag deletions. Procedure On the Repositories page of the v2 UI, click the name of the image you want to revert. Click the Tag History tab. Find the point in the timeline at which image tags were changed or removed. Then, click the option under Revert to restore a tag to its previous image. 8.14. Reverting tag changes by using the API Red Hat Quay offers a comprehensive time machine feature that allows older image tags to remain in the repository for set periods of time so that users can revert changes made to tags. This feature allows users to revert tag changes, like tag deletions. Prerequisites You have Created an OAuth access token . You have set BROWSER_API_CALLS_XHR_ONLY: false in your config.yaml file. Procedure You can restore a repository tag to its previous image by using the POST /api/v1/repository/{repository}/tag/{tag}/restore command. For example: USD curl -X POST \ -H "Authorization: Bearer <bearer_token>" \ -H "Content-Type: application/json" \ --data '{ "manifest_digest": <manifest_digest> }' \ quay-server.example.com/api/v1/repository/quayadmin/busybox/tag/test/restore Example output {} To see a list of tags after restoring an old tag, you can use the GET /api/v1/repository/{repository}/tag/ command. 
For example: USD curl -X GET \ -H "Authorization: Bearer <bearer_token>" \ -H "Accept: application/json" \ https://<quay-server.example.com>/api/v1/repository/<namespace>/<repository_name>/tag Example output {"tags": [{"name": "test", "reversion": false, "start_ts": 1716324069, "manifest_digest": "sha256:57583a1b9c0a7509d3417387b4f43acf80d08cdcf5266ac87987be3f8f919d5d", "is_manifest_list": false, "size": 2275314, "last_modified": "Tue, 21 May 2024 20:41:09 -0000"}, {"name": "example", "reversion": false, "start_ts": 1715698131, "manifest_digest": "sha256:57583a1b9c0a7509d3417387b4f43acf80d08cdcf5266ac87987be3f8f919d5d", "is_manifest_list": false, "size": 2275314, "last_modified": "Tue, 14 May 2024 14:48:51 -0000"}, {"name": "example", "reversion": false, "start_ts": 1715697708, "end_ts": 1715698131, "manifest_digest": "sha256:57583a1b9c0a7509d3417387b4f43acf80d08cdcf5266ac87987be3f8f919d5d", "is_manifest_list": false, "size": 2275314, "last_modified": "Tue, 14 May 2024 14:41:48 -0000", "expiration": "Tue, 14 May 2024 14:48:51 -0000"}, {"name": "test", "reversion": false, "start_ts": 1715695488, "end_ts": 1716324069, "manifest_digest": "sha256:57583a1b9c0a7509d3417387b4f43acf80d08cdcf5266ac87987be3f8f919d5d", "is_manifest_list": false, "size": 2275314, "last_modified": "Tue, 14 May 2024 14:04:48 -0000", "expiration": "Tue, 21 May 2024 20:41:09 -0000"}, {"name": "test", "reversion": false, "start_ts": 1715631517, "end_ts": 1715695488, "manifest_digest": "sha256:57583a1b9c0a7509d3417387b4f43acf80d08cdcf5266ac87987be3f8f919d5d", "is_manifest_list": false, "size": 2275314, "last_modified": "Mon, 13 May 2024 20:18:37 -0000", "expiration": "Tue, 14 May 2024 14:04:48 -0000"}], "page": 1, "has_additional": false}
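The individual API calls shown in this chapter can be combined into a short script. The following is a minimal, illustrative sketch that lists the active tags of a repository and prints the manifest digest of one tag so that it can be passed to the restore endpoint shown above. It assumes the jq utility is installed, and the server hostname, namespace, repository, token, and tag values are placeholders that you must replace:
#!/usr/bin/env bash
# Sketch only: list active tags, then print the manifest digest of one tag.
# QUAY, NAMESPACE, REPO, TOKEN, and TAG are placeholders for your environment.
set -euo pipefail
QUAY="https://quay-server.example.com"
NAMESPACE="quayadmin"
REPO="busybox"
TOKEN="<bearer_token>"
TAG="test"
# List only active tags, using the documented GET /api/v1/repository/{repository}/tag/ endpoint.
curl -s -H "Authorization: Bearer ${TOKEN}" -H "Accept: application/json" \
  "${QUAY}/api/v1/repository/${NAMESPACE}/${REPO}/tag/?onlyActiveTags=true" | jq -r '.tags[].name'
# Print the manifest digest of a specific tag; this value can then be used with the
# POST /api/v1/repository/{repository}/tag/{tag}/restore endpoint shown earlier.
curl -s -H "Authorization: Bearer ${TOKEN}" -H "Accept: application/json" \
  "${QUAY}/api/v1/repository/${NAMESPACE}/${REPO}/tag/?specificTag=${TAG}&onlyActiveTags=true" | jq -r '.tags[0].manifest_digest'
The jq filters assume the JSON structure shown in the example outputs above, where tags are returned as a tags array with name and manifest_digest fields.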
|
[
"curl -X GET -H \"Authorization: Bearer <bearer_token>\" -H \"Accept: application/json\" https://<quay-server.example.com>/api/v1/repository/<namespace>/<repository_name>?includeTags=true",
"{\"namespace\": \"quayadmin\", \"name\": \"busybox\", \"kind\": \"image\", \"description\": null, \"is_public\": false, \"is_organization\": false, \"is_starred\": false, \"status_token\": \"d8f5e074-690a-46d7-83c8-8d4e3d3d0715\", \"trust_enabled\": false, \"tag_expiration_s\": 1209600, \"is_free_account\": true, \"state\": \"NORMAL\", \"tags\": {\"example\": {\"name\": \"example\", \"size\": 2275314, \"last_modified\": \"Tue, 14 May 2024 14:48:51 -0000\", \"manifest_digest\": \"sha256:57583a1b9c0a7509d3417387b4f43acf80d08cdcf5266ac87987be3f8f919d5d\"}, \"test\": {\"name\": \"test\", \"size\": 2275314, \"last_modified\": \"Tue, 14 May 2024 14:04:48 -0000\", \"manifest_digest\": \"sha256:57583a1b9c0a7509d3417387b4f43acf80d08cdcf5266ac87987be3f8f919d5d\"}}, \"can_write\": true, \"can_admin\": true}",
"curl -X GET -H \"Authorization: Bearer <bearer_token>\" -H \"Accept: application/json\" https://<quay-server.example.com>/api/v1/repository/<namespace>/<repository_name>/tag/",
"{\"tags\": [{\"name\": \"test-two\", \"reversion\": true, \"start_ts\": 1718737153, \"manifest_digest\": \"sha256:57583a1b9c0a7509d3417387b4f43acf80d08cdcf5266ac87987be3f8f919d5d\", \"is_manifest_list\": false, \"size\": 2275314, \"last_modified\": \"Tue, 18 Jun 2024 18:59:13 -0000\"}, {\"name\": \"test-two\", \"reversion\": false, \"start_ts\": 1718737029, \"end_ts\": 1718737153, \"manifest_digest\": \"sha256:0cd3dd6236e246b349e63f76ce5f150e7cd5dbf2f2f1f88dbd734430418dbaea\", \"is_manifest_list\": false, \"size\": 2275317, \"last_modified\": \"Tue, 18 Jun 2024 18:57:09 -0000\", \"expiration\": \"Tue, 18 Jun 2024 18:59:13 -0000\"}, {\"name\": \"test-two\", \"reversion\": false, \"start_ts\": 1718737018, \"end_ts\": 1718737029, \"manifest_digest\": \"sha256:0cd3dd6236e246b349e63f76ce5f150e7cd5dbf2f2f1f88dbd734430418dbaea\", \"is_manifest_list\": false, \"size\": 2275317, \"last_modified\": \"Tue, 18 Jun 2024 18:56:58 -0000\", \"expiration\": \"Tue, 18 Jun 2024 18:57:09 -0000\"}, {\"name\": \"sample_tag\", \"reversion\": false, \"start_ts\": 1718736147, \"manifest_digest\": \"sha256:57583a1b9c0a7509d3417387b4f43acf80d08cdcf5266ac87987be3f8f919d5d\", \"is_manifest_list\": false, \"size\": 2275314, \"last_modified\": \"Tue, 18 Jun 2024 18:42:27 -0000\"}, {\"name\": \"test-two\", \"reversion\": false, \"start_ts\": 1717680780, \"end_ts\": 1718737018, \"manifest_digest\": \"sha256:57583a1b9c0a7509d3417387b4f43acf80d08cdcf5266ac87987be3f8f919d5d\", \"is_manifest_list\": false, \"size\": 2275314, \"last_modified\": \"Thu, 06 Jun 2024 13:33:00 -0000\", \"expiration\": \"Tue, 18 Jun 2024 18:56:58 -0000\"}, {\"name\": \"tag-test\", \"reversion\": false, \"start_ts\": 1717680378, \"manifest_digest\": \"sha256:57583a1b9c0a7509d3417387b4f43acf80d08cdcf5266ac87987be3f8f919d5d\", \"is_manifest_list\": false, \"size\": 2275314, \"last_modified\": \"Thu, 06 Jun 2024 13:26:18 -0000\"}, {\"name\": \"example\", \"reversion\": false, \"start_ts\": 1715698131, \"manifest_digest\": \"sha256:57583a1b9c0a7509d3417387b4f43acf80d08cdcf5266ac87987be3f8f919d5d\", \"is_manifest_list\": false, \"size\": 2275314, \"last_modified\": \"Tue, 14 May 2024 14:48:51 -0000\"}], \"page\": 1, \"has_additional\": false}",
"curl -X PUT -H \"Authorization: Bearer <bearer_token>\" -H \"Content-Type: application/json\" --data '{ \"manifest_digest\": \"<manifest_digest>\" }' https://<quay-server.example.com>/api/v1/repository/<namespace>/<repository_name>/tag/<tag>",
"\"Updated\"",
"curl -X POST -H \"Authorization: Bearer <bearer_token>\" -H \"Content-Type: application/json\" --data '{ \"manifest_digest\": <manifest_digest> }' quay-server.example.com/api/v1/repository/quayadmin/busybox/tag/test/restore",
"{}",
"curl -X GET -H \"Authorization: Bearer <bearer_token>\" -H \"Accept: application/json\" https://<quay-server.example.com>/api/v1/repository/<namespace>/<repository_name>/tag",
"{\"tags\": [{\"name\": \"test\", \"reversion\": false, \"start_ts\": 1716324069, \"manifest_digest\": \"sha256:57583a1b9c0a7509d3417387b4f43acf80d08cdcf5266ac87987be3f8f919d5d\", \"is_manifest_list\": false, \"size\": 2275314, \"last_modified\": \"Tue, 21 May 2024 20:41:09 -0000\"}, {\"name\": \"example\", \"reversion\": false, \"start_ts\": 1715698131, \"manifest_digest\": \"sha256:57583a1b9c0a7509d3417387b4f43acf80d08cdcf5266ac87987be3f8f919d5d\", \"is_manifest_list\": false, \"size\": 2275314, \"last_modified\": \"Tue, 14 May 2024 14:48:51 -0000\"}, {\"name\": \"example\", \"reversion\": false, \"start_ts\": 1715697708, \"end_ts\": 1715698131, \"manifest_digest\": \"sha256:57583a1b9c0a7509d3417387b4f43acf80d08cdcf5266ac87987be3f8f919d5d\", \"is_manifest_list\": false, \"size\": 2275314, \"last_modified\": \"Tue, 14 May 2024 14:41:48 -0000\", \"expiration\": \"Tue, 14 May 2024 14:48:51 -0000\"}, {\"name\": \"test\", \"reversion\": false, \"start_ts\": 1715695488, \"end_ts\": 1716324069, \"manifest_digest\": \"sha256:57583a1b9c0a7509d3417387b4f43acf80d08cdcf5266ac87987be3f8f919d5d\", \"is_manifest_list\": false, \"size\": 2275314, \"last_modified\": \"Tue, 14 May 2024 14:04:48 -0000\", \"expiration\": \"Tue, 21 May 2024 20:41:09 -0000\"}, {\"name\": \"test\", \"reversion\": false, \"start_ts\": 1715631517, \"end_ts\": 1715695488, \"manifest_digest\": \"sha256:57583a1b9c0a7509d3417387b4f43acf80d08cdcf5266ac87987be3f8f919d5d\", \"is_manifest_list\": false, \"size\": 2275314, \"last_modified\": \"Mon, 13 May 2024 20:18:37 -0000\", \"expiration\": \"Tue, 14 May 2024 14:04:48 -0000\"}], \"page\": 1, \"has_additional\": false}",
"curl -X GET -H \"Authorization: Bearer <bearer_token>\" -H \"Accept: application/json\" https://<quay-server.example.com>/api/v1/repository/<repository>/manifest/<manifestref>",
"curl -X GET -H \"Authorization: Bearer <bearer_token>\" -H \"Accept: application/json\" https://<quay-server.example.com>/api/v1/repository/<repository>/manifest/<manifestref>/labels",
"{\"labels\": [{\"id\": \"e9f717d2-c1dd-4626-802d-733a029d17ad\", \"key\": \"org.opencontainers.image.url\", \"value\": \"https://github.com/docker-library/busybox\", \"source_type\": \"manifest\", \"media_type\": \"text/plain\"}, {\"id\": \"2d34ec64-4051-43ad-ae06-d5f81003576a\", \"key\": \"org.opencontainers.image.version\", \"value\": \"1.36.1-glibc\", \"source_type\": \"manifest\", \"media_type\": \"text/plain\"}]}",
"curl -X GET -H \"Authorization: Bearer <bearer_token>\" -H \"Accept: application/json\" https://<quay-server.example.com>/api/v1/repository/<repository>/manifest/<manifestref>/labels/<label_id>",
"{\"id\": \"e9f717d2-c1dd-4626-802d-733a029d17ad\", \"key\": \"org.opencontainers.image.url\", \"value\": \"https://github.com/docker-library/busybox\", \"source_type\": \"manifest\", \"media_type\": \"text/plain\"}",
"curl -X POST -H \"Authorization: Bearer <bearer_token>\" -H \"Content-Type: application/json\" --data '{ \"key\": \"<key>\", \"value\": \"<value>\", \"media_type\": \"<media_type>\" }' https://<quay-server.example.com>/api/v1/repository/<repository>/manifest/<manifestref>/labels",
"{\"label\": {\"id\": \"346593fd-18c8-49db-854f-4cb1fb76ff9c\", \"key\": \"example-key\", \"value\": \"example-value\", \"source_type\": \"api\", \"media_type\": \"text/plain\"}}",
"curl -X DELETE -H \"Authorization: Bearer <bearer_token>\" https://<quay-server.example.com>/api/v1/repository/<repository>/manifest/<manifestref>/labels/<labelid>",
"docker label quay.expires-after=20h quay-server.example.com/quayadmin/<image>:<tag>",
"curl -X PUT -H \"Authorization: Bearer <bearer_token>\" -H \"Content-Type: application/json\" --data '{ \"manifest_digest\": \"<manifest_digest>\" }' https://<quay-server.example.com>/api/v1/repository/<namespace>/<repository_name>/tag/<tag>",
"\"Updated\"",
"podman pull quay-server.example.com/quayadmin/busybox:test2",
"curl -X GET -H \"Authorization: Bearer <bearer_token>\" -H \"Accept: application/json\" \"https://<quay-server.example.com>/api/v1/repository/<namespace>/<repository>/tag/?onlyActiveTags=true&page=1&limit=10\"",
"{\"tags\": [{\"name\": \"test-two\", \"reversion\": false, \"start_ts\": 1717680780, \"manifest_digest\": \"sha256:57583a1b9c0a7509d3417387b4f43acf80d08cdcf5266ac87987be3f8f919d5d\", \"is_manifest_list\": false, \"size\": 2275314, \"last_modified\": \"Thu, 06 Jun 2024 13:33:00 -0000\"}, {\"name\": \"tag-test\", \"reversion\": false, \"start_ts\": 1717680378, \"manifest_digest\": \"sha256:57583a1b9c0a7509d3417387b4f43acf80d08cdcf5266ac87987be3f8f919d5d\", \"is_manifest_list\": false, \"size\": 2275314, \"last_modified\": \"Thu, 06 Jun 2024 13:26:18 -0000\"}, {\"name\": \"example\", \"reversion\": false, \"start_ts\": 1715698131, \"manifest_digest\": \"sha256:57583a1b9c0a7509d3417387b4f43acf80d08cdcf5266ac87987be3f8f919d5d\", \"is_manifest_list\": false, \"size\": 2275314, \"last_modified\": \"Tue, 14 May 2024 14:48:51 -0000\"}], \"page\": 1, \"has_additional\": false}",
"curl -X GET -H \"Authorization: Bearer <bearer_token>\" -H \"Accept: application/json\" \"<quay-server.example.com>/api/v1/repository/quayadmin/busybox/tag/?onlyActiveTags=true&page=1&limit=20&specificTag=test-two\"",
"{\"tags\": [{\"name\": \"test-two\", \"reversion\": true, \"start_ts\": 1718737153, \"manifest_digest\": \"sha256:57583a1b9c0a7509d3417387b4f43acf80d08cdcf5266ac87987be3f8f919d5d\", \"is_manifest_list\": false, \"size\": 2275314, \"last_modified\": \"Tue, 18 Jun 2024 18:59:13 -0000\"}], \"page\": 1, \"has_additional\": false}",
"curl -X DELETE -H \"Authorization: Bearer <bearer_token>\" https://<quay-server.example.com>/api/v1/repository/<namespace>/<repository_name>/tag/<tag>",
"curl -X GET -H \"Authorization: Bearer <bearer_token>\" -H \"Accept: application/json\" https://<quay-server.example.com>/api/v1/repository/<namespace>/<repository_name>/tag",
"{\"tags\": [{\"name\": \"test\", \"reversion\": false, \"start_ts\": 1716324069, \"manifest_digest\": \"sha256:57583a1b9c0a7509d3417387b4f43acf80d08cdcf5266ac87987be3f8f919d5d\", \"is_manifest_list\": false, \"size\": 2275314, \"last_modified\": \"Tue, 21 May 2024 20:41:09 -0000\"}, {\"name\": \"example\", \"reversion\": false, \"start_ts\": 1715698131, \"manifest_digest\": \"sha256:57583a1b9c0a7509d3417387b4f43acf80d08cdcf5266ac87987be3f8f919d5d\", \"is_manifest_list\": false, \"size\": 2275314, \"last_modified\": \"Tue, 14 May 2024 14:48:51 -0000\"}, {\"name\": \"example\", \"reversion\": false, \"start_ts\": 1715697708, \"end_ts\": 1715698131, \"manifest_digest\": \"sha256:57583a1b9c0a7509d3417387b4f43acf80d08cdcf5266ac87987be3f8f919d5d\", \"is_manifest_list\": false, \"size\": 2275314, \"last_modified\": \"Tue, 14 May 2024 14:41:48 -0000\", \"expiration\": \"Tue, 14 May 2024 14:48:51 -0000\"}, {\"name\": \"test\", \"reversion\": false, \"start_ts\": 1715695488, \"end_ts\": 1716324069, \"manifest_digest\": \"sha256:57583a1b9c0a7509d3417387b4f43acf80d08cdcf5266ac87987be3f8f919d5d\", \"is_manifest_list\": false, \"size\": 2275314, \"last_modified\": \"Tue, 14 May 2024 14:04:48 -0000\", \"expiration\": \"Tue, 21 May 2024 20:41:09 -0000\"}, {\"name\": \"test\", \"reversion\": false, \"start_ts\": 1715631517, \"end_ts\": 1715695488, \"manifest_digest\": \"sha256:57583a1b9c0a7509d3417387b4f43acf80d08cdcf5266ac87987be3f8f919d5d\", \"is_manifest_list\": false, \"size\": 2275314, \"last_modified\": \"Mon, 13 May 2024 20:18:37 -0000\", \"expiration\": \"Tue, 14 May 2024 14:04:48 -0000\"}], \"page\": 1, \"has_additional\": false}",
"curl -X POST -H \"Authorization: Bearer <bearer_token>\" -H \"Content-Type: application/json\" --data '{ \"manifest_digest\": <manifest_digest> }' quay-server.example.com/api/v1/repository/quayadmin/busybox/tag/test/restore",
"{}",
"curl -X GET -H \"Authorization: Bearer <bearer_token>\" -H \"Accept: application/json\" https://<quay-server.example.com>/api/v1/repository/<namespace>/<repository_name>/tag",
"{\"tags\": [{\"name\": \"test\", \"reversion\": false, \"start_ts\": 1716324069, \"manifest_digest\": \"sha256:57583a1b9c0a7509d3417387b4f43acf80d08cdcf5266ac87987be3f8f919d5d\", \"is_manifest_list\": false, \"size\": 2275314, \"last_modified\": \"Tue, 21 May 2024 20:41:09 -0000\"}, {\"name\": \"example\", \"reversion\": false, \"start_ts\": 1715698131, \"manifest_digest\": \"sha256:57583a1b9c0a7509d3417387b4f43acf80d08cdcf5266ac87987be3f8f919d5d\", \"is_manifest_list\": false, \"size\": 2275314, \"last_modified\": \"Tue, 14 May 2024 14:48:51 -0000\"}, {\"name\": \"example\", \"reversion\": false, \"start_ts\": 1715697708, \"end_ts\": 1715698131, \"manifest_digest\": \"sha256:57583a1b9c0a7509d3417387b4f43acf80d08cdcf5266ac87987be3f8f919d5d\", \"is_manifest_list\": false, \"size\": 2275314, \"last_modified\": \"Tue, 14 May 2024 14:41:48 -0000\", \"expiration\": \"Tue, 14 May 2024 14:48:51 -0000\"}, {\"name\": \"test\", \"reversion\": false, \"start_ts\": 1715695488, \"end_ts\": 1716324069, \"manifest_digest\": \"sha256:57583a1b9c0a7509d3417387b4f43acf80d08cdcf5266ac87987be3f8f919d5d\", \"is_manifest_list\": false, \"size\": 2275314, \"last_modified\": \"Tue, 14 May 2024 14:04:48 -0000\", \"expiration\": \"Tue, 21 May 2024 20:41:09 -0000\"}, {\"name\": \"test\", \"reversion\": false, \"start_ts\": 1715631517, \"end_ts\": 1715695488, \"manifest_digest\": \"sha256:57583a1b9c0a7509d3417387b4f43acf80d08cdcf5266ac87987be3f8f919d5d\", \"is_manifest_list\": false, \"size\": 2275314, \"last_modified\": \"Mon, 13 May 2024 20:18:37 -0000\", \"expiration\": \"Tue, 14 May 2024 14:04:48 -0000\"}], \"page\": 1, \"has_additional\": false}"
] |
https://docs.redhat.com/en/documentation/red_hat_quay/3.13/html/use_red_hat_quay/image-tags-overview
|
17.2. Managing Disk Quotas
|
17.2. Managing Disk Quotas If quotas are implemented, they need some maintenance mostly in the form of watching to see if the quotas are exceeded and making sure the quotas are accurate. If users repeatedly exceed their quotas or consistently reach their soft limits, a system administrator has a few choices to make depending on what type of users they are and how much disk space impacts their work. The administrator can either help the user determine how to use less disk space or increase the user's disk quota. 17.2.1. Enabling and Disabling It is possible to disable quotas without setting them to 0. To turn all user and group quotas off, use the following command: If neither the -u nor the -g option is specified, only the user quotas are disabled. If only -g is specified, only group quotas are disabled. The -v switch causes verbose status information to display as the command executes. To enable user and group quotas again, use the following command: To enable user and group quotas for all file systems, use the following command: If neither the -u nor the -g option is specified, only the user quotas are enabled. If only -g is specified, only group quotas are enabled. To enable quotas for a specific file system, such as /home , use the following command: Note The quotaon command is not always needed for XFS because it is performed automatically at mount time. Refer to the man page quotaon(8) for more information. 17.2.2. Reporting on Disk Quotas Creating a disk usage report entails running the repquota utility. Example 17.6. Output of the repquota Command For example, the command repquota /home produces this output: To view the disk usage report for all (option -a ) quota-enabled file systems, use the command: While the report is easy to read, a few points should be explained. The -- displayed after each user is a quick way to determine whether the block or inode limits have been exceeded. If either soft limit is exceeded, a + appears in place of the corresponding - ; the first - represents the block limit, and the second represents the inode limit. The grace columns are normally blank. If a soft limit has been exceeded, the column contains a time specification equal to the amount of time remaining on the grace period. If the grace period has expired, none appears in its place. 17.2.3. Keeping Quotas Accurate When a file system fails to unmount cleanly, for example due to a system crash, it is necessary to run the following command: However, quotacheck can be run on a regular basis, even if the system has not crashed. Safe methods for periodically running quotacheck include: Ensuring quotacheck runs on reboot Note This method works best for (busy) multiuser systems which are periodically rebooted. Save a shell script into the /etc/cron.daily/ or /etc/cron.weekly/ directory, or schedule one using the following command: The scheduled script must run the touch /forcequotacheck command. This creates an empty forcequotacheck file in the root directory, which the system init script looks for at boot time. If it is found, the init script runs quotacheck . Afterward, the init script removes the /forcequotacheck file; thus, scheduling this file to be created periodically with cron ensures that quotacheck is run during the reboot (a sample cron script is shown at the end of this section). For more information about cron , see man cron . 
Running quotacheck in single user mode An alternative way to safely run quotacheck is to boot the system into single-user mode to prevent the possibility of data corruption in quota files and run the following commands: Running quotacheck on a running system If necessary, it is possible to run quotacheck on a machine during a time when no users are logged in, and thus have no open files on the file system being checked. Run the command quotacheck -vug file_system ; this command will fail if quotacheck cannot remount the given file_system as read-only. Note that, following the check, the file system will be remounted read-write. Warning Running quotacheck on a live file system mounted read-write is not recommended due to the possibility of quota file corruption. See man cron for more information about configuring cron .
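The reboot-based method described above needs only a very small script. The following is a minimal example that could be saved, for instance, as /etc/cron.weekly/forcequotacheck; the file name is arbitrary, and the script relies on the init-script behavior described in this section:
#!/bin/bash
# Example weekly cron script (for example, /etc/cron.weekly/forcequotacheck).
# It only creates the empty /forcequotacheck flag file; the system init script
# detects the file at boot time, runs quotacheck, and then removes the file.
touch /forcequotacheck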
|
[
"quotaoff -vaug",
"quotaon",
"quotaon -vaug",
"quotaon -vug /home",
"*** Report for user quotas on device /dev/mapper/VolGroup00-LogVol02 Block grace time: 7days; Inode grace time: 7days Block limits File limits User used soft hard grace used soft hard grace ---------------------------------------------------------------------- root -- 36 0 0 4 0 0 kristin -- 540 0 0 125 0 0 testuser -- 440400 500000 550000 37418 0 0",
"repquota -a",
"quotacheck",
"crontab -e",
"quotaoff -vug / file_system",
"quotacheck -vug / file_system",
"quotaon -vug / file_system"
] |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/storage_administration_guide/s1-disk-quotas-managing
|
Chapter 10. Node maintenance
|
Chapter 10. Node maintenance 10.1. About node maintenance 10.1.1. Understanding node maintenance mode Nodes can be placed into maintenance mode using the oc adm utility, or using NodeMaintenance custom resources (CRs). Placing a node into maintenance marks the node as unschedulable and drains all the virtual machines and pods from it. Virtual machine instances that have a LiveMigrate eviction strategy are live migrated to another node without loss of service. This eviction strategy is configured by default in virtual machines created from common templates but must be configured manually for custom virtual machines. Virtual machine instances without an eviction strategy are shut down. Virtual machines with a RunStrategy of Running or RerunOnFailure are recreated on another node. Virtual machines with a RunStrategy of Manual are not automatically restarted. Important Virtual machines must have a persistent volume claim (PVC) with a shared ReadWriteMany (RWX) access mode to be live migrated. When installed as part of OpenShift Virtualization, Node Maintenance Operator watches for new or deleted NodeMaintenance CRs. When a new NodeMaintenance CR is detected, no new workloads are scheduled and the node is cordoned off from the rest of the cluster. All pods that can be evicted are evicted from the node. When a NodeMaintenance CR is deleted, the node that is referenced in the CR is made available for new workloads. Note Using a NodeMaintenance CR for node maintenance tasks achieves the same results as the oc adm cordon and oc adm drain commands using standard OpenShift Container Platform custom resource processing. 10.1.2. Maintaining bare metal nodes When you deploy OpenShift Container Platform on bare metal infrastructure, there are additional considerations that must be taken into account compared to deploying on cloud infrastructure. Unlike in cloud environments where the cluster nodes are considered ephemeral, re-provisioning a bare metal node requires significantly more time and effort for maintenance tasks. When a bare metal node fails, for example, if a fatal kernel error happens or a NIC card hardware failure occurs, workloads on the failed node need to be restarted elsewhere on the cluster while the problem node is repaired or replaced. Node maintenance mode allows cluster administrators to gracefully power down nodes, moving workloads to other parts of the cluster and ensuring workloads do not get interrupted. Detailed progress and node status details are provided during maintenance. Additional resources: About RunStrategies for virtual machines Virtual machine live migration Configuring virtual machine eviction strategy 10.2. Setting a node to maintenance mode Place a node into maintenance from the web console, CLI, or using a NodeMaintenance custom resource. 10.2.1. Setting a node to maintenance mode in the web console Set a node to maintenance mode using the Options menu found on each node in the Compute Nodes list, or using the Actions control of the Node Details screen. Procedure In the OpenShift Virtualization console, click Compute Nodes . You can set the node to maintenance from this screen, which makes it easier to perform actions on multiple nodes in the one screen or from the Node Details screen where you can view comprehensive details of the selected node: Click the Options menu at the end of the node and select Start Maintenance . Click the node name to open the Node Details screen and click Actions Start Maintenance . 
Click Start Maintenance in the confirmation window. The node will live migrate virtual machine instances that have the LiveMigration eviction strategy, and the node is no longer schedulable. All other pods and virtual machines on the node are deleted and recreated on another node. 10.2.2. Setting a node to maintenance mode in the CLI Set a node to maintenance mode by marking it as unschedulable and using the oc adm drain command to evict or delete pods from the node. Procedure Mark the node as unschedulable. The node status changes to NotReady,SchedulingDisabled . USD oc adm cordon <node1> Drain the node in preparation for maintenance. The node live migrates virtual machine instances that have the LiveMigratable condition set to True and the spec:evictionStrategy field set to LiveMigrate . All other pods and virtual machines on the node are deleted and recreated on another node. USD oc adm drain <node1> --delete-emptydir-data --ignore-daemonsets=true --force The --delete-emptydir-data flag removes any virtual machine instances on the node that use emptyDir volumes. Data in these volumes is ephemeral and is safe to be deleted after termination. The --ignore-daemonsets=true flag ensures that daemon sets are ignored and pod eviction can continue successfully. The --force flag is required to delete pods that are not managed by a replica set or daemon set controller. 10.2.3. Setting a node to maintenance mode with a NodeMaintenance custom resource You can put a node into maintenance mode with a NodeMaintenance custom resource (CR). When you apply a NodeMaintenance CR, all allowed pods are evicted and the node is shut down. Evicted pods are queued to be moved to another node in the cluster. Prerequisites Install the OpenShift Container Platform CLI oc . Log in to the cluster as a user with cluster-admin privileges. Procedure Create the following node maintenance CR, and save the file as nodemaintenance-cr.yaml : apiVersion: nodemaintenance.kubevirt.io/v1beta1 kind: NodeMaintenance metadata: name: maintenance-example 1 spec: nodeName: node-1.example.com 2 reason: "Node maintenance" 3 1 Node maintenance CR name 2 The name of the node to be put into maintenance mode 3 Plain text description of the reason for maintenance Apply the node maintenance schedule by running the following command: USD oc apply -f nodemaintenance-cr.yaml Check the progress of the maintenance task by running the following command, replacing <node-name> with the name of your node: USD oc describe node <node-name> Example output Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal NodeNotSchedulable 61m kubelet Node node-1.example.com status is now: NodeNotSchedulable 10.2.3.1. Checking status of current NodeMaintenance CR tasks You can check the status of current NodeMaintenance CR tasks. Prerequisites Install the OpenShift Container Platform CLI oc . Log in as a user with cluster-admin privileges. Procedure Check the status of current node maintenance tasks by running the following command: USD oc get NodeMaintenance -o yaml Example output apiVersion: v1 items: - apiVersion: nodemaintenance.kubevirt.io/v1beta1 kind: NodeMaintenance metadata: ... spec: nodeName: node-1.example.com reason: Node maintenance status: evictionPods: 3 1 pendingPods: - pod-example-workload-0 - httpd - httpd-manual phase: Running lastError: "Last failure message" 2 totalpods: 5 ... 1 evictionPods is the number of pods scheduled for eviction. 2 lastError records the latest eviction error, if any. 
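The status fields shown in the example output above can also be queried individually. The following is a minimal sketch that polls the phase of the NodeMaintenance CR created in this section until it leaves the Running state; the CR name maintenance-example matches the earlier example, and the assumption that the phase field reports a different value once eviction finishes is not shown in the output above:
#!/usr/bin/env bash
# Sketch: poll the NodeMaintenance CR until its phase is no longer "Running".
# "maintenance-example" matches the CR name used in this section; adjust as needed.
while [ "$(oc get NodeMaintenance maintenance-example -o jsonpath='{.status.phase}')" = "Running" ]; do
  echo "Eviction still in progress; pending pods:"
  oc get NodeMaintenance maintenance-example -o jsonpath='{.status.pendingPods}'
  echo
  sleep 10
done
echo "Node maintenance phase: $(oc get NodeMaintenance maintenance-example -o jsonpath='{.status.phase}')"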
Additional resources: Resuming a node from maintenance mode Virtual machine live migration Configuring virtual machine eviction strategy 10.3. Resuming a node from maintenance mode Resuming a node brings it out of maintenance mode and makes it schedulable again. Resume a node from maintenance mode from the web console, CLI, or by deleting the NodeMaintenance custom resource. 10.3.1. Resuming a node from maintenance mode in the web console Resume a node from maintenance mode using the Options menu found on each node in the Compute Nodes list, or using the Actions control of the Node Details screen. Procedure In the OpenShift Virtualization console, click Compute Nodes . You can resume the node from this screen, which makes it easier to perform actions on multiple nodes in the one screen, or from the Node Details screen where you can view comprehensive details of the selected node: Click the Options menu at the end of the node and select Stop Maintenance . Click the node name to open the Node Details screen and click Actions Stop Maintenance . Click Stop Maintenance in the confirmation window. The node becomes schedulable, but virtual machine instances that were running on the node prior to maintenance will not automatically migrate back to this node. 10.3.2. Resuming a node from maintenance mode in the CLI Resume a node from maintenance mode by making it schedulable again. Procedure Mark the node as schedulable. You can then resume scheduling new workloads on the node. USD oc adm uncordon <node1> 10.3.3. Resuming a node from maintenance mode that was initiated with a NodeMaintenance CR You can resume a node by deleting the NodeMaintenance CR. Prerequisites Install the OpenShift Container Platform CLI oc . Log in to the cluster as a user with cluster-admin privileges. Procedure When your node maintenance task is complete, delete the active NodeMaintenance CR: USD oc delete -f nodemaintenance-cr.yaml Example output nodemaintenance.nodemaintenance.kubevirt.io "maintenance-example" deleted 10.4. Automatic renewal of TLS certificates All TLS certificates for OpenShift Virtualization components are renewed and rotated automatically. You are not required to refresh them manually. 10.4.1. TLS certificates automatic renewal schedules TLS certificates are automatically deleted and replaced according to the following schedule: KubeVirt certificates are renewed daily. Containerized Data Importer controller (CDI) certificates are renewed every 15 days. MAC pool certificates are renewed every year. Automatic TLS certificate rotation does not disrupt any operations. For example, the following operations continue to function without any disruption: Migrations Image uploads VNC and console connections 10.5. Managing node labeling for obsolete CPU models You can schedule a virtual machine (VM) on a node where the CPU model and policy attribute of the VM are compatible with the CPU models and policy attributes that the node supports. By specifying a list of obsolete CPU models in a config map , you can exclude them from the list of labels created for CPU models. 10.5.1. Understanding node labeling for obsolete CPU models To ensure that a node supports only valid CPU models for scheduled VMs, create a config map with a list of obsolete CPU models. When the node-labeller obtains the list of obsolete CPU models, it eliminates those CPU models and creates labels for valid CPU models. 
Note If you do not configure a config map with a list of obsolete CPU models, all CPU models are evaluated for labels, including obsolete CPU models that are not present in your environment. Through the process of iteration, the list of base CPU features in the minimum CPU model are eliminated from the list of labels generated for the node. For example, an environment might have two supported CPU models: Penryn and Haswell . If Penryn is specified as the CPU model for minCPU , the node-labeller evaluates each base CPU feature for Penryn and compares it with each CPU feature supported by Haswell . If the CPU feature is supported by both Penryn and Haswell , the node-labeller eliminates that feature from the list of CPU features for creating labels. If a CPU feature is supported only by Haswell and not by Penryn , that CPU feature is included in the list of generated labels. The node-labeller follows this iterative process to eliminate base CPU features that are present in the minimum CPU model and create labels. The following example shows the complete list of CPU features for Penryn which is specified as the CPU model for minCPU : Example of CPU features for Penryn The following example shows the complete list of CPU features for Haswell : Example of CPU features for Haswell The following example shows the list of node labels generated by the node-labeller after iterating and comparing the CPU features for Penryn with the CPU features for Haswell : Example of node labels after iteration 10.5.2. Configuring a config map for obsolete CPU models Use this procedure to configure a config map for obsolete CPU models. Procedure Create a ConfigMap object, specifying the obsolete CPU models in the obsoleteCPUs array. For example: apiVersion: v1 kind: ConfigMap metadata: name: cpu-plugin-configmap 1 data: 2 cpu-plugin-configmap: obsoleteCPUs: 3 - "486" - "pentium" - "pentium2" - "pentium3" - "pentiumpro" minCPU: "Penryn" 4 1 Name of the config map. 2 Configuration data. 3 List of obsolete CPU models. 4 Minimum CPU model that is used for basic CPU features.
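After creating the config map, you might verify the result as follows. This is a hedged sketch: the file name cpu-plugin-configmap.yaml, the node name, and the grep pattern are placeholders, and the exact CPU-model label names produced by the node-labeller depend on your environment:
# Apply the config map shown above (saved, for example, as cpu-plugin-configmap.yaml),
# then inspect the labels on a node; the label filter is illustrative only.
oc apply -f cpu-plugin-configmap.yaml
oc get node <node-name> --show-labels | tr ',' '\n' | grep -i cpu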
|
[
"oc adm cordon <node1>",
"oc adm drain <node1> --delete-emptydir-data --ignore-daemonsets=true --force",
"apiVersion: nodemaintenance.kubevirt.io/v1beta1 kind: NodeMaintenance metadata: name: maintenance-example 1 spec: nodeName: node-1.example.com 2 reason: \"Node maintenance\" 3",
"oc apply -f nodemaintenance-cr.yaml",
"oc describe node <node-name>",
"Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal NodeNotSchedulable 61m kubelet Node node-1.example.com status is now: NodeNotSchedulable",
"oc get NodeMaintenance -o yaml",
"apiVersion: v1 items: - apiVersion: nodemaintenance.kubevirt.io/v1beta1 kind: NodeMaintenance metadata: spec: nodeName: node-1.example.com reason: Node maintenance status: evictionPods: 3 1 pendingPods: - pod-example-workload-0 - httpd - httpd-manual phase: Running lastError: \"Last failure message\" 2 totalpods: 5",
"oc adm uncordon <node1>",
"oc delete -f nodemaintenance-cr.yaml",
"nodemaintenance.nodemaintenance.kubevirt.io \"maintenance-example\" deleted",
"apic clflush cmov cx16 cx8 de fpu fxsr lahf_lm lm mca mce mmx msr mtrr nx pae pat pge pni pse pse36 sep sse sse2 sse4.1 ssse3 syscall tsc",
"aes apic avx avx2 bmi1 bmi2 clflush cmov cx16 cx8 de erms fma fpu fsgsbase fxsr hle invpcid lahf_lm lm mca mce mmx movbe msr mtrr nx pae pat pcid pclmuldq pge pni popcnt pse pse36 rdtscp rtm sep smep sse sse2 sse4.1 sse4.2 ssse3 syscall tsc tsc-deadline x2apic xsave",
"aes avx avx2 bmi1 bmi2 erms fma fsgsbase hle invpcid movbe pcid pclmuldq popcnt rdtscp rtm sse4.2 tsc-deadline x2apic xsave",
"apiVersion: v1 kind: ConfigMap metadata: name: cpu-plugin-configmap 1 data: 2 cpu-plugin-configmap: obsoleteCPUs: 3 - \"486\" - \"pentium\" - \"pentium2\" - \"pentium3\" - \"pentiumpro\" minCPU: \"Penryn\" 4"
] |
https://docs.redhat.com/en/documentation/openshift_container_platform/4.7/html/openshift_virtualization/node-maintenance
|
Chapter 10. Testing
|
Chapter 10. Testing As a storage administrator, you can do basic functionality testing to verify that the Ceph Object Gateway environment is working as expected. You can use the REST interfaces by creating an initial Ceph Object Gateway user for the S3 interface, and then create a subuser for the Swift interface. Prerequisites A healthy running Red Hat Ceph Storage cluster. Installation of the Ceph Object Gateway software. 10.1. Create an S3 user To test the gateway, create an S3 user and grant the user access. The man radosgw-admin command provides information on additional command options. Note In a multi-site deployment, always create a user on a host in the master zone of the master zone group. Prerequisites root or sudo access Ceph Object Gateway installed Procedure Create an S3 user: Syntax Replace name with the name of the S3 user: Example Verify the output to ensure that the values of access_key and secret_key do not include a JSON escape character ( \ ). These values are needed for access validation, but certain clients cannot handle if the values include JSON escape characters. To fix this problem, perform one of the following actions: Remove the JSON escape character. Encapsulate the string in quotes. Regenerate the key and ensure that it does not include a JSON escape character. Specify the key and secret manually. Do not remove the forward slash / because it is a valid character. 10.2. Create a Swift user To test the Swift interface, create a Swift subuser. Creating a Swift user is a two-step process. The first step is to create the user. The second step is to create the secret key. Note In a multi-site deployment, always create a user on a host in the master zone of the master zone group. Prerequisites Installation of the Ceph Object Gateway. Root-level access to the Ceph Object Gateway node. Procedure Create the Swift user: Syntax Replace NAME with the Swift user name, for example: Example Create the secret key: Syntax Replace NAME with the Swift user name, for example: Example 10.3. Test S3 access You need to write and run a Python test script for verifying S3 access. The S3 access test script will connect to the radosgw , create a new bucket, and list all buckets. The values for aws_access_key_id and aws_secret_access_key are taken from the values of access_key and secret_key returned by the radosgw_admin command. Prerequisites A running Red Hat Ceph Storage cluster. Root-level access to the nodes. Procedure Enable the High Availability repository for Red Hat Enterprise Linux 9: Install the python3-boto3 package: Create the Python script: Add the following contents to the file: Syntax Replace endpoint with the URL of the host where you have configured the gateway service. That is, the gateway host . Ensure that the host setting resolves with DNS. Replace PORT with the port number of the gateway. Replace ACCESS and SECRET with the access_key and secret_key values from the Create an S3 User section in the Red Hat Ceph Storage Object Gateway Guide . Run the script: The output will be something like the following: 10.4. Test Swift access Swift access can be verified via the swift command line client. The command man swift will provide more information on available command line options. 
To install the swift client, run the following command: To test swift access, run the following command: Syntax Replace IP_ADDRESS with the public IP address of the gateway server and SWIFT_SECRET_KEY with its value from the output of the radosgw-admin key create command issued for the swift user. Replace PORT with the port number you are using with Beast. If you do not replace the port, it will default to port 80 . For example: The output should be:
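Beyond listing containers, a quick object round trip confirms that both writes and reads work through the Swift interface. A minimal sketch that reuses the testuser:swift credentials and endpoint from the steps above; the container and file names are only illustrative:

# Create a small test object and upload it into a new container
echo "hello from rgw" > hello.txt
swift -A http://10.10.143.116:80/auth/1.0 -U testuser:swift -K 'SWIFT_SECRET_KEY' upload test-container hello.txt

# List the container and download the object again to verify the round trip
swift -A http://10.10.143.116:80/auth/1.0 -U testuser:swift -K 'SWIFT_SECRET_KEY' list test-container
swift -A http://10.10.143.116:80/auth/1.0 -U testuser:swift -K 'SWIFT_SECRET_KEY' download test-container hello.txt -o hello-downloaded.txt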
|
[
"radosgw-admin user create --uid= name --display-name=\" USER_NAME \"",
"radosgw-admin user create --uid=\"testuser\" --display-name=\"Jane Doe\" { \"user_id\": \"testuser\", \"display_name\": \"Jane Doe\", \"email\": \"\", \"suspended\": 0, \"max_buckets\": 1000, \"auid\": 0, \"subusers\": [], \"keys\": [ { \"user\": \"testuser\", \"access_key\": \"CEP28KDIQXBKU4M15PDC\", \"secret_key\": \"MARoio8HFc8JxhEilES3dKFVj8tV3NOOYymihTLO\" } ], \"swift_keys\": [], \"caps\": [], \"op_mask\": \"read, write, delete\", \"default_placement\": \"\", \"placement_tags\": [], \"bucket_quota\": { \"enabled\": false, \"check_on_raw\": false, \"max_size\": -1, \"max_size_kb\": 0, \"max_objects\": -1 }, \"user_quota\": { \"enabled\": false, \"check_on_raw\": false, \"max_size\": -1, \"max_size_kb\": 0, \"max_objects\": -1 }, \"temp_url_keys\": [], \"type\": \"rgw\" }",
"radosgw-admin subuser create --uid= NAME --subuser= NAME :swift --access=full",
"radosgw-admin subuser create --uid=testuser --subuser=testuser:swift --access=full { \"user_id\": \"testuser\", \"display_name\": \"First User\", \"email\": \"\", \"suspended\": 0, \"max_buckets\": 1000, \"auid\": 0, \"subusers\": [ { \"id\": \"testuser:swift\", \"permissions\": \"full-control\" } ], \"keys\": [ { \"user\": \"testuser\", \"access_key\": \"O8JDE41XMI74O185EHKD\", \"secret_key\": \"i4Au2yxG5wtr1JK01mI8kjJPM93HNAoVWOSTdJd6\" } ], \"swift_keys\": [ { \"user\": \"testuser:swift\", \"secret_key\": \"13TLtdEW7bCqgttQgPzxFxziu0AgabtOc6vM8DLA\" } ], \"caps\": [], \"op_mask\": \"read, write, delete\", \"default_placement\": \"\", \"placement_tags\": [], \"bucket_quota\": { \"enabled\": false, \"check_on_raw\": false, \"max_size\": -1, \"max_size_kb\": 0, \"max_objects\": -1 }, \"user_quota\": { \"enabled\": false, \"check_on_raw\": false, \"max_size\": -1, \"max_size_kb\": 0, \"max_objects\": -1 }, \"temp_url_keys\": [], \"type\": \"rgw\" }",
"radosgw-admin key create --subuser= NAME :swift --key-type=swift --gen-secret",
"radosgw-admin key create --subuser=testuser:swift --key-type=swift --gen-secret { \"user_id\": \"testuser\", \"display_name\": \"First User\", \"email\": \"\", \"suspended\": 0, \"max_buckets\": 1000, \"auid\": 0, \"subusers\": [ { \"id\": \"testuser:swift\", \"permissions\": \"full-control\" } ], \"keys\": [ { \"user\": \"testuser\", \"access_key\": \"O8JDE41XMI74O185EHKD\", \"secret_key\": \"i4Au2yxG5wtr1JK01mI8kjJPM93HNAoVWOSTdJd6\" } ], \"swift_keys\": [ { \"user\": \"testuser:swift\", \"secret_key\": \"a4ioT4jEP653CDcdU8p4OuhruwABBRZmyNUbnSSt\" } ], \"caps\": [], \"op_mask\": \"read, write, delete\", \"default_placement\": \"\", \"placement_tags\": [], \"bucket_quota\": { \"enabled\": false, \"check_on_raw\": false, \"max_size\": -1, \"max_size_kb\": 0, \"max_objects\": -1 }, \"user_quota\": { \"enabled\": false, \"check_on_raw\": false, \"max_size\": -1, \"max_size_kb\": 0, \"max_objects\": -1 }, \"temp_url_keys\": [], \"type\": \"rgw\" }",
"subscription-manager repos --enable=rhel-9-for-x86_64-highavailability-rpms",
"dnf install python3-boto3",
"vi s3test.py",
"import boto3 endpoint = \"\" # enter the endpoint URL along with the port \"http:// URL : PORT \" access_key = ' ACCESS ' secret_key = ' SECRET ' s3 = boto3.client( 's3', endpoint_url=endpoint, aws_access_key_id=access_key, aws_secret_access_key=secret_key ) s3.create_bucket(Bucket='my-new-bucket') response = s3.list_buckets() for bucket in response['Buckets']: print(\"{name}\\t{created}\".format( name = bucket['Name'], created = bucket['CreationDate'] ))",
"python3 s3test.py",
"my-new-bucket 2022-05-31T17:09:10.000Z",
"sudo yum install python-setuptools sudo easy_install pip sudo pip install --upgrade setuptools sudo pip install --upgrade python-swiftclient",
"swift -A http:// IP_ADDRESS : PORT /auth/1.0 -U testuser:swift -K ' SWIFT_SECRET_KEY ' list",
"swift -A http://10.10.143.116:80/auth/1.0 -U testuser:swift -K '244+fz2gSqoHwR3lYtSbIyomyPHf3i7rgSJrF/IA' list",
"my-new-bucket"
] |
https://docs.redhat.com/en/documentation/red_hat_ceph_storage/8/html/object_gateway_guide/testing
|
Making open source more inclusive
|
Making open source more inclusive Red Hat is committed to replacing problematic language in our code, documentation, and web properties. We are beginning with these four terms: master, slave, blacklist, and whitelist. Because of the enormity of this endeavor, these changes will be implemented gradually over several upcoming releases. For more details, see our CTO Chris Wright's message .
| null |
https://docs.redhat.com/en/documentation/red_hat_build_of_openjdk/8/html/eclipse_temurin_8.0.362_release_notes/making-open-source-more-inclusive
|
Chapter 1. Introducing RHEL on public cloud platforms
|
Chapter 1. Introducing RHEL on public cloud platforms Public cloud platforms provide computing resources as a service. Instead of using on-premises hardware, you can run your IT workloads, including Red Hat Enterprise Linux (RHEL) systems, as public cloud instances. 1.1. Benefits of using RHEL in a public cloud RHEL as a cloud instance located on a public cloud platform has the following benefits over RHEL on-premises physical systems or virtual machines (VMs): Flexible and fine-grained allocation of resources A cloud instance of RHEL runs as a VM on a cloud platform, which typically means a cluster of remote servers maintained by the provider of the cloud service. Therefore, allocating hardware resources to the instance, such as a specific type of CPU or storage, happens on the software level and is easily customizable. In comparison to a local RHEL system, you are also not limited by the capabilities of your physical host. Instead, you can choose from a variety of features, based on selection offered by the cloud provider. Space and cost efficiency You do not need to own any on-premises servers to host your cloud workloads. This avoids the space, power, and maintenance requirements associated with physical hardware. Instead, on public cloud platforms, you pay the cloud provider directly for using a cloud instance. The cost is typically based on the hardware allocated to the instance and the time you spend using it. Therefore, you can optimize your costs based on your requirements. Software-controlled configurations The entire configuration of a cloud instance is saved as data on the cloud platform, and is controlled by software. Therefore, you can easily create, remove, clone, or migrate the instance. A cloud instance is also operated remotely in a cloud provider console and is connected to remote storage by default. In addition, you can back up the current state of a cloud instance as a snapshot at any time. Afterwards, you can load the snapshot to restore the instance to the saved state. Separation from the host and software compatibility Similarly to a local VM, the RHEL guest operating system on a cloud instance runs on a virtualized kernel. This kernel is separate from the host operating system and from the client system that you use to connect to the instance. Therefore, any operating system can be installed on the cloud instance. This means that on a RHEL public cloud instance, you can run RHEL-specific applications that cannot be used on your local operating system. In addition, even if the operating system of the instance becomes unstable or is compromised, your client system is not affected in any way. Additional resources What is public cloud? What is a hyperscaler? Types of cloud computing Public cloud use cases for RHEL Obtaining RHEL for public cloud deployments 1.2. Public cloud use cases for RHEL Deploying on a public cloud provides many benefits, but might not be the most efficient solution in every scenario. If you are evaluating whether to migrate your RHEL deployments to the public cloud, consider whether your use case will benefit from the advantages of the public cloud. Beneficial use cases Deploying public cloud instances is very effective for flexibly increasing and decreasing the active computing power of your deployments, also known as scaling up and scaling down . Therefore, using RHEL on public cloud is recommended in the following scenarios: Clusters with high peak workloads and low general performance requirements. 
Scaling up and down based on your demands can be highly efficient in terms of resource costs. Quickly setting up or expanding your clusters. This avoids high upfront costs of setting up local servers. Cloud instances are not affected by what happens in your local environment. Therefore, you can use them for backup and disaster recovery. Potentially problematic use cases You are running an existing environment that cannot be adjusted. Customizing a cloud instance to fit the specific needs of an existing deployment may not be cost-effective in comparison with your current host platform. You are operating with a hard limit on your budget. Maintaining your deployment in a local data center typically provides less flexibility but more control over the maximum resource costs than the public cloud does. steps Obtaining RHEL for public cloud deployments Additional resources Should I migrate my application to the cloud? Here's how to decide. 1.3. Frequent concerns when migrating to a public cloud Moving your RHEL workloads from a local environment to a public cloud platform might raise concerns about the changes involved. The following are the most commonly asked questions. Will my RHEL work differently as a cloud instance than as a local virtual machine? In most respects, RHEL instances on a public cloud platform work the same as RHEL virtual machines on a local host, such as an on-premises server. Notable exceptions include: Instead of private orchestration interfaces, public cloud instances use provider-specific console interfaces for managing your cloud resources. Certain features, such as nested virtualization, may not work correctly. If a specific feature is critical for your deployment, check the feature's compatibility in advance with your chosen public cloud provider. Will my data stay safe in a public cloud as opposed to a local server? The data in your RHEL cloud instances is in your ownership, and your public cloud provider does not have any access to it. In addition, major cloud providers support data encryption in transit, which improves the security of data when migrating your virtual machines to the public cloud. The general security of your RHEL public cloud instances is managed as follows: Your public cloud provider is responsible for the security of the cloud hypervisor Red Hat provides the security features of the RHEL guest operating systems in your instances You manage the specific security settings and practices in your cloud infrastructure What effect does my geographic region have on the functionality of RHEL public cloud instances? You can use RHEL instances on a public cloud platform regardless of your geographical location. Therefore, you can run your instances in the same region as your on-premises server. However, hosting your instances in a physically distant region might cause high latency when operating them. In addition, depending on the public cloud provider, certain regions may provide additional features or be more cost-efficient. Before creating your RHEL instances, review the properties of the hosting regions available for your chosen cloud provider. 1.4. Obtaining RHEL for public cloud deployments To deploy a RHEL system in a public cloud environment, you need to: Select the optimal cloud provider for your use case, based on your requirements and the current offer on the market. The cloud providers currently certified for running RHEL instances are: Amazon Web Services (AWS) For more information, see Deploying RHEL 9 on Amazon Web Services . 
Google Cloud Platform (GCP) For more information, see Deploying RHEL 9 on Google Cloud Platform . Microsoft Azure For more information, see Deploying RHEL 9 on Microsoft Azure . Create a RHEL cloud instance on your chosen cloud platform. For more information, see Methods for creating RHEL cloud instances . To keep your RHEL deployment up-to-date, use Red Hat Update Infrastructure (RHUI). Additional resources RHUI documentation Red Hat Open Hybrid Cloud 1.5. Methods for creating RHEL cloud instances To deploy a RHEL instance on a public cloud platform, you can use one of the following methods: Create a system image of RHEL and import it to the cloud platform. To create the system image, you can use the RHEL image builder or you can build the image manually. This method uses your existing RHEL subscription, and is also referred to as bring your own subscription (BYOS). You pre-pay a yearly subscription, and you can use your Red Hat customer discount. Your customer service is provided by Red Hat. For creating multiple images effectively, you can use the cloud-init tool. Purchase a RHEL instance directly from the cloud provider marketplace. You post-pay an hourly rate for using the service. Therefore, this method is also referred to as pay as you go (PAYG). Your customer service is provided by the cloud platform provider. Additional resources What is a golden image?
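When you build images for the BYOS workflow described above, cloud-init usually supplies the first-boot configuration. A minimal, hedged sketch of creating a user-data file in bash; the user name, SSH key, and package list are placeholders rather than values from this document:

# Write a minimal cloud-init user-data file (all values are illustrative)
cat > user-data <<'EOF'
#cloud-config
users:
  - name: clouduser
    sudo: ALL=(ALL) NOPASSWD:ALL
    ssh_authorized_keys:
      - ssh-ed25519 AAAA...example-key
packages:
  - vim
EOF

# Optional sanity check that the file is valid YAML (requires the PyYAML module)
python3 -c 'import yaml; yaml.safe_load(open("user-data")); print("user-data parses OK")'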
| null |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/9/html/configuring_and_managing_cloud-init_for_rhel_9/introducing-rhel-on-public-cloud-platforms_cloud-content
|
Chapter 27. Hoist Field Action
|
Chapter 27. Hoist Field Action Wrap data in a single field 27.1. Configuration Options The following table summarizes the configuration options available for the hoist-field-action Kamelet: Property Name Description Type Default Example field * Field The name of the field that will contain the event string Note Fields marked with an asterisk (*) are mandatory. 27.2. Dependencies At runtime, the hoist-field-action Kamelet relies upon the presence of the following dependencies: github:openshift-integration.kamelet-catalog:camel-kamelets-utils:kamelet-catalog-1.6-SNAPSHOT camel:core camel:jackson camel:kamelet 27.3. Usage This section describes how you can use the hoist-field-action . 27.3.1. Knative Action You can use the hoist-field-action Kamelet as an intermediate step in a Knative binding. hoist-field-action-binding.yaml apiVersion: camel.apache.org/v1alpha1 kind: KameletBinding metadata: name: hoist-field-action-binding spec: source: ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: timer-source properties: message: "Hello" steps: - ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: hoist-field-action properties: field: "The Field" sink: ref: kind: Channel apiVersion: messaging.knative.dev/v1 name: mychannel 27.3.1.1. Prerequisite Make sure you have "Red Hat Integration - Camel K" installed into the OpenShift cluster you're connected to. 27.3.1.2. Procedure for using the cluster CLI Save the hoist-field-action-binding.yaml file to your local drive, and then edit it as needed for your configuration. Run the action by using the following command: oc apply -f hoist-field-action-binding.yaml 27.3.1.3. Procedure for using the Kamel CLI Configure and run the action by using the following command: kamel bind timer-source?message=Hello --step hoist-field-action -p "step-0.field=The Field" channel:mychannel This command creates the KameletBinding in the current namespace on the cluster. 27.3.2. Kafka Action You can use the hoist-field-action Kamelet as an intermediate step in a Kafka binding. hoist-field-action-binding.yaml apiVersion: camel.apache.org/v1alpha1 kind: KameletBinding metadata: name: hoist-field-action-binding spec: source: ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: timer-source properties: message: "Hello" steps: - ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: hoist-field-action properties: field: "The Field" sink: ref: kind: KafkaTopic apiVersion: kafka.strimzi.io/v1beta1 name: my-topic 27.3.2.1. Prerequisites Ensure that you've installed the AMQ Streams operator in your OpenShift cluster and created a topic named my-topic in the current namespace. Make also sure you have "Red Hat Integration - Camel K" installed into the OpenShift cluster you're connected to. 27.3.2.2. Procedure for using the cluster CLI Save the hoist-field-action-binding.yaml file to your local drive, and then edit it as needed for your configuration. Run the action by using the following command: oc apply -f hoist-field-action-binding.yaml 27.3.2.3. Procedure for using the Kamel CLI Configure and run the action by using the following command: kamel bind timer-source?message=Hello --step hoist-field-action -p "step-0.field=The Field" kafka.strimzi.io/v1beta1:KafkaTopic:my-topic This command creates the KameletBinding in the current namespace on the cluster. 27.4. Kamelet source file https://github.com/openshift-integration/kamelet-catalog/hoist-field-action.kamelet.yaml
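After applying either binding, it is worth confirming that the KameletBinding was created and that the integration Camel K generates from it is running. A minimal verification sketch; the integration name mirrors the binding name, and the pod label selector is an assumption:

# Check the binding and the integration generated from it
oc get kameletbinding hoist-field-action-binding -o yaml
oc get integration hoist-field-action-binding

# Tail the pod logs of the running integration to watch the wrapped events (label selector is an assumption)
oc logs -l camel.apache.org/integration=hoist-field-action-binding --tail=50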
|
[
"apiVersion: camel.apache.org/v1alpha1 kind: KameletBinding metadata: name: hoist-field-action-binding spec: source: ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: timer-source properties: message: \"Hello\" steps: - ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: hoist-field-action properties: field: \"The Field\" sink: ref: kind: Channel apiVersion: messaging.knative.dev/v1 name: mychannel",
"apply -f hoist-field-action-binding.yaml",
"kamel bind timer-source?message=Hello --step hoist-field-action -p \"step-0.field=The Field\" channel:mychannel",
"apiVersion: camel.apache.org/v1alpha1 kind: KameletBinding metadata: name: hoist-field-action-binding spec: source: ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: timer-source properties: message: \"Hello\" steps: - ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: hoist-field-action properties: field: \"The Field\" sink: ref: kind: KafkaTopic apiVersion: kafka.strimzi.io/v1beta1 name: my-topic",
"apply -f hoist-field-action-binding.yaml",
"kamel bind timer-source?message=Hello --step hoist-field-action -p \"step-0.field=The Field\" kafka.strimzi.io/v1beta1:KafkaTopic:my-topic"
] |
https://docs.redhat.com/en/documentation/red_hat_build_of_apache_camel_k/1.10.9/html/kamelets_reference/hoist-field-action
|
6.13. Tools
|
6.13. Tools mvapich2 component The mvapich2 packages use the GNU Autotools set of tools (autoconf, automake, and libtool) to process its configuration. Features included in version 1.12 and later are required, but are not available in Red Hat Enterprise Linux 6.6 and earlier. As a consequence, rebuilding mvapich2 fails with earlier versions of GNU Autotools. To work around this problem, uninstall the autoconf, automake, and libtool packages, rebuild mvapich2, and then reinstall GNU Autotools. freeipmi component, BZ# 1020650 Under certain circumstances, the IPMI service is not started and the ipmi_devintf kernel module that provides the device node interface is not loaded. As a consequence, some hardware could reboot unexpectedly after installation before the first intentional reboot. To work around this problem, run the following commands as root: Alternatively, log in as root, create the /etc/modprobe.d/watchdog-reboot-workaround.conf file, and include the following three aliases: alias acpi:IPI000*:* ipmi_si alias acpi:IPI000*:* ipmi_devintf alias acpi:IPI000*:* ipmi_msghandler ssh-keygen component The following example in the description of the -V option in the ssh-keygen(1) manual page is incorrect: If you set a date range in this format, the certificate is valid from four weeks ago until now. perl-WWW-curl component Attempting to access the CURLINFO_PRIVATE value can cause curl to terminate unexpectedly with a segmentation fault. freerdp component, BZ# 988277 The ALSA plug-in is not supported in Red Hat Enterprise Linux 6. Instead of the ALSA plug-in, use the pulseaudio plug-in. To enable it, use the --plugin rdpsnd option with the xfreerdp command without specifying which plug-in should be used; the pulseaudio plug-in will be used automatically in this case. coolkey component, BZ# 906537 Personal Identity Verification (PIV) Endpoint Cards which support both CAC and PIV interfaces might not work with the latest coolkey update; some signature operations, such as PKINIT, can fail. To work around this problem, downgrade coolkey to the version shipped with Red Hat Enterprise Linux 6.3. libreport component Even if the stored credentials are used, the report-gtk utility can report the following error message: To work around this problem, close the dialog window; the Login=<rhn-user> and Password=<rhn-password> credentials in the /etc/libreport/plugins/rhtsupport.conf file will be used in the same way they are used by report-rhtsupport. For more information, refer to this Knowledge Base article. vlock component When a user password is used to lock a console with vlock, the console can only be unlocked with the user password, not the root password. That is, even if the first entered password is incorrect and the user is prompted to provide the root password, entering the root password fails with an error message. libreoffice component LibreOffice contains a number of harmless files used for testing purposes. However, on Microsoft Windows systems, these files can trigger false positive alerts on various anti-virus software, such as Microsoft Security Essentials. For example, the alerts can be triggered when scanning the Red Hat Enterprise Linux 6 ISO file. gnome-power-manager component When the computer runs on battery, the custom brightness level is not remembered and restored if power saving features such as "dim display when idle" or "reduce backlight brightness when idle" are enabled. rsyslog component rsyslog does not reload its configuration after a SIGHUP signal is issued. 
To reload the configuration, the rsyslog daemon needs to be restarted:
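As a side note to the freeipmi workaround above, the /etc/modprobe.d/watchdog-reboot-workaround.conf file can also be created with a short script that uses the three aliases listed earlier; a minimal sketch, to be run as root:

# Create the modprobe aliases described in the freeipmi workaround
cat > /etc/modprobe.d/watchdog-reboot-workaround.conf <<'EOF'
alias acpi:IPI000*:* ipmi_si
alias acpi:IPI000*:* ipmi_devintf
alias acpi:IPI000*:* ipmi_msghandler
EOF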
|
[
"chkconfig --level 345 ipmi on service ipmi restart service bmc-watchdog condrestart",
"\"-4w:+4w\" (valid from four weeks ago to four weeks from now)",
"Wrong settings detected for Red Hat Customer Support [..]",
"~]# service rsyslog restart"
] |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.6_technical_notes/tools_issues
|
Chapter 3. Recommended resource requirements for Red Hat Advanced Cluster Security for Kubernetes
|
Chapter 3. Recommended resource requirements for Red Hat Advanced Cluster Security for Kubernetes The recommended resource guidelines were developed by performing a focused test that created the following objects across a given number of namespaces: 10 deployments, with 3 pod replicas in a sleep state, mounting 4 secrets, 4 config maps 10 services, each one pointing to the TCP/8080 and TCP/8443 ports of one of the deployments 1 route pointing to the first of the services 10 secrets containing 2048 random string characters 10 config maps containing 2048 random string characters During the analysis of results, the number of deployments is identified as a primary factor for increasing of used resources. And we are using the number of deployments for the estimation of required resources. Additional resources Default resource requirements 3.1. Central services (self-managed) Note If you are using Red Hat Advanced Cluster Security Cloud Service (RHACS Cloud Service), you do not need to review the requirements for Central services, because they are managed by Red Hat. You only need to look at the requirements for secured cluster services. Central services contain the following components: Central Central DB Scanner Note For default resource requirements for the scanner, see the default resource requirements page. 3.1.1. Central Memory and CPU requirements The following table lists the minimum memory and CPU values required to run Central for one secured cluster. The table includes the number of concurrent web portal users. Deployments Concurrent web portal users CPU Memory < 25,000 1 user 2 cores 8 GiB < 25,000 < 5 users 2 cores 8 GiB < 50,000 1 user 2 cores 12 GiB < 50,000 < 5 users 6 cores 16 GiB 3.1.2. Central DB Memory and CPU requirements The following table lists the minimum memory and CPU values required to run Central DB for one secured cluster. The table includes the number of concurrent web portal users. Deployments Concurrent web portal users CPU Memory < 25,000 1 user 12 cores 32 GiB < 25,000 < 5 users 24 cores 32 GiB < 50,000 1 user 16 cores 32 GiB < 50,000 < 5 users 32 cores 32 GiB 3.1.3. Scanner StackRox Scanner Memory and CPU requirements The following table lists the minimum memory and CPU values required for the StackRox Scanner deployment in the Central cluster. The table includes the number of unique images deployed in all secured clusters. Unique Images Replicas CPU Memory < 100 1 replica 1 core 1.5 GiB < 500 1 replica 2 cores 2.5 GiB < 2000 2 replicas 2 cores 2.5 GiB < 5000 3 replicas 2 cores 2.5 GiB Additional resources Default resource requirements 3.2. Secured cluster services Secured cluster services contain the following components: Sensor Admission controller Collector Note Collector component is not included on this page. Required resource requirements are listed on the default resource requirements page. 3.2.1. Sensor Sensor monitors your Kubernetes and OpenShift Container Platform clusters. These services currently deploy in a single deployment, which handles interactions with the Kubernetes API and coordinates with Collector. Memory and CPU requirements The following table lists the minimum memory and CPU values required to run Sensor on a secured cluster. Deployments CPU Memory < 25,000 2 cores 10 GiB < 50,000 2 cores 20 GiB 3.2.2. Admission controller The admission controller prevents users from creating workloads that violate policies that you configure. 
Memory and CPU requirements The following table lists the minimum memory and CPU values required to run the admission controller on a secured cluster. Deployments CPU Memory < 25,000 0.5 cores 300 MiB < 50,000 0.5 cores 600 MiB
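Because the number of deployments is the primary sizing factor in the tables above, a quick count of deployments and unique images in the secured cluster helps you pick the matching row. A minimal sketch; it assumes cluster-wide read access with the oc client:

# Count deployments cluster-wide to choose the appropriate sizing tier
oc get deployments --all-namespaces --no-headers | wc -l

# Approximate the number of unique images (relevant for the Scanner table)
oc get pods --all-namespaces -o jsonpath='{range .items[*]}{range .spec.containers[*]}{.image}{"\n"}{end}{end}' | sort -u | wc -l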
| null |
https://docs.redhat.com/en/documentation/red_hat_advanced_cluster_security_for_kubernetes/4.5/html/installing/acs-recommended-requirements
|
Chapter 20. Authenticating third-party clients through RH-SSO
|
Chapter 20. Authenticating third-party clients through RH-SSO To use the different remote services provided by Business Central or by KIE Server, your client, such as curl, wget, web browser, or a custom REST client, must authenticate through the RH-SSO server and have a valid token to perform the requests. To use the remote services, the authenticated user must have the following roles: rest-all for using Business Central remote services. kie-server for using the KIE Server remote services. Use the RH-SSO Admin Console to create these roles and assign them to the users that will consume the remote services. Your client can authenticate through RH-SSO using one of these options: Basic authentication, if it is supported by the client Token-based authentication 20.1. Basic authentication If you enabled basic authentication in the RH-SSO client adapter configuration for both Business Central and KIE Server, you can avoid the token grant and refresh calls and call the services as shown in the following examples: For web based remote repositories endpoint: For KIE Server: 20.2. Token-based authentication If you want a more secure option of authentication, you can consume the remote services from both Business Central and KIE Server by using a granted token provided by RH-SSO. Procedure In the RH-SSO Admin Console, click the Clients menu item and click Create to create a new client. The Add Client page opens. On the Add Client page, provide the required information to create a new client for your realm. For example: Client ID : kie-remote Client protocol : openid-connect Click Save to save your changes. Change the token settings in Realm Settings : In the RH-SSO Admin Console, click the Realm Settings menu item. Click the Tokens tab. Change the value for Access Token Lifespan to 15 minutes. This gives you enough time to get a token and invoke the service before it expires. Click Save to save your changes. After a public client for your remote clients is created, you can now obtain the token by making an HTTP request to the RH-SSO server's token endpoint using: The user in this command is a Business Central RH-SSO user. For more information, see Section 17.1, "Adding Red Hat Decision Manager users" . To view the token obtained from the RH-SSO server, use the following command: You can now use this token to authorize the remote calls. For example, if you want to check the internal Red Hat Decision Manager repositories, use the token as shown below:
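The same bearer token can also be used against the KIE Server remote services. A minimal sketch that reuses the TOKEN variable obtained above and the host and port from the basic-authentication examples:

# Call the KIE Server REST API with the RH-SSO access token instead of basic authentication
curl -H "Authorization: bearer $TOKEN" http://localhost:8080/kie-server/services/rest/server/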
|
[
"curl http://admin:password@localhost:8080/business-central/rest/repositories",
"curl http://admin:password@localhost:8080/kie-server/services/rest/server/",
"RESULT=`curl --data \"grant_type=password&client_id=kie-remote&username=admin&password=password\" http://localhost:8180/auth/realms/demo/protocol/openid-connect/token`",
"TOKEN=`echo USDRESULT | sed 's/.*access_token\":\"//g' | sed 's/\".*//g'`",
"curl -H \"Authorization: bearer USDTOKEN\" http://localhost:8080/business-central/rest/repositories"
] |
https://docs.redhat.com/en/documentation/red_hat_decision_manager/7.13/html/integrating_red_hat_decision_manager_with_other_products_and_components/sso-third-party-proc_integrate-sso
|
Chapter 28. Integrating RHEL systems directly with AD using RHEL System Roles
|
Chapter 28. Integrating RHEL systems directly with AD using RHEL System Roles With the ad_integration System Role, you can automate a direct integration of a RHEL system with Active Directory (AD) using Red Hat Ansible Automation Platform. This chapter covers the following topics: The ad_integration System Role Variables for the ad_integration RHEL System Role Connecting a RHEL system directly to AD using the ad_integration System Role 28.1. The ad_integration System Role Using the ad_integration System Role, you can directly connect a RHEL system to Active Directory (AD). The role uses the following components: SSSD to interact with the central identity and authentication source realmd to detect available AD domains and configure the underlying RHEL system services, in this case SSSD, to connect to the selected AD domain Note The ad_integration role is for deployments using direct AD integration without an Identity Management (IdM) environment. For IdM environments, use the ansible-freeipa roles. Additional resources Connecting RHEL systems directly to AD using SSSD . 28.2. Variables for the ad_integration RHEL System Role The ad_integration RHEL System Role uses the following parameters: Role Variable Description ad_integration_realm Active Directory realm, or domain name to join. ad_integration_password The password of the user used to authenticate with when joining the machine to the realm. Do not use plain text. Instead, use Ansible Vault to encrypt the value. ad_integration_manage_crypto_policies If true , the ad_integration role will use fedora.linux_system_roles.crypto_policies as needed. Default: false ad_integration_allow_rc4_crypto If true , the ad_integration role will set the crypto policy to allow RC4 encryption. Providing this variable automatically sets ad_integration_manage_crypto_policies to true . Default: false ad_integration_timesync_source Hostname or IP address of time source to synchronize the system clock with. Providing this variable automatically sets ad_integration_manage_timesync to true . Additional resources The /usr/share/ansible/roles/rhel-system-roles.ad_integration/README.md file. 28.3. Connecting a RHEL system directly to AD using the ad_integration System Role You can use the ad_integration System Role to configure a direct integration between a RHEL system and an AD domain by running an Ansible playbook. Note Starting with RHEL8, RHEL no longer supports RC4 encryption by default. If it is not possible to enable AES in the AD domain, you must enable the AD-SUPPORT crypto policy and allow RC4 encryption in the playbook. Important Time between the RHEL server and AD must be synchronized. You can ensure this by using the timesync System Role in the playbook. In this example, the RHEL system joins the domain.example.com AD domain, using the AD Administrator user and the password for this user stored in the Ansible vault. The playbook also sets the AD-SUPPORT crypto policy and allows RC4 encryption. To ensure time synchronization between the RHEL system and AD, the playbook sets the adserver.domain.example.com server as the timesync source. Prerequisites Access and permissions to one or more managed nodes . Access and permissions to a control node . On the control node: Red Hat Ansible Engine is installed. The rhel-system-roles package is installed. An inventory file which lists the managed nodes. The following ports on the AD domain controllers are open and accessible from the RHEL server: Table 28.1. 
Ports Required for Direct Integration of Linux Systems into AD Using the ad_integration System Role Source Port Destination Port Protocol Service 1024:65535 53 UDP and TCP DNS 1024:65535 389 UDP and TCP LDAP 1024:65535 636 TCP LDAPS 1024:65535 88 UDP and TCP Kerberos 1024:65535 464 UDP and TCP Kerberos change/set password ( kadmin ) 1024:65535 3268 TCP LDAP Global Catalog 1024:65535 3269 TCP LDAP Global Catalog SSL/TLS 1024:65535 123 UDP NTP/Chrony (Optional) 1024:65535 323 UDP NTP/Chrony (Optional) Procedure Create a new ad_integration.yml file with the following content: --- - hosts: all vars: ad_integration_realm: "domain.example.com" ad_integration_password: !vault | vault encrypted password ad_integration_manage_crypto_policies: true ad_integration_allow_rc4_crypto: true ad_integration_timesync_source: "adserver.domain.example.com" roles: - linux-system-roles.ad_integration --- Optional: Verify playbook syntax. Run the playbook on your inventory file: Verification Display an AD user details, such as the administrator user: 28.4. Additional resources The /usr/share/ansible/roles/rhel-system-roles.ad_integration/README.md file. man ansible-playbook(1)
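The ad_integration_password value in the playbook above must be an Ansible Vault string rather than plain text. A minimal sketch of producing one and running the playbook with it; the password shown is a placeholder:

# Encrypt the AD join password as an inline vault value (you are prompted for a vault password)
ansible-vault encrypt_string 'ChangeMe123!' --name 'ad_integration_password'

# Paste the generated "!vault |" block into ad_integration.yml, then run the playbook
# and supply the vault password interactively
ansible-playbook -i inventory_file --ask-vault-pass /path/to/file/ad_integration.yml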
|
[
"--- - hosts: all vars: ad_integration_realm: \"domain.example.com\" ad_integration_password: !vault | vault encrypted password ad_integration_manage_crypto_policies: true ad_integration_allow_rc4_crypto: true ad_integration_timesync_source: \"adserver.domain.example.com\" roles: - linux-system-roles.ad_integration ---",
"ansible-playbook --syntax-check ad_integration.yml -i inventory_file",
"ansible-playbook -i inventory_file /path/to/file/ad_integration.yml",
"getent passwd [email protected] [email protected]:*:1450400500:1450400513:Administrator:/home/[email protected]:/bin/bash"
] |
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/automating_system_administration_by_using_rhel_system_roles_in_rhel_7.9/integrating-rhel-systems-directly-with-ad-using-rhel-system-roles_automating-system-administration-by-using-rhel-system-roles
|
Chapter 3. Adding cloud integrations to the Hybrid Cloud Console
|
Chapter 3. Adding cloud integrations to the Hybrid Cloud Console You can connect Amazon Web Services (AWS), Google Cloud, Microsoft Azure, or Oracle Cloud accounts as cloud integrations in the Red Hat Hybrid Cloud Console so that services hosted on the Hybrid Cloud Console can use data from public cloud providers. 3.1. Amazon Web Services (AWS) integrations with the Hybrid Cloud Console You can connect your AWS account to the following services in the Red Hat Hybrid Cloud Console: Cost management Connect your AWS account to cost management to track your cloud costs. You can use the cost management service to perform financially related tasks, such as: Visualizing, understanding, and analyzing the use of resources and costs Forecasting your future consumption and comparing them with budgets Optimizing resources and consumption Identifying patterns of usage for further analysis Integrating with third-party tools that can benefit from cost and resourcing data RHEL management bundle Connect your AWS account to the RHEL management bundle in the Hybrid Cloud Console to use your Red Hat product subscriptions on AWS. The RHEL management bundle grants access to additional capabilities which are useful to deploying Red Hat products on the public cloud, including: Red Hat gold images: You can use Red Hat cloud images in AWS and bring your own subscription instead of paying hourly. Autoregistration: This allows cloud instances to automatically connect to console.redhat.com when provisioned so you can use Red Hat Insights services. Important To use RHEL management, you must enable Simple Content Access. See the Red Hat Knowledge Article Simple Content Access for more information. Launch images Connect your AWS account to build and launch customized images as virtual machines in hybrid cloud environments. This workflow uses the launch images service, which is included in every Red Hat subscription, to deploy and manage Red Hat Enterprise Linux (RHEL) systems in AWS. 3.1.1. Adding an Amazon Web Services (AWS) account as a cloud integration You can connect your AWS account to the Red Hat Hybrid Cloud Console as a cloud integration so that you can use your AWS data with Hybrid Cloud Console services. You can create your integration using the account authorization method and let Red Hat configure and manage your integration for you. If you choose this method, you must provide the access key ID and the secret access key for your AWS account. This is the recommended method. However, if you do not want to provide your AWS account credentials to Red Hat, you can configure your integration manually. After adding your AWS integration, you can view and manage your AWS and other integrations from the Integrations page in the Hybrid Cloud Console. Prerequisites You are logged in to the Red Hat Hybrid Cloud Console as an Organization Administrator or as a user with Cloud Administrator permissions. You have access to an AWS account that you want to use with the Hybrid Cloud Console that has the following Identity and Access Management (IAM) roles: CreatePolicy CreateRole AttachRolePolicy GetPolicy GetRole To create your AWS integration using the account authorization configuration method (recommended), you have the access key ID and the secret access key for your AWS account. 
To use the launch images service with your AWS integration, your AWS account must have the following permissions and roles: cloudformation:CreateStack cloudformation:DescribeStacks cloudformation:DeleteStack cloudformation:UpdateStack iam:CreateRole iam:PutRolePolicy iam:AttachRolePolicy iam:PassRole iam:GetRole iam:DeleteRole iam:ListRolePolicies iam:GetRolePolicy iam:DeleteRolePolicy Procedure Go to Settings > Integrations and select the Cloud tab. Click Add integration to open the Add a cloud integration wizard. If this is the first integration you are adding, skip this step. Select Amazon Web Services , and then click . Enter a descriptive name for the integration, for example, my_aws_integration , and then click . Select a configuration mode: Select Account authorization to allow Red Hat to configure and manage the integration for you after you provide your AWS credentials. This is the recommended configuration mode. Enter your AWS access key ID and secret access key and click . The Select applications page appears with Cost Management , Launch images , and RHEL management services selected. Deselect any services that you do not want your integration to connect to, and then click . Note You can choose to deselect all services in this step. You can connect additional Hybrid Cloud Console services after you finish creating the AWS integration. Select Manual configuration and click to configure your integration manually if you do not want to enter your AWS account authorization credentials. Optional: Select a service to connect to your integration. Click . Follow the instructions in the integration wizard. Note If you selected Cost Management , see Integrating Amazon Web Services (AWS) data into cost management for detailed instructions. On the Review details page, review the details of the integration and then click Add . Your AWS integration is added to the Hybrid Cloud Console. Verification Go to the Integrations page and select the Cloud tab. Confirm that your AWS integration is listed and the status is Ready . 3.2. Microsoft Azure integrations with the Hybrid Cloud Console Connect your Microsoft Azure account with the Hybrid Cloud Console to receive the following benefits, depending on the services that you connect with: Gold images Auto-registration of provisioned systems Subscription reporting Red Hat Insights You can connect your Microsoft Azure account to the following services in the Red Hat Hybrid Cloud Console: Cost management Connect your Microsoft Azure account to cost management to track your cloud costs. You can use the cost management service to perform financially related tasks, such as: Visualizing, understanding, and analyzing the use of resources and costs Forecasting your future consumption and comparing them with budgets Optimizing resources and consumption Identifying patterns of usage for further analysis Integrating with third-party tools that can benefit from cost and resourcing data Launch images Connect your Microsoft Azure account to build and launch customized images as virtual machines in hybrid cloud environments. Use the launch images service, which is included in every Red Hat subscription, to deploy and manage Red Hat Enterprise Linux (RHEL) systems in Microsoft Azure. RHEL management bundle Connect your Microsoft Azure account to the RHEL management bundle in the Hybrid Cloud Console to use your existing Red Hat product subscriptions on Microsoft Azure. 
The RHEL management bundle grants access to additional capabilities which are useful to deploying Red Hat products on the public cloud, including: Red Hat gold images: You can use Red Hat cloud images in Microsoft Azure and bring your own subscription instead of paying hourly. Autoregistration: This allows cloud instances to automatically connect to console.redhat.com when provisioned so you can use Red Hat Insights services. Important To use RHEL management, you must enable Simple Content Access. See the Red Hat Knowledge Article Simple Content Access for more information. Azure Lighthouse Azure Lighthouse is a Microsoft Azure service that provides secure access control and managed services for customers and partners. If you add the launch images service or the RHEL management bundle to your Microsoft Azure integration, the Hybrid Cloud Console cloud integrations wizard takes you to Azure Lighthouse to deploy a custom template to link your Red Hat and Microsoft Azure accounts. In your Azure account, deploying the template sets up two Azure roles for RHEL Management: Reader : This role allows the Hybrid Cloud Console to view all resources, but it cannot make any changes. See the Azure documentation for information about this role. Managed Services Registration assignment Delete : This role enables clean-up of the authorization when you remove the Hybrid Cloud Console integration. See the Azure documentation for information about this role. For more information about the Azure Resource Manager template, see Deploy the Azure Resource Manager template in the Azure documentation. 3.2.1. Adding a Microsoft Azure account as a cloud integration You can connect your Microsoft Azure account to the Red Hat Hybrid Cloud Console as a cloud integration so that you can use your Microsoft Azure data with Hybrid Cloud Console services. After adding your Azure integration, you can view and manage your Azure and other integrations from the Integrations page in the Hybrid Cloud Console. Note To access gold images, create an integration for any Azure tenant subscription ID within an Azure tenant. When a single Azure subscription ID is integrated, Red Hat automatically retrieves the Azure tenant ID and enables gold image access at the tenant level. However, to use auto-registration and launch images you must create an integration for each individual Azure tenant subscription ID. Prerequisites You are logged in to the Red Hat Hybrid Cloud Console as an Organization Administrator or as a user with Cloud Administrator permissions. You have access to a Microsoft Azure account that you want to use with the Hybrid Cloud Console. Your Microsoft Azure user account has the minimum permissions required to use the Red Hat services that you want to add to your integration: To use the RHEL management and launch images services with your Microsoft Azure integration, you must have a non-guest account in your tenant who has a role with the Microsoft.Authorization/roleAssignments/write permission, such as owner , for the Azure subscription you are using. See the following Microsoft Azure documentation for more information: Azure built-in roles Deploy the Azure Resource Manager template To use the launch images service with your Microsoft Azure integration, you have registered the following resource providers in your Microsoft Azure subscription: Microsoft.Compute Microsoft.Storage Microsoft.Network Procedure Go to Settings > Integrations . Select the Cloud tab. Click Add integration to open the Add a cloud integration wizard. 
If this is the first integration you are adding, skip this step. Select Microsoft Azure , and then click . Enter a descriptive name for the integration, for example, Azure_build , and then click . Optional: Select a service to connect with Microsoft Azure. You can choose to create the integration without selecting a service. You can connect Hybrid Cloud Console services after you create the Microsoft Azure integration. Follow the instructions in the integration wizard. If you selected Cost Management , see Integrating Microsoft Azure data into cost management for detailed instructions. If you selected Launch images or RHEL Management , complete the following steps on the Configure Azure Lighthouse page of the integration wizard: To access your Azure Lighthouse account, click Go to Lighthouse and sign in with your Microsoft Azure account credentials. On the Custom deployment page, click . Review the information on the Custom deployment page and then click Create to run the deployment. This action creates two roles in your Azure account: Reader and Managed Services Registration assignment Delete Role . Note Do not change the values on the Custom Deployment screen. These values are set by Red Hat. After the deployment is complete, click Go to subscription . On the Subscriptions page, copy the Subscription ID. Note All subscription IDs are now included under the tenant ID. If you have already created an integration and enrolled for a subscription ID, the respective tenant IDs are also enrolled. You will not be charged twice. Return to the Red Hat Hybrid Cloud Console Configure Azure Lighthouse screen and click . Paste the subscription ID that you copied previously into the Subscription ID box and click . On the Review details page, review the details of the integration and then click Add . Your Microsoft Azure integration is added to the Hybrid Cloud Console. Verification Go to the Integrations page, and select the Cloud tab. Confirm that your Azure integration is listed and the status is Ready . 3.3. Google Cloud integrations with the Hybrid Cloud Console You can connect your Google Cloud account to the following services in the Red Hat Hybrid Cloud Console: Cost management Connect your Google Cloud account to cost management to track your cloud costs. You can use the cost management service to perform financially related tasks, such as: Visualizing, understanding, and analyzing the use of resources and costs Forecasting your future consumption and comparing them with budgets Optimizing resources and consumption Identifying patterns of usage for further analysis Integrating with third-party tools that can benefit from cost and resourcing data RHEL management bundle Connect your Google Cloud account to the RHEL management bundle in the Hybrid Cloud Console to use your Red Hat product subscriptions on Google Cloud. The RHEL management bundle grants access to additional capabilities which are useful to deploying Red Hat products on the public cloud, including: Red Hat gold images: You can use Red Hat cloud images in Google Cloud and bring your own subscription instead of paying hourly. Autoregistration: This allows cloud instances to automatically connect to console.redhat.com when provisioned so you can use Red Hat Insights services. Important To use RHEL management, you must enable Simple Content Access. See the Red Hat Knowledge Article Simple Content Access for more information. 
Launch images Connect your Google Cloud account to build and launch customized images as virtual machines in hybrid cloud environments. This workflow uses the launch images service, which is included in every Red Hat subscription, to deploy and manage Red Hat Enterprise Linux (RHEL) systems in Google Cloud. 3.3.1. Adding a Google Cloud account as a cloud integration You can connect your Google Cloud account to the Red Hat Hybrid Cloud Console as a cloud integration so that you can use your Google Cloud data with Hybrid Cloud Console services. After adding your Google Cloud integration, you can view and manage your Google Cloud and other integrations from the Integrations page in the Hybrid Cloud Console. Prerequisites You are logged in to the Red Hat Hybrid Cloud Console as an Organization Administrator or as a user with Cloud Administrator permissions. You have access to a Google Cloud account that you want to use with the Hybrid Cloud Console. To use the launch images service with your Google Cloud integration, you have a Google Cloud project with a default network. Procedure Go to Settings > Integrations and select the Cloud tab. Click Add integration to open the Add a cloud integration wizard. If this is the first integration you are adding, skip this step. Select Google Cloud , and then click . Enter a descriptive name for the integration, for example, my_gcp_integration , and then click . Optional: Select a service to connect with Google Cloud. You can choose to create the integration without selecting a service. You can connect Hybrid Cloud Console services after you create the Google Cloud integration. Click . Follow the instructions in the integration wizard. Note If you selected Cost Management , see Integrating Google Cloud data into cost management for detailed instructions. On the Review details page, review the details of the integration and then click Add . Your Google Cloud integration is added to the Hybrid Cloud Console. Verification Go to the Integrations page, and select the Cloud tab. Confirm that your Google Cloud integration is listed and the status is Ready . 3.4. Oracle Cloud integrations with the Hybrid Cloud Console You can connect your Oracle Cloud account to use with cost management in the Red Hat Hybrid Cloud Console to track your cloud costs. You can use the cost management service to perform financially related tasks, such as: Visualizing, understanding, and analyzing the use of resources and costs Forecasting your future consumption and comparing them with budgets Optimizing resources and consumption Identifying patterns of usage for further analysis Integrating with third-party tools that can benefit from cost and resourcing data 3.4.1. Adding an Oracle Cloud or account as a cloud integration You can connect your Oracle Cloud account to the Red Hat Hybrid Cloud Console as a cloud integration to use your Oracle Cloud data with the Hybrid Cloud Console cost management service. After adding your Oracle Cloud integration, you can view and manage your integrations from the Integrations page in the Hybrid Cloud Console. Prerequisites You are logged in to the Red Hat Hybrid Cloud Console as an Organization Administrator or as a user with Cloud Administrator permissions. You have access to Oracle Cloud Console with access to the compartment you want to add to cost management. Procedure Go to Settings > Integrations and select the Cloud tab. Click Add integration to open the Add a cloud integration wizard. 
If this is the first integration you are adding, skip this step. Select Oracle Cloud Infrastructure , and then click . Enter a descriptive name for the integration, for example, my_cloud_integration , and then click . The Select application page appears. Cost Management is the only service available and it is selected. Click . Follow the steps in the wizard. Refer to the instructions in Integrating Oracle Cloud data into cost management to complete adding the Oracle Cloud integration to cost management. On the Review details page, review the details of the integration and then click Add . Your Oracle Cloud integration is added to the Hybrid Cloud Console. Verification Go to the Integrations page, and select the Cloud tab. Confirm that your Oracle Cloud integration is listed and the status is Ready .
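For the AWS account-authorization flow described in Section 3.1.1, it can help to confirm from the command line that the access key and secret key you plan to enter are valid before you start the wizard. A minimal sketch using the AWS CLI; the IAM user name is a placeholder:

# Confirm that the configured access key and secret key work and identify the account
aws sts get-caller-identity

# Review the policies attached to the IAM user that the integration will use
aws iam list-attached-user-policies --user-name my-integration-user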
| null |
https://docs.redhat.com/en/documentation/red_hat_hybrid_cloud_console/1-latest/html/configuring_cloud_integrations_for_red_hat_services/assembly-adding-cloud-integrations_crc-cloud-integrations
|
Preface
|
Preface Preface
| null |
https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.7/html/using_the_streams_for_apache_kafka_console/preface
|
Chapter 3. Data Grid on OpenShift
|
Chapter 3. Data Grid on OpenShift 3.1. Data Grid 8.4 images Data Grid 8.4 includes two container images, the Data Grid Operator image and Data Grid Server image. Data Grid images are hosted on the Red Hat Container Registry, where you can find health indexes for the images along with information about each tagged version. Custom Data Grid Deployments Red Hat does not support customization of any 8.4 images from the Red Hat Container Registry through the Source-to-Image (S2I) process or ConfigMap API. As a result, it is not possible to use custom: Discovery protocols JGroups SYM_ENCRYPT or ASYM_ENCRYPT encryption mechanisms Additional resources Data Grid Container Images 3.2. Embedded caches on OpenShift Using embedded Data Grid caches in applications running on OpenShift, which was referred to as Library Mode in previous releases, is intended for specific uses only: Using local or distributed caching in custom Java applications to retain full control of the cache lifecycle. Additionally, when using features that are available only with embedded Data Grid, such as distributed streams. Reducing network latency to improve the speed of cache operations. The Hot Rod protocol provides near-cache capabilities that achieve equivalent performance to a standard client-server architecture. Requirements Embedding Data Grid in applications running on OpenShift requires you to use a discovery mechanism so Data Grid nodes can form clusters to replicate and distribute data. Red Hat supports only DNS_PING as the cluster discovery mechanism. DNS_PING exposes a port named ping that Data Grid nodes use to perform discovery and join clusters. TCP is the only supported protocol for the ping port, as in the following example for a pod on OpenShift: spec: ... ports: - name: ping port: 8888 protocol: TCP targetPort: 8888 Limitations Embedding Data Grid in applications running on OpenShift also has some specific limitations: Persistent cache stores are not currently supported. UDP is not supported with embedded Data Grid. Custom caching services Red Hat highly discourages embedding Data Grid to build custom caching servers to handle remote client requests. To benefit from regular, automatic updates that include performance improvements and security fixes, you should create Data Grid clusters with the Data Grid Operator instead. Additional resources Embedding Data Grid in Java Applications
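Returning to the DNS_PING requirement above: for discovery to work, the ping port must be resolvable through DNS. On OpenShift this is commonly done with a headless service that selects the application pods and publishes the ping port. The following sketch is illustrative rather than taken from the product documentation; the service name and the app label are assumptions, and the port number matches the 8888 example above.
apiVersion: v1
kind: Service
metadata:
  name: my-embedded-cache-ping   # hypothetical name
spec:
  clusterIP: None                # headless service, so DNS resolves to the individual pod IPs
  selector:
    app: my-embedded-cache       # hypothetical label; must match the labels on the application pods
  ports:
  - name: ping
    port: 8888
    protocol: TCP                # TCP is the only supported protocol for the ping port
    targetPort: 8888
With a service like this in place, DNS_PING is typically pointed at the service DNS name, for example through the jgroups.dns.query property, so that nodes can discover each other; check the JGroups and Data Grid documentation for the exact configuration in your version.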
|
[
"spec: ports: - name: ping port: 8888 protocol: TCP targetPort: 8888"
] |
https://docs.redhat.com/en/documentation/red_hat_data_grid/8.4/html/data_grid_operator_8.4_release_notes/rhdg-openshift-images
|
Chapter 11. VLAN-aware instances
|
Chapter 11. VLAN-aware instances 11.1. VLAN trunks and VLAN transparent networks VM instances can send and receive VLAN-tagged traffic over a single virtual NIC. This is particularly useful for NFV applications (VNFs) that expect VLAN-tagged traffic, allowing a single virtual NIC to serve multiple customers or services. In ML2/OVN deployments, you can support VLAN-aware instances using VLAN transparent networks. As an alternative, in ML2/OVN or ML2/OVS deployments, you can support VLAN-aware instances using trunks. In a VLAN transparent network, you set up VLAN tagging in the VM instances. The VLAN tags are transferred over the network and consumed by the instances on the same VLAN, and ignored by other instances and devices. In a VLAN transparent network, the VLANs are managed in the instance. You do not need to set up the VLAN in the OpenStack Networking Service (neutron). VLAN trunks support VLAN-aware instances by combining VLANs into a single trunked port. For example, a project data network can use VLANs or tunneling (VXLAN, GRE, or Geneve) segmentation, while the instances see the traffic tagged with VLAN IDs. Network packets are tagged immediately before they are injected into the instance and do not need to be tagged throughout the entire network. The following table compares certain features of VLAN transparent networks and VLAN trunks.
Feature | Transparent | Trunk
Mechanism driver support | ML2/OVN | ML2/OVN, ML2/OVS
VLAN setup managed by | VM instance | OpenStack Networking Service (neutron)
IP assignment | Configured in VM instance | Assigned by DHCP
VLAN ID | Flexible. You can set the VLAN ID in the instance | Fixed. Instances must use the VLAN ID configured in the trunk
11.2. Enabling VLAN transparency in ML2/OVN deployments Enable VLAN transparency if you need to send VLAN-tagged traffic between virtual machine (VM) instances. In a VLAN transparent network, you can configure the VLANs directly in the VMs without configuring them in neutron. Prerequisites Deployment of Red Hat OpenStack Platform 16.1 or higher, with ML2/OVN as the mechanism driver. Provider network of type VLAN or Geneve. Do not use VLAN transparency in deployments with flat type provider networks. Ensure that the external switch supports 802.1q VLAN stacking using ethertype 0x8100 on both VLANs. OVN VLAN transparency does not support 802.1ad QinQ with outer provider VLAN ethertype set to 0x88A8 or 0x9100. Procedure In an environment file on the undercloud node, set the EnableVLANTransparency parameter to true. For example, add the following lines to ovn-extras.yaml. Include the environment file in the openstack overcloud deploy command with any other environment files that are relevant to your environment and deploy the overcloud: Replace <other_overcloud_environment_files> with the list of environment files that are part of your existing deployment. Create the network using the --transparent-vlan argument. Example Set up a VLAN interface on each participating VM. Set the interface MTU to 4 bytes less than the MTU of the underlay network to accommodate the extra tagging required by VLAN transparency. For example, if the underlay network MTU is 1500, set the interface MTU to 1496. The following example command adds a VLAN interface on eth0 with an MTU of 1496. The VLAN is 50 and the interface name is vlan50: Example Set --allowed-address on the VM port. Set the allowed address to the IP address you created on the VLAN interface inside the VM in step 4.
Optionally, you can also set the VLAN interface MAC address: Example The following example sets the IP address to 192.128.111.3 with the optional MAC address 00:40:96:a8:45:c4 on port fv82gwk3-qq2e-yu93-go31-56w7sf476mm0 : Verification Ping between two VMs on the VLAN using the vlan50 IP address. Use tcpdump on eth0 to see if the packets arrive with the VLAN tag intact. Additional resources Environment files in the Director Installation and Usage guide Including environment files in overcloud creation in the Director Installation and Usage guide 11.3. Reviewing the trunk plug-in During a Red Hat openStack deployment, the trunk plug-in is enabled by default. You can review the configuration on the controller nodes: On the controller node, confirm that the trunk plug-in is enabled in the /var/lib/config-data/puppet-generated/neutron/etc/neutron/neutron.conf file: 11.4. Creating a trunk connection To implement trunks for VLAN-tagged traffic, create a parent port and attach the new port to an existing neutron network. When you attach the new port, OpenStack Networking adds a trunk connection to the parent port you created. , create subports. These subports connect VLANs to instances, which allow connectivity to the trunk. Within the instance operating system, you must also create a sub-interface that tags traffic for the VLAN associated with the subport. Identify the network that contains the instances that require access to the trunked VLANs. In this example, this is the public network: Create the parent trunk port, and attach it to the network that the instance connects to. In this example, create a neutron port named parent-trunk-port on the public network. This trunk is the parent port, as you can use it to create subports . Create a trunk using the port that you created in step 2. In this example the trunk is named parent-trunk . View the trunk connection: View the details of the trunk connection: 11.5. Adding subports to the trunk Create a neutron port. This port is a subport connection to the trunk. You must also specify the MAC address that you assigned to the parent port: Note If you receive the error HttpException: Conflict , confirm that you are creating the subport on a different network to the one that has the parent trunk port. This example uses the public network for the parent trunk port, and private for the subport. Associate the port with the trunk ( parent-trunk ), and specify the VLAN ID ( 55 ): 11.6. Configuring an instance to use a trunk You must configure the VM instance operating system to use the MAC address that the Red Hat OpenStack Platform (RHOSP) Networking service (neutron) assigned to the subport. You can also configure the subport to use a specific MAC address during the subport creation step. Prerequisites If you are performing live migrations of your Compute nodes, ensure that the RHOSP Networking service RPC response timeout is appropriately set for your RHOSP deployment. The RPC response timeout value can vary between sites and is dependent on the system speed. The general recommendation is to set the value to at least 120 seconds per/100 trunk ports. The best practice is to measure the trunk port bind process time for your RHOSP deployment, and then set the RHOSP Networking service RPC response timeout appropriately. Try to keep the RPC response timeout value low, but also provide enough time for the RHOSP Networking service to receive an RPC response. For more information, see Section 11.7, "Configuring Networking service RPC timeout" . 
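The procedure that follows covers the Networking service side of the configuration. Inside the guest operating system, the corresponding step is to create a VLAN sub-interface that uses the MAC address and VLAN ID of the subport. The following sketch reuses values from the subport examples in this chapter (MAC address fa:16:3e:33:c4:75, VLAN ID 55, IP address 10.0.0.11); the parent interface name eth0 and the /24 prefix length are assumptions.
# Run inside the instance: create a sub-interface tagged with the subport VLAN ID
ip link add link eth0 name eth0.55 address fa:16:3e:33:c4:75 type vlan id 55
ip link set dev eth0.55 up
# Assign the address that the Networking service allocated to the subport
ip addr add 10.0.0.11/24 dev eth0.55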
Procedure Review the configuration of your network trunk, using the network trunk command. Example Sample output Example Sample output Create an instance that uses the parent port-id as its vNIC. Example Sample output Additional resources Configuring Networking service RPC timeout 11.7. Configuring Networking service RPC timeout There can be situations when you must modify the Red Hat OpenStack Platform (RHOSP) Networking service (neutron) RPC response timeout. For example, live migrations for Compute nodes that use trunk ports can fail if the timeout value is too low. The RPC response timeout value can vary between sites and is dependent on the system speed. The general recommendation is to set the value to at least 120 seconds per/100 trunk ports. If your site uses trunk ports, the best practice is to measure the trunk port bind process time for your RHOSP deployment, and then set the RHOSP Networking service RPC response timeout appropriately. Try to keep the RPC response timeout value low, but also provide enough time for the RHOSP Networking service to receive an RPC response. By using a manual hieradata override, rpc_response_timeout , you can set the RPC response timeout value for the RHOSP Networking service. Procedure On the undercloud host, logged in as the stack user, create a custom YAML environment file. Example Tip The RHOSP Orchestration service (heat) uses a set of plans called templates to install and configure your environment. You can customize aspects of the overcloud with a custom environment file , which is a special type of template that provides customization for your heat templates. In the YAML environment file under ExtraConfig , set the appropriate value (in seconds) for rpc_response_timeout . (The default value is 60 seconds.) Example Note The RHOSP Orchestration service (heat) updates all RHOSP nodes with the value you set in the custom environment file, however this value only impacts the RHOSP Networking components. Run the openstack overcloud deploy command and include the core heat templates, environment files, and this new custom environment file. Important The order of the environment files is important as the parameters and resources defined in subsequent environment files take precedence. Example Additional resources Environment files in the Director Installation and Usage guide Including environment files in overcloud creation in the Director Installation and Usage guide 11.8. Understanding trunk states ACTIVE : The trunk is working as expected and there are no current requests. DOWN : The virtual and physical resources for the trunk are not in sync. This can be a temporary state during negotiation. BUILD : There has been a request and the resources are being provisioned. After successful completion the trunk returns to ACTIVE . DEGRADED : The provisioning request did not complete, so the trunk has only been partially provisioned. It is recommended to remove the subports and try again. ERROR : The provisioning request was unsuccessful. Remove the resource that caused the error to return the trunk to a healthier state. Do not add more subports while in the ERROR state, as this can cause more issues.
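As a quick way to check which of these states a trunk is in, you can query the status field directly. This check is a suggestion rather than part of the official procedure, and it uses the parent-trunk and subport-trunk-port names from the earlier examples.
# Show only the status column for the trunk
openstack network trunk show parent-trunk -c status
# If the trunk is DEGRADED or ERROR, remove the affected subport and retry
openstack network trunk unset --subport subport-trunk-port parent-trunk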
|
[
"parameter_defaults: EnableVLANTransparency: true",
"openstack overcloud deploy --templates ... -e <other_overcloud_environment_files> -e ovn-extras.yaml ...",
"openstack network create network-name --transparent-vlan",
"ip link add link eth0 name vlan50 type vlan id 50 mtu 1496 ip link set vlan50 up ip addr add 192.128.111.3/24 dev vlan50",
"openstack port set --allowed-address ip-address=192.128.111.3,mac-address=00:40:96:a8:45:c4 fv82gwk3-qq2e-yu93-go31-56w7sf476mm0",
"service_plugins=router,qos,trunk",
"openstack network list +--------------------------------------+---------+--------------------------------------+ | ID | Name | Subnets | +--------------------------------------+---------+--------------------------------------+ | 82845092-4701-4004-add7-838837837621 | private | 434c7982-cd96-4c41-a8c9-b93adbdcb197 | | 8d8bc6d6-5b28-4e00-b99e-157516ff0050 | public | 3fd811b4-c104-44b5-8ff8-7a86af5e332c | +--------------------------------------+---------+--------------------------------------+",
"openstack port create --network public parent-trunk-port +-----------------------+-----------------------------------------------------------------------------+ | Field | Value | +-----------------------+-----------------------------------------------------------------------------+ | admin_state_up | UP | | allowed_address_pairs | | | binding_host_id | | | binding_profile | | | binding_vif_details | | | binding_vif_type | unbound | | binding_vnic_type | normal | | created_at | 2016-10-20T02:02:33Z | | description | | | device_id | | | device_owner | | | extra_dhcp_opts | | | fixed_ips | ip_address='172.24.4.230', subnet_id='dc608964-9af3-4fed-9f06-6d3844fb9b9b' | | headers | | | id | 20b6fdf8-0d43-475a-a0f1-ec8f757a4a39 | | mac_address | fa:16:3e:33:c4:75 | | name | parent-trunk-port | | network_id | 871a6bd8-4193-45d7-a300-dcb2420e7cc3 | | project_id | 745d33000ac74d30a77539f8920555e7 | | project_id | 745d33000ac74d30a77539f8920555e7 | | revision_number | 4 | | security_groups | 59e2af18-93c6-4201-861b-19a8a8b79b23 | | status | DOWN | | updated_at | 2016-10-20T02:02:33Z | +-----------------------+-----------------------------------------------------------------------------+",
"openstack network trunk create --parent-port parent-trunk-port parent-trunk +-----------------+--------------------------------------+ | Field | Value | +-----------------+--------------------------------------+ | admin_state_up | UP | | created_at | 2016-10-20T02:05:17Z | | description | | | id | 0e4263e2-5761-4cf6-ab6d-b22884a0fa88 | | name | parent-trunk | | port_id | 20b6fdf8-0d43-475a-a0f1-ec8f757a4a39 | | revision_number | 1 | | status | DOWN | | sub_ports | | | tenant_id | 745d33000ac74d30a77539f8920555e7 | | updated_at | 2016-10-20T02:05:17Z | +-----------------+--------------------------------------+",
"openstack network trunk list +--------------------------------------+--------------+--------------------------------------+-------------+ | ID | Name | Parent Port | Description | +--------------------------------------+--------------+--------------------------------------+-------------+ | 0e4263e2-5761-4cf6-ab6d-b22884a0fa88 | parent-trunk | 20b6fdf8-0d43-475a-a0f1-ec8f757a4a39 | | +--------------------------------------+--------------+--------------------------------------+-------------+",
"openstack network trunk show parent-trunk +-----------------+--------------------------------------+ | Field | Value | +-----------------+--------------------------------------+ | admin_state_up | UP | | created_at | 2016-10-20T02:05:17Z | | description | | | id | 0e4263e2-5761-4cf6-ab6d-b22884a0fa88 | | name | parent-trunk | | port_id | 20b6fdf8-0d43-475a-a0f1-ec8f757a4a39 | | revision_number | 1 | | status | DOWN | | sub_ports | | | tenant_id | 745d33000ac74d30a77539f8920555e7 | | updated_at | 2016-10-20T02:05:17Z | +-----------------+--------------------------------------+",
"openstack port create --network private --mac-address fa:16:3e:33:c4:75 subport-trunk-port +-----------------------+--------------------------------------------------------------------------+ | Field | Value | +-----------------------+--------------------------------------------------------------------------+ | admin_state_up | UP | | allowed_address_pairs | | | binding_host_id | | | binding_profile | | | binding_vif_details | | | binding_vif_type | unbound | | binding_vnic_type | normal | | created_at | 2016-10-20T02:08:14Z | | description | | | device_id | | | device_owner | | | extra_dhcp_opts | | | fixed_ips | ip_address='10.0.0.11', subnet_id='1a299780-56df-4c0b-a4c0-c5a612cef2e8' | | headers | | | id | 479d742e-dd00-4c24-8dd6-b7297fab3ee9 | | mac_address | fa:16:3e:33:c4:75 | | name | subport-trunk-port | | network_id | 3fe6b758-8613-4b17-901e-9ba30a7c4b51 | | project_id | 745d33000ac74d30a77539f8920555e7 | | project_id | 745d33000ac74d30a77539f8920555e7 | | revision_number | 4 | | security_groups | 59e2af18-93c6-4201-861b-19a8a8b79b23 | | status | DOWN | | updated_at | 2016-10-20T02:08:15Z | +-----------------------+--------------------------------------------------------------------------+",
"openstack network trunk set --subport port=subport-trunk-port,segmentation-type=vlan,segmentation-id=55 parent-trunk",
"openstack network trunk list",
"+---------------------+--------------+---------------------+-------------+ | ID | Name | Parent Port | Description | +---------------------+--------------+---------------------+-------------+ | 0e4263e2-5761-4cf6- | parent-trunk | 20b6fdf8-0d43-475a- | | | ab6d-b22884a0fa88 | | a0f1-ec8f757a4a39 | | +---------------------+--------------+---------------------+-------------+",
"openstack network trunk show parent-trunk",
"+-----------------+------------------------------------------------------+ | Field | Value | +-----------------+------------------------------------------------------+ | admin_state_up | UP | | created_at | 2021-10-20T02:05:17Z | | description | | | id | 0e4263e2-5761-4cf6-ab6d-b22884a0fa88 | | name | parent-trunk | | port_id | 20b6fdf8-0d43-475a-a0f1-ec8f757a4a39 | | revision_number | 2 | | status | DOWN | | sub_ports | port_id='479d742e-dd00-4c24-8dd6-b7297fab3ee9', segm | | | entation_id='55', segmentation_type='vlan' | | tenant_id | 745d33000ac74d30a77539f8920555e7 | | updated_at | 2021-08-20T02:10:06Z | +-----------------+------------------------------------------------------+",
"openstack server create --image cirros --flavor m1.tiny --security-group default --key-name sshaccess --nic port-id=20b6fdf8-0d43-475a-a0f1-ec8f757a4a39 testInstance",
"+--------------------------------------+---------------------------------+ | Property | Value | +--------------------------------------+---------------------------------+ | OS-DCF:diskConfig | MANUAL | | OS-EXT-AZ:availability_zone | | | OS-EXT-SRV-ATTR:host | - | | OS-EXT-SRV-ATTR:hostname | testinstance | | OS-EXT-SRV-ATTR:hypervisor_hostname | - | | OS-EXT-SRV-ATTR:instance_name | | | OS-EXT-SRV-ATTR:kernel_id | | | OS-EXT-SRV-ATTR:launch_index | 0 | | OS-EXT-SRV-ATTR:ramdisk_id | | | OS-EXT-SRV-ATTR:reservation_id | r-juqco0el | | OS-EXT-SRV-ATTR:root_device_name | - | | OS-EXT-SRV-ATTR:user_data | - | | OS-EXT-STS:power_state | 0 | | OS-EXT-STS:task_state | scheduling | | OS-EXT-STS:vm_state | building | | OS-SRV-USG:launched_at | - | | OS-SRV-USG:terminated_at | - | | accessIPv4 | | | accessIPv6 | | | adminPass | uMyL8PnZRBwQ | | config_drive | | | created | 2021-08-20T03:02:51Z | | description | - | | flavor | m1.tiny (1) | | hostId | | | host_status | | | id | 88b7aede-1305-4d91-a180-67e7eac | | | 8b70d | | image | cirros (568372f7-15df-4e61-a05f | | | -10954f79a3c4) | | key_name | sshaccess | | locked | False | | metadata | {} | | name | testInstance | | os-extended-volumes:volumes_attached | [] | | progress | 0 | | security_groups | default | | status | BUILD | | tags | [] | | tenant_id | 745d33000ac74d30a77539f8920555e | | | 7 | | updated | 2021-08-20T03:02:51Z | | user_id | 8c4aea738d774967b4ef388eb41fef5 | | | e | +--------------------------------------+---------------------------------+",
"vi /home/stack/templates/my-modules-environment.yaml",
"parameter_defaults: ExtraConfig: neutron::rpc_response_timeout: 120",
"openstack overcloud deploy --templates -e [your-environment-files] -e /usr/share/openstack-tripleo-heat-templates/environments/services/my-modules-environment.yaml"
] |
https://docs.redhat.com/en/documentation/red_hat_openstack_platform/17.0/html/networking_guide/vlan-aware-instances_rhosp-network
|
Chapter 3. Installing a cluster quickly on GCP
|
Chapter 3. Installing a cluster quickly on GCP In OpenShift Container Platform version 4.15, you can install a cluster on Google Cloud Platform (GCP) that uses the default configuration options. 3.1. Prerequisites You reviewed details about the OpenShift Container Platform installation and update processes. You read the documentation on selecting a cluster installation method and preparing it for users . You configured a GCP project to host the cluster. If you use a firewall, you configured it to allow the sites that your cluster requires access to. 3.2. Internet access for OpenShift Container Platform In OpenShift Container Platform 4.15, you require access to the internet to install your cluster. You must have internet access to: Access OpenShift Cluster Manager to download the installation program and perform subscription management. If the cluster has internet access and you do not disable Telemetry, that service automatically entitles your cluster. Access Quay.io to obtain the packages that are required to install your cluster. Obtain the packages that are required to perform cluster updates. Important If your cluster cannot have direct internet access, you can perform a restricted network installation on some types of infrastructure that you provision. During that process, you download the required content and use it to populate a mirror registry with the installation packages. With some installation types, the environment that you install your cluster in will not require internet access. Before you update the cluster, you update the content of the mirror registry. 3.3. Generating a key pair for cluster node SSH access During an OpenShift Container Platform installation, you can provide an SSH public key to the installation program. The key is passed to the Red Hat Enterprise Linux CoreOS (RHCOS) nodes through their Ignition config files and is used to authenticate SSH access to the nodes. The key is added to the ~/.ssh/authorized_keys list for the core user on each node, which enables password-less authentication. After the key is passed to the nodes, you can use the key pair to SSH in to the RHCOS nodes as the user core . To access the nodes through SSH, the private key identity must be managed by SSH for your local user. If you want to SSH in to your cluster nodes to perform installation debugging or disaster recovery, you must provide the SSH public key during the installation process. The ./openshift-install gather command also requires the SSH public key to be in place on the cluster nodes. Important Do not skip this procedure in production environments, where disaster recovery and debugging is required. Note You must use a local key, not one that you configured with platform-specific approaches such as AWS key pairs . Procedure If you do not have an existing SSH key pair on your local machine to use for authentication onto your cluster nodes, create one. For example, on a computer that uses a Linux operating system, run the following command: USD ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1 1 Specify the path and file name, such as ~/.ssh/id_ed25519 , of the new SSH key. If you have an existing key pair, ensure your public key is in the your ~/.ssh directory. Note If you plan to install an OpenShift Container Platform cluster that uses the RHEL cryptographic libraries that have been submitted to NIST for FIPS 140-2/140-3 Validation on only the x86_64 , ppc64le , and s390x architectures, do not create a key that uses the ed25519 algorithm. 
Instead, create a key that uses the rsa or ecdsa algorithm. View the public SSH key: USD cat <path>/<file_name>.pub For example, run the following to view the ~/.ssh/id_ed25519.pub public key: USD cat ~/.ssh/id_ed25519.pub Add the SSH private key identity to the SSH agent for your local user, if it has not already been added. SSH agent management of the key is required for password-less SSH authentication onto your cluster nodes, or if you want to use the ./openshift-install gather command. Note On some distributions, default SSH private key identities such as ~/.ssh/id_rsa and ~/.ssh/id_dsa are managed automatically. If the ssh-agent process is not already running for your local user, start it as a background task: USD eval "USD(ssh-agent -s)" Example output Agent pid 31874 Note If your cluster is in FIPS mode, only use FIPS-compliant algorithms to generate the SSH key. The key must be either RSA or ECDSA. Add your SSH private key to the ssh-agent : USD ssh-add <path>/<file_name> 1 1 Specify the path and file name for your SSH private key, such as ~/.ssh/id_ed25519 Example output Identity added: /home/<you>/<path>/<file_name> (<computer_name>) steps When you install OpenShift Container Platform, provide the SSH public key to the installation program. 3.4. Obtaining the installation program Before you install OpenShift Container Platform, download the installation file on the host you are using for installation. Prerequisites You have a computer that runs Linux or macOS, with at least 1.2 GB of local disk space. Procedure Go to the Cluster Type page on the Red Hat Hybrid Cloud Console. If you have a Red Hat account, log in with your credentials. If you do not, create an account. Tip You can also download the binaries for a specific OpenShift Container Platform release . Select your infrastructure provider from the Run it yourself section of the page. Select your host operating system and architecture from the dropdown menus under OpenShift Installer and click Download Installer . Place the downloaded file in the directory where you want to store the installation configuration files. Important The installation program creates several files on the computer that you use to install your cluster. You must keep the installation program and the files that the installation program creates after you finish installing the cluster. Both of the files are required to delete the cluster. Deleting the files created by the installation program does not remove your cluster, even if the cluster failed during installation. To remove your cluster, complete the OpenShift Container Platform uninstallation procedures for your specific cloud provider. Extract the installation program. For example, on a computer that uses a Linux operating system, run the following command: USD tar -xvf openshift-install-linux.tar.gz Download your installation pull secret from Red Hat OpenShift Cluster Manager . This pull secret allows you to authenticate with the services that are provided by the included authorities, including Quay.io, which serves the container images for OpenShift Container Platform components. Tip Alternatively, you can retrieve the installation program from the Red Hat Customer Portal , where you can specify a version of the installation program to download. However, you must have an active subscription to access this page. 3.5. Deploying the cluster You can install OpenShift Container Platform on a compatible cloud platform. 
Important You can run the create cluster command of the installation program only once, during initial installation. Prerequisites You have configured an account with the cloud platform that hosts your cluster. You have the OpenShift Container Platform installation program and the pull secret for your cluster. You have verified that the cloud provider account on your host has the correct permissions to deploy the cluster. An account with incorrect permissions causes the installation process to fail with an error message that displays the missing permissions. Procedure Remove any existing GCP credentials that do not use the service account key for the GCP account that you configured for your cluster and that are stored in the following locations: The GOOGLE_CREDENTIALS , GOOGLE_CLOUD_KEYFILE_JSON , or GCLOUD_KEYFILE_JSON environment variables The ~/.gcp/osServiceAccount.json file The gcloud cli default credentials Change to the directory that contains the installation program and initialize the cluster deployment: USD ./openshift-install create cluster --dir <installation_directory> \ 1 --log-level=info 2 1 For <installation_directory> , specify the directory name to store the files that the installation program creates. 2 To view different installation details, specify warn , debug , or error instead of info . When specifying the directory: Verify that the directory has the execute permission. This permission is required to run Terraform binaries under the installation directory. Use an empty directory. Some installation assets, such as bootstrap X.509 certificates, have short expiration intervals, therefore you must not reuse an installation directory. If you want to reuse individual files from another cluster installation, you can copy them into your directory. However, the file names for the installation assets might change between releases. Use caution when copying installation files from an earlier OpenShift Container Platform version. Provide values at the prompts: Optional: Select an SSH key to use to access your cluster machines. Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. Select gcp as the platform to target. If you have not configured the service account key for your GCP account on your host, you must obtain it from GCP and paste the contents of the file or enter the absolute path to the file. Select the project ID to provision the cluster in. The default value is specified by the service account that you configured. Select the region to deploy the cluster to. Select the base domain to deploy the cluster to. The base domain corresponds to the public DNS zone that you created for your cluster. Enter a descriptive name for your cluster. If you provide a name that is longer than 6 characters, only the first 6 characters will be used in the infrastructure ID that is generated from the cluster name. Paste the pull secret from Red Hat OpenShift Cluster Manager . Optional: You can reduce the number of permissions for the service account that you used to install the cluster. If you assigned the Owner role to your service account, you can remove that role and replace it with the Viewer role. If you included the Service Account Key Admin role, you can remove it. 
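If you prefer to perform the optional permission reduction from the command line, the standard gcloud commands look similar to the following sketch. The project ID and service account name are placeholders that you must replace; roles/owner and roles/viewer are the standard GCP role identifiers.
# Remove the Owner role from the installation service account (placeholders are hypothetical)
gcloud projects remove-iam-policy-binding <project_id> \
    --member="serviceAccount:<service_account_name>@<project_id>.iam.gserviceaccount.com" \
    --role="roles/owner"
# Grant the narrower Viewer role instead
gcloud projects add-iam-policy-binding <project_id> \
    --member="serviceAccount:<service_account_name>@<project_id>.iam.gserviceaccount.com" \
    --role="roles/viewer"
Removing the Service Account Key Admin role works the same way, with --role="roles/iam.serviceAccountKeyAdmin".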
Verification When the cluster deployment completes successfully: The terminal displays directions for accessing your cluster, including a link to the web console and credentials for the kubeadmin user. Credential information also outputs to <installation_directory>/.openshift_install.log . Important Do not delete the installation program or the files that the installation program creates. Both are required to delete the cluster. Example output ... INFO Install complete! INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: "kubeadmin", and password: "password" INFO Time elapsed: 36m22s Important The Ignition config files that the installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information. It is recommended that you use Ignition config files within 12 hours after they are generated because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation. 3.6. Installing the OpenShift CLI by downloading the binary You can install the OpenShift CLI ( oc ) to interact with OpenShift Container Platform from a command-line interface. You can install oc on Linux, Windows, or macOS. Important If you installed an earlier version of oc , you cannot use it to complete all of the commands in OpenShift Container Platform 4.15. Download and install the new version of oc . Installing the OpenShift CLI on Linux You can install the OpenShift CLI ( oc ) binary on Linux by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the architecture from the Product Variant drop-down list. Select the appropriate version from the Version drop-down list. Click Download Now to the OpenShift v4.15 Linux Clients entry and save the file. Unpack the archive: USD tar xvf <file> Place the oc binary in a directory that is on your PATH . To check your PATH , execute the following command: USD echo USDPATH Verification After you install the OpenShift CLI, it is available using the oc command: USD oc <command> Installing the OpenShift CLI on Windows You can install the OpenShift CLI ( oc ) binary on Windows by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version from the Version drop-down list. Click Download Now to the OpenShift v4.15 Windows Client entry and save the file. Unzip the archive with a ZIP program. Move the oc binary to a directory that is on your PATH . 
To check your PATH , open the command prompt and execute the following command: C:\> path Verification After you install the OpenShift CLI, it is available using the oc command: C:\> oc <command> Installing the OpenShift CLI on macOS You can install the OpenShift CLI ( oc ) binary on macOS by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version from the Version drop-down list. Click Download Now to the OpenShift v4.15 macOS Clients entry and save the file. Note For macOS arm64, choose the OpenShift v4.15 macOS arm64 Client entry. Unpack and unzip the archive. Move the oc binary to a directory on your PATH. To check your PATH , open a terminal and execute the following command: USD echo USDPATH Verification Verify your installation by using an oc command: USD oc <command> 3.7. Logging in to the cluster by using the CLI You can log in to your cluster as a default system user by exporting the cluster kubeconfig file. The kubeconfig file contains information about the cluster that is used by the CLI to connect a client to the correct cluster and API server. The file is specific to a cluster and is created during OpenShift Container Platform installation. Prerequisites You deployed an OpenShift Container Platform cluster. You installed the oc CLI. Procedure Export the kubeadmin credentials: USD export KUBECONFIG=<installation_directory>/auth/kubeconfig 1 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. Verify you can run oc commands successfully using the exported configuration: USD oc whoami Example output system:admin Additional resources See Accessing the web console for more details about accessing and understanding the OpenShift Container Platform web console. 3.8. Telemetry access for OpenShift Container Platform In OpenShift Container Platform 4.15, the Telemetry service, which runs by default to provide metrics about cluster health and the success of updates, requires internet access. If your cluster is connected to the internet, Telemetry runs automatically, and your cluster is registered to OpenShift Cluster Manager . After you confirm that your OpenShift Cluster Manager inventory is correct, either maintained automatically by Telemetry or manually by using OpenShift Cluster Manager, use subscription watch to track your OpenShift Container Platform subscriptions at the account or multi-cluster level. Additional resources See About remote health monitoring for more information about the Telemetry service 3.9. steps Customize your cluster . If necessary, you can opt out of remote health reporting .
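As an informal sanity check after logging in with the CLI (this is not part of the documented procedure), you can confirm that the nodes are ready and that the cluster reports its version and update status:
# List cluster nodes and their readiness
oc get nodes
# Show the installed cluster version and update status
oc get clusterversion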
|
[
"ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1",
"cat <path>/<file_name>.pub",
"cat ~/.ssh/id_ed25519.pub",
"eval \"USD(ssh-agent -s)\"",
"Agent pid 31874",
"ssh-add <path>/<file_name> 1",
"Identity added: /home/<you>/<path>/<file_name> (<computer_name>)",
"tar -xvf openshift-install-linux.tar.gz",
"./openshift-install create cluster --dir <installation_directory> \\ 1 --log-level=info 2",
"INFO Install complete! INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: \"kubeadmin\", and password: \"password\" INFO Time elapsed: 36m22s",
"tar xvf <file>",
"echo USDPATH",
"oc <command>",
"C:\\> path",
"C:\\> oc <command>",
"echo USDPATH",
"oc <command>",
"export KUBECONFIG=<installation_directory>/auth/kubeconfig 1",
"oc whoami",
"system:admin"
] |
https://docs.redhat.com/en/documentation/openshift_container_platform_installation/4.15/html/installing_on_gcp/installing-gcp-default
|
Deploying Ansible Automation Platform 2 on Red Hat OpenShift
|
Deploying Ansible Automation Platform 2 on Red Hat OpenShift Red Hat Ansible Automation Platform 2.4 Roger Lopez [email protected]
| null |
https://docs.redhat.com/en/documentation/red_hat_ansible_automation_platform/2.4/html/deploying_ansible_automation_platform_2_on_red_hat_openshift/index
|
13.4. Configuration examples
|
13.4. Configuration examples The following examples provide real-world demonstrations of how SELinux complements the Apache HTTP Server and how full function of the Apache HTTP Server can be maintained. 13.4.1. Running a static site To create a static website, label the .html files for that website with the httpd_sys_content_t type. By default, the Apache HTTP Server cannot write to files that are labeled with the httpd_sys_content_t type. The following example creates a new directory to store files for a read-only website: Use the mkdir utility as root to create a top-level directory: As root, create a /mywebsite/index.html file. Copy and paste the following content into /mywebsite/index.html : To allow the Apache HTTP Server read only access to /mywebsite/ , as well as files and subdirectories under it, label the directory with the httpd_sys_content_t type. Enter the following command as root to add the label change to file-context configuration: Use the restorecon utility as root to make the label changes: For this example, edit the /etc/httpd/conf/httpd.conf file as root. Comment out the existing DocumentRoot option. Add a DocumentRoot "/mywebsite" option. After editing, these options should look as follows: Enter the following command as root to see the status of the Apache HTTP Server. If the server is stopped, start it: If the server is running, restart the service by executing the following command as root (this also applies any changes made to httpd.conf ): Use a web browser to navigate to http://localhost/index.html . The following is displayed: 13.4.2. Sharing NFS and CIFS volumes By default, NFS mounts on the client side are labeled with a default context defined by policy for NFS volumes. In common policies, this default context uses the nfs_t type. Also, by default, Samba shares mounted on the client side are labeled with a default context defined by policy. In common policies, this default context uses the cifs_t type. Depending on policy configuration, services may not be able to read files labeled with the nfs_t or cifs_t types. This may prevent file systems labeled with these types from being mounted and then read or exported by other services. Booleans can be enabled or disabled to control which services are allowed to access the nfs_t and cifs_t types. Enable the httpd_use_nfs Boolean to allow httpd to access and share NFS volumes (labeled with the nfs_t type): Enable the httpd_use_cifs Boolean to allow httpd to access and share CIFS volumes (labeled with the cifs_t type): Note Do not use the -P option if you do not want setsebool changes to persist across reboots. 13.4.3. Sharing files between services Type Enforcement helps prevent processes from accessing files intended for use by another process. For example, by default, Samba cannot read files labeled with the httpd_sys_content_t type, which are intended for use by the Apache HTTP Server. Files can be shared between the Apache HTTP Server, FTP, rsync, and Samba, if the required files are labeled with the public_content_t or public_content_rw_t type. The following example creates a directory and files, and allows that directory and files to be shared (read only) through the Apache HTTP Server, FTP, rsync, and Samba: Use the mkdir utility as root to create a new top-level directory to share files between multiple services: Files and directories that do not match a pattern in file-context configuration may be labeled with the default_t type. 
This type is inaccessible to confined services: As root, create a /shares/index.html file. Copy and paste the following content into /shares/index.html : Labeling /shares/ with the public_content_t type allows read-only access by the Apache HTTP Server, FTP, rsync, and Samba. Enter the following command as root to add the label change to file-context configuration: Use the restorecon utility as root to apply the label changes: To share /shares/ through Samba: Confirm the samba , samba-common , and samba-client packages are installed (version numbers may differ): If any of these packages are not installed, install them by running the following command as root: Edit the /etc/samba/smb.conf file as root. Add the following entry to the bottom of this file to share the /shares/ directory through Samba: A Samba account is required to mount a Samba file system. Enter the following command as root to create a Samba account, where username is an existing Linux user. For example, smbpasswd -a testuser creates a Samba account for the Linux testuser user: If you run the above command, specifying a user name of an account that does not exist on the system, it causes a Cannot locate Unix account for ' username '! error. Start the Samba service: Enter the following command to list the available shares, where username is the Samba account added in step 3. When prompted for a password, enter the password assigned to the Samba account in step 3 (version numbers may differ): User the mkdir utility to create a new directory. This directory will be used to mount the shares Samba share: Enter the following command as root to mount the shares Samba share to /test/ , replacing username with the user name from step 3: Enter the password for username , which was configured in step 3. View the content of the file, which is being shared through Samba: To share /shares/ through the Apache HTTP Server: Confirm the httpd package is installed (version number may differ): If this package is not installed, use the yum utility as root to install it: Change into the /var/www/html/ directory. Enter the following command as root to create a link (named shares ) to the /shares/ directory: Start the Apache HTTP Server: Use a web browser to navigate to http://localhost/shares . The /shares/index.html file is displayed. By default, the Apache HTTP Server reads an index.html file if it exists. If /shares/ did not have index.html , and instead had file1 , file2 , and file3 , a directory listing would occur when accessing http://localhost/shares : Remove the index.html file: Use the touch utility as root to create three files in /shares/ : Enter the following command as root to see the status of the Apache HTTP Server: If the server is stopped, start it: Use a web browser to navigate to http://localhost/shares . A directory listing is displayed: 13.4.4. Changing port numbers Depending on policy configuration, services may only be allowed to run on certain port numbers. Attempting to change the port a service runs on without changing policy may result in the service failing to start. Use the semanage utility as the root user to list the ports SELinux allows httpd to listen on: By default, SELinux allows httpd to listen on TCP ports 80, 443, 488, 8008, 8009, or 8443. If /etc/httpd/conf/httpd.conf is configured so that httpd listens on any port not listed for http_port_t , httpd fails to start. 
To configure httpd to run on a port other than TCP ports 80, 443, 488, 8008, 8009, or 8443: Edit the /etc/httpd/conf/httpd.conf file as root so the Listen option lists a port that is not configured in SELinux policy for httpd . The following example configures httpd to listen on the 10.0.0.1 IP address, and on TCP port 12345: Enter the following command as the root user to add the port to SELinux policy configuration: Confirm that the port is added: If you no longer run httpd on port 12345, use the semanage utility as root to remove the port from policy configuration:
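After adding the port to SELinux policy and updating the Listen directive, a quick way to confirm that the change took effect is to restart httpd and check the listening sockets. This verification is a suggestion rather than part of the original example, and the port number matches the 12345 example above:
~]# systemctl restart httpd.service
~]# ss -ltnp | grep httpd
If httpd still fails to start, check for recent SELinux denials:
~]# ausearch -m avc -ts recent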
|
[
"~]# mkdir /mywebsite",
"<html> <h2>index.html from /mywebsite/</h2> </html>",
"~]# semanage fcontext -a -t httpd_sys_content_t \"/mywebsite(/.*)?\"",
"~]# restorecon -R -v /mywebsite restorecon reset /mywebsite context unconfined_u:object_r:default_t:s0->system_u:object_r:httpd_sys_content_t:s0 restorecon reset /mywebsite/index.html context unconfined_u:object_r:default_t:s0->system_u:object_r:httpd_sys_content_t:s0",
"#DocumentRoot \"/var/www/html\" DocumentRoot \"/mywebsite\"",
"~]# systemctl status httpd.service httpd.service - The Apache HTTP Server Loaded: loaded (/usr/lib/systemd/system/httpd.service; disabled) Active: inactive (dead)",
"~]# systemctl start httpd.service",
"~]# systemctl status httpd.service httpd.service - The Apache HTTP Server Loaded: loaded (/usr/lib/systemd/system/httpd.service; disabled) Active: active (running) since Wed 2014-02-05 13:16:46 CET; 2s ago",
"~]# systemctl restart httpd.service",
"index.html from /mywebsite/",
"~]# setsebool -P httpd_use_nfs on",
"~]# setsebool -P httpd_use_cifs on",
"~]# mkdir /shares",
"~]USD ls -dZ /shares drwxr-xr-x root root unconfined_u:object_r:default_t:s0 /shares",
"<html> <body> <p>Hello</p> </body> </html>",
"~]# semanage fcontext -a -t public_content_t \"/shares(/.*)?\"",
"~]# restorecon -R -v /shares/ restorecon reset /shares context unconfined_u:object_r:default_t:s0->system_u:object_r:public_content_t:s0 restorecon reset /shares/index.html context unconfined_u:object_r:default_t:s0->system_u:object_r:public_content_t:s0",
"~]USD rpm -q samba samba-common samba-client samba-3.4.0-0.41.el6.3.i686 samba-common-3.4.0-0.41.el6.3.i686 samba-client-3.4.0-0.41.el6.3.i686",
"~]# yum install package-name",
"[shares] comment = Documents for Apache HTTP Server, FTP, rsync, and Samba path = /shares public = yes writable = no",
"~]# smbpasswd -a testuser New SMB password: Enter a password Retype new SMB password: Enter the same password again Added user testuser.",
"~]# systemctl start smb.service",
"~]USD smbclient -U username -L localhost Enter username 's password: Domain=[ HOSTNAME ] OS=[Unix] Server=[Samba 3.4.0-0.41.el6] Sharename Type Comment --------- ---- ------- shares Disk Documents for Apache HTTP Server, FTP, rsync, and Samba IPCUSD IPC IPC Service (Samba Server Version 3.4.0-0.41.el6) username Disk Home Directories Domain=[ HOSTNAME ] OS=[Unix] Server=[Samba 3.4.0-0.41.el6] Server Comment --------- ------- Workgroup Master --------- -------",
"~]# mkdir /test/",
"~]# mount //localhost/shares /test/ -o user= username",
"~]USD cat /test/index.html <html> <body> <p>Hello</p> </body> </html>",
"~]USD rpm -q httpd httpd-2.2.11-6.i386",
"~]# yum install httpd",
"html]# ln -s /shares/ shares",
"~]# systemctl start httpd.service",
"~]# rm -i /shares/index.html",
"~]# touch /shares/file{1,2,3} ~]# ls -Z /shares/ -rw-r--r-- root root system_u:object_r:public_content_t:s0 file1 -rw-r--r-- root root unconfined_u:object_r:public_content_t:s0 file2 -rw-r--r-- root root unconfined_u:object_r:public_content_t:s0 file3",
"~]# systemctl status httpd.service httpd.service - The Apache HTTP Server Loaded: loaded (/usr/lib/systemd/system/httpd.service; disabled) Active: inactive (dead)",
"~]# systemctl start httpd.service",
"~]# semanage port -l | grep -w http_port_t http_port_t tcp 80, 443, 488, 8008, 8009, 8443",
"Change this to Listen on specific IP addresses as shown below to prevent Apache from glomming onto all bound IP addresses (0.0.0.0) # #Listen 12.34.56.78:80 Listen 10.0.0.1:12345",
"~]# semanage port -a -t http_port_t -p tcp 12345",
"~]# semanage port -l | grep -w http_port_t http_port_t tcp 12345, 80, 443, 488, 8008, 8009, 8443",
"~]# semanage port -d -t http_port_t -p tcp 12345"
] |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/selinux_users_and_administrators_guide/sect-managing_confined_services-the_apache_http_server-configuration_examples
|
Preface
|
Preface As an OpenShift AI administrator, you can manage the following resources: Cluster PVC size Cluster storage classes OpenShift AI admin and user groups Custom workbench images Jupyter notebook servers You can also specify whether to allow Red Hat to collect data about OpenShift AI usage in your cluster.
| null |
https://docs.redhat.com/en/documentation/red_hat_openshift_ai_self-managed/2.18/html/managing_resources/pr01
|
Chapter 1. Migrating Red Hat Single Sign-On 7.6 to Red Hat build of Keycloak
|
Chapter 1. Migrating Red Hat Single Sign-On 7.6 to Red Hat build of Keycloak The purpose of this guide is to document the steps that are required to successfully migrate Red Hat Single Sign-On 7.6 to Red Hat build of Keycloak 22.0. The instructions address migration of the following elements: Red Hat Single Sign-On 7.6 server Operator deployments on OpenShift Template deployments on OpenShift Applications secured by Red Hat Single Sign-On 7.6 Custom providers Custom themes This guide also includes guidelines for migrating upstream Keycloak to Red Hat build of Keycloak 22.0. Before you start the migration, you might consider installing a new instance of Red Hat build of Keycloak to become familiar with the changes for this release. See the Red Hat build of Keycloak Getting Started Guide .
| null |
https://docs.redhat.com/en/documentation/red_hat_build_of_keycloak/22.0/html/migration_guide/migrating_red_hat_single_sign_on_7_6_to_red_hat_build_of_keycloak
|
8.73. iproute
|
8.73. iproute 8.73.1. RHBA-2013:1697 - iproute bug fix and enhancement update Updated iproute packages that fix several bugs and add various enhancements are now available for Red Hat Enterprise Linux 6. The iproute packages contain networking utilities (for example, ip and rtmon ), which are designed to use the advanced networking capabilities of the Linux kernel. Bug Fixes BZ# 1011148 While monitoring the IP neighbor cache with the ip monitor neigh command, the cache could experience a layer 2 network miss. Consequently, the ip monitor neigh command could not decode the miss event generated by the kernel. To fix this bug, code for the neighbor cache entry deletion and entry miss events has been back-ported from upstream, and ip monitor neigh now recognizes the cache miss event and formats it properly with a miss keyword in the output. BZ# 950400 Previously, Red Hat Enterprise Linux 6 was missing the functionality to set up an IPv6 token-only network configuration. As a consequence, the user had fewer networking options. The IPv6 token feature has been implemented in both the kernel (BZ# 876634) and a userspace interface to iproute. Users can now set up IPv6 token-only networking, optionally receiving network prefixes later. BZ# 908155 Red Hat Enterprise Linux 6.5 shipped with kernel support for VXLAN (Virtual Extensible LAN), a VLAN-like layer 3 encapsulation technique, so a userspace interface was required for users and applications to utilize the VXLAN feature. With this update, the ip utility recognizes and supports 'vxlan' devices. BZ# 838482 When a larger rto_min (the minimum TCP Retransmission TimeOut to use when communicating with a certain destination) was set, the ip route show command did not return correct values. A patch has been provided to fix this bug, and ip route show now handles rto_min as expected. BZ# 974694 Prior to this update, the manual page for the lnstat utility referred incorrectly to a non-existent directory, iproute-doc, instead of the iproute-<package version> directory. The incorrect documentation could confuse the user. To fix this bug, the file-system path has been corrected. BZ# 977845 Previously, there was an inconsistency between the lnstat utility's interval option behavior and its documentation. Consequently, lnstat exited after a number of seconds instead of refreshing the view, making the interval option useless. The interval option behavior has been changed to refresh the data every N seconds, thus fixing the bug. BZ# 985526 Previously, the ip utility was mishandling netlink communication, which could cause hangs under certain circumstances. Consequently, listing network devices with the ip link show command hung in an SELinux restricted mode. With this update, the ip utility checks the result of the rtnl_send() function before waiting for a reply, avoiding an indefinite hang. As a result, it is now possible to list network devices in an SELinux restricted environment. BZ# 950122 Prior to this update, the tc utility documentation lacked a description of the batch option. To fix this bug, the tc manual pages have been updated to include a description of the batch option. Enhancements BZ# 885977 Previously, the bridge module sysfs system did not provide the ability to inspect non-configuration IP multicast Internet Group Management Protocol (IGMP) snooping data. Without this functionality, users could not fully analyze their multicast traffic.
With this update, users are able to list detected multicast router ports, groups with active subscribers, and the associated interfaces. BZ# 929313 Distributed Overlay Virtual Ethernet ( DOVE ) tunnels allow for building Virtual Extensible Local Area Networks (VXLAN), which represent a scalable solution for ISO OSI layer 2 networks used in cloud centers. The bridge tool is part of the iproute packages and can be used, for example, to manage the forwarding database on VXLAN devices on the Linux platform. BZ# 851371 If the tc utility is driven from a pipe, there is no way to recognize when a subcommand has completed. A new OK option has been added to the tc utility. Now, tc in batch mode accepts commands on standard input (the tc -OK -force -batch command) and returns OK on a new line on standard output for each successfully completed tc subcommand. Users of iproute are advised to upgrade to these updated packages, which fix these bugs and add these enhancements.
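To illustrate the new option, a minimal invocation might look like the following. This is an illustrative sketch rather than an example from the advisory; depending on the iproute version, you may need to pass - as the batch file name so that tc reads commands from standard input.
# Feed a single subcommand to tc in batch mode; -OK prints OK after each successful subcommand
echo "qdisc show dev lo" | tc -OK -force -batch -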
| null |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.5_technical_notes/iproute
|
Chapter 5. Using quotas and limit ranges
|
Chapter 5. Using quotas and limit ranges A resource quota, defined by a ResourceQuota object, provides constraints that limit aggregate resource consumption per project. It can limit the quantity of objects that can be created in a project by type, as well as the total amount of compute resources and storage that may be consumed by resources in that project. Using quotas and limit ranges, cluster administrators can set constraints to limit the number of objects or amount of compute resources that are used in your project. This helps cluster administrators better manage and allocate resources across all projects, and ensure that no projects are using more than is appropriate for the cluster size. Important Quotas are set by cluster administrators and are scoped to a given project. OpenShift Container Platform project owners can change quotas for their project, but not limit ranges. OpenShift Container Platform users cannot modify quotas or limit ranges. The following sections help you understand how to check on your quota and limit range settings, what sorts of things they can constrain, and how you can request or limit compute resources in your own pods and containers. 5.1. Resources managed by quota A resource quota, defined by a ResourceQuota object, provides constraints that limit aggregate resource consumption per project. It can limit the quantity of objects that can be created in a project by type, as well as the total amount of compute resources and storage that may be consumed by resources in that project. The following describes the set of compute resources and object types that may be managed by a quota. Note A pod is in a terminal state if status.phase is Failed or Succeeded . Table 5.1. Compute resources managed by quota Resource Name Description cpu The sum of CPU requests across all pods in a non-terminal state cannot exceed this value. cpu and requests.cpu are the same value and can be used interchangeably. memory The sum of memory requests across all pods in a non-terminal state cannot exceed this value. memory and requests.memory are the same value and can be used interchangeably. ephemeral-storage The sum of local ephemeral storage requests across all pods in a non-terminal state cannot exceed this value. ephemeral-storage and requests.ephemeral-storage are the same value and can be used interchangeably. This resource is available only if you enabled the ephemeral storage technology preview. This feature is disabled by default. requests.cpu The sum of CPU requests across all pods in a non-terminal state cannot exceed this value. cpu and requests.cpu are the same value and can be used interchangeably. requests.memory The sum of memory requests across all pods in a non-terminal state cannot exceed this value. memory and requests.memory are the same value and can be used interchangeably. requests.ephemeral-storage The sum of ephemeral storage requests across all pods in a non-terminal state cannot exceed this value. ephemeral-storage and requests.ephemeral-storage are the same value and can be used interchangeably. This resource is available only if you enabled the ephemeral storage technology preview. This feature is disabled by default. limits.cpu The sum of CPU limits across all pods in a non-terminal state cannot exceed this value. limits.memory The sum of memory limits across all pods in a non-terminal state cannot exceed this value. limits.ephemeral-storage The sum of ephemeral storage limits across all pods in a non-terminal state cannot exceed this value. 
This resource is available only if you enabled the ephemeral storage technology preview. This feature is disabled by default. Table 5.2. Storage resources managed by quota Resource Name Description requests.storage The sum of storage requests across all persistent volume claims in any state cannot exceed this value. persistentvolumeclaims The total number of persistent volume claims that can exist in the project. <storage-class-name>.storageclass.storage.k8s.io/requests.storage The sum of storage requests across all persistent volume claims in any state that have a matching storage class, cannot exceed this value. <storage-class-name>.storageclass.storage.k8s.io/persistentvolumeclaims The total number of persistent volume claims with a matching storage class that can exist in the project. Table 5.3. Object counts managed by quota Resource Name Description pods The total number of pods in a non-terminal state that can exist in the project. replicationcontrollers The total number of replication controllers that can exist in the project. resourcequotas The total number of resource quotas that can exist in the project. services The total number of services that can exist in the project. secrets The total number of secrets that can exist in the project. configmaps The total number of ConfigMap objects that can exist in the project. persistentvolumeclaims The total number of persistent volume claims that can exist in the project. openshift.io/imagestreams The total number of image streams that can exist in the project. You can configure an object count quota for these standard namespaced resource types using the count/<resource>.<group> syntax. USD oc create quota <name> --hard=count/<resource>.<group>=<quota> 1 1 <resource> is the name of the resource, and <group> is the API group, if applicable. Use the kubectl api-resources command for a list of resources and their associated API groups. 5.1.1. Setting resource quota for extended resources Overcommitment of resources is not allowed for extended resources, so you must specify requests and limits for the same extended resource in a quota. Currently, only quota items with the prefix requests. are allowed for extended resources. The following is an example scenario of how to set resource quota for the GPU resource nvidia.com/gpu . Procedure To determine how many GPUs are available on a node in your cluster, use the following command: USD oc describe node ip-172-31-27-209.us-west-2.compute.internal | egrep 'Capacity|Allocatable|gpu' Example output openshift.com/gpu-accelerator=true Capacity: nvidia.com/gpu: 2 Allocatable: nvidia.com/gpu: 2 nvidia.com/gpu: 0 0 In this example, 2 GPUs are available. Use this command to set a quota in the namespace nvidia . 
In this example, the quota is 1 : USD cat gpu-quota.yaml Example output apiVersion: v1 kind: ResourceQuota metadata: name: gpu-quota namespace: nvidia spec: hard: requests.nvidia.com/gpu: 1 Create the quota with the following command: USD oc create -f gpu-quota.yaml Example output resourcequota/gpu-quota created Verify that the namespace has the correct quota set using the following command: USD oc describe quota gpu-quota -n nvidia Example output Name: gpu-quota Namespace: nvidia Resource Used Hard -------- ---- ---- requests.nvidia.com/gpu 0 1 Run a pod that asks for a single GPU with the following command: USD oc create -f gpu-pod.yaml Example gpu-pod.yaml apiVersion: v1 kind: Pod metadata: generateName: gpu-pod-s46h7 namespace: nvidia spec: restartPolicy: OnFailure containers: - name: rhel7-gpu-pod image: rhel7 env: - name: NVIDIA_VISIBLE_DEVICES value: all - name: NVIDIA_DRIVER_CAPABILITIES value: "compute,utility" - name: NVIDIA_REQUIRE_CUDA value: "cuda>=5.0" command: ["sleep"] args: ["infinity"] resources: limits: nvidia.com/gpu: 1 Verify that the pod is running with the following command: USD oc get pods Example output NAME READY STATUS RESTARTS AGE gpu-pod-s46h7 1/1 Running 0 1m Verify that the quota Used counter is correct by running the following command: USD oc describe quota gpu-quota -n nvidia Example output Name: gpu-quota Namespace: nvidia Resource Used Hard -------- ---- ---- requests.nvidia.com/gpu 1 1 Using the following command, attempt to create a second GPU pod in the nvidia namespace. This is technically possible on the node because it has 2 GPUs: USD oc create -f gpu-pod.yaml Example output Error from server (Forbidden): error when creating "gpu-pod.yaml": pods "gpu-pod-f7z2w" is forbidden: exceeded quota: gpu-quota, requested: requests.nvidia.com/gpu=1, used: requests.nvidia.com/gpu=1, limited: requests.nvidia.com/gpu=1 This Forbidden error message occurs because you have a quota of 1 GPU and this pod tried to allocate a second GPU, which exceeds its quota. 5.1.2. Quota scopes Each quota can have an associated set of scopes . A quota only measures usage for a resource if it matches the intersection of enumerated scopes. Adding a scope to a quota restricts the set of resources to which that quota can apply. Specifying a resource outside of the allowed set results in a validation error. Scope Description Terminating Match pods where spec.activeDeadlineSeconds >= 0 . NotTerminating Match pods where spec.activeDeadlineSeconds is nil . BestEffort Match pods that have best effort quality of service for either cpu or memory . NotBestEffort Match pods that do not have best effort quality of service for cpu and memory . A BestEffort scope restricts a quota to limiting the following resources: pods A Terminating , NotTerminating , and NotBestEffort scope restricts a quota to tracking the following resources: pods memory requests.memory limits.memory cpu requests.cpu limits.cpu ephemeral-storage requests.ephemeral-storage limits.ephemeral-storage Note Ephemeral storage requests and limits apply only if you enabled the ephemeral storage technology preview. This feature is disabled by default. Additional resources See Resources managed by quotas for more on compute resources. See Quality of Service Classes for more on committing compute resources. 5.2. Admin quota usage 5.2.1. 
Quota enforcement After a resource quota for a project is first created, the project restricts the ability to create any new resources that can violate a quota constraint until it has calculated updated usage statistics. After a quota is created and usage statistics are updated, the project accepts the creation of new content. When you create or modify resources, your quota usage is incremented immediately upon the request to create or modify the resource. When you delete a resource, your quota use is decremented during the full recalculation of quota statistics for the project. A configurable amount of time determines how long it takes to reduce quota usage statistics to their current observed system value. If project modifications exceed a quota usage limit, the server denies the action, and an appropriate error message is returned to the user explaining the quota constraint violated, and what their currently observed usage stats are in the system. 5.2.2. Requests compared to limits When allocating compute resources by quota, each container can specify a request and a limit value each for CPU, memory, and ephemeral storage. Quotas can restrict any of these values. If the quota has a value specified for requests.cpu or requests.memory , then it requires that every incoming container make an explicit request for those resources. If the quota has a value specified for limits.cpu or limits.memory , then it requires that every incoming container specify an explicit limit for those resources. 5.2.3. Sample resource quota definitions Example core-object-counts.yaml apiVersion: v1 kind: ResourceQuota metadata: name: core-object-counts spec: hard: configmaps: "10" 1 persistentvolumeclaims: "4" 2 replicationcontrollers: "20" 3 secrets: "10" 4 services: "10" 5 1 The total number of ConfigMap objects that can exist in the project. 2 The total number of persistent volume claims (PVCs) that can exist in the project. 3 The total number of replication controllers that can exist in the project. 4 The total number of secrets that can exist in the project. 5 The total number of services that can exist in the project. Example openshift-object-counts.yaml apiVersion: v1 kind: ResourceQuota metadata: name: openshift-object-counts spec: hard: openshift.io/imagestreams: "10" 1 1 The total number of image streams that can exist in the project. Example compute-resources.yaml apiVersion: v1 kind: ResourceQuota metadata: name: compute-resources spec: hard: pods: "4" 1 requests.cpu: "1" 2 requests.memory: 1Gi 3 requests.ephemeral-storage: 2Gi 4 limits.cpu: "2" 5 limits.memory: 2Gi 6 limits.ephemeral-storage: 4Gi 7 1 The total number of pods in a non-terminal state that can exist in the project. 2 Across all pods in a non-terminal state, the sum of CPU requests cannot exceed 1 core. 3 Across all pods in a non-terminal state, the sum of memory requests cannot exceed 1Gi. 4 Across all pods in a non-terminal state, the sum of ephemeral storage requests cannot exceed 2Gi. 5 Across all pods in a non-terminal state, the sum of CPU limits cannot exceed 2 cores. 6 Across all pods in a non-terminal state, the sum of memory limits cannot exceed 2Gi. 7 Across all pods in a non-terminal state, the sum of ephemeral storage limits cannot exceed 4Gi. Example besteffort.yaml apiVersion: v1 kind: ResourceQuota metadata: name: besteffort spec: hard: pods: "1" 1 scopes: - BestEffort 2 1 The total number of pods in a non-terminal state with BestEffort quality of service that can exist in the project. 
2 Restricts the quota to only matching pods that have BestEffort quality of service for either memory or CPU. Example compute-resources-long-running.yaml apiVersion: v1 kind: ResourceQuota metadata: name: compute-resources-long-running spec: hard: pods: "4" 1 limits.cpu: "4" 2 limits.memory: "2Gi" 3 limits.ephemeral-storage: "4Gi" 4 scopes: - NotTerminating 5 1 The total number of pods in a non-terminal state. 2 Across all pods in a non-terminal state, the sum of CPU limits cannot exceed this value. 3 Across all pods in a non-terminal state, the sum of memory limits cannot exceed this value. 4 Across all pods in a non-terminal state, the sum of ephemeral storage limits cannot exceed this value. 5 Restricts the quota to only matching pods where spec.activeDeadlineSeconds is set to nil . Build pods will fall under NotTerminating unless the RestartNever policy is applied. Example compute-resources-time-bound.yaml apiVersion: v1 kind: ResourceQuota metadata: name: compute-resources-time-bound spec: hard: pods: "2" 1 limits.cpu: "1" 2 limits.memory: "1Gi" 3 limits.ephemeral-storage: "1Gi" 4 scopes: - Terminating 5 1 The total number of pods in a non-terminal state. 2 Across all pods in a non-terminal state, the sum of CPU limits cannot exceed this value. 3 Across all pods in a non-terminal state, the sum of memory limits cannot exceed this value. 4 Across all pods in a non-terminal state, the sum of ephemeral storage limits cannot exceed this value. 5 Restricts the quota to only matching pods where spec.activeDeadlineSeconds >=0 . For example, this quota would charge for build pods, but not long running pods such as a web server or database. Example storage-consumption.yaml apiVersion: v1 kind: ResourceQuota metadata: name: storage-consumption spec: hard: persistentvolumeclaims: "10" 1 requests.storage: "50Gi" 2 gold.storageclass.storage.k8s.io/requests.storage: "10Gi" 3 silver.storageclass.storage.k8s.io/requests.storage: "20Gi" 4 silver.storageclass.storage.k8s.io/persistentvolumeclaims: "5" 5 bronze.storageclass.storage.k8s.io/requests.storage: "0" 6 bronze.storageclass.storage.k8s.io/persistentvolumeclaims: "0" 7 1 The total number of persistent volume claims in a project. 2 Across all persistent volume claims in a project, the sum of storage requested cannot exceed this value. 3 Across all persistent volume claims in a project, the sum of storage requested in the gold storage class cannot exceed this value. 4 Across all persistent volume claims in a project, the sum of storage requested in the silver storage class cannot exceed this value. 5 Across all persistent volume claims in a project, the total number of claims in the silver storage class cannot exceed this value. 6 Across all persistent volume claims in a project, the sum of storage requested in the bronze storage class cannot exceed this value. When this is set to 0 , it means the bronze storage class cannot request storage. 7 Across all persistent volume claims in a project, the total number of claims in the bronze storage class cannot exceed this value. When this is set to 0 , it means the bronze storage class cannot create claims. 5.2.4. Creating a quota To create a quota, first define the quota in a file. Then use that file to apply it to a project. See the Additional resources section for a link describing this. 
USD oc create -f <resource_quota_definition> [-n <project_name>] Here is an example using the core-object-counts.yaml resource quota definition and the demoproject project name: USD oc create -f core-object-counts.yaml -n demoproject 5.2.5. Creating object count quotas You can create an object count quota for all OpenShift Container Platform standard namespaced resource types, such as BuildConfig , and DeploymentConfig . An object quota count places a defined quota on all standard namespaced resource types. When using a resource quota, an object is charged against the quota if it exists in server storage. These types of quotas are useful to protect against exhaustion of storage resources. To configure an object count quota for a resource, run the following command: USD oc create quota <name> --hard=count/<resource>.<group>=<quota>,count/<resource>.<group>=<quota> Example showing object count quota: USD oc create quota test --hard=count/deployments.extensions=2,count/replicasets.extensions=4,count/pods=3,count/secrets=4 resourcequota "test" created USD oc describe quota test Name: test Namespace: quota Resource Used Hard -------- ---- ---- count/deployments.extensions 0 2 count/pods 0 3 count/replicasets.extensions 0 4 count/secrets 0 4 This example limits the listed resources to the hard limit in each project in the cluster. 5.2.6. Viewing a quota You can view usage statistics related to any hard limits defined in a project's quota by navigating in the web console to the project's Quota page. You can also use the CLI to view quota details: First, get the list of quotas defined in the project. For example, for a project called demoproject : USD oc get quota -n demoproject NAME AGE besteffort 11m compute-resources 2m core-object-counts 29m Describe the quota you are interested in, for example the core-object-counts quota: USD oc describe quota core-object-counts -n demoproject Name: core-object-counts Namespace: demoproject Resource Used Hard -------- ---- ---- configmaps 3 10 persistentvolumeclaims 0 4 replicationcontrollers 3 20 secrets 9 10 services 2 10 5.2.7. Configuring quota synchronization period When a set of resources are deleted, the synchronization time frame of resources is determined by the resource-quota-sync-period setting in the /etc/origin/master/master-config.yaml file. Before quota usage is restored, a user can encounter problems when attempting to reuse the resources. You can change the resource-quota-sync-period setting to have the set of resources regenerate in the needed amount of time (in seconds) for the resources to be once again available: Example resource-quota-sync-period setting kubernetesMasterConfig: apiLevels: - v1beta3 - v1 apiServerArguments: null controllerArguments: resource-quota-sync-period: - "10s" After making any changes, restart the controller services to apply them. USD master-restart api USD master-restart controllers Adjusting the regeneration time can be helpful for creating resources and determining resource usage when automation is used. Note The resource-quota-sync-period setting balances system performance. Reducing the sync period can result in a heavy load on the controller. 5.2.8. Explicit quota to consume a resource If a resource is not managed by quota, a user has no restriction on the amount of resource that can be consumed. For example, if there is no quota on storage related to the gold storage class, the amount of gold storage a project can create is unbounded. 
For high-cost compute or storage resources, administrators can require an explicit quota be granted to consume a resource. For example, if a project was not explicitly given quota for storage related to the gold storage class, users of that project would not be able to create any storage of that type. In order to require explicit quota to consume a particular resource, the following stanza should be added to the master-config.yaml file. admissionConfig: pluginConfig: ResourceQuota: configuration: apiVersion: resourcequota.admission.k8s.io/v1alpha1 kind: Configuration limitedResources: - resource: persistentvolumeclaims 1 matchContains: - gold.storageclass.storage.k8s.io/requests.storage 2 1 The group or resource whose consumption is limited by default. 2 The name of the resource tracked by quota associated with the group/resource to limit by default. In the above example, the quota system intercepts every operation that creates or updates a PersistentVolumeClaim . It checks what resources controlled by quota would be consumed. If there is no covering quota for those resources in the project, the request is denied. In this example, if a user creates a PersistentVolumeClaim that uses storage associated with the gold storage class and there is no matching quota in the project, the request is denied. Additional resources For examples of how to create the file needed to set quotas, see Resources managed by quotas . A description of how to allocate compute resources managed by quota . For information on managing limits and quota on project resources, see Working with projects . If a quota has been defined for your project, see Understanding deployments for considerations in cluster configurations. 5.3. Setting limit ranges A limit range, defined by a LimitRange object, defines compute resource constraints at the pod, container, image, image stream, and persistent volume claim level. The limit range specifies the amount of resources that a pod, container, image, image stream, or persistent volume claim can consume. All requests to create and modify resources are evaluated against each LimitRange object in the project. If the resource violates any of the enumerated constraints, the resource is rejected. If the resource does not set an explicit value, and if the constraint supports a default value, the default value is applied to the resource. For CPU and memory limits, if you specify a maximum value but do not specify a minimum limit, the resource can consume more CPU and memory resources than the maximum value. Core limit range object definition apiVersion: "v1" kind: "LimitRange" metadata: name: "core-resource-limits" 1 spec: limits: - type: "Pod" max: cpu: "2" 2 memory: "1Gi" 3 min: cpu: "200m" 4 memory: "6Mi" 5 - type: "Container" max: cpu: "2" 6 memory: "1Gi" 7 min: cpu: "100m" 8 memory: "4Mi" 9 default: cpu: "300m" 10 memory: "200Mi" 11 defaultRequest: cpu: "200m" 12 memory: "100Mi" 13 maxLimitRequestRatio: cpu: "10" 14 1 The name of the limit range object. 2 The maximum amount of CPU that a pod can request on a node across all containers. 3 The maximum amount of memory that a pod can request on a node across all containers. 4 The minimum amount of CPU that a pod can request on a node across all containers. If you do not set a min value or you set min to 0 , the result is no limit and the pod can consume more than the max CPU value. 5 The minimum amount of memory that a pod can request on a node across all containers. 
If you do not set a min value or you set min to 0 , the result is no limit and the pod can consume more than the max memory value. 6 The maximum amount of CPU that a single container in a pod can request. 7 The maximum amount of memory that a single container in a pod can request. 8 The minimum amount of CPU that a single container in a pod can request. If you do not set a min value or you set min to 0 , the result is no limit and the pod can consume more than the max CPU value. 9 The minimum amount of memory that a single container in a pod can request. If you do not set a min value or you set min to 0 , the result is no limit and the pod can consume more than the max memory value. 10 The default CPU limit for a container if you do not specify a limit in the pod specification. 11 The default memory limit for a container if you do not specify a limit in the pod specification. 12 The default CPU request for a container if you do not specify a request in the pod specification. 13 The default memory request for a container if you do not specify a request in the pod specification. 14 The maximum limit-to-request ratio for a container. OpenShift Container Platform Limit range object definition apiVersion: "v1" kind: "LimitRange" metadata: name: "openshift-resource-limits" spec: limits: - type: openshift.io/Image max: storage: 1Gi 1 - type: openshift.io/ImageStream max: openshift.io/image-tags: 20 2 openshift.io/images: 30 3 - type: "Pod" max: cpu: "2" 4 memory: "1Gi" 5 ephemeral-storage: "1Gi" 6 min: cpu: "1" 7 memory: "1Gi" 8 1 The maximum size of an image that can be pushed to an internal registry. 2 The maximum number of unique image tags as defined in the specification for the image stream. 3 The maximum number of unique image references as defined in the specification for the image stream status. 4 The maximum amount of CPU that a pod can request on a node across all containers. 5 The maximum amount of memory that a pod can request on a node across all containers. 6 The maximum amount of ephemeral storage that a pod can request on a node across all containers. 7 The minimum amount of CPU that a pod can request on a node across all containers. See the Supported Constraints table for important information. 8 The minimum amount of memory that a pod can request on a node across all containers. If you do not set a min value or you set min to 0 , the result is no limit and the pod can consume more than the max memory value. You can specify both core and OpenShift Container Platform resources in one limit range object. 5.3.1. Container limits Supported Resources: CPU Memory Supported Constraints Per container, the following must hold true if specified: Container Constraint Behavior Min Min[<resource>] less than or equal to container.resources.requests[<resource>] (required) less than or equal to container.resources.limits[<resource>] (optional) If the configuration defines a min CPU, the request value must be greater than the CPU value. If you do not set a min value or you set min to 0 , the result is no limit and the pod can consume more of the resource than the max value. Max container.resources.limits[<resource>] (required) less than or equal to Max[<resource>] If the configuration defines a max CPU, you do not need to define a CPU request value. However, you must set a limit that satisfies the maximum CPU constraint that is specified in the limit range. 
MaxLimitRequestRatio MaxLimitRequestRatio[<resource>] less than or equal to ( container.resources.limits[<resource>] / container.resources.requests[<resource>] ) If the limit range defines a maxLimitRequestRatio constraint, any new containers must have both a request and a limit value. Additionally, OpenShift Container Platform calculates a limit-to-request ratio by dividing the limit by the request . The result should be an integer greater than 1. For example, if a container has cpu: 500 in the limit value, and cpu: 100 in the request value, the limit-to-request ratio for cpu is 5 . This ratio must be less than or equal to the maxLimitRequestRatio . Supported Defaults: Default[<resource>] Defaults container.resources.limit[<resource>] to specified value if none. Default Requests[<resource>] Defaults container.resources.requests[<resource>] to specified value if none. 5.3.2. Pod limits Supported Resources: CPU Memory Supported Constraints: Across all containers in a pod, the following must hold true: Table 5.4. Pod Constraint Enforced Behavior Min Min[<resource>] less than or equal to container.resources.requests[<resource>] (required) less than or equal to container.resources.limits[<resource>] . If you do not set a min value or you set min to 0 , the result is no limit and the pod can consume more of the resource than the max value. Max container.resources.limits[<resource>] (required) less than or equal to Max[<resource>] . MaxLimitRequestRatio MaxLimitRequestRatio[<resource>] less than or equal to ( container.resources.limits[<resource>] / container.resources.requests[<resource>] ). 5.3.3. Image limits Supported Resources: Storage Resource type name: openshift.io/Image Per image, the following must hold true if specified: Table 5.5. Image Constraint Behavior Max image.dockerimagemetadata.size less than or equal to Max[<resource>] Note To prevent blobs that exceed the limit from being uploaded to the registry, the registry must be configured to enforce quota. The REGISTRY_MIDDLEWARE_REPOSITORY_OPENSHIFT_ENFORCEQUOTA environment variable must be set to true . By default, the environment variable is set to true for new deployments. 5.3.4. Image stream limits Supported Resources: openshift.io/image-tags openshift.io/images Resource type name: openshift.io/ImageStream Per image stream, the following must hold true if specified: Table 5.6. ImageStream Constraint Behavior Max[openshift.io/image-tags] length( uniqueimagetags( imagestream.spec.tags ) ) less than or equal to Max[openshift.io/image-tags] uniqueimagetags returns unique references to images of given spec tags. Max[openshift.io/images] length( uniqueimages( imagestream.status.tags ) ) less than or equal to Max[openshift.io/images] uniqueimages returns unique image names found in status tags. The name is equal to the digest for the image. 5.3.5. Counting of image references The openshift.io/image-tags resource represents unique stream limits. Possible references are an ImageStreamTag , an ImageStreamImage , or a DockerImage . Tags can be created by using the oc tag and oc import-image commands or by using image streams. No distinction is made between internal and external references. However, each unique reference that is tagged in an image stream specification is counted just once. It does not restrict pushes to an internal container image registry in any way, but is useful for tag restriction. The openshift.io/images resource represents unique image names that are recorded in image stream status. 
It helps to restrict the number of images that can be pushed to the internal registry. Internal and external references are not distinguished. 5.3.6. PersistentVolumeClaim limits Supported Resources: Storage Supported Constraints: Across all persistent volume claims in a project, the following must hold true: Table 5.7. Pod Constraint Enforced Behavior Min Min[<resource>] <= claim.spec.resources.requests[<resource>] (required) Max claim.spec.resources.requests[<resource>] (required) <= Max[<resource>] Limit Range Object Definition { "apiVersion": "v1", "kind": "LimitRange", "metadata": { "name": "pvcs" 1 }, "spec": { "limits": [{ "type": "PersistentVolumeClaim", "min": { "storage": "2Gi" 2 }, "max": { "storage": "50Gi" 3 } } ] } } 1 The name of the limit range object. 2 The minimum amount of storage that can be requested in a persistent volume claim. 3 The maximum amount of storage that can be requested in a persistent volume claim. Additional resources For information on stream limits, see managing image streams . For more information on compute resource constraints . For more information on how CPU and memory are measured, see Recommended control plane practices . You can specify limits and requests for ephemeral storage. For more information on this feature, see Understanding ephemeral storage . 5.4. Limit range operations 5.4.1. Creating a limit range Shown here is an example procedure to follow for creating a limit range; a combined end-to-end sketch also appears at the end of this chapter. Procedure Create the object: USD oc create -f <limit_range_file> -n <project> 5.4.2. View the limit You can view any limit ranges that are defined in a project by navigating in the web console to the Quota page for the project. You can also use the CLI to view limit range details by performing the following steps: Procedure Get the list of limit range objects that are defined in the project. For example, a project called demoproject : USD oc get limits -n demoproject Example Output NAME AGE resource-limits 6d Describe the limit range. For example, for a limit range called resource-limits : USD oc describe limits resource-limits -n demoproject Example Output Name: resource-limits Namespace: demoproject Type Resource Min Max Default Request Default Limit Max Limit/Request Ratio ---- -------- --- --- --------------- ------------- ----------------------- Pod cpu 200m 2 - - - Pod memory 6Mi 1Gi - - - Container cpu 100m 2 200m 300m 10 Container memory 4Mi 1Gi 100Mi 200Mi - openshift.io/Image storage - 1Gi - - - openshift.io/ImageStream openshift.io/image - 12 - - - openshift.io/ImageStream openshift.io/image-tags - 10 - - - 5.4.3. Deleting a limit range To remove a limit range, run the following command: USD oc delete limits <limit_name> Additional resources For information about enforcing different limits on the number of projects that your users can create, managing limits, and quota on project resources, see Resource quotas per project .
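The limit range operations above can be combined into a short end-to-end sketch. The project name demoproject, the file name ephemeral-limits.yaml, and the ephemeral storage values are illustrative assumptions only, not required names; the commands mirror the generic forms shown earlier in this chapter:
USD cat ephemeral-limits.yaml
apiVersion: "v1"
kind: "LimitRange"
metadata:
  name: "ephemeral-limits"
spec:
  limits:
  - type: "Container"
    default:
      ephemeral-storage: "2Gi"
    defaultRequest:
      ephemeral-storage: "1Gi"
USD oc create -f ephemeral-limits.yaml -n demoproject
USD oc describe limits ephemeral-limits -n demoproject
Containers created in demoproject without explicit ephemeral storage values would then receive these defaults, and running oc delete limits ephemeral-limits -n demoproject removes the constraint again.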
|
[
"oc create quota <name> --hard=count/<resource>.<group>=<quota> 1",
"oc describe node ip-172-31-27-209.us-west-2.compute.internal | egrep 'Capacity|Allocatable|gpu'",
"openshift.com/gpu-accelerator=true Capacity: nvidia.com/gpu: 2 Allocatable: nvidia.com/gpu: 2 nvidia.com/gpu: 0 0",
"cat gpu-quota.yaml",
"apiVersion: v1 kind: ResourceQuota metadata: name: gpu-quota namespace: nvidia spec: hard: requests.nvidia.com/gpu: 1",
"oc create -f gpu-quota.yaml",
"resourcequota/gpu-quota created",
"oc describe quota gpu-quota -n nvidia",
"Name: gpu-quota Namespace: nvidia Resource Used Hard -------- ---- ---- requests.nvidia.com/gpu 0 1",
"oc create pod gpu-pod.yaml",
"apiVersion: v1 kind: Pod metadata: generateName: gpu-pod-s46h7 namespace: nvidia spec: restartPolicy: OnFailure containers: - name: rhel7-gpu-pod image: rhel7 env: - name: NVIDIA_VISIBLE_DEVICES value: all - name: NVIDIA_DRIVER_CAPABILITIES value: \"compute,utility\" - name: NVIDIA_REQUIRE_CUDA value: \"cuda>=5.0\" command: [\"sleep\"] args: [\"infinity\"] resources: limits: nvidia.com/gpu: 1",
"oc get pods",
"NAME READY STATUS RESTARTS AGE gpu-pod-s46h7 1/1 Running 0 1m",
"oc describe quota gpu-quota -n nvidia",
"Name: gpu-quota Namespace: nvidia Resource Used Hard -------- ---- ---- requests.nvidia.com/gpu 1 1",
"oc create -f gpu-pod.yaml",
"Error from server (Forbidden): error when creating \"gpu-pod.yaml\": pods \"gpu-pod-f7z2w\" is forbidden: exceeded quota: gpu-quota, requested: requests.nvidia.com/gpu=1, used: requests.nvidia.com/gpu=1, limited: requests.nvidia.com/gpu=1",
"apiVersion: v1 kind: ResourceQuota metadata: name: core-object-counts spec: hard: configmaps: \"10\" 1 persistentvolumeclaims: \"4\" 2 replicationcontrollers: \"20\" 3 secrets: \"10\" 4 services: \"10\" 5",
"apiVersion: v1 kind: ResourceQuota metadata: name: openshift-object-counts spec: hard: openshift.io/imagestreams: \"10\" 1",
"apiVersion: v1 kind: ResourceQuota metadata: name: compute-resources spec: hard: pods: \"4\" 1 requests.cpu: \"1\" 2 requests.memory: 1Gi 3 requests.ephemeral-storage: 2Gi 4 limits.cpu: \"2\" 5 limits.memory: 2Gi 6 limits.ephemeral-storage: 4Gi 7",
"apiVersion: v1 kind: ResourceQuota metadata: name: besteffort spec: hard: pods: \"1\" 1 scopes: - BestEffort 2",
"apiVersion: v1 kind: ResourceQuota metadata: name: compute-resources-long-running spec: hard: pods: \"4\" 1 limits.cpu: \"4\" 2 limits.memory: \"2Gi\" 3 limits.ephemeral-storage: \"4Gi\" 4 scopes: - NotTerminating 5",
"apiVersion: v1 kind: ResourceQuota metadata: name: compute-resources-time-bound spec: hard: pods: \"2\" 1 limits.cpu: \"1\" 2 limits.memory: \"1Gi\" 3 limits.ephemeral-storage: \"1Gi\" 4 scopes: - Terminating 5",
"apiVersion: v1 kind: ResourceQuota metadata: name: storage-consumption spec: hard: persistentvolumeclaims: \"10\" 1 requests.storage: \"50Gi\" 2 gold.storageclass.storage.k8s.io/requests.storage: \"10Gi\" 3 silver.storageclass.storage.k8s.io/requests.storage: \"20Gi\" 4 silver.storageclass.storage.k8s.io/persistentvolumeclaims: \"5\" 5 bronze.storageclass.storage.k8s.io/requests.storage: \"0\" 6 bronze.storageclass.storage.k8s.io/persistentvolumeclaims: \"0\" 7",
"oc create -f <resource_quota_definition> [-n <project_name>]",
"oc create -f core-object-counts.yaml -n demoproject",
"oc create quota <name> --hard=count/<resource>.<group>=<quota>,count/<resource>.<group>=<quota>",
"oc create quota test --hard=count/deployments.extensions=2,count/replicasets.extensions=4,count/pods=3,count/secrets=4 resourcequota \"test\" created oc describe quota test Name: test Namespace: quota Resource Used Hard -------- ---- ---- count/deployments.extensions 0 2 count/pods 0 3 count/replicasets.extensions 0 4 count/secrets 0 4",
"oc get quota -n demoproject NAME AGE besteffort 11m compute-resources 2m core-object-counts 29m",
"oc describe quota core-object-counts -n demoproject Name: core-object-counts Namespace: demoproject Resource Used Hard -------- ---- ---- configmaps 3 10 persistentvolumeclaims 0 4 replicationcontrollers 3 20 secrets 9 10 services 2 10",
"kubernetesMasterConfig: apiLevels: - v1beta3 - v1 apiServerArguments: null controllerArguments: resource-quota-sync-period: - \"10s\"",
"master-restart api master-restart controllers",
"admissionConfig: pluginConfig: ResourceQuota: configuration: apiVersion: resourcequota.admission.k8s.io/v1alpha1 kind: Configuration limitedResources: - resource: persistentvolumeclaims 1 matchContains: - gold.storageclass.storage.k8s.io/requests.storage 2",
"apiVersion: \"v1\" kind: \"LimitRange\" metadata: name: \"core-resource-limits\" 1 spec: limits: - type: \"Pod\" max: cpu: \"2\" 2 memory: \"1Gi\" 3 min: cpu: \"200m\" 4 memory: \"6Mi\" 5 - type: \"Container\" max: cpu: \"2\" 6 memory: \"1Gi\" 7 min: cpu: \"100m\" 8 memory: \"4Mi\" 9 default: cpu: \"300m\" 10 memory: \"200Mi\" 11 defaultRequest: cpu: \"200m\" 12 memory: \"100Mi\" 13 maxLimitRequestRatio: cpu: \"10\" 14",
"apiVersion: \"v1\" kind: \"LimitRange\" metadata: name: \"openshift-resource-limits\" spec: limits: - type: openshift.io/Image max: storage: 1Gi 1 - type: openshift.io/ImageStream max: openshift.io/image-tags: 20 2 openshift.io/images: 30 3 - type: \"Pod\" max: cpu: \"2\" 4 memory: \"1Gi\" 5 ephemeral-storage: \"1Gi\" 6 min: cpu: \"1\" 7 memory: \"1Gi\" 8",
"{ \"apiVersion\": \"v1\", \"kind\": \"LimitRange\", \"metadata\": { \"name\": \"pvcs\" 1 }, \"spec\": { \"limits\": [{ \"type\": \"PersistentVolumeClaim\", \"min\": { \"storage\": \"2Gi\" 2 }, \"max\": { \"storage\": \"50Gi\" 3 } } ] } }",
"oc create -f <limit_range_file> -n <project>",
"oc get limits -n demoproject",
"NAME AGE resource-limits 6d",
"oc describe limits resource-limits -n demoproject",
"Name: resource-limits Namespace: demoproject Type Resource Min Max Default Request Default Limit Max Limit/Request Ratio ---- -------- --- --- --------------- ------------- ----------------------- Pod cpu 200m 2 - - - Pod memory 6Mi 1Gi - - - Container cpu 100m 2 200m 300m 10 Container memory 4Mi 1Gi 100Mi 200Mi - openshift.io/Image storage - 1Gi - - - openshift.io/ImageStream openshift.io/image - 12 - - - openshift.io/ImageStream openshift.io/image-tags - 10 - - -",
"oc delete limits <limit_name>"
] |
https://docs.redhat.com/en/documentation/openshift_container_platform/4.15/html/scalability_and_performance/compute-resource-quotas
|
Chapter 14. Interoperability
|
Chapter 14. Interoperability This chapter discusses how to use AMQ Python in combination with other AMQ components. For an overview of the compatibility of AMQ components, see the product introduction . 14.1. Interoperating with other AMQP clients AMQP messages are composed using the AMQP type system . This common format is one of the reasons AMQP clients in different languages are able to interoperate with each other. When sending messages, AMQ Python automatically converts language-native types to AMQP-encoded data. When receiving messages, the reverse conversion takes place. Note More information about AMQP types is available at the interactive type reference maintained by the Apache Qpid project. Table 14.1. AMQP types AMQP type Description null An empty value boolean A true or false value char A single Unicode character string A sequence of Unicode characters binary A sequence of bytes byte A signed 8-bit integer short A signed 16-bit integer int A signed 32-bit integer long A signed 64-bit integer ubyte An unsigned 8-bit integer ushort An unsigned 16-bit integer uint An unsigned 32-bit integer ulong An unsigned 64-bit integer float A 32-bit floating point number double A 64-bit floating point number array A sequence of values of a single type list A sequence of values of variable type map A mapping from distinct keys to values uuid A universally unique identifier symbol A 7-bit ASCII string from a constrained domain timestamp An absolute point in time Table 14.2. AMQ Python types before encoding and after decoding AMQP type AMQ Python type before encoding AMQ Python type after decoding null None None boolean bool bool char proton.char unicode string unicode unicode binary bytes bytes byte proton.byte int short proton.short int int proton.int32 long long long long ubyte proton.ubyte long ushort proton.ushort long uint proton.uint long ulong proton.ulong long float proton.float32 float double float float array proton.Array proton.Array list list list map dict dict symbol proton.symbol str timestamp proton.timestamp long Table 14.3. AMQ Python and other AMQ client types (1 of 2) AMQ Python type before encoding AMQ C++ type AMQ JavaScript type None nullptr null bool bool boolean proton.char wchar_t number unicode std::string string bytes proton::binary string proton.byte int8_t number proton.short int16_t number proton.int32 int32_t number long int64_t number proton.ubyte uint8_t number proton.ushort uint16_t number proton.uint uint32_t number proton.ulong uint64_t number proton.float32 float number float double number proton.Array - Array list std::vector Array dict std::map object uuid.UUID proton::uuid number proton.symbol proton::symbol string proton.timestamp proton::timestamp number Table 14.4. AMQ Python and other AMQ client types (2 of 2) AMQ Python type before encoding AMQ .NET type AMQ Ruby type None null nil bool System.Boolean true, false proton.char System.Char String unicode System.String String bytes System.Byte[] String proton.byte System.SByte Integer proton.short System.Int16 Integer proton.int32 System.Int32 Integer long System.Int64 Integer proton.ubyte System.Byte Integer proton.ushort System.UInt16 Integer proton.uint System.UInt32 Integer proton.ulong System.UInt64 Integer proton.float32 System.Single Float float System.Double Float proton.Array - Array list Amqp.List Array dict Amqp.Map Hash uuid.UUID System.Guid - proton.symbol Amqp.Symbol Symbol proton.timestamp System.DateTime Time 14.2. 
Interoperating with AMQ JMS AMQP defines a standard mapping to the JMS messaging model. This section discusses the various aspects of that mapping. For more information, see the AMQ JMS Interoperability chapter. JMS message types AMQ Python provides a single message type whose body type can vary. By contrast, the JMS API uses different message types to represent different kinds of data. The table below indicates how particular body types map to JMS message types. For more explicit control of the resulting JMS message type, you can set the x-opt-jms-msg-type message annotation. See the AMQ JMS Interoperability chapter for more information. Table 14.5. AMQ Python and JMS message types AMQ Python body type JMS message type unicode TextMessage None TextMessage bytes BytesMessage Any other type ObjectMessage 14.3. Connecting to AMQ Broker AMQ Broker is designed to interoperate with AMQP 1.0 clients. Check the following to ensure the broker is configured for AMQP messaging: Port 5672 in the network firewall is open. The AMQ Broker AMQP acceptor is enabled. See Default acceptor settings . The necessary addresses are configured on the broker. See Addresses, Queues, and Topics . The broker is configured to permit access from your client, and the client is configured to send the required credentials. See Broker Security . 14.4. Connecting to AMQ Interconnect AMQ Interconnect works with any AMQP 1.0 client. Check the following to ensure the components are configured correctly: Port 5672 in the network firewall is open. The router is configured to permit access from your client, and the client is configured to send the required credentials. See Securing network connections .
| null |
https://docs.redhat.com/en/documentation/red_hat_amq/2021.q3/html/using_the_amq_python_client/interoperability
|
Chapter 7. Troubleshooting alerts and errors in OpenShift Data Foundation
|
Chapter 7. Troubleshooting alerts and errors in OpenShift Data Foundation 7.1. Resolving alerts and errors Red Hat OpenShift Data Foundation can detect and automatically resolve a number of common failure scenarios. However, some problems require administrator intervention. To know the errors currently firing, check one of the following locations: Observe Alerting Firing option Home Overview Cluster tab Storage Data Foundation Storage System storage system link in the pop up Overview Block and File tab Storage Data Foundation Storage System storage system link in the pop up Overview Object tab Copy the error displayed and search it in the following section to know its severity and resolution: Name : CephMonVersionMismatch Message : There are multiple versions of storage services running. Description : There are {{ USDvalue }} different versions of Ceph Mon components running. Severity : Warning Resolution : Fix Procedure : Inspect the user interface and log, and verify if an update is in progress. If an update is in progress, this alert is temporary. If an update is not in progress, restart the upgrade process. Name : CephOSDVersionMismatch Message : There are multiple versions of storage services running. Description : There are {{ USDvalue }} different versions of Ceph OSD components running. Severity : Warning Resolution : Fix Procedure : Inspect the user interface and log, and verify if an update is in progress. If an update is in progress, this alert is temporary. If an update is not in progress, restart the upgrade process. Name : CephClusterCriticallyFull Message : Storage cluster is critically full and needs immediate expansion Description : Storage cluster utilization has crossed 85%. Severity : Critical Resolution : Fix Procedure : Remove unnecessary data or expand the cluster. Name : CephClusterNearFull Message : Storage cluster is nearing full. Expansion is required. Description : Storage cluster utilization has crossed 75%. Severity : Warning Resolution : Fix Procedure : Remove unnecessary data or expand the cluster. Name : NooBaaBucketErrorState Message : A NooBaa Bucket Is In Error State Description : A NooBaa bucket {{ USDlabels.bucket_name }} is in error state for more than 6m Severity : Warning Resolution : Workaround Procedure : Finding the error code of an unhealthy bucket Name : NooBaaNamespaceResourceErrorState Message : A NooBaa Namespace Resource Is In Error State Description : A NooBaa namespace resource {{ USDlabels.namespace_resource_name }} is in error state for more than 5m Severity : Warning Resolution : Fix Procedure : Finding the error code of an unhealthy namespace store resource Name : NooBaaNamespaceBucketErrorState Message : A NooBaa Namespace Bucket Is In Error State Description : A NooBaa namespace bucket {{ USDlabels.bucket_name }} is in error state for more than 5m Severity : Warning Resolution : Fix Procedure : Finding the error code of an unhealthy bucket Name : CephMdsMissingReplicas Message : Insufficient replicas for storage metadata service. Description : Minimum required replicas for storage metadata service not available. Might affect the working of storage cluster. Severity : Warning Resolution : Contact Red Hat support Procedure : Check for alerts and operator status. If the issue cannot be identified, contact Red Hat support . Name : CephMgrIsAbsent Message : Storage metrics collector service not available anymore. Description : Ceph Manager has disappeared from Prometheus target discovery. 
Severity : Critical Resolution : Contact Red Hat support Procedure : Inspect the user interface and log, and verify if an update is in progress. If an update is in progress, this alert is temporary. If an update is not in progress, restart the upgrade process. Once the upgrade is complete, check for alerts and operator status. If the issue persists or cannot be identified, contact Red Hat support . Name : CephNodeDown Message : Storage node {{ USDlabels.node }} went down Description : Storage node {{ USDlabels.node }} went down. Check the node immediately. Severity : Critical Resolution : Contact Red Hat support Procedure : Check which node stopped functioning and its cause. Take appropriate actions to recover the node. If node cannot be recovered: See Replacing storage nodes for Red Hat OpenShift Data Foundation Contact Red Hat support . Name : CephClusterErrorState Message : Storage cluster is in error state Description : Storage cluster is in error state for more than 10m. Severity : Critical Resolution : Contact Red Hat support Procedure : Check for alerts and operator status. If the issue cannot be identified, download log files and diagnostic information using must-gather . Open a Support Ticket with Red Hat Support with an attachment of the output of must-gather. Name : CephClusterWarningState Message : Storage cluster is in degraded state Description : Storage cluster is in warning state for more than 10m. Severity : Warning Resolution : Contact Red Hat support Procedure : Check for alerts and operator status. If the issue cannot be identified, download log files and diagnostic information using must-gather . Open a Support Ticket with Red Hat Support with an attachment of the output of must-gather. Name : CephDataRecoveryTakingTooLong Message : Data recovery is slow Description : Data recovery has been active for too long. Severity : Warning Resolution : Contact Red Hat support Name : CephOSDDiskNotResponding Message : Disk not responding Description : Disk device {{ USDlabels.device }} not responding, on host {{ USDlabels.host }}. Severity : Critical Resolution : Contact Red Hat support Name : CephOSDDiskUnavailable Message : Disk not accessible Description : Disk device {{ USDlabels.device }} not accessible on host {{ USDlabels.host }}. Severity : Critical Resolution : Contact Red Hat support Name : CephPGRepairTakingTooLong Message : Self heal problems detected Description : Self heal operations taking too long. Severity : Warning Resolution : Contact Red Hat support Name : CephMonHighNumberOfLeaderChanges Message : Storage Cluster has seen many leader changes recently. Description : 'Ceph Monitor "{{ USDlabels.job }}": instance {{ USDlabels.instance }} has seen {{ USDvalue printf "%.2f" }} leader changes per minute recently.' Severity : Warning Resolution : Contact Red Hat support Name : CephMonQuorumAtRisk Message : Storage quorum at risk Description : Storage cluster quorum is low. Severity : Critical Resolution : Contact Red Hat support Name : ClusterObjectStoreState Message : Cluster Object Store is in an unhealthy state. Check Ceph cluster health . Description : Cluster Object Store is in an unhealthy state for more than 15s. Check Ceph cluster health . Severity : Critical Resolution : Contact Red Hat support Procedure : Check the CephObjectStore CR instance. Contact Red Hat support . Name : CephOSDFlapping Message : Storage daemon osd.x has restarted 5 times in the last 5 minutes. Check the pod events or Ceph status to find out the cause . 
Description : Storage OSD restarts more than 5 times in 5 minutes . Severity : Critical Resolution : Contact Red Hat support Name : OdfPoolMirroringImageHealth Message : Mirroring image(s) (PV) in the pool <pool-name> are in Warning state for more than 1m. Mirroring might not work as expected. Description : Disaster recovery is failing for one or a few applications. Severity : Warning Resolution : Contact Red Hat support Name : OdfMirrorDaemonStatus Message : Mirror daemon is unhealthy . Description : Disaster recovery is failing for the entire cluster. Mirror daemon is in an unhealthy status for more than 1m. Mirroring on this cluster is not working as expected. Severity : Critical Resolution : Contact Red Hat support 7.2. Resolving cluster health issues There is a finite set of possible health messages that a Red Hat Ceph Storage cluster can raise that show in the OpenShift Data Foundation user interface. These are defined as health checks which have unique identifiers. The identifier is a terse pseudo-human-readable string that is intended to enable tools to make sense of health checks, and present them in a way that reflects their meaning. Click the health code below for more information and troubleshooting. Health code Description MON_DISK_LOW One or more Ceph Monitors are low on disk space. 7.2.1. MON_DISK_LOW This alert triggers if the available space on the file system storing the monitor database, as a percentage, drops below mon_data_avail_warn (default: 15%). This may indicate that some other process or user on the system is filling up the same file system used by the monitor. It may also indicate that the monitor's database is large. Note The paths to the file system differ depending on the deployment of your mons. You can find the path to where the mon is deployed in storagecluster.yaml . Example paths: Mon deployed over PVC path: /var/lib/ceph/mon Mon deployed over hostpath: /var/lib/rook/mon In order to clear up space, view the high usage files in the file system and choose which to delete. To view the files, run: USD du -a <path-in-the-mon-node> | sort -n -r | head -n10 Replace <path-in-the-mon-node> with the path to the file system where mons are deployed. 7.3. Resolving cluster alerts There is a finite set of possible health alerts that a Red Hat Ceph Storage cluster can raise that show in the OpenShift Data Foundation user interface. These are defined as health alerts which have unique identifiers. The identifier is a terse pseudo-human-readable string that is intended to enable tools to make sense of health checks, and present them in a way that reflects their meaning. Click the health alert for more information and troubleshooting. Table 7.1. Types of cluster health alerts Health alert Overview CephClusterCriticallyFull Storage cluster utilization has crossed 80%. CephClusterErrorState Storage cluster is in an error state for more than 10 minutes. CephClusterNearFull Storage cluster is nearing full capacity. Data deletion or cluster expansion is required. CephClusterReadOnly Storage cluster is read-only now and needs immediate data deletion or cluster expansion. CephClusterWarningState Storage cluster is in a warning state for more than 10 mins. CephDataRecoveryTakingTooLong Data recovery has been active for too long. CephMdsCacheUsageHigh Ceph metadata service (MDS) cache usage for the MDS daemon has exceeded 95% of the mds_cache_memory_limit . CephMdsCpuUsageHigh Ceph MDS CPU usage for the MDS daemon has exceeded the threshold for adequate performance. 
CephMdsMissingReplicas Minimum required replicas for storage metadata service not available. Might affect the working of the storage cluster. CephMgrIsAbsent Ceph Manager has disappeared from Prometheus target discovery. CephMgrIsMissingReplicas Ceph manager is missing replicas. This impacts health status reporting and will cause some of the information reported by the ceph status command to be missing or stale. In addition, the Ceph manager is responsible for a manager framework aimed at expanding the existing capabilities of Ceph. CephMonHighNumberOfLeaderChanges The Ceph monitor leader is being changed an unusual number of times. CephMonQuorumAtRisk Storage cluster quorum is low. CephMonQuorumLost The number of monitor pods in the storage cluster are not enough. CephMonVersionMismatch There are different versions of Ceph Mon components running. CephNodeDown A storage node went down. Check the node immediately. The alert should contain the node name. CephOSDCriticallyFull Utilization of back-end Object Storage Device (OSD) has crossed 80%. Free up some space immediately or expand the storage cluster or contact support. CephOSDDiskNotResponding A disk device is not responding on one of the hosts. CephOSDDiskUnavailable A disk device is not accessible on one of the hosts. CephOSDFlapping Ceph storage OSD flapping. CephOSDNearFull One of the OSD storage devices is nearing full. CephOSDSlowOps OSD requests are taking too long to process. CephOSDVersionMismatch There are different versions of Ceph OSD components running. CephPGRepairTakingTooLong Self-healing operations are taking too long. CephPoolQuotaBytesCriticallyExhausted Storage pool quota usage has crossed 90%. CephPoolQuotaBytesNearExhaustion Storage pool quota usage has crossed 70%. OSDCPULoadHigh CPU usage in the OSD container on a specific pod has exceeded 80%, potentially affecting the performance of the OSD. PersistentVolumeUsageCritical Persistent Volume Claim usage has exceeded more than 85% of its capacity. PersistentVolumeUsageNearFull Persistent Volume Claim usage has exceeded more than 75% of its capacity. 7.3.1. CephClusterCriticallyFull Meaning Storage cluster utilization has crossed 80% and will become read-only at 85%. Your Ceph cluster will become read-only once utilization crosses 85%. Free up some space or expand the storage cluster immediately. It is common to see alerts related to Object Storage Device (OSD) full or near full prior to this alert. Impact High Diagnosis Scaling storage Depending on the type of cluster, you need to add storage devices, nodes, or both. For more information, see the Scaling storage guide . Mitigation Deleting information If it is not possible to scale up the cluster, you need to delete information to free up some space. 7.3.2. CephClusterErrorState Meaning This alert reflects that the storage cluster is in ERROR state for an unacceptable amount of time and this impacts the storage availability. Check for other alerts that would have triggered prior to this one and troubleshoot those alerts first. Impact Critical Diagnosis pod status: pending Check for resource issues, pending Persistent Volume Claims (PVCs), node assignment, and kubelet problems: Set MYPOD as the variable for the pod that is identified as the problem pod: <pod_name> Specify the name of the pod that is identified as the problem pod. Look for the resource limitations or pending PVCs. 
Otherwise, check for the node assignment: pod status: NOT pending, running, but NOT ready Check the readiness probe: pod status: NOT pending, but NOT running Check for application or image issues: Important If a node was assigned, check the kubelet on the node. If the basic health of the running pods, node affinity and resource availability on the nodes are verified, run the Ceph tools to get the status of the storage components. Mitigation Debugging log information This step is optional. Run the following command to gather the debugging information for the Ceph cluster: 7.3.3. CephClusterNearFull Meaning Storage cluster utilization has crossed 75% and will become read-only at 85%. Free up some space or expand the storage cluster. Impact Critical Diagnosis Scaling storage Depending on the type of cluster, you need to add storage devices, nodes, or both. For more information, see the Scaling storage guide . Mitigation Deleting information If it is not possible to scale up the cluster, you need to delete information in order to free up some space. 7.3.4. CephClusterReadOnly Meaning Storage cluster utilization has crossed 85% and will become read-only now. Free up some space or expand the storage cluster immediately. Impact Critical Diagnosis Scaling storage Depending on the type of cluster, you need to add storage devices, nodes, or both. For more information, see the Scaling storage guide . Mitigation Deleting information If it is not possible to scale up the cluster, you need to delete information in order to free up some space. 7.3.5. CephClusterWarningState Meaning This alert reflects that the storage cluster has been in a warning state for an unacceptable amount of time. While the storage operations will continue to function in this state, it is recommended to fix the errors so that the cluster does not get into an error state. Check for other alerts that might have triggered prior to this one and troubleshoot those alerts first. Impact High Diagnosis pod status: pending Check for resource issues, pending Persistent Volume Claims (PVCs), node assignment, and kubelet problems: Set MYPOD as the variable for the pod that is identified as the problem pod: <pod_name> Specify the name of the pod that is identified as the problem pod. Look for the resource limitations or pending PVCs. Otherwise, check for the node assignment: pod status: NOT pending, running, but NOT ready Check the readiness probe: pod status: NOT pending, but NOT running Check for application or image issues: Important If a node was assigned, check the kubelet on the node. Mitigation Debugging log information This step is optional. Run the following command to gather the debugging information for the Ceph cluster: 7.3.6. CephDataRecoveryTakingTooLong Meaning Data recovery is slow. Check whether all the Object Storage Devices (OSDs) are up and running. Impact High Diagnosis pod status: pending Check for resource issues, pending Persistent Volume Claims (PVCs), node assignment, and kubelet problems: Set MYPOD as the variable for the pod that is identified as the problem pod: <pod_name> Specify the name of the pod that is identified as the problem pod. Look for the resource limitations or pending PVCs. Otherwise, check for the node assignment: pod status: NOT pending, running, but NOT ready Check the readiness probe: pod status: NOT pending, but NOT running Check for application or image issues: Important If a node was assigned, check the kubelet on the node. Mitigation Debugging log information This step is optional. 
Run the following command to gather the debugging information for the Ceph cluster: 7.3.7. CephMdsCacheUsageHigh Meaning When the storage metadata service (MDS) cannot keep its cache usage under the target threshold specified by mds_health_cache_threshold , or 150% of the cache limit set by mds_cache_memory_limit , the MDS sends a health alert to the monitors indicating the cache is too large. As a result, the MDS related operations become slow. Impact High Diagnosis The MDS tries to stay under a reservation of the mds_cache_memory_limit by trimming unused metadata in its cache and recalling cached items in the client caches. It is possible for the MDS to exceed this limit due to slow recall from clients as a result of multiple clients accessing the files. Mitigation Make sure you have enough memory provisioned for MDS cache. Memory resources for the MDS pods need to be updated in the ocs-storageCluster in order to increase the mds_cache_memory_limit . Run the following command to set the memory of MDS pods, for example, 16GB: OpenShift Data Foundation automatically sets mds_cache_memory_limit to half of the MDS pod memory limit. If the memory is set to 8GB using the command, then the operator sets the MDS cache memory limit to 4GB. 7.3.8. CephMdsCpuUsageHigh Meaning The storage metadata service (MDS) serves filesystem metadata. The MDS is crucial for any file creation, rename, deletion, and update operations. MDS by default is allocated two or three CPUs. This does not cause issues as long as there are not too many metadata operations. When the metadata operation load increases enough to trigger this alert, it means the default CPU allocation is unable to cope with load. You need to increase the CPU allocation or run multiple active MDS servers. Impact High Diagnosis Click Workloads Pods . Select the corresponding MDS pod and click on the Metrics tab. There you will see the allocated and used CPU. By default, the alert is fired if the used CPU is 67% of allocated CPU for 6 hours. If this is the case, follow the steps in the mitigation section. Mitigation You need to either increase the allocated CPU or run multiple active MDS. Use the following command to set the number of allocated CPU for MDS, for example, 8: In order to run multiple active MDS servers, use the following command: Make sure you have enough CPU provisioned for MDS depending on the load. Important Always increase the activeMetadataServers by 1 . The scaling of activeMetadataServers works only if you have more than one PV. If there is only one PV that is causing CPU load, look at increasing the CPU resource as described above. 7.3.9. CephMdsMissingReplicas Meaning Minimum required replicas for the storage metadata service (MDS) are not available. MDS is responsible for filesystem metadata. Degradation of the MDS service can affect how the storage cluster works (related to the CephFS storage class) and should be fixed as soon as possible. Impact High Diagnosis pod status: pending Check for resource issues, pending Persistent Volume Claims (PVCs), node assignment, and kubelet problems: Set MYPOD as the variable for the pod that is identified as the problem pod: <pod_name> Specify the name of the pod that is identified as the problem pod. Look for the resource limitations or pending PVCs.
Otherwise, check for the node assignment: pod status: NOT pending, running, but NOT ready Check the readiness probe: pod status: NOT pending, but NOT running Check for application or image issues: Important If a node was assigned, check the kubelet on the node. Mitigation Debugging log information This step is optional. Run the following command to gather the debugging information for the Ceph cluster: 7.3.10. CephMgrIsAbsent Meaning Not having a Ceph manager running impacts the monitoring of the cluster. Persistent Volume Claim (PVC) creation and deletion requests should be resolved as soon as possible. Impact High Diagnosis Verify that the rook-ceph-mgr pod is failing, and restart if necessary. If the Ceph mgr pod restart fails, follow the general pod troubleshooting to resolve the issue. Verify that the Ceph mgr pod is failing: Describe the Ceph mgr pod for more details: <pod_name> Specify the rook-ceph-mgr pod name from the previous step. Analyze the errors related to resource issues. Delete the pod, and wait for the pod to restart: Follow these steps for general pod troubleshooting: pod status: pending Check for resource issues, pending Persistent Volume Claims (PVCs), node assignment, and kubelet problems: Set MYPOD as the variable for the pod that is identified as the problem pod: <pod_name> Specify the name of the pod that is identified as the problem pod. Look for the resource limitations or pending PVCs. Otherwise, check for the node assignment: pod status: NOT pending, running, but NOT ready Check the readiness probe: pod status: NOT pending, but NOT running Check for application or image issues: Important If a node was assigned, check the kubelet on the node. Mitigation Debugging log information This step is optional. Run the following command to gather the debugging information for the Ceph cluster: 7.3.11. CephMgrIsMissingReplicas Meaning To resolve this alert, you need to determine the cause of the disappearance of the Ceph manager and restart if necessary. Impact High Diagnosis pod status: pending Check for resource issues, pending Persistent Volume Claims (PVCs), node assignment, and kubelet problems: Set MYPOD as the variable for the pod that is identified as the problem pod: <pod_name> Specify the name of the pod that is identified as the problem pod. Look for the resource limitations or pending PVCs. Otherwise, check for the node assignment: pod status: NOT pending, running, but NOT ready Check the readiness probe: pod status: NOT pending, but NOT running Check for application or image issues: Important If a node was assigned, check the kubelet on the node. Mitigation Debugging log information This step is optional. Run the following command to gather the debugging information for the Ceph cluster: 7.3.12. CephMonHighNumberOfLeaderChanges Meaning In a Ceph cluster there is a redundant set of monitor pods that store critical information about the storage cluster. Monitor pods synchronize periodically to obtain information about the storage cluster. The first monitor pod to get the most updated information becomes the leader, and the other monitor pods will start their synchronization process after asking the leader. A problem in network connection or another kind of problem in one or more monitor pods produces an unusual change of the leader. This situation can negatively affect the storage cluster performance. Impact Medium Important Check for any network issues.
If there is a network issue, you need to escalate to the OpenShift Data Foundation team before you proceed with any of the following troubleshooting steps. Diagnosis Print the logs of the affected monitor pod to gather more information about the issue: <rook-ceph-mon-X-yyyy> Specify the name of the affected monitor pod. Alternatively, use the Openshift Web console to open the logs of the affected monitor pod. More information about possible causes is reflected in the log. Perform the general pod troubleshooting steps: pod status: pending Check for resource issues, pending Persistent Volume Claims (PVCs), node assignment, and kubelet problems: Set MYPOD as the variable for the pod that is identified as the problem pod: <pod_name> Specify the name of the pod that is identified as the problem pod. Look for the resource limitations or pending PVCs. Otherwise, check for the node assignment: pod status: NOT pending, running, but NOT ready Check the readiness probe: pod status: NOT pending, but NOT running Check for application or image issues: Important If a node was assigned, check the kubelet on the node. Mitigation Debugging log information This step is optional. Run the following command to gather the debugging information for the Ceph cluster: 7.3.13. CephMonQuorumAtRisk Meaning Multiple MONs work together to provide redundancy. Each of the MONs keeps a copy of the metadata. The cluster is deployed with 3 MONs, and requires 2 or more MONs to be up and running for quorum and for the storage operations to run. If quorum is lost, access to data is at risk. Impact High Diagnosis Restore the Ceph MON Quorum. For more information, see Restoring ceph-monitor quorum in OpenShift Data Foundation in the Troubleshooting guide . If the restoration of the Ceph MON Quorum fails, follow the general pod troubleshooting to resolve the issue. Perform the following for general pod troubleshooting: pod status: pending Check for resource issues, pending Persistent Volume Claims (PVCs), node assignment, and kubelet problems: Set MYPOD as the variable for the pod that is identified as the problem pod: <pod_name> Specify the name of the pod that is identified as the problem pod. Look for the resource limitations or pending PVCs. Otherwise, check for the node assignment: pod status: NOT pending, running, but NOT ready Check the readiness probe: pod status: NOT pending, but NOT running Check for application or image issues: Important If a node was assigned, check the kubelet on the node. Mitigation Debugging log information This step is optional. Run the following command to gather the debugging information for the Ceph cluster: 7.3.14. CephMonQuorumLost Meaning In a Ceph cluster there is a redundant set of monitor pods that store critical information about the storage cluster. Monitor pods synchronize periodically to obtain information about the storage cluster. The first monitor pod to get the most updated information becomes the leader, and the other monitor pods will start their synchronization process after asking the leader. A problem in network connection or another kind of problem in one or more monitor pods produces an unusual change of the leader. This situation can negatively affect the storage cluster performance. Impact High Important Check for any network issues. If there is a network issue, you need to escalate to the OpenShift Data Foundation team before you proceed with any of the following troubleshooting steps. Diagnosis Restore the Ceph MON Quorum. 
For more information, see Restoring ceph-monitor quorum in OpenShift Data Foundation in the Troubleshooting guide . If the restoration of the Ceph MON Quorum fails, follow the general pod troubleshooting to resolve the issue. Alternatively, perform general pod troubleshooting: pod status: pending Check for resource issues, pending Persistent Volume Claims (PVCs), node assignment, and kubelet problems: Set MYPOD as the variable for the pod that is identified as the problem pod: <pod_name> Specify the name of the pod that is identified as the problem pod. Look for the resource limitations or pending PVCs. Otherwise, check for the node assignment: pod status: NOT pending, running, but NOT ready Check the readiness probe: pod status: NOT pending, but NOT running Check for application or image issues: Important If a node was assigned, check the kubelet on the node. Mitigation Debugging log information This step is optional. Run the following command to gather the debugging information for the Ceph cluster: 7.3.15. CephMonVersionMismatch Meaning Typically this alert triggers during an upgrade that is taking a long time. Impact Medium Diagnosis Check the ocs-operator subscription status and the operator pod health to check if an operator upgrade is in progress. Check the ocs-operator subscription health. The status condition types are CatalogSourcesUnhealthy , InstallPlanMissing , InstallPlanPending , and InstallPlanFailed . The status for each type should be False . Example output: The example output shows a False status for type CatalogSourcesUnHealthly , which means that the catalog sources are healthy. Check the OCS operator pod status to see if there is an OCS operator upgrading in progress. If you determine that the `ocs-operator`is in progress, wait for 5 mins and this alert should resolve itself. If you have waited or see a different error status condition, continue troubleshooting. Mitigation Debugging log information This step is optional. Run the following command to gather the debugging information for the Ceph cluster: 7.3.16. CephNodeDown Meaning A node running Ceph pods is down. While storage operations will continue to function as Ceph is designed to deal with a node failure, it is recommended to resolve the issue to minimize the risk of another node going down and affecting storage functions. Impact Medium Diagnosis List all the pods that are running and failing: Important Ensure that you meet the OpenShift Data Foundation resource requirements so that the Object Storage Device (OSD) pods are scheduled on the new node. This may take a few minutes as the Ceph cluster recovers data for the failing but now recovering OSD. To watch this recovery in action, ensure that the OSD pods are correctly placed on the new worker node. Check if the OSD pods that were previously failing are now running: If the previously failing OSD pods have not been scheduled, use the describe command and check the events for reasons the pods were not rescheduled. Describe the events for the failing OSD pod: Find the one or more failing OSD pods: In the events section look for the failure reasons, such as the resources are not being met. In addition, you may use the rook-ceph-toolbox to watch the recovery. This step is optional, but is helpful for large Ceph clusters. To access the toolbox, run the following command: From the rsh command prompt, run the following, and watch for "recovery" under the io section: Determine if there are failed nodes. 
Get the list of worker nodes, and check for the node status: Describe the node which is of the NotReady status to get more information about the failure: Mitigation Debugging log information This step is optional. Run the following command to gather the debugging information for the Ceph cluster: 7.3.17. CephOSDCriticallyFull Meaning One of the Object Storage Devices (OSDs) is critically full. Expand the cluster immediately. Impact High Diagnosis Deleting data to free up storage space You can delete data, and the cluster will resolve the alert through self healing processes. Important This is only applicable to OpenShift Data Foundation clusters that are near or full but not in read-only mode. Read-only mode prevents any changes that include deleting data, that is, deletion of Persistent Volume Claim (PVC), Persistent Volume (PV) or both. Expanding the storage capacity Current storage size is less than 1 TB You must first assess the ability to expand. For every 1 TB of storage added, the cluster needs to have 3 nodes each with a minimum available 2 vCPUs and 8 GiB memory. You can increase the storage capacity to 4 TB via the add-on and the cluster will resolve the alert through self healing processes. If the minimum vCPU and memory resource requirements are not met, you need to add 3 additional worker nodes to the cluster. Mitigation If your current storage size is equal to 4 TB, contact Red Hat support. Optional: Run the following command to gather the debugging information for the Ceph cluster: 7.3.18. CephOSDDiskNotResponding Meaning A disk device is not responding. Check whether all the Object Storage Devices (OSDs) are up and running. Impact Medium Diagnosis pod status: pending Check for resource issues, pending Persistent Volume Claims (PVCs), node assignment, and kubelet problems: Set MYPOD as the variable for the pod that is identified as the problem pod: <pod_name> Specify the name of the pod that is identified as the problem pod. Look for the resource limitations or pending PVCs. Otherwise, check for the node assignment: pod status: NOT pending, running, but NOT ready Check the readiness probe: pod status: NOT pending, but NOT running Check for application or image issues: Important If a node was assigned, check the kubelet on the node. If the basic health of the running pods, node affinity and resource availability on the nodes are verified, run the Ceph tools to get the status of the storage components. Mitigation Debugging log information This step is optional. Run the following command to gather the debugging information for the Ceph cluster: 7.3.19. CephOSDDiskUnavailable Meaning A disk device is not accessible on one of the hosts and its corresponding Object Storage Device (OSD) is marked out by the Ceph cluster. This alert is raised when a Ceph node fails to recover within 10 minutes. Impact High Diagnosis Determine the failed node Get the list of worker nodes, and check for the node status: Describe the node which is of NotReady status to get more information on the failure: 7.3.20. CephOSDFlapping Meaning A storage daemon has restarted 5 times in the last 5 minutes. Check the pod events or Ceph status to find out the cause. Impact High Diagnosis Follow the steps in the Flapping OSDs section of the Red Hat Ceph Storage Troubleshooting Guide. 
Alternatively, follow the steps for general pod troubleshooting: pod status: pending Check for resource issues, pending Persistent Volume Claims (PVCs), node assignment, and kubelet problems: Set MYPOD as the variable for the pod that is identified as the problem pod: <pod_name> Specify the name of the pod that is identified as the problem pod. Look for the resource limitations or pending PVCs. Otherwise, check for the node assignment: pod status: NOT pending, running, but NOT ready Check the readiness probe: pod status: NOT pending, but NOT running Check for application or image issues: Important If a node was assigned, check the kubelet on the node. If the basic health of the running pods, node affinity and resource availability on the nodes are verified, run the Ceph tools to get the status of the storage components. Mitigation Debugging log information This step is optional. Run the following command to gather the debugging information for the Ceph cluster: 7.3.21. CephOSDNearFull Meaning Utilization of back-end storage device Object Storage Device (OSD) has crossed 75% on a host. Impact High Mitigation Free up some space in the cluster, expand the storage cluster, or contact Red Hat support. For more information on scaling storage, see the Scaling storage guide . 7.3.22. CephOSDSlowOps Meaning An Object Storage Device (OSD) with slow requests is every OSD that is not able to service the I/O operations per second (IOPS) in the queue within the time defined by the osd_op_complaint_time parameter. By default, this parameter is set to 30 seconds. Impact Medium Diagnosis More information about the slow requests can be obtained using the Openshift console. Access the OSD pod terminal, and run the following commands: Note The number of the OSD is seen in the pod name. For example, in rook-ceph-osd-0-5d86d4d8d4-zlqkx , <0> is the OSD. Mitigation The main causes of the OSDs having slow requests are: Problems with the underlying hardware or infrastructure, such as, disk drives, hosts, racks, or network switches. Use the Openshift monitoring console to find the alerts or errors about cluster resources. This can give you an idea about the root cause of the slow operations in the OSD. Problems with the network. These problems are usually connected with flapping OSDs. See the Flapping OSDs section of the Red Hat Ceph Storage Troubleshooting Guide If it is a network issue, escalate to the OpenShift Data Foundation team System load. Use the Openshift console to review the metrics of the OSD pod and the node which is running the OSD. Adding or assigning more resources can be a possible solution. 7.3.23. CephOSDVersionMismatch Meaning Typically this alert triggers during an upgrade that is taking a long time. Impact Medium Diagnosis Check the ocs-operator subscription status and the operator pod health to check if an operator upgrade is in progress. Check the ocs-operator subscription health. The status condition types are CatalogSourcesUnhealthy , InstallPlanMissing , InstallPlanPending , and InstallPlanFailed . The status for each type should be False . Example output: The example output shows a False status for type CatalogSourcesUnHealthly , which means that the catalog sources are healthy. Check the OCS operator pod status to see if there is an OCS operator upgrading in progress. If you determine that the `ocs-operator`is in progress, wait for 5 mins and this alert should resolve itself. If you have waited or see a different error status condition, continue troubleshooting. 7.3.24. 
CephPGRepairTakingTooLong Meaning Self-healing operations are taking too long. Impact High Diagnosis Check for inconsistent Placement Groups (PGs), and repair them. For more information, see the Red Hat Knowledgebase solution Handle Inconsistent Placement Groups in Ceph . 7.3.25. CephPoolQuotaBytesCriticallyExhausted Meaning One or more pools has reached, or is very close to reaching, its quota. The threshold to trigger this error condition is controlled by the mon_pool_quota_crit_threshold configuration option. Impact High Mitigation Adjust the pool quotas. Run the following commands to fully remove or adjust the pool quotas up or down: Setting the quota value to 0 will disable the quota. 7.3.26. CephPoolQuotaBytesNearExhaustion Meaning One or more pools is approaching a configured fullness threshold. One threshold that can trigger this warning condition is the mon_pool_quota_warn_threshold configuration option. Impact High Mitigation Adjust the pool quotas. Run the following commands to fully remove or adjust the pool quotas up or down: Setting the quota value to 0 will disable the quota. 7.3.27. OSDCPULoadHigh Meaning OSD is a critical component in Ceph storage, responsible for managing data placement and recovery. High CPU usage in the OSD container suggests increased processing demands, potentially leading to degraded storage performance. Impact High Diagnosis Navigate to the Kubernetes dashboard or equivalent. Access the Workloads section and select the relevant pod associated with the OSD alert. Click the Metrics tab to view CPU metrics for the OSD container. Verify that the CPU usage exceeds 80% over a significant period (as specified in the alert configuration). Mitigation If the OSD CPU usage is consistently high, consider taking the following steps: Evaluate the overall storage cluster performance and identify the OSDs contributing to high CPU usage. Increase the number of OSDs in the cluster by adding more new storage devices in the existing nodes or adding new nodes with new storage devices. Review the Scaling storage guide for instructions to help distribute the load and improve overall system performance. 7.3.28. PersistentVolumeUsageCritical Meaning A Persistent Volume Claim (PVC) is nearing its full capacity and may lead to data loss if not attended to timely. Impact High Mitigation Expand the PVC size to increase the capacity. Log in to the OpenShift Web Console. Click Storage PersistentVolumeClaim . Select openshift-storage from the Project drop-down list. On the PVC you want to expand, click Action menu (...) Expand PVC . Update the Total size to the desired size. Click Expand . Alternatively, you can delete unnecessary data that may be taking up space. 7.3.29. PersistentVolumeUsageNearFull Meaning A Persistent Volume Claim (PVC) is nearing its full capacity and may lead to data loss if not attended to timely. Impact High Mitigation Expand the PVC size to increase the capacity. Log in to the OpenShift Web Console. Click Storage PersistentVolumeClaim . Select openshift-storage from the Project drop-down list. On the PVC you want to expand, click Action menu (...) Expand PVC . Update the Total size to the desired size. Click Expand . Alternatively, you can delete unnecessary data that may be taking up space. 7.4. Finding the error code of an unhealthy bucket Procedure In the OpenShift Web Console, click Storage Object Storage . Click the Object Bucket Claims tab. Look for the object bucket claims (OBCs) that are not in Bound state and click on it.
Click the Events tab and do one of the following: Look for events that might hint you about the current state of the bucket. Click the YAML tab and look for related errors around the status and mode sections of the YAML. If the OBC is in Pending state. the error might appear in the product logs. However, in this case, it is recommended to verify that all the variables provided are accurate. 7.5. Finding the error code of an unhealthy namespace store resource Procedure In the OpenShift Web Console, click Storage Object Storage . Click the Namespace Store tab. Look for the namespace store resources that are not in Bound state and click on it. Click the Events tab and do one of the following: Look for events that might hint you about the current state of the resource. Click the YAML tab and look for related errors around the status and mode sections of the YAML. 7.6. Recovering pods When a first node (say NODE1 ) goes to NotReady state because of some issue, the hosted pods that are using PVC with ReadWriteOnce (RWO) access mode try to move to the second node (say NODE2 ) but get stuck due to multi-attach error. In such a case, you can recover MON, OSD, and application pods by using the following steps. Procedure Power off NODE1 (from AWS or vSphere side) and ensure that NODE1 is completely down. Force delete the pods on NODE1 by using the following command: 7.7. Recovering from EBS volume detach When an OSD or MON elastic block storage (EBS) volume where the OSD disk resides is detached from the worker Amazon EC2 instance, the volume gets reattached automatically within one or two minutes. However, the OSD pod gets into a CrashLoopBackOff state. To recover and bring back the pod to Running state, you must restart the EC2 instance. 7.8. Enabling and disabling debug logs for rook-ceph-operator Enable the debug logs for the rook-ceph-operator to obtain information about failures that help in troubleshooting issues. Procedure Enabling the debug logs Edit the configmap of the rook-ceph-operator. Add the ROOK_LOG_LEVEL: DEBUG parameter in the rook-ceph-operator-config yaml file to enable the debug logs for rook-ceph-operator. Now, the rook-ceph-operator logs consist of the debug information. Disabling the debug logs Edit the configmap of the rook-ceph-operator. Add the ROOK_LOG_LEVEL: INFO parameter in the rook-ceph-operator-config yaml file to disable the debug logs for rook-ceph-operator. 7.9. Resolving low Ceph monitor count alert The CephMonLowNumber alert is displayed in the notification panel or Alert Center of the OpenShift Web Console to indicate the low number of Ceph monitor count when your internal mode deployment has five or more nodes, racks, or rooms, and when there are five or more number of failure domains in the deployment. You can increase the Ceph monitor count to improve the availability of cluster. Procedure In the CephMonLowNumber alert of the notification panel or Alert Center of OpenShift Web Console, click Configure . In the Configure Ceph Monitor pop up, click Update count. In the pop up, the recommended monitor count depending on the number of failure zones is shown. In the Configure CephMon pop up, update the monitor count value based on the recommended value and click Save changes . 7.10. Troubleshooting unhealthy blocklisted nodes 7.10.1. ODFRBDClientBlocked Meaning This alert indicates that an RADOS Block Device (RBD) client might be blocked by Ceph on a specific node within your Kubernetes cluster. 
The blocklisting occurs when the ocs_rbd_client_blocklisted metric reports a value of 1 for the node. Additionally, there are pods in a CreateContainerError state on the same node. The blocklisting can potentially result in the filesystem for the Persistent Volume Claims (PVCs) using RBD becoming read-only. It is crucial to investigate this alert to prevent any disruption to your storage cluster. Impact High Diagnosis The blocklisting of an RBD client can occur due to several factors, such as network or cluster slowness. In certain cases, the exclusive lock contention among three contending clients (workload, mirror daemon, and manager/scheduler) can lead to the blocklist. Mitigation Taint the blocklisted node: In Kubernetes, consider tainting the node that is blocklisted to trigger the eviction of pods to another node. This approach relies on the assumption that the unmounting/unmapping process progresses gracefully. Once the pods have been successfully evicted, the blocklisted node can be untainted, allowing the blocklist to be cleared. The pods can then be moved back to the untainted node. Reboot the blocklisted node: If tainting the node and evicting the pods do not resolve the blocklisting issue, a reboot of the blocklisted node can be attempted. This step may help alleviate any underlying issues causing the blocklist and restore normal functionality. Important Investigating and resolving the blocklist issue promptly is essential to avoid any further impact on the storage cluster.
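For example, a minimal sketch of the taint-and-untaint flow described above, assuming a blocklisted node named worker-1 and an illustrative taint key (both are hypothetical; adjust them to your environment):
oc adm taint nodes worker-1 rbd-blocklisted=true:NoExecute
oc adm taint nodes worker-1 rbd-blocklisted=true:NoExecute-
The first command evicts the pods to other nodes; once the unmount and unmap complete and the blocklist clears, the second command removes the taint so the pods can return. If a reboot is needed instead, the node can first be cordoned and drained, for example:
oc adm cordon worker-1
oc adm drain worker-1 --ignore-daemonsets --delete-emptydir-data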
|
[
"du -a <path-in-the-mon-node> |sort -n -r |head -n10",
"oc project openshift-storage",
"oc get pod | grep rook-ceph",
"Examine the output for a rook-ceph that is in the pending state, not running or not ready MYPOD= <pod_name>",
"oc get pod/USD{MYPOD} -o wide",
"oc describe pod/USD{MYPOD}",
"oc logs pod/USD{MYPOD}",
"oc adm must-gather --image=registry.redhat.io/odf4/odf-must-gather-rhel9:v4.15",
"oc project openshift-storage",
"get pod | grep {ceph-component}",
"Examine the output for a {ceph-component} that is in the pending state, not running or not ready MYPOD= <pod_name>",
"oc get pod/USD{MYPOD} -o wide",
"oc describe pod/USD{MYPOD}",
"oc logs pod/USD{MYPOD}",
"oc adm must-gather --image=registry.redhat.io/odf4/odf-must-gather-rhel9:v4.15",
"oc project openshift-storage",
"get pod | grep rook-ceph-osd",
"Examine the output for a {ceph-component} that is in the pending state, not running or not ready MYPOD= <pod_name>",
"oc get pod/USD{MYPOD} -o wide",
"oc describe pod/USD{MYPOD}",
"oc logs pod/USD{MYPOD}",
"oc adm must-gather --image=registry.redhat.io/odf4/odf-must-gather-rhel9:v4.15",
"oc patch -n openshift-storage storagecluster ocs-storagecluster --type merge --patch '{\"spec\": {\"resources\": {\"mds\": {\"limits\": {\"memory\": \"16Gi\"},\"requests\": {\"memory\": \"16Gi\"}}}}}'",
"patch -n openshift-storage storagecluster ocs-storagecluster --type merge --patch '{\"spec\": {\"resources\": {\"mds\": {\"limits\": {\"cpu\": \"8\"}, \"requests\": {\"cpu\": \"8\"}}}}}'",
"patch -n openshift-storage storagecluster ocs-storagecluster --type merge --patch '{\"spec\": {\"managedResources\": {\"cephFilesystems\":{\"activeMetadataServers\": 2}}}}'",
"oc project openshift-storage",
"get pod | grep rook-ceph-mds",
"Examine the output for a {ceph-component} that is in the pending state, not running or not ready MYPOD= <pod_name>",
"oc get pod/USD{MYPOD} -o wide",
"oc describe pod/USD{MYPOD}",
"oc logs pod/USD{MYPOD}",
"oc adm must-gather --image=registry.redhat.io/odf4/odf-must-gather-rhel9:v4.15",
"oc get pods | grep mgr",
"oc describe pods/ <pod_name>",
"oc get pods | grep mgr",
"oc project openshift-storage",
"get pod | grep rook-ceph-mgr",
"Examine the output for a {ceph-component} that is in the pending state, not running or not ready MYPOD= <pod_name>",
"oc get pod/USD{MYPOD} -o wide",
"oc describe pod/USD{MYPOD}",
"oc logs pod/USD{MYPOD}",
"oc adm must-gather --image=registry.redhat.io/odf4/odf-must-gather-rhel9:v4.15",
"oc project openshift-storage",
"get pod | grep rook-ceph-mgr",
"Examine the output for a {ceph-component} that is in the pending state, not running or not ready MYPOD= <pod_name>",
"oc get pod/USD{MYPOD} -o wide",
"oc describe pod/USD{MYPOD}",
"oc logs pod/USD{MYPOD}",
"oc adm must-gather --image=registry.redhat.io/odf4/odf-must-gather-rhel9:v4.15",
"oc logs <rook-ceph-mon-X-yyyy> -n openshift-storage",
"oc project openshift-storage",
"get pod | grep {ceph-component}",
"Examine the output for a {ceph-component} that is in the pending state, not running or not ready MYPOD= <pod_name>",
"oc get pod/USD{MYPOD} -o wide",
"oc describe pod/USD{MYPOD}",
"oc logs pod/USD{MYPOD}",
"oc adm must-gather --image=registry.redhat.io/odf4/odf-must-gather-rhel9:v4.15",
"oc project openshift-storage",
"get pod | grep rook-ceph-mon",
"Examine the output for a {ceph-component} that is in the pending state, not running or not ready MYPOD= <pod_name>",
"oc get pod/USD{MYPOD} -o wide",
"oc describe pod/USD{MYPOD}",
"oc logs pod/USD{MYPOD}",
"oc adm must-gather --image=registry.redhat.io/odf4/odf-must-gather-rhel9:v4.15",
"oc project openshift-storage",
"get pod | grep {ceph-component}",
"Examine the output for a {ceph-component} that is in the pending state, not running or not ready MYPOD= <pod_name>",
"oc get pod/USD{MYPOD} -o wide",
"oc describe pod/USD{MYPOD}",
"oc logs pod/USD{MYPOD}",
"oc adm must-gather --image=registry.redhat.io/odf4/odf-must-gather-rhel9:v4.15",
"oc get sub USD(oc get pods -n openshift-storage | grep -v ocs-operator) -n openshift-storage -o json | jq .status.conditions",
"[ { \"lastTransitionTime\": \"2021-01-26T19:21:37Z\", \"message\": \"all available catalogsources are healthy\", \"reason\": \"AllCatalogSourcesHealthy\", \"status\": \"False\", \"type\": \"CatalogSourcesUnhealthy\" } ]",
"oc get pod -n openshift-storage | grep ocs-operator OCSOP=USD(oc get pod -n openshift-storage -o custom-columns=POD:.metadata.name --no-headers | grep ocs-operator) echo USDOCSOP oc get pod/USD{OCSOP} -n openshift-storage oc describe pod/USD{OCSOP} -n openshift-storage",
"oc adm must-gather --image=registry.redhat.io/odf4/odf-must-gather-rhel9:v4.15",
"-n openshift-storage get pods",
"-n openshift-storage get pods",
"-n openshift-storage get pods | grep osd",
"-n openshift-storage describe pods/<osd_podname_ from_the_ previous step>",
"TOOLS_POD=USD(oc get pods -n openshift-storage -l app=rook-ceph-tools -o name) rsh -n openshift-storage USDTOOLS_POD",
"ceph status",
"get nodes --selector='node-role.kubernetes.io/worker','!node-role.kubernetes.io/infra'",
"describe node <node_name>",
"oc adm must-gather --image=registry.redhat.io/odf4/odf-must-gather-rhel9:v4.15",
"oc adm must-gather --image=registry.redhat.io/odf4/odf-must-gather-rhel9:v4.15",
"oc project openshift-storage",
"oc get pod | grep rook-ceph",
"Examine the output for a rook-ceph that is in the pending state, not running or not ready MYPOD= <pod_name>",
"oc get pod/USD{MYPOD} -o wide",
"oc describe pod/USD{MYPOD}",
"oc logs pod/USD{MYPOD}",
"oc adm must-gather --image=registry.redhat.io/odf4/odf-must-gather-rhel9:v4.15",
"get nodes --selector='node-role.kubernetes.io/worker','!node-role.kubernetes.io/infra'",
"describe node <node_name>",
"oc project openshift-storage",
"oc get pod | grep rook-ceph",
"Examine the output for a rook-ceph that is in the pending state, not running or not ready MYPOD= <pod_name>",
"oc get pod/USD{MYPOD} -o wide",
"oc describe pod/USD{MYPOD}",
"oc logs pod/USD{MYPOD}",
"oc adm must-gather --image=registry.redhat.io/odf4/odf-must-gather-rhel9:v4.15",
"ceph daemon osd.<id> ops",
"ceph daemon osd.<id> dump_historic_ops",
"oc get sub USD(oc get pods -n openshift-storage | grep -v ocs-operator) -n openshift-storage -o json | jq .status.conditions",
"[ { \"lastTransitionTime\": \"2021-01-26T19:21:37Z\", \"message\": \"all available catalogsources are healthy\", \"reason\": \"AllCatalogSourcesHealthy\", \"status\": \"False\", \"type\": \"CatalogSourcesUnhealthy\" } ]",
"oc get pod -n openshift-storage | grep ocs-operator OCSOP=USD(oc get pod -n openshift-storage -o custom-columns=POD:.metadata.name --no-headers | grep ocs-operator) echo USDOCSOP oc get pod/USD{OCSOP} -n openshift-storage oc describe pod/USD{OCSOP} -n openshift-storage",
"ceph osd pool set-quota <pool> max_bytes <bytes>",
"ceph osd pool set-quota <pool> max_objects <objects>",
"ceph osd pool set-quota <pool> max_bytes <bytes>",
"ceph osd pool set-quota <pool> max_objects <objects>",
"oc delete pod <pod-name> --grace-period=0 --force",
"oc edit configmap rook-ceph-operator-config",
"... data: # The logging level for the operator: INFO | DEBUG ROOK_LOG_LEVEL: DEBUG",
"oc edit configmap rook-ceph-operator-config",
"... data: # The logging level for the operator: INFO | DEBUG ROOK_LOG_LEVEL: INFO"
] |
https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.17/html/troubleshooting_openshift_data_foundation/troubleshooting-alerts-and-errors-in-openshift-data-foundation
|
17.3. Booleans
|
17.3. Booleans SELinux is based on the least level of access required for a service to run. Services can be run in a variety of ways; therefore, you need to specify how you run your services. Use the following Booleans to set up SELinux: named_write_master_zones When disabled, this Boolean prevents named from writing to zone files or directories labeled with the named_zone_t type. The daemon does not usually need to write to zone files; but in the case that it needs to, or if a secondary server needs to write to zone files, enable this Boolean to allow this action. named_tcp_bind_http_port When enabled, this Boolean allows BIND to bind an Apache port. Note Due to the continuous development of the SELinux policy, the list above might not contain all Booleans related to the service at all times. To list them, enter the following command: Enter the following command to view description of a particular Boolean: Note that the additional policycoreutils-devel package providing the sepolicy utility is required for this command to work.
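For example, a minimal sketch of enabling one of these Booleans persistently and confirming its state; named_write_master_zones is used here only as an illustration, and the -P option writes the change to the policy so it survives reboots:
setsebool -P named_write_master_zones on
getsebool named_write_master_zones
named_write_master_zones --> on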
|
[
"~]USD getsebool -a | grep service_name",
"~]USD sepolicy booleans -b boolean_name"
] |
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/selinux_users_and_administrators_guide/sect-managing_confined_services-bind-booleans
|
Chapter 3. Node Feature Discovery Operator
|
Chapter 3. Node Feature Discovery Operator Learn about the Node Feature Discovery (NFD) Operator and how you can use it to expose node-level information by orchestrating Node Feature Discovery, a Kubernetes add-on for detecting hardware features and system configuration. The Node Feature Discovery Operator (NFD) manages the detection of hardware features and configuration in an OpenShift Container Platform cluster by labeling the nodes with hardware-specific information. NFD labels the host with node-specific attributes, such as PCI cards, kernel, operating system version, and so on. The NFD Operator can be found on the Operator Hub by searching for "Node Feature Discovery". 3.1. Installing the Node Feature Discovery Operator The Node Feature Discovery (NFD) Operator orchestrates all resources needed to run the NFD daemon set. As a cluster administrator, you can install the NFD Operator by using the OpenShift Container Platform CLI or the web console. 3.1.1. Installing the NFD Operator using the CLI As a cluster administrator, you can install the NFD Operator using the CLI. Prerequisites An OpenShift Container Platform cluster Install the OpenShift CLI ( oc ). Log in as a user with cluster-admin privileges. Procedure Create a namespace for the NFD Operator. Create the following Namespace custom resource (CR) that defines the openshift-nfd namespace, and then save the YAML in the nfd-namespace.yaml file. Set cluster-monitoring to "true" . apiVersion: v1 kind: Namespace metadata: name: openshift-nfd labels: name: openshift-nfd openshift.io/cluster-monitoring: "true" Create the namespace by running the following command: USD oc create -f nfd-namespace.yaml Install the NFD Operator in the namespace you created in the step by creating the following objects: Create the following OperatorGroup CR and save the YAML in the nfd-operatorgroup.yaml file: apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: generateName: openshift-nfd- name: openshift-nfd namespace: openshift-nfd spec: targetNamespaces: - openshift-nfd Create the OperatorGroup CR by running the following command: USD oc create -f nfd-operatorgroup.yaml Create the following Subscription CR and save the YAML in the nfd-sub.yaml file: Example Subscription apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: nfd namespace: openshift-nfd spec: channel: "stable" installPlanApproval: Automatic name: nfd source: redhat-operators sourceNamespace: openshift-marketplace Create the subscription object by running the following command: USD oc create -f nfd-sub.yaml Change to the openshift-nfd project: USD oc project openshift-nfd Verification To verify that the Operator deployment is successful, run: USD oc get pods Example output NAME READY STATUS RESTARTS AGE nfd-controller-manager-7f86ccfb58-vgr4x 2/2 Running 0 10m A successful deployment shows a Running status. 3.1.2. Installing the NFD Operator using the web console As a cluster administrator, you can install the NFD Operator using the web console. Procedure In the OpenShift Container Platform web console, click Operators OperatorHub . Choose Node Feature Discovery from the list of available Operators, and then click Install . On the Install Operator page, select A specific namespace on the cluster , and then click Install . You do not need to create a namespace because it is created for you. Verification To verify that the NFD Operator installed successfully: Navigate to the Operators Installed Operators page. 
Ensure that Node Feature Discovery is listed in the openshift-nfd project with a Status of InstallSucceeded . Note During installation an Operator might display a Failed status. If the installation later succeeds with an InstallSucceeded message, you can ignore the Failed message. Troubleshooting If the Operator does not appear as installed, troubleshoot further: Navigate to the Operators Installed Operators page and inspect the Operator Subscriptions and Install Plans tabs for any failure or errors under Status . Navigate to the Workloads Pods page and check the logs for pods in the openshift-nfd project. 3.2. Using the Node Feature Discovery Operator The Node Feature Discovery (NFD) Operator orchestrates all resources needed to run the Node-Feature-Discovery daemon set by watching for a NodeFeatureDiscovery custom resource (CR). Based on the NodeFeatureDiscovery CR, the Operator creates the operand (NFD) components in the selected namespace. You can edit the CR to use another namespace, image, image pull policy, and nfd-worker-conf config map, among other options. As a cluster administrator, you can create a NodeFeatureDiscovery CR by using the OpenShift CLI ( oc ) or the web console. Note Starting with version 4.12, the operand.image field in the NodeFeatureDiscovery CR is mandatory. If the NFD Operator is deployed by using Operator Lifecycle Manager (OLM), OLM automatically sets the operand.image field. If you create the NodeFeatureDiscovery CR by using the OpenShift Container Platform CLI or the OpenShift Container Platform web console, you must set the operand.image field explicitly. 3.2.1. Creating a NodeFeatureDiscovery CR by using the CLI As a cluster administrator, you can create a NodeFeatureDiscovery CR instance by using the OpenShift CLI ( oc ). Note The spec.operand.image setting requires a -rhel9 image to be defined for use with OpenShift Container Platform releases 4.13 and later. The following example shows the use of -rhel9 to acquire the correct image. Prerequisites You have access to an OpenShift Container Platform cluster You installed the OpenShift CLI ( oc ). You logged in as a user with cluster-admin privileges. You installed the NFD Operator. 
Procedure Create a NodeFeatureDiscovery CR: Example NodeFeatureDiscovery CR apiVersion: nfd.openshift.io/v1 kind: NodeFeatureDiscovery metadata: name: nfd-instance namespace: openshift-nfd spec: instance: "" # instance is empty by default topologyupdater: false # False by default operand: image: registry.redhat.io/openshift4/ose-node-feature-discovery-rhel9:v4.18 1 imagePullPolicy: Always workerConfig: configData: | core: # labelWhiteList: # noPublish: false sleepInterval: 60s # sources: [all] # klog: # addDirHeader: false # alsologtostderr: false # logBacktraceAt: # logtostderr: true # skipHeaders: false # stderrthreshold: 2 # v: 0 # vmodule: ## NOTE: the following options are not dynamically run-time configurable ## and require a nfd-worker restart to take effect after being changed # logDir: # logFile: # logFileMaxSize: 1800 # skipLogHeaders: false sources: cpu: cpuid: # NOTE: whitelist has priority over blacklist attributeBlacklist: - "BMI1" - "BMI2" - "CLMUL" - "CMOV" - "CX16" - "ERMS" - "F16C" - "HTT" - "LZCNT" - "MMX" - "MMXEXT" - "NX" - "POPCNT" - "RDRAND" - "RDSEED" - "RDTSCP" - "SGX" - "SSE" - "SSE2" - "SSE3" - "SSE4.1" - "SSE4.2" - "SSSE3" attributeWhitelist: kernel: kconfigFile: "/path/to/kconfig" configOpts: - "NO_HZ" - "X86" - "DMI" pci: deviceClassWhitelist: - "0200" - "03" - "12" deviceLabelFields: - "class" customConfig: configData: | - name: "more.kernel.features" matchOn: - loadedKMod: ["example_kmod3"] 1 The operand.image field is mandatory. Create the NodeFeatureDiscovery CR by running the following command: USD oc apply -f <filename> Verification Check that the NodeFeatureDiscovery CR was created by running the following command: USD oc get pods Example output NAME READY STATUS RESTARTS AGE nfd-controller-manager-7f86ccfb58-vgr4x 2/2 Running 0 11m nfd-master-hcn64 1/1 Running 0 60s nfd-master-lnnxx 1/1 Running 0 60s nfd-master-mp6hr 1/1 Running 0 60s nfd-worker-vgcz9 1/1 Running 0 60s nfd-worker-xqbws 1/1 Running 0 60s A successful deployment shows a Running status. 3.2.2. Creating a NodeFeatureDiscovery CR by using the CLI in a disconnected environment As a cluster administrator, you can create a NodeFeatureDiscovery CR instance by using the OpenShift CLI ( oc ). Prerequisites You have access to an OpenShift Container Platform cluster You installed the OpenShift CLI ( oc ). You logged in as a user with cluster-admin privileges. You installed the NFD Operator. You have access to a mirror registry with the required images. You installed the skopeo CLI tool. Procedure Determine the digest of the registry image: Run the following command: USD skopeo inspect docker://registry.redhat.io/openshift4/ose-node-feature-discovery:<openshift_version> Example command USD skopeo inspect docker://registry.redhat.io/openshift4/ose-node-feature-discovery:v4.12 Inspect the output to identify the image digest: Example output { ... "Digest": "sha256:1234567890abcdef1234567890abcdef1234567890abcdef1234567890abcdef", ... 
} Use the skopeo CLI tool to copy the image from registry.redhat.io to your mirror registry, by running the following command: skopeo copy docker://registry.redhat.io/openshift4/ose-node-feature-discovery@<image_digest> docker://<mirror_registry>/openshift4/ose-node-feature-discovery@<image_digest> Example command skopeo copy docker://registry.redhat.io/openshift4/ose-node-feature-discovery@sha256:1234567890abcdef1234567890abcdef1234567890abcdef1234567890abcdef docker://<your-mirror-registry>/openshift4/ose-node-feature-discovery@sha256:1234567890abcdef1234567890abcdef1234567890abcdef1234567890abcdef Create a NodeFeatureDiscovery CR: Example NodeFeatureDiscovery CR apiVersion: nfd.openshift.io/v1 kind: NodeFeatureDiscovery metadata: name: nfd-instance spec: operand: image: <mirror_registry>/openshift4/ose-node-feature-discovery@<image_digest> 1 imagePullPolicy: Always workerConfig: configData: | core: # labelWhiteList: # noPublish: false sleepInterval: 60s # sources: [all] # klog: # addDirHeader: false # alsologtostderr: false # logBacktraceAt: # logtostderr: true # skipHeaders: false # stderrthreshold: 2 # v: 0 # vmodule: ## NOTE: the following options are not dynamically run-time configurable ## and require a nfd-worker restart to take effect after being changed # logDir: # logFile: # logFileMaxSize: 1800 # skipLogHeaders: false sources: cpu: cpuid: # NOTE: whitelist has priority over blacklist attributeBlacklist: - "BMI1" - "BMI2" - "CLMUL" - "CMOV" - "CX16" - "ERMS" - "F16C" - "HTT" - "LZCNT" - "MMX" - "MMXEXT" - "NX" - "POPCNT" - "RDRAND" - "RDSEED" - "RDTSCP" - "SGX" - "SSE" - "SSE2" - "SSE3" - "SSE4.1" - "SSE4.2" - "SSSE3" attributeWhitelist: kernel: kconfigFile: "/path/to/kconfig" configOpts: - "NO_HZ" - "X86" - "DMI" pci: deviceClassWhitelist: - "0200" - "03" - "12" deviceLabelFields: - "class" customConfig: configData: | - name: "more.kernel.features" matchOn: - loadedKMod: ["example_kmod3"] 1 The operand.image field is mandatory. Create the NodeFeatureDiscovery CR by running the following command: USD oc apply -f <filename> Verification Check the status of the NodeFeatureDiscovery CR by running the following command: USD oc get nodefeaturediscovery nfd-instance -o yaml Check that the pods are running without ImagePullBackOff errors by running the following command: USD oc get pods -n <nfd_namespace> 3.2.3. Creating a NodeFeatureDiscovery CR by using the web console As a cluster administrator, you can create a NodeFeatureDiscovery CR by using the OpenShift Container Platform web console. Prerequisites You have access to an OpenShift Container Platform cluster You logged in as a user with cluster-admin privileges. You installed the NFD Operator. Procedure Navigate to the Operators Installed Operators page. In the Node Feature Discovery section, under Provided APIs , click Create instance . Edit the values of the NodeFeatureDiscovery CR. Click Create . Note Starting with version 4.12, the operand.image field in the NodeFeatureDiscovery CR is mandatory. If the NFD Operator is deployed by using Operator Lifecycle Manager (OLM), OLM automatically sets the operand.image field. If you create the NodeFeatureDiscovery CR by using the OpenShift Container Platform CLI or the OpenShift Container Platform web console, you must set the operand.image field explicitly. 3.3. Configuring the Node Feature Discovery Operator 3.3.1. core The core section contains common configuration settings that are not specific to any particular feature source. 
core.sleepInterval core.sleepInterval specifies the interval between consecutive passes of feature detection or re-detection, and thus also the interval between node re-labeling. A non-positive value implies infinite sleep interval; no re-detection or re-labeling is done. This value is overridden by the deprecated --sleep-interval command line flag, if specified. Example usage core: sleepInterval: 60s 1 The default value is 60s . core.sources core.sources specifies the list of enabled feature sources. A special value all enables all feature sources. This value is overridden by the deprecated --sources command line flag, if specified. Default: [all] Example usage core: sources: - system - custom core.labelWhiteList core.labelWhiteList specifies a regular expression for filtering feature labels based on the label name. Non-matching labels are not published. The regular expression is only matched against the basename part of the label, the part of the name after '/'. The label prefix, or namespace, is omitted. This value is overridden by the deprecated --label-whitelist command line flag, if specified. Default: null Example usage core: labelWhiteList: '^cpu-cpuid' core.noPublish Setting core.noPublish to true disables all communication with the nfd-master . It is effectively a dry run flag; nfd-worker runs feature detection normally, but no labeling requests are sent to nfd-master . This value is overridden by the --no-publish command line flag, if specified. Example: Example usage core: noPublish: true 1 The default value is false . core.klog The following options specify the logger configuration, most of which can be dynamically adjusted at run-time. The logger options can also be specified using command line flags, which take precedence over any corresponding config file options. core.klog.addDirHeader If set to true , core.klog.addDirHeader adds the file directory to the header of the log messages. Default: false Run-time configurable: yes core.klog.alsologtostderr Log to standard error as well as files. Default: false Run-time configurable: yes core.klog.logBacktraceAt When logging hits line file:N, emit a stack trace. Default: empty Run-time configurable: yes core.klog.logDir If non-empty, write log files in this directory. Default: empty Run-time configurable: no core.klog.logFile If not empty, use this log file. Default: empty Run-time configurable: no core.klog.logFileMaxSize core.klog.logFileMaxSize defines the maximum size a log file can grow to. Unit is megabytes. If the value is 0 , the maximum file size is unlimited. Default: 1800 Run-time configurable: no core.klog.logtostderr Log to standard error instead of files Default: true Run-time configurable: yes core.klog.skipHeaders If core.klog.skipHeaders is set to true , avoid header prefixes in the log messages. Default: false Run-time configurable: yes core.klog.skipLogHeaders If core.klog.skipLogHeaders is set to true , avoid headers when opening log files. Default: false Run-time configurable: no core.klog.stderrthreshold Logs at or above this threshold go to stderr. Default: 2 Run-time configurable: yes core.klog.v core.klog.v is the number for the log level verbosity. Default: 0 Run-time configurable: yes core.klog.vmodule core.klog.vmodule is a comma-separated list of pattern=N settings for file-filtered logging. Default: empty Run-time configurable: yes 3.3.2. sources The sources section contains feature source specific configuration parameters. 
sources.cpu.cpuid.attributeBlacklist Prevent publishing cpuid features listed in this option. This value is overridden by sources.cpu.cpuid.attributeWhitelist , if specified. Default: [BMI1, BMI2, CLMUL, CMOV, CX16, ERMS, F16C, HTT, LZCNT, MMX, MMXEXT, NX, POPCNT, RDRAND, RDSEED, RDTSCP, SGX, SGXLC, SSE, SSE2, SSE3, SSE4.1, SSE4.2, SSSE3] Example usage sources: cpu: cpuid: attributeBlacklist: [MMX, MMXEXT] sources.cpu.cpuid.attributeWhitelist Only publish the cpuid features listed in this option. sources.cpu.cpuid.attributeWhitelist takes precedence over sources.cpu.cpuid.attributeBlacklist . Default: empty Example usage sources: cpu: cpuid: attributeWhitelist: [AVX512BW, AVX512CD, AVX512DQ, AVX512F, AVX512VL] sources.kernel.kconfigFile sources.kernel.kconfigFile is the path of the kernel config file. If empty, NFD runs a search in the well-known standard locations. Default: empty Example usage sources: kernel: kconfigFile: "/path/to/kconfig" sources.kernel.configOpts sources.kernel.configOpts represents kernel configuration options to publish as feature labels. Default: [NO_HZ, NO_HZ_IDLE, NO_HZ_FULL, PREEMPT] Example usage sources: kernel: configOpts: [NO_HZ, X86, DMI] sources.pci.deviceClassWhitelist sources.pci.deviceClassWhitelist is a list of PCI device class IDs for which to publish a label. It can be specified as a main class only (for example, 03 ) or full class-subclass combination (for example 0300 ). The former implies that all subclasses are accepted. The format of the labels can be further configured with deviceLabelFields . Default: ["03", "0b40", "12"] Example usage sources: pci: deviceClassWhitelist: ["0200", "03"] sources.pci.deviceLabelFields sources.pci.deviceLabelFields is the set of PCI ID fields to use when constructing the name of the feature label. Valid fields are class , vendor , device , subsystem_vendor and subsystem_device . Default: [class, vendor] Example usage sources: pci: deviceLabelFields: [class, vendor, device] With the example config above, NFD would publish labels such as feature.node.kubernetes.io/pci-<class-id>_<vendor-id>_<device-id>.present=true sources.usb.deviceClassWhitelist sources.usb.deviceClassWhitelist is a list of USB device class IDs for which to publish a feature label. The format of the labels can be further configured with deviceLabelFields . Default: ["0e", "ef", "fe", "ff"] Example usage sources: usb: deviceClassWhitelist: ["ef", "ff"] sources.usb.deviceLabelFields sources.usb.deviceLabelFields is the set of USB ID fields from which to compose the name of the feature label. Valid fields are class , vendor , and device . Default: [class, vendor, device] Example usage sources: pci: deviceLabelFields: [class, vendor] With the example config above, NFD would publish labels like: feature.node.kubernetes.io/usb-<class-id>_<vendor-id>.present=true . sources.custom sources.custom is the list of rules to process in the custom feature source to create user-specific labels. Default: empty Example usage source: custom: - name: "my.custom.feature" matchOn: - loadedKMod: ["e1000e"] - pciId: class: ["0200"] vendor: ["8086"] 3.4. About the NodeFeatureRule custom resource NodeFeatureRule objects are a NodeFeatureDiscovery custom resource designed for rule-based custom labeling of nodes. Some use cases include application-specific labeling or distribution by hardware vendors to create specific labels for their devices. NodeFeatureRule objects provide a method to create vendor- or application-specific labels and taints. 
It uses a flexible rule-based mechanism for creating labels and optionally taints based on node features. 3.5. Using the NodeFeatureRule custom resource Create a NodeFeatureRule object to label nodes if a set of rules match the conditions. Procedure Create a custom resource file named nodefeaturerule.yaml that contains the following text: apiVersion: nfd.openshift.io/v1 kind: NodeFeatureRule metadata: name: example-rule spec: rules: - name: "example rule" labels: "example-custom-feature": "true" # Label is created if all of the rules below match matchFeatures: # Match if "veth" kernel module is loaded - feature: kernel.loadedmodule matchExpressions: veth: {op: Exists} # Match if any PCI device with vendor 8086 exists in the system - feature: pci.device matchExpressions: vendor: {op: In, value: ["8086"]} This custom resource specifies that labelling occurs when the veth module is loaded and any PCI device with vendor code 8086 exists in the cluster. Apply the nodefeaturerule.yaml file to your cluster by running the following command: USD oc apply -f https://raw.githubusercontent.com/kubernetes-sigs/node-feature-discovery/v0.13.6/examples/nodefeaturerule.yaml The example applies the feature label on nodes with the veth module loaded and any PCI device with vendor code 8086 exists. Note A relabeling delay of up to 1 minute might occur. 3.6. Using the NFD Topology Updater The Node Feature Discovery (NFD) Topology Updater is a daemon responsible for examining allocated resources on a worker node. It accounts for resources that are available to be allocated to new pod on a per-zone basis, where a zone can be a Non-Uniform Memory Access (NUMA) node. The NFD Topology Updater communicates the information to nfd-master, which creates a NodeResourceTopology custom resource (CR) corresponding to all of the worker nodes in the cluster. One instance of the NFD Topology Updater runs on each node of the cluster. To enable the Topology Updater workers in NFD, set the topologyupdater variable to true in the NodeFeatureDiscovery CR, as described in the section Using the Node Feature Discovery Operator . 3.6.1. NodeResourceTopology CR When run with NFD Topology Updater, NFD creates custom resource instances corresponding to the node resource hardware topology, such as: apiVersion: topology.node.k8s.io/v1alpha1 kind: NodeResourceTopology metadata: name: node1 topologyPolicies: ["SingleNUMANodeContainerLevel"] zones: - name: node-0 type: Node resources: - name: cpu capacity: 20 allocatable: 16 available: 10 - name: vendor/nic1 capacity: 3 allocatable: 3 available: 3 - name: node-1 type: Node resources: - name: cpu capacity: 30 allocatable: 30 available: 15 - name: vendor/nic2 capacity: 6 allocatable: 6 available: 6 - name: node-2 type: Node resources: - name: cpu capacity: 30 allocatable: 30 available: 15 - name: vendor/nic1 capacity: 3 allocatable: 3 available: 3 3.6.2. NFD Topology Updater command line flags To view available command line flags, run the nfd-topology-updater -help command. For example, in a podman container, run the following command: USD podman run gcr.io/k8s-staging-nfd/node-feature-discovery:master nfd-topology-updater -help -ca-file The -ca-file flag is one of the three flags, together with the -cert-file and `-key-file`flags, that controls the mutual TLS authentication on the NFD Topology Updater. This flag specifies the TLS root certificate that is used for verifying the authenticity of nfd-master. 
Default: empty Important The -ca-file flag must be specified together with the -cert-file and -key-file flags. Example USD nfd-topology-updater -ca-file=/opt/nfd/ca.crt -cert-file=/opt/nfd/updater.crt -key-file=/opt/nfd/updater.key -cert-file The -cert-file flag is one of the three flags, together with the -ca-file and -key-file flags, that controls mutual TLS authentication on the NFD Topology Updater. This flag specifies the TLS certificate presented for authenticating outgoing requests. Default: empty Important The -cert-file flag must be specified together with the -ca-file and -key-file flags. Example USD nfd-topology-updater -cert-file=/opt/nfd/updater.crt -key-file=/opt/nfd/updater.key -ca-file=/opt/nfd/ca.crt -h, -help Print usage and exit. -key-file The -key-file flag is one of the three flags, together with the -ca-file and -cert-file flags, that controls the mutual TLS authentication on the NFD Topology Updater. This flag specifies the private key corresponding to the given certificate file, or -cert-file , that is used for authenticating outgoing requests. Default: empty Important The -key-file flag must be specified together with the -ca-file and -cert-file flags. Example USD nfd-topology-updater -key-file=/opt/nfd/updater.key -cert-file=/opt/nfd/updater.crt -ca-file=/opt/nfd/ca.crt -kubelet-config-file The -kubelet-config-file flag specifies the path to the Kubelet's configuration file. Default: /host-var/lib/kubelet/config.yaml Example USD nfd-topology-updater -kubelet-config-file=/var/lib/kubelet/config.yaml -no-publish The -no-publish flag disables all communication with the nfd-master, making it a dry run flag for nfd-topology-updater. NFD Topology Updater runs resource hardware topology detection normally, but no CR requests are sent to nfd-master. Default: false Example USD nfd-topology-updater -no-publish -oneshot The -oneshot flag causes the NFD Topology Updater to exit after one pass of resource hardware topology detection. Default: false Example USD nfd-topology-updater -oneshot -no-publish -podresources-socket The -podresources-socket flag specifies the path to the Unix socket where kubelet exports a gRPC service to enable discovery of in-use CPUs and devices, and to provide metadata for them. Default: /host-var/lib/kubelet/pod-resources/kubelet.sock Example USD nfd-topology-updater -podresources-socket=/var/lib/kubelet/pod-resources/kubelet.sock -server The -server flag specifies the address of the nfd-master endpoint to connect to. Default: localhost:8080 Example USD nfd-topology-updater -server=nfd-master.nfd.svc.cluster.local:443 -server-name-override The -server-name-override flag specifies the common name (CN) to expect from the nfd-master TLS certificate. This flag is mostly intended for development and debugging purposes. Default: empty Example USD nfd-topology-updater -server-name-override=localhost -sleep-interval The -sleep-interval flag specifies the interval between resource hardware topology re-examination and custom resource updates. A non-positive value implies an infinite sleep interval and no re-detection is done. Default: 60s Example USD nfd-topology-updater -sleep-interval=1h -version Print version and exit. -watch-namespace The -watch-namespace flag specifies the namespace to ensure that resource hardware topology examination only happens for the pods running in the specified namespace. Pods that are not running in the specified namespace are not considered during resource accounting.
This is particularly useful for testing and debugging purposes. A * value means that all of the pods across all namespaces are considered during the accounting process. Default: * Example USD nfd-topology-updater -watch-namespace=rte
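To enable the Topology Updater through the NFD Operator, set topologyupdater to true in the NodeFeatureDiscovery CR, as in the following minimal sketch; the namespace and operand image match the defaults shown earlier in this chapter:
apiVersion: nfd.openshift.io/v1
kind: NodeFeatureDiscovery
metadata:
  name: nfd-instance
  namespace: openshift-nfd
spec:
  topologyupdater: true
  operand:
    image: registry.redhat.io/openshift4/ose-node-feature-discovery-rhel9:v4.18
    imagePullPolicy: Always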
|
[
"apiVersion: v1 kind: Namespace metadata: name: openshift-nfd labels: name: openshift-nfd openshift.io/cluster-monitoring: \"true\"",
"oc create -f nfd-namespace.yaml",
"apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: generateName: openshift-nfd- name: openshift-nfd namespace: openshift-nfd spec: targetNamespaces: - openshift-nfd",
"oc create -f nfd-operatorgroup.yaml",
"apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: nfd namespace: openshift-nfd spec: channel: \"stable\" installPlanApproval: Automatic name: nfd source: redhat-operators sourceNamespace: openshift-marketplace",
"oc create -f nfd-sub.yaml",
"oc project openshift-nfd",
"oc get pods",
"NAME READY STATUS RESTARTS AGE nfd-controller-manager-7f86ccfb58-vgr4x 2/2 Running 0 10m",
"apiVersion: nfd.openshift.io/v1 kind: NodeFeatureDiscovery metadata: name: nfd-instance namespace: openshift-nfd spec: instance: \"\" # instance is empty by default topologyupdater: false # False by default operand: image: registry.redhat.io/openshift4/ose-node-feature-discovery-rhel9:v4.18 1 imagePullPolicy: Always workerConfig: configData: | core: # labelWhiteList: # noPublish: false sleepInterval: 60s # sources: [all] # klog: # addDirHeader: false # alsologtostderr: false # logBacktraceAt: # logtostderr: true # skipHeaders: false # stderrthreshold: 2 # v: 0 # vmodule: ## NOTE: the following options are not dynamically run-time configurable ## and require a nfd-worker restart to take effect after being changed # logDir: # logFile: # logFileMaxSize: 1800 # skipLogHeaders: false sources: cpu: cpuid: # NOTE: whitelist has priority over blacklist attributeBlacklist: - \"BMI1\" - \"BMI2\" - \"CLMUL\" - \"CMOV\" - \"CX16\" - \"ERMS\" - \"F16C\" - \"HTT\" - \"LZCNT\" - \"MMX\" - \"MMXEXT\" - \"NX\" - \"POPCNT\" - \"RDRAND\" - \"RDSEED\" - \"RDTSCP\" - \"SGX\" - \"SSE\" - \"SSE2\" - \"SSE3\" - \"SSE4.1\" - \"SSE4.2\" - \"SSSE3\" attributeWhitelist: kernel: kconfigFile: \"/path/to/kconfig\" configOpts: - \"NO_HZ\" - \"X86\" - \"DMI\" pci: deviceClassWhitelist: - \"0200\" - \"03\" - \"12\" deviceLabelFields: - \"class\" customConfig: configData: | - name: \"more.kernel.features\" matchOn: - loadedKMod: [\"example_kmod3\"]",
"oc apply -f <filename>",
"oc get pods",
"NAME READY STATUS RESTARTS AGE nfd-controller-manager-7f86ccfb58-vgr4x 2/2 Running 0 11m nfd-master-hcn64 1/1 Running 0 60s nfd-master-lnnxx 1/1 Running 0 60s nfd-master-mp6hr 1/1 Running 0 60s nfd-worker-vgcz9 1/1 Running 0 60s nfd-worker-xqbws 1/1 Running 0 60s",
"skopeo inspect docker://registry.redhat.io/openshift4/ose-node-feature-discovery:<openshift_version>",
"skopeo inspect docker://registry.redhat.io/openshift4/ose-node-feature-discovery:v4.12",
"{ \"Digest\": \"sha256:1234567890abcdef1234567890abcdef1234567890abcdef1234567890abcdef\", }",
"skopeo copy docker://registry.redhat.io/openshift4/ose-node-feature-discovery@<image_digest> docker://<mirror_registry>/openshift4/ose-node-feature-discovery@<image_digest>",
"skopeo copy docker://registry.redhat.io/openshift4/ose-node-feature-discovery@sha256:1234567890abcdef1234567890abcdef1234567890abcdef1234567890abcdef docker://<your-mirror-registry>/openshift4/ose-node-feature-discovery@sha256:1234567890abcdef1234567890abcdef1234567890abcdef1234567890abcdef",
"apiVersion: nfd.openshift.io/v1 kind: NodeFeatureDiscovery metadata: name: nfd-instance spec: operand: image: <mirror_registry>/openshift4/ose-node-feature-discovery@<image_digest> 1 imagePullPolicy: Always workerConfig: configData: | core: # labelWhiteList: # noPublish: false sleepInterval: 60s # sources: [all] # klog: # addDirHeader: false # alsologtostderr: false # logBacktraceAt: # logtostderr: true # skipHeaders: false # stderrthreshold: 2 # v: 0 # vmodule: ## NOTE: the following options are not dynamically run-time configurable ## and require a nfd-worker restart to take effect after being changed # logDir: # logFile: # logFileMaxSize: 1800 # skipLogHeaders: false sources: cpu: cpuid: # NOTE: whitelist has priority over blacklist attributeBlacklist: - \"BMI1\" - \"BMI2\" - \"CLMUL\" - \"CMOV\" - \"CX16\" - \"ERMS\" - \"F16C\" - \"HTT\" - \"LZCNT\" - \"MMX\" - \"MMXEXT\" - \"NX\" - \"POPCNT\" - \"RDRAND\" - \"RDSEED\" - \"RDTSCP\" - \"SGX\" - \"SSE\" - \"SSE2\" - \"SSE3\" - \"SSE4.1\" - \"SSE4.2\" - \"SSSE3\" attributeWhitelist: kernel: kconfigFile: \"/path/to/kconfig\" configOpts: - \"NO_HZ\" - \"X86\" - \"DMI\" pci: deviceClassWhitelist: - \"0200\" - \"03\" - \"12\" deviceLabelFields: - \"class\" customConfig: configData: | - name: \"more.kernel.features\" matchOn: - loadedKMod: [\"example_kmod3\"]",
"oc apply -f <filename>",
"oc get nodefeaturediscovery nfd-instance -o yaml",
"oc get pods -n <nfd_namespace>",
"core: sleepInterval: 60s 1",
"core: sources: - system - custom",
"core: labelWhiteList: '^cpu-cpuid'",
"core: noPublish: true 1",
"sources: cpu: cpuid: attributeBlacklist: [MMX, MMXEXT]",
"sources: cpu: cpuid: attributeWhitelist: [AVX512BW, AVX512CD, AVX512DQ, AVX512F, AVX512VL]",
"sources: kernel: kconfigFile: \"/path/to/kconfig\"",
"sources: kernel: configOpts: [NO_HZ, X86, DMI]",
"sources: pci: deviceClassWhitelist: [\"0200\", \"03\"]",
"sources: pci: deviceLabelFields: [class, vendor, device]",
"sources: usb: deviceClassWhitelist: [\"ef\", \"ff\"]",
"sources: pci: deviceLabelFields: [class, vendor]",
"source: custom: - name: \"my.custom.feature\" matchOn: - loadedKMod: [\"e1000e\"] - pciId: class: [\"0200\"] vendor: [\"8086\"]",
"apiVersion: nfd.openshift.io/v1 kind: NodeFeatureRule metadata: name: example-rule spec: rules: - name: \"example rule\" labels: \"example-custom-feature\": \"true\" # Label is created if all of the rules below match matchFeatures: # Match if \"veth\" kernel module is loaded - feature: kernel.loadedmodule matchExpressions: veth: {op: Exists} # Match if any PCI device with vendor 8086 exists in the system - feature: pci.device matchExpressions: vendor: {op: In, value: [\"8086\"]}",
"oc apply -f https://raw.githubusercontent.com/kubernetes-sigs/node-feature-discovery/v0.13.6/examples/nodefeaturerule.yaml",
"apiVersion: topology.node.k8s.io/v1alpha1 kind: NodeResourceTopology metadata: name: node1 topologyPolicies: [\"SingleNUMANodeContainerLevel\"] zones: - name: node-0 type: Node resources: - name: cpu capacity: 20 allocatable: 16 available: 10 - name: vendor/nic1 capacity: 3 allocatable: 3 available: 3 - name: node-1 type: Node resources: - name: cpu capacity: 30 allocatable: 30 available: 15 - name: vendor/nic2 capacity: 6 allocatable: 6 available: 6 - name: node-2 type: Node resources: - name: cpu capacity: 30 allocatable: 30 available: 15 - name: vendor/nic1 capacity: 3 allocatable: 3 available: 3",
"podman run gcr.io/k8s-staging-nfd/node-feature-discovery:master nfd-topology-updater -help",
"nfd-topology-updater -ca-file=/opt/nfd/ca.crt -cert-file=/opt/nfd/updater.crt -key-file=/opt/nfd/updater.key",
"nfd-topology-updater -cert-file=/opt/nfd/updater.crt -key-file=/opt/nfd/updater.key -ca-file=/opt/nfd/ca.crt",
"nfd-topology-updater -key-file=/opt/nfd/updater.key -cert-file=/opt/nfd/updater.crt -ca-file=/opt/nfd/ca.crt",
"nfd-topology-updater -kubelet-config-file=/var/lib/kubelet/config.yaml",
"nfd-topology-updater -no-publish",
"nfd-topology-updater -oneshot -no-publish",
"nfd-topology-updater -podresources-socket=/var/lib/kubelet/pod-resources/kubelet.sock",
"nfd-topology-updater -server=nfd-master.nfd.svc.cluster.local:443",
"nfd-topology-updater -server-name-override=localhost",
"nfd-topology-updater -sleep-interval=1h",
"nfd-topology-updater -watch-namespace=rte"
] |
https://docs.redhat.com/en/documentation/openshift_container_platform/4.18/html/specialized_hardware_and_driver_enablement/psap-node-feature-discovery-operator
|
5.263. python
|
5.263. python 5.263.1. RHBA-2012:1250 - python bug fix update Updated python packages that fix a bug are now available for Red Hat Enterprise Linux 6. Python is an interpreted, interactive, object-oriented programming language. Python includes modules, classes, exceptions, high-level dynamic data types, and dynamic typing. Python supports interfaces to many system calls and libraries, as well as to various windowing systems (X11, Motif, Tk, Mac and MFC). Bug Fix BZ# 848815 As part of the fix for CVE-2012-0876, a new symbol ("XML_SetHashSalt") was added to the system libexpat library, which Python's standard library uses within the pyexpat module. If an unpatched libexpat.so.1 was present in a directory listed in LD_LIBRARY_PATH, then attempts to use the pyexpat module (such as within yum) would fail with an ImportError exception. This update adds an RPATH directive to pyexpat to ensure that the system libexpat is used by pyexpat, regardless of whether there is an unpatched libexpat within the LD_LIBRARY_PATH, thus preventing the ImportError exception. All Python users are advised to upgrade to these updated packages, which fix this bug.
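When verifying the fix, it can help to confirm which libexpat shared object the pyexpat extension module actually resolves to. A minimal check on a system where the module imports successfully is the following; the commands are a sketch and the exact module path depends on your installation:
# Print the location of the pyexpat extension module
python -c 'import pyexpat; print(pyexpat.__file__)'
# List the expat library that the module links against
ldd "$(python -c 'import pyexpat; print(pyexpat.__file__)')" | grep expat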
| null |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.3_technical_notes/python
|
Chapter 1. Overview of the OpenShift Data Foundation update process
|
Chapter 1. Overview of the OpenShift Data Foundation update process This chapter helps you to upgrade between the minor releases and z-streams for all Red Hat OpenShift Data Foundation deployments (Internal, Internal-Attached and External). The upgrade process remains the same for all deployments. You can upgrade OpenShift Data Foundation and its components, either between minor releases like 4.15 and 4.16, or between z-stream updates like 4.16.0 and 4.16.1 by enabling automatic updates (if you did not do so during operator installation) or performing manual updates. When a new z-stream release becomes available, the upgrade process triggers automatically if the update strategy was set to Automatic. Extended Update Support (EUS) EUS to EUS upgrade in OpenShift Data Foundation is sequential and it is aligned with OpenShift upgrade. For more information, see Performing an EUS-to-EUS update and EUS-to-EUS update for layered products and Operators installed through Operator Lifecycle Manager . For EUS upgrade of OpenShift Container Platform and OpenShift Data Foundation, make sure that OpenShift Data Foundation is upgraded along with OpenShift Container Platform and compatibility between OpenShift Data Foundation and OpenShift Container Platform is always maintained. Example workflow of EUS upgrade: Pause the worker machine pools. Update OpenShift <4.y> to OpenShift <4.y+1>. Update OpenShift Data Foundation <4.y> to OpenShift Data Foundation <4.y+1>. Update OpenShift <4.y+1> to OpenShift <4.y+2>. Update to OpenShift Data Foundation <4.y+2>. Unpause the worker machine pools. Note You can update to ODF <4.y+2> either before or after worker machine pools are unpaused. Important When you update OpenShift Data Foundation in external mode, make sure that the Red Hat Ceph Storage and OpenShift Data Foundation versions are compatible. For more information about supported Red Hat Ceph Storage versions in external mode, refer to Red Hat OpenShift Data Foundation Supportability and Interoperability Checker . Provide the required OpenShift Data Foundation version in the checker to see the supported Red Hat Ceph version corresponding to the version in use. You also need to upgrade the different parts of Red Hat OpenShift Data Foundation in the following order for both internal and external mode deployments: Update OpenShift Container Platform according to the Updating clusters documentation for OpenShift Container Platform. Update Red Hat OpenShift Data Foundation. To prepare a disconnected environment for updates , see Operators guide to using Operator Lifecycle Manager on restricted networks to be able to update OpenShift Data Foundation as well as Local Storage Operator when in use. For updating between minor releases , see Updating Red Hat OpenShift Data Foundation 4.15 to 4.16 . For updating between z-stream releases , see Updating Red Hat OpenShift Data Foundation 4.16.x to 4.16.y . For updating external mode deployments , you must also perform the steps from section Updating the Red Hat OpenShift Data Foundation external secret . If you use local storage, then update the Local Storage operator . See Checking for Local Storage Operator deployments if you are unsure. Important If you have an existing setup of OpenShift Data Foundation 4.12 with disaster recovery (DR) enabled, ensure that you update all your clusters in the environment at the same time and avoid updating a single cluster. This is to avoid any potential issues and maintain best compatibility.
It is also important to maintain consistency across all OpenShift Data Foundation DR instances. Update considerations Review the following important considerations before you begin. The Red Hat OpenShift Container Platform version is the same as Red Hat OpenShift Data Foundation. See the Interoperability Matrix for more information about supported combinations of OpenShift Container Platform and Red Hat OpenShift Data Foundation. To know whether your cluster was deployed in internal or external mode, refer to the knowledgebase article on How to determine if ODF cluster has storage in internal or external mode . The Local Storage Operator is fully supported only when the Local Storage Operator version matches the Red Hat OpenShift Container Platform version. Important The Multicloud Object Gateway only has a single copy of the database (NooBaa DB). This means that if the NooBaa DB PVC gets corrupted and cannot be recovered, it can result in total data loss of applicative data residing on the Multicloud Object Gateway. Because of this, Red Hat recommends taking a backup of the NooBaa DB PVC regularly. If NooBaa DB fails and cannot be recovered, then you can revert to the latest backed-up version. For instructions on backing up your NooBaa DB, follow the steps in this knowledgebase article .
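For reference, pausing and unpausing the worker machine pools in the EUS workflow is typically done with oc patch commands like the following sketch; see the linked EUS update documentation for the authoritative procedure:
# Pause the worker machine config pool before starting the EUS update
oc patch mcp/worker --type merge --patch '{"spec":{"paused":true}}'
# Unpause the pool after OpenShift Container Platform and OpenShift Data Foundation are updated
oc patch mcp/worker --type merge --patch '{"spec":{"paused":false}}'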
| null |
https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.16/html/updating_openshift_data_foundation/overview-of-the-openshift-data-foundation-update-process_rhodf
|
Chapter 9. Sources
|
Chapter 9. Sources The updated Red Hat Ceph Storage source code packages are available at the following location: For Red Hat Enterprise Linux 8: http://ftp.redhat.com/redhat/linux/enterprise/8Base/en/RHCEPH/SRPMS/ For Red Hat Enterprise Linux 9: http://ftp.redhat.com/redhat/linux/enterprise/9Base/en/RHCEPH/SRPMS/
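If you want to examine the sources locally, you can download an individual source RPM from one of these locations and unpack it; the file name below is a hypothetical example:
# Download a source RPM (the exact file name depends on the release)
wget http://ftp.redhat.com/redhat/linux/enterprise/9Base/en/RHCEPH/SRPMS/ceph-<version>.el9cp.src.rpm
# Unpack the SRPM into the current directory without installing it
rpm2cpio ceph-<version>.el9cp.src.rpm | cpio -idmv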
| null |
https://docs.redhat.com/en/documentation/red_hat_ceph_storage/5/html/5.3_release_notes/sources
|
Chapter 3. Upgrading Red Hat Enterprise Linux on Satellite or Capsule
|
Chapter 3. Upgrading Red Hat Enterprise Linux on Satellite or Capsule Satellite and Capsule are supported on both Red Hat Enterprise Linux 8 and Red Hat Enterprise Linux 9. You can use the following methods to upgrade your Satellite or Capsule operating system from Red Hat Enterprise Linux 8 to Red Hat Enterprise Linux 9: Leapp in-place upgrade With Leapp, you can upgrade your Satellite or Capsule in-place therefore it is faster but imposes a downtime on the services. Migration by using cloning The Red Hat Enterprise Linux 8 system remains operational during the migration using cloning, which reduces the downtime. You cannot use cloning for Capsule Server migrations. Migration by using backup and restore The Red Hat Enterprise Linux 8 system remains operational during the migration using cloning, which reduces the downtime. You can use backup and restore for migrating both Satellite and Capsule operating system from Red Hat Enterprise Linux 8 to Red Hat Enterprise Linux 9. 3.1. Upgrading Satellite or Capsule to RHEL 9 in-place by using Leapp You can use the Leapp tool to upgrade as well as to help detect and resolve issues that could prevent you from upgrading successfully. Prerequisites Review known issues before you begin an upgrade. For more information, see Known issues in Red Hat Satellite 6.16 . If you use an HTTP proxy in your environment, configure the Subscription Manager to use the HTTP proxy for connection. For more information, see Troubleshooting in Upgrading from RHEL 8 to RHEL 9 . Satellite 6.16 or Capsule 6.16 running on Red Hat Enterprise Linux 8. If you are upgrading Capsule Servers, enable and synchronize the following repositories to Satellite Server, and add them to the lifecycle environment and content view that is attached to your Capsule Server: Red Hat Enterprise Linux 9 for x86_64 - BaseOS (RPMs) : rhel-9-for-x86_64-baseos-rpms for the major version: x86_64 9 . rhel-9-for-x86_64-baseos-rpms for the latest supported minor version: x86_64 9. Y , where Y represents the minor version. For information about the latest supported minor version for in-place upgrades, see Supported upgrade paths in Upgrading from RHEL 8 to RHEL 9 . Red Hat Enterprise Linux 9 for x86_64 - AppStream (RPMs) : rhel-9-for-x86_64-appstream-rpms for the major version: x86_64 9 . rhel-9-for-x86_64-appstream-rpms for the latest supported minor version: x86_64 9. Y , where Y represents the minor version. For information about the latest supported minor versions for in-place upgrades, see Supported upgrade paths in Upgrading from RHEL 8 to RHEL 9 . Red Hat Satellite Capsule 6.16 for RHEL 9 x86_64 RPMs : satellite-capsule-6.16-for-rhel-9-x86_64-rpms Red Hat Satellite Maintenance 6.16 for RHEL 9 x86_64 RPMs : satellite-maintenance-6.16-for-rhel-9-x86_64-rpms You require access to Red Hat Enterprise Linux and Satellite packages. Obtain the ISO files for Red Hat Enterprise Linux 9 and Satellite 6.16. For more information, see Downloading the Binary DVD Images in Installing Satellite Server in a disconnected network environment . Procedure Install required packages: Set up the required repositories to perform the upgrade in a disconnected environment. Important The required repositories cannot be served from a locally mounted ISO but must be delivered over the network from a different machine. Leapp completes part of the upgrade in a container that has no access to additional ISO mounts. 
Add the following lines to /etc/yum.repos.d/rhel9.repo : Add the following lines to /etc/yum.repos.d/satellite.repo: Let Leapp analyze your system: The first run will most likely report issues and inhibit the upgrade. Examine the report in the /var/log/leapp/leapp-report.txt file, answer all questions by using leapp answer , and manually resolve other reported problems. Run leapp preupgrade again and make sure that it does not report any more issues. Let Leapp create the upgrade environment: Reboot the system to start the upgrade. After the system reboots, a live system conducts the upgrade, reboots to fix SELinux labels and then reboots into the final Red Hat Enterprise Linux 9 system. Wait for Leapp to finish the upgrade. You can monitor the process with journalctl : Unlock packages: Verify the post-upgrade state. For more information, see Verifying the post-upgrade state in Upgrading from RHEL 8 to RHEL 9 . Perform post-upgrade tasks on the RHEL 9 system. For more information, see Performing post-upgrade tasks on the RHEL 9 system in Upgrading from RHEL 8 to RHEL 9 . Lock packages: Change SELinux to enforcing mode. For more information, see Changing SELinux mode to enforcing in Upgrading from RHEL 8 to RHEL 9 . Additional resources For more information on customizing the Leapp upgrade for your environment, see Customizing your Red Hat Enterprise Linux in-place upgrade . For more information, see How to in-place upgrade an offline / disconnected RHEL 8 machine to RHEL 9 with Leapp? 3.2. Migrating Satellite to RHEL 9 by using cloning You can clone your existing Satellite Server from Red Hat Enterprise Linux 8 to a freshly installed Red Hat Enterprise Linux 9 system. Create a backup of the existing Satellite Server, which you then clone on the new Red Hat Enterprise Linux 9 system. Note You cannot use cloning for Capsule Server backups. Procedure Perform a full backup of your Satellite Server. This is the source Red Hat Enterprise Linux 8 server that you are migrating. For more information, see Performing a full backup of Satellite Server in Administering Red Hat Satellite . Deploy a system with Red Hat Enterprise Linux 9 and the same configuration as the source server. This is the target server. Clone the server. Clone configures hostname for the target server. For more information, see Cloning Satellite Server in Administering Red Hat Satellite 3.3. Migrating Satellite or Capsule to RHEL 9 using backup and restore You can migrate your existing Satellite Server and Capsule Server from Red Hat Enterprise Linux 8 to a freshly installed Red Hat Enterprise Linux 9 system. The migration involves creating a backup of the existing Satellite Server and Capsule Server, which you then restore on the new Red Hat Enterprise Linux 9 system. Procedure Perform a full backup of your Satellite Server or Capsule. This is the source Red Hat Enterprise Linux 8 server that you are migrating. For more information, see Performing a full backup of Satellite Server or Capsule Server in Administering Red Hat Satellite . Deploy a system with Red Hat Enterprise Linux 9 and the same hostname and configuration as the source server. This is the target server. Restore the backup. Restore does not significantly alter the target system and requires additional configuration. For more information, see Restoring Satellite Server or Capsule Server from a backup in Administering Red Hat Satellite . Restore the Capsule Server backup. 
For more information, see Restoring Satellite Server or Capsule Server from a backup in Administering Red Hat Satellite .
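As a sketch of the backup and restore commands involved in this migration (the directory names are examples; follow the linked Administering Red Hat Satellite procedures for the complete steps):
# On the source Red Hat Enterprise Linux 8 server, create a full offline backup
satellite-maintain backup offline /var/satellite-backup
# On the target Red Hat Enterprise Linux 9 server, restore from the copied backup directory
satellite-maintain restore /var/satellite-backup/satellite-backup-<timestamp>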
|
[
"satellite-maintain packages install leapp leapp-upgrade-el8toel9",
"[BaseOS] name=rhel-9-for-x86_64-baseos-rpms baseurl=http:// server.example.com /rhel9/BaseOS/ [AppStream] name=rhel-9-for-x86_64-appstream-rpms baseurl=http:// server.example.com /rhel9/AppStream/",
"[satellite-6.16-for-rhel-9-x86_64-rpms] name=satellite-6.16-for-rhel-9-x86_64-rpms baseurl=http:// server.example.com /sat6/Satellite/ [satellite-maintenance-6.16-for-rhel-9-x86_64-rpms] name=satellite-maintenance-6.16-for-rhel-9-x86_64-rpms baseurl=http:// server.example.com /sat6/Maintenance/",
"leapp preupgrade --no-rhsm --enablerepo BaseOS --enablerepo AppStream --enablerepo satellite-6.16-for-rhel-9-x86_64-rpms --enablerepo satellite-maintenance-6.16-for-rhel-9-x86_64-rpms",
"leapp upgrade --no-rhsm --enablerepo BaseOS --enablerepo AppStream --enablerepo satellite-6.16-for-rhel-9-x86_64-rpms --enablerepo satellite-maintenance-6.16-for-rhel-9-x86_64-rpms",
"journalctl -u leapp_resume -f",
"satellite-maintain packages unlock",
"satellite-maintain packages lock"
] |
https://docs.redhat.com/en/documentation/red_hat_satellite/6.16/html/upgrading_disconnected_red_hat_satellite_to_6.16/upgrading_el_on_satellite_or_proxy_upgrading-disconnected
|
Appendix A. The Device Mapper
|
Appendix A. The Device Mapper The Device Mapper is a kernel driver that provides a generic framework for volume management. It provides a generic way of creating mapped devices, which may be used as logical volumes. It does not specifically know about volume groups or metadata formats. The Device Mapper provides the foundation for a number of higher-level technologies. In addition to LVM, device-mapper multipath and the dmraid command use the Device Mapper. The user interface to the Device Mapper is the ioctl system call. LVM logical volumes are activated using the Device Mapper. Each logical volume is translated into a mapped device. Each segment translates into a line in the mapping table that describes the device. The Device Mapper provides linear mapping, striped mapping, and error mapping, amongst others. Two disks can be concatenated into one logical volume with a pair of linear mappings, one for each disk. The dmsetup command is a command line wrapper for communication with the Device Mapper. It provides complete access to the ioctl commands through the libdevmapper library. For general system information about LVM devices, you may find the dmsetup info command to be useful. For information about the options and capabilities of the dmsetup command, see the dmsetup(8) man page.
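For example, the two-disk concatenation described above can be expressed as a two-line mapping table fed to dmsetup create on standard input; the device names and sector counts here are hypothetical:
# Concatenate two 1 GiB devices (2097152 sectors each) into a single mapped device
dmsetup create combined <<'EOF'
0 2097152 linear /dev/sdb 0
2097152 2097152 linear /dev/sdc 0
EOF
# Inspect the result
dmsetup info combined
dmsetup table combined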
| null |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/cluster_logical_volume_manager/device_mapper
|
Preface
|
Preface HawtIO provides enterprise monitoring tools for viewing and managing Red Hat HawtIO-enabled applications. It is a web-based console accessed from a browser to monitor and manage a running HawtIO-enabled container. HawtIO is based on the open source HawtIO software ( https://hawt.io/ ). HawtIO Diagnostic Console Guide describes how to manage applications with HawtIO. The audience for this guide are Apache Camel eco-system developers and administrators. This guide assumes familiarity with Apache Camel and the processing requirements for your organization. Making open source more inclusive Red Hat is committed to replacing problematic language in our code, documentation, and web properties. We are beginning with these four terms: master, slave, blacklist, and whitelist. Because of the enormity of this endeavor, these changes will be implemented gradually over several upcoming releases. For more details, see our CTO Chris Wright's message .
| null |
https://docs.redhat.com/en/documentation/red_hat_build_of_apache_camel/4.4/html/hawtio_diagnostic_console_guide/preface
|
35.4. Examining Log Files
|
35.4. Examining Log Files Log Viewer can be configured to display an alert icon beside lines that contain key alert words and a warning icon beside lines that contain key warning words. To add alert words, select Edit => Preferences from the pull-down menu, and click on the Alerts tab. Click the Add button to add an alert word. To delete an alert word, select the word from the list, and click Delete . The alert icon is displayed to the left of the lines that contain any of the alert words. Figure 35.4. Alerts To add warning words, select Edit => Preferences from the pull-down menu, and click on the Warnings tab. Click the Add button to add a warning word. To delete a warning word, select the word from the list, and click Delete . The warning icon is displayed to the left of the lines that contain any of the warning words. Figure 35.5. Warning
| null |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/system_administration_guide/log_files-examining_log_files
|
Chapter 10. Installing a private cluster on GCP
|
Chapter 10. Installing a private cluster on GCP In OpenShift Container Platform version 4.12, you can install a private cluster into an existing VPC on Google Cloud Platform (GCP). The installation program provisions the rest of the required infrastructure, which you can further customize. To customize the installation, you modify parameters in the install-config.yaml file before you install the cluster. 10.1. Prerequisites You reviewed details about the OpenShift Container Platform installation and update processes. You read the documentation on selecting a cluster installation method and preparing it for users . You configured a GCP project to host the cluster. If you use a firewall, you configured it to allow the sites that your cluster requires access to. If the cloud identity and access management (IAM) APIs are not accessible in your environment, or if you do not want to store an administrator-level credential secret in the kube-system namespace, you can manually create and maintain IAM credentials . 10.2. Private clusters You can deploy a private OpenShift Container Platform cluster that does not expose external endpoints. Private clusters are accessible from only an internal network and are not visible to the internet. By default, OpenShift Container Platform is provisioned to use publicly-accessible DNS and endpoints. A private cluster sets the DNS, Ingress Controller, and API server to private when you deploy your cluster. This means that the cluster resources are only accessible from your internal network and are not visible to the internet. Important If the cluster has any public subnets, load balancer services created by administrators might be publicly accessible. To ensure cluster security, verify that these services are explicitly annotated as private. To deploy a private cluster, you must: Use existing networking that meets your requirements. Your cluster resources might be shared between other clusters on the network. Deploy from a machine that has access to: The API services for the cloud to which you provision. The hosts on the network that you provision. The internet to obtain installation media. You can use any machine that meets these access requirements and follows your company's guidelines. For example, this machine can be a bastion host on your cloud network or a machine that has access to the network through a VPN. 10.2.1. Private clusters in GCP To create a private cluster on Google Cloud Platform (GCP), you must provide an existing private VPC and subnets to host the cluster. The installation program must also be able to resolve the DNS records that the cluster requires. The installation program configures the Ingress Operator and API server for only internal traffic. The cluster still requires access to internet to access the GCP APIs. The following items are not required or created when you install a private cluster: Public subnets Public network load balancers, which support public ingress A public DNS zone that matches the baseDomain for the cluster The installation program does use the baseDomain that you specify to create a private DNS zone and the required records for the cluster. The cluster is configured so that the Operators do not create public records for the cluster and all cluster machines are placed in the private subnets that you specify. Because it is not possible to limit access to external load balancers based on source tags, the private cluster uses only internal load balancers to allow access to internal instances. 
The internal load balancer relies on instance groups rather than the target pools that the network load balancers use. The installation program creates instance groups for each zone, even if there is no instance in that group. The cluster IP address is internal only. One forwarding rule manages both the Kubernetes API and machine config server ports. The backend service is comprised of each zone's instance group and, while it exists, the bootstrap instance group. The firewall uses a single rule that is based on only internal source ranges. 10.2.1.1. Limitations No health check for the Machine config server, /healthz , runs because of a difference in load balancer functionality. Two internal load balancers cannot share a single IP address, but two network load balancers can share a single external IP address. Instead, the health of an instance is determined entirely by the /readyz check on port 6443. 10.3. About using a custom VPC In OpenShift Container Platform 4.12, you can deploy a cluster into an existing VPC in Google Cloud Platform (GCP). If you do, you must also use existing subnets within the VPC and routing rules. By deploying OpenShift Container Platform into an existing GCP VPC, you might be able to avoid limit constraints in new accounts or more easily abide by the operational constraints that your company's guidelines set. This is a good option to use if you cannot obtain the infrastructure creation permissions that are required to create the VPC yourself. 10.3.1. Requirements for using your VPC The installation program will no longer create the following components: VPC Subnets Cloud router Cloud NAT NAT IP addresses If you use a custom VPC, you must correctly configure it and its subnets for the installation program and the cluster to use. The installation program cannot subdivide network ranges for the cluster to use, set route tables for the subnets, or set VPC options like DHCP, so you must do so before you install the cluster. Your VPC and subnets must meet the following characteristics: The VPC must be in the same GCP project that you deploy the OpenShift Container Platform cluster to. To allow access to the internet from the control plane and compute machines, you must configure cloud NAT on the subnets to allow egress to it. These machines do not have a public address. Even if you do not require access to the internet, you must allow egress to the VPC network to obtain the installation program and images. Because multiple cloud NATs cannot be configured on the shared subnets, the installation program cannot configure it. To ensure that the subnets that you provide are suitable, the installation program confirms the following data: All the subnets that you specify exist and belong to the VPC that you specified. The subnet CIDRs belong to the machine CIDR. You must provide a subnet to deploy the cluster control plane and compute machines to. You can use the same subnet for both machine types. If you destroy a cluster that uses an existing VPC, the VPC is not deleted. 10.3.2. Division of permissions Starting with OpenShift Container Platform 4.3, you do not need all of the permissions that are required for an installation program-provisioned infrastructure cluster to deploy a cluster. This change mimics the division of permissions that you might have at your company: some individuals can create different resources in your clouds than others. 
For example, you might be able to create application-specific items, like instances, buckets, and load balancers, but not networking-related components such as VPCs, subnets, or Ingress rules. The GCP credentials that you use when you create your cluster do not need the networking permissions that are required to make VPCs and core networking components within the VPC, such as subnets, routing tables, internet gateways, NAT, and VPN. You still need permission to make the application resources that the machines within the cluster require, such as load balancers, security groups, storage, and nodes. 10.3.3. Isolation between clusters If you deploy OpenShift Container Platform to an existing network, the isolation of cluster services is preserved by firewall rules that reference the machines in your cluster by the cluster's infrastructure ID. Only traffic within the cluster is allowed. If you deploy multiple clusters to the same VPC, the following components might share access between clusters: The API, which is globally available with an external publishing strategy or available throughout the network in an internal publishing strategy Debugging tools, such as ports on VM instances that are open to the machine CIDR for SSH and ICMP access 10.4. Internet access for OpenShift Container Platform In OpenShift Container Platform 4.12, you require access to the internet to install your cluster. You must have internet access to: Access OpenShift Cluster Manager Hybrid Cloud Console to download the installation program and perform subscription management. If the cluster has internet access and you do not disable Telemetry, that service automatically entitles your cluster. Access Quay.io to obtain the packages that are required to install your cluster. Obtain the packages that are required to perform cluster updates. Important If your cluster cannot have direct internet access, you can perform a restricted network installation on some types of infrastructure that you provision. During that process, you download the required content and use it to populate a mirror registry with the installation packages. With some installation types, the environment that you install your cluster in will not require internet access. Before you update the cluster, you update the content of the mirror registry. 10.5. Generating a key pair for cluster node SSH access During an OpenShift Container Platform installation, you can provide an SSH public key to the installation program. The key is passed to the Red Hat Enterprise Linux CoreOS (RHCOS) nodes through their Ignition config files and is used to authenticate SSH access to the nodes. The key is added to the ~/.ssh/authorized_keys list for the core user on each node, which enables password-less authentication. After the key is passed to the nodes, you can use the key pair to SSH in to the RHCOS nodes as the user core . To access the nodes through SSH, the private key identity must be managed by SSH for your local user. If you want to SSH in to your cluster nodes to perform installation debugging or disaster recovery, you must provide the SSH public key during the installation process. The ./openshift-install gather command also requires the SSH public key to be in place on the cluster nodes. Important Do not skip this procedure in production environments, where disaster recovery and debugging is required. Note You must use a local key, not one that you configured with platform-specific approaches such as AWS key pairs . 
Procedure If you do not have an existing SSH key pair on your local machine to use for authentication onto your cluster nodes, create one. For example, on a computer that uses a Linux operating system, run the following command: USD ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1 1 Specify the path and file name, such as ~/.ssh/id_ed25519 , of the new SSH key. If you have an existing key pair, ensure your public key is in the your ~/.ssh directory. Note If you plan to install an OpenShift Container Platform cluster that uses FIPS validated or Modules In Process cryptographic libraries on the x86_64 , ppc64le , and s390x architectures. do not create a key that uses the ed25519 algorithm. Instead, create a key that uses the rsa or ecdsa algorithm. View the public SSH key: USD cat <path>/<file_name>.pub For example, run the following to view the ~/.ssh/id_ed25519.pub public key: USD cat ~/.ssh/id_ed25519.pub Add the SSH private key identity to the SSH agent for your local user, if it has not already been added. SSH agent management of the key is required for password-less SSH authentication onto your cluster nodes, or if you want to use the ./openshift-install gather command. Note On some distributions, default SSH private key identities such as ~/.ssh/id_rsa and ~/.ssh/id_dsa are managed automatically. If the ssh-agent process is not already running for your local user, start it as a background task: USD eval "USD(ssh-agent -s)" Example output Agent pid 31874 Note If your cluster is in FIPS mode, only use FIPS-compliant algorithms to generate the SSH key. The key must be either RSA or ECDSA. Add your SSH private key to the ssh-agent : USD ssh-add <path>/<file_name> 1 1 Specify the path and file name for your SSH private key, such as ~/.ssh/id_ed25519 Example output Identity added: /home/<you>/<path>/<file_name> (<computer_name>) steps When you install OpenShift Container Platform, provide the SSH public key to the installation program. 10.6. Obtaining the installation program Before you install OpenShift Container Platform, download the installation file on the host you are using for installation. Prerequisites You have a computer that runs Linux or macOS, with 500 MB of local disk space. Procedure Access the Infrastructure Provider page on the OpenShift Cluster Manager site. If you have a Red Hat account, log in with your credentials. If you do not, create an account. Select your infrastructure provider. Navigate to the page for your installation type, download the installation program that corresponds with your host operating system and architecture, and place the file in the directory where you will store the installation configuration files. Important The installation program creates several files on the computer that you use to install your cluster. You must keep the installation program and the files that the installation program creates after you finish installing the cluster. Both files are required to delete the cluster. Important Deleting the files created by the installation program does not remove your cluster, even if the cluster failed during installation. To remove your cluster, complete the OpenShift Container Platform uninstallation procedures for your specific cloud provider. Extract the installation program. For example, on a computer that uses a Linux operating system, run the following command: USD tar -xvf openshift-install-linux.tar.gz Download your installation pull secret from the Red Hat OpenShift Cluster Manager . 
This pull secret allows you to authenticate with the services that are provided by the included authorities, including Quay.io, which serves the container images for OpenShift Container Platform components. 10.7. Manually creating the installation configuration file Installing the cluster requires that you manually create the installation configuration file. Prerequisites You have an SSH public key on your local machine to provide to the installation program. The key will be used for SSH authentication onto your cluster nodes for debugging and disaster recovery. You have obtained the OpenShift Container Platform installation program and the pull secret for your cluster. Procedure Create an installation directory to store your required installation assets in: USD mkdir <installation_directory> Important You must create a directory. Some installation assets, like bootstrap X.509 certificates have short expiration intervals, so you must not reuse an installation directory. If you want to reuse individual files from another cluster installation, you can copy them into your directory. However, the file names for the installation assets might change between releases. Use caution when copying installation files from an earlier OpenShift Container Platform version. Customize the sample install-config.yaml file template that is provided and save it in the <installation_directory> . Note You must name this configuration file install-config.yaml . Back up the install-config.yaml file so that you can use it to install multiple clusters. Important The install-config.yaml file is consumed during the step of the installation process. You must back it up now. 10.7.1. Installation configuration parameters Before you deploy an OpenShift Container Platform cluster, you provide parameter values to describe your account on the cloud platform that hosts your cluster and optionally customize your cluster's platform. When you create the install-config.yaml installation configuration file, you provide values for the required parameters through the command line. If you customize your cluster, you can modify the install-config.yaml file to provide more details about the platform. Note After installation, you cannot modify these parameters in the install-config.yaml file. 10.7.1.1. Required configuration parameters Required installation configuration parameters are described in the following table: Table 10.1. Required parameters Parameter Description Values apiVersion The API version for the install-config.yaml content. The current version is v1 . The installation program may also support older API versions. String baseDomain The base domain of your cloud provider. The base domain is used to create routes to your OpenShift Container Platform cluster components. The full DNS name for your cluster is a combination of the baseDomain and metadata.name parameter values that uses the <metadata.name>.<baseDomain> format. A fully-qualified domain or subdomain name, such as example.com . metadata Kubernetes resource ObjectMeta , from which only the name parameter is consumed. Object metadata.name The name of the cluster. DNS records for the cluster are all subdomains of {{.metadata.name}}.{{.baseDomain}} . String of lowercase letters, hyphens ( - ), and periods ( . ), such as dev . platform The configuration for the specific platform upon which to perform the installation: alibabacloud , aws , baremetal , azure , gcp , ibmcloud , nutanix , openstack , ovirt , vsphere , or {} . 
For additional information about platform.<platform> parameters, consult the table for your specific platform that follows. Object pullSecret Get a pull secret from the Red Hat OpenShift Cluster Manager to authenticate downloading container images for OpenShift Container Platform components from services such as Quay.io. { "auths":{ "cloud.openshift.com":{ "auth":"b3Blb=", "email":"[email protected]" }, "quay.io":{ "auth":"b3Blb=", "email":"[email protected]" } } } 10.7.1.2. Network configuration parameters You can customize your installation configuration based on the requirements of your existing network infrastructure. For example, you can expand the IP address block for the cluster network or provide different IP address blocks than the defaults. Only IPv4 addresses are supported. Note Globalnet is not supported with Red Hat OpenShift Data Foundation disaster recovery solutions. For regional disaster recovery scenarios, ensure that you use a nonoverlapping range of private IP addresses for the cluster and service networks in each cluster. Table 10.2. Network parameters Parameter Description Values networking The configuration for the cluster network. Object Note You cannot modify parameters specified by the networking object after installation. networking.networkType The Red Hat OpenShift Networking network plugin to install. Either OpenShiftSDN or OVNKubernetes . OpenShiftSDN is a CNI plugin for all-Linux networks. OVNKubernetes is a CNI plugin for Linux networks and hybrid networks that contain both Linux and Windows servers. The default value is OVNKubernetes . networking.clusterNetwork The IP address blocks for pods. The default value is 10.128.0.0/14 with a host prefix of /23 . If you specify multiple IP address blocks, the blocks must not overlap. An array of objects. For example: networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 networking.clusterNetwork.cidr Required if you use networking.clusterNetwork . An IP address block. An IPv4 network. An IP address block in Classless Inter-Domain Routing (CIDR) notation. The prefix length for an IPv4 block is between 0 and 32 . networking.clusterNetwork.hostPrefix The subnet prefix length to assign to each individual node. For example, if hostPrefix is set to 23 then each node is assigned a /23 subnet out of the given cidr . A hostPrefix value of 23 provides 510 (2^(32 - 23) - 2) pod IP addresses. A subnet prefix. The default value is 23 . networking.serviceNetwork The IP address block for services. The default value is 172.30.0.0/16 . The OpenShift SDN and OVN-Kubernetes network plugins support only a single IP address block for the service network. An array with an IP address block in CIDR format. For example: networking: serviceNetwork: - 172.30.0.0/16 networking.machineNetwork The IP address blocks for machines. If you specify multiple IP address blocks, the blocks must not overlap. An array of objects. For example: networking: machineNetwork: - cidr: 10.0.0.0/16 networking.machineNetwork.cidr Required if you use networking.machineNetwork . An IP address block. The default value is 10.0.0.0/16 for all platforms other than libvirt. For libvirt, the default value is 192.168.126.0/24 . An IP network block in CIDR notation. For example, 10.0.0.0/16 . Note Set the networking.machineNetwork to match the CIDR that the preferred NIC resides in. 10.7.1.3. Optional configuration parameters Optional installation configuration parameters are described in the following table: Table 10.3. 
Optional parameters Parameter Description Values additionalTrustBundle A PEM-encoded X.509 certificate bundle that is added to the nodes' trusted certificate store. This trust bundle may also be used when a proxy has been configured. String capabilities Controls the installation of optional core cluster components. You can reduce the footprint of your OpenShift Container Platform cluster by disabling optional components. For more information, see the "Cluster capabilities" page in Installing . String array capabilities.baselineCapabilitySet Selects an initial set of optional capabilities to enable. Valid values are None , v4.11 , v4.12 and vCurrent . The default value is vCurrent . String capabilities.additionalEnabledCapabilities Extends the set of optional capabilities beyond what you specify in baselineCapabilitySet . You may specify multiple capabilities in this parameter. String array compute The configuration for the machines that comprise the compute nodes. Array of MachinePool objects. compute.architecture Determines the instruction set architecture of the machines in the pool. Currently, clusters with varied architectures are not supported. All pools must specify the same architecture. Valid values are amd64 (the default). String compute.hyperthreading Whether to enable or disable simultaneous multithreading, or hyperthreading , on compute machines. By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. Important If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance. Enabled or Disabled compute.name Required if you use compute . The name of the machine pool. worker compute.platform Required if you use compute . Use this parameter to specify the cloud provider to host the worker machines. This parameter value must match the controlPlane.platform parameter value. alibabacloud , aws , azure , gcp , ibmcloud , nutanix , openstack , ovirt , vsphere , or {} compute.replicas The number of compute machines, which are also known as worker machines, to provision. A positive integer greater than or equal to 2 . The default value is 3 . featureSet Enables the cluster for a feature set. A feature set is a collection of OpenShift Container Platform features that are not enabled by default. For more information about enabling a feature set during installation, see "Enabling features using feature gates". String. The name of the feature set to enable, such as TechPreviewNoUpgrade . controlPlane The configuration for the machines that comprise the control plane. Array of MachinePool objects. controlPlane.architecture Determines the instruction set architecture of the machines in the pool. Currently, clusters with varied architectures are not supported. All pools must specify the same architecture. Valid values are amd64 (the default). String controlPlane.hyperthreading Whether to enable or disable simultaneous multithreading, or hyperthreading , on control plane machines. By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. Important If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance. Enabled or Disabled controlPlane.name Required if you use controlPlane . The name of the machine pool. master controlPlane.platform Required if you use controlPlane . 
Use this parameter to specify the cloud provider that hosts the control plane machines. This parameter value must match the compute.platform parameter value. alibabacloud , aws , azure , gcp , ibmcloud , nutanix , openstack , ovirt , vsphere , or {} controlPlane.replicas The number of control plane machines to provision. The only supported value is 3 , which is the default value. credentialsMode The Cloud Credential Operator (CCO) mode. If no mode is specified, the CCO dynamically tries to determine the capabilities of the provided credentials, with a preference for mint mode on the platforms where multiple modes are supported. If you are installing on GCP into a shared virtual private cloud (VPC), credentialsMode must be set to Passthrough . Note Not all CCO modes are supported for all cloud providers. For more information about CCO modes, see the Cloud Credential Operator entry in the Cluster Operators reference content. Note If your AWS account has service control policies (SCP) enabled, you must configure the credentialsMode parameter to Mint , Passthrough or Manual . Mint , Passthrough , Manual or an empty string ( "" ). fips Enable or disable FIPS mode. The default is false (disabled). If FIPS mode is enabled, the Red Hat Enterprise Linux CoreOS (RHCOS) machines that OpenShift Container Platform runs on bypass the default Kubernetes cryptography suite and use the cryptography modules that are provided with RHCOS instead. Important To enable FIPS mode for your cluster, you must run the installation program from a Red Hat Enterprise Linux (RHEL) computer configured to operate in FIPS mode. For more information about configuring FIPS mode on RHEL, see Installing the system in FIPS mode . The use of FIPS validated or Modules In Process cryptographic libraries is only supported on OpenShift Container Platform deployments on the x86_64 , ppc64le , and s390x architectures. Note If you are using Azure File storage, you cannot enable FIPS mode. false or true imageContentSources Sources and repositories for the release-image content. Array of objects. Includes a source and, optionally, mirrors , as described in the following rows of this table. imageContentSources.source Required if you use imageContentSources . Specify the repository that users refer to, for example, in image pull specifications. String imageContentSources.mirrors Specify one or more repositories that may also contain the same images. Array of strings publish How to publish or expose the user-facing endpoints of your cluster, such as the Kubernetes API, OpenShift routes. Internal or External . To deploy a private cluster, which cannot be accessed from the internet, set publish to Internal . The default value is External . sshKey The SSH key to authenticate access to your cluster machines. Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. For example, sshKey: ssh-ed25519 AAAA.. . 10.7.1.4. Additional Google Cloud Platform (GCP) configuration parameters Additional GCP configuration parameters are described in the following table: Table 10.4. Additional GCP parameters Parameter Description Values platform.gcp.network The name of the existing Virtual Private Cloud (VPC) where you want to deploy your cluster. If you want to deploy your cluster into a shared VPC, you must set platform.gcp.networkProjectID with the name of the GCP project that contains the shared VPC. String. 
platform.gcp.networkProjectID Optional. The name of the GCP project that contains the shared VPC where you want to deploy your cluster. String. platform.gcp.projectID The name of the GCP project where the installation program installs the cluster. String. platform.gcp.region The name of the GCP region that hosts your cluster. Any valid region name, such as us-central1 . platform.gcp.controlPlaneSubnet The name of the existing subnet where you want to deploy your control plane machines. The subnet name. platform.gcp.computeSubnet The name of the existing subnet where you want to deploy your compute machines. The subnet name. platform.gcp.createFirewallRules Optional. Set this value to Disabled if you want to create and manage your firewall rules using network tags. By default, the cluster will automatically create and manage the firewall rules that are required for cluster communication. Your service account must have roles/compute.networkAdmin and roles/compute.securityAdmin privileges in the host project to perform these tasks automatically. If your service account does not have the roles/dns.admin privilege in the host project, it must have the dns.networks.bindPrivateDNSZone permission. Enabled or Disabled . The default value is Enabled . platform.gcp.publicDNSZone.project Optional. The name of the project that contains the public DNS zone. If you set this value, your service account must have the roles/dns.admin privilege in the specified project. If you do not set this value, it defaults to gcp.projectId . The name of the project that contains the public DNS zone. platform.gcp.publicDNSZone.id Optional. The ID or name of an existing public DNS zone. The public DNS zone domain must match the baseDomain parameter. If you do not set this value, the installation program will use a public DNS zone in the service project. The public DNS zone name. platform.gcp.privateDNSZone.project Optional. The name of the project that contains the private DNS zone. If you set this value, your service account must have the roles/dns.admin privilege in the host project. If you do not set this value, it defaults to gcp.projectId . The name of the project that contains the private DNS zone. platform.gcp.privateDNSZone.id Optional. The ID or name of an existing private DNS zone. If you do not set this value, the installation program will create a private DNS zone in the service project. The private DNS zone name. platform.gcp.licenses A list of license URLs that must be applied to the compute images. Important The licenses parameter is a deprecated field and nested virtualization is enabled by default. It is not recommended to use this field. Any license available with the license API , such as the license to enable nested virtualization . You cannot use this parameter with a mechanism that generates pre-built images. Using a license URL forces the installation program to copy the source image before use. platform.gcp.defaultMachinePlatform.zones The availability zones where the installation program creates machines. A list of valid GCP availability zones , such as us-central1-a , in a YAML sequence . platform.gcp.defaultMachinePlatform.osDisk.diskSizeGB The size of the disk in gigabytes (GB). Any size between 16 GB and 65536 GB. platform.gcp.defaultMachinePlatform.osDisk.diskType The GCP disk type . Either the default pd-ssd or the pd-standard disk type. The control plane nodes must be the pd-ssd disk type. Compute nodes can be either type. platform.gcp.defaultMachinePlatform.osImage.project Optional. 
By default, the installation program downloads and installs the RHCOS image that is used to boot control plane and compute machines. You can override the default behavior by specifying the location of a custom RHCOS image for the installation program to use for both types of machines. String. The name of GCP project where the image is located. platform.gcp.defaultMachinePlatform.osImage.name The name of the custom RHCOS image for the installation program to use to boot control plane and compute machines. If you use platform.gcp.defaultMachinePlatform.osImage.project , this field is required. String. The name of the RHCOS image. platform.gcp.defaultMachinePlatform.tags Optional. Additional network tags to add to the control plane and compute machines. One or more strings, for example network-tag1 . platform.gcp.defaultMachinePlatform.type The GCP machine type for control plane and compute machines. The GCP machine type, for example n1-standard-4 . platform.gcp.defaultMachinePlatform.osDisk.encryptionKey.kmsKey.name The name of the customer managed encryption key to be used for machine disk encryption. The encryption key name. platform.gcp.defaultMachinePlatform.osDisk.encryptionKey.kmsKey.keyRing The name of the Key Management Service (KMS) key ring to which the KMS key belongs. The KMS key ring name. platform.gcp.defaultMachinePlatform.osDisk.encryptionKey.kmsKey.location The GCP location in which the KMS key ring exists. The GCP location. platform.gcp.defaultMachinePlatform.osDisk.encryptionKey.kmsKey.projectID The ID of the project in which the KMS key ring exists. This value defaults to the value of the platform.gcp.projectID parameter if it is not set. The GCP project ID. platform.gcp.defaultMachinePlatform.osDisk.encryptionKey.kmsKeyServiceAccount The GCP service account used for the encryption request for control plane and compute machines. If absent, the Compute Engine default service account is used. For more information about GCP service accounts, see Google's documentation on service accounts . The GCP service account email, for example <service_account_name>@<project_id>.iam.gserviceaccount.com . controlPlane.platform.gcp.osDisk.encryptionKey.kmsKey.name The name of the customer managed encryption key to be used for control plane machine disk encryption. The encryption key name. controlPlane.platform.gcp.osDisk.encryptionKey.kmsKey.keyRing For control plane machines, the name of the KMS key ring to which the KMS key belongs. The KMS key ring name. controlPlane.platform.gcp.osDisk.encryptionKey.kmsKey.location For control plane machines, the GCP location in which the key ring exists. For more information about KMS locations, see Google's documentation on Cloud KMS locations . The GCP location for the key ring. controlPlane.platform.gcp.osDisk.encryptionKey.kmsKey.projectID For control plane machines, the ID of the project in which the KMS key ring exists. This value defaults to the VM project ID if not set. The GCP project ID. controlPlane.platform.gcp.osDisk.encryptionKey.kmsKeyServiceAccount The GCP service account used for the encryption request for control plane machines. If absent, the Compute Engine default service account is used. For more information about GCP service accounts, see Google's documentation on service accounts . The GCP service account email, for example <service_account_name>@<project_id>.iam.gserviceaccount.com . controlPlane.platform.gcp.osDisk.diskSizeGB The size of the disk in gigabytes (GB). This value applies to control plane machines. 
Any integer between 16 and 65536. controlPlane.platform.gcp.osDisk.diskType The GCP disk type for control plane machines. Control plane machines must use the pd-ssd disk type, which is the default. controlPlane.platform.gcp.osImage.project Optional. By default, the installation program downloads and installs the Red Hat Enterprise Linux CoreOS (RHCOS) image that is used to boot control plane machines. You can override the default behavior by specifying the location of a custom RHCOS image for the installation program to use for control plane machines only. String. The name of GCP project where the image is located. controlPlane.platform.gcp.osImage.name The name of the custom RHCOS image for the installation program to use to boot control plane machines. If you use controlPlane.platform.gcp.osImage.project , this field is required. String. The name of the RHCOS image. controlPlane.platform.gcp.tags Optional. Additional network tags to add to the control plane machines. If set, this parameter overrides the platform.gcp.defaultMachinePlatform.tags parameter for control plane machines. One or more strings, for example control-plane-tag1 . controlPlane.platform.gcp.type The GCP machine type for control plane machines. If set, this parameter overrides the platform.gcp.defaultMachinePlatform.type parameter. The GCP machine type, for example n1-standard-4 . controlPlane.platform.gcp.zones The availability zones where the installation program creates control plane machines. A list of valid GCP availability zones , such as us-central1-a , in a YAML sequence . compute.platform.gcp.osDisk.encryptionKey.kmsKey.name The name of the customer managed encryption key to be used for compute machine disk encryption. The encryption key name. compute.platform.gcp.osDisk.encryptionKey.kmsKey.keyRing For compute machines, the name of the KMS key ring to which the KMS key belongs. The KMS key ring name. compute.platform.gcp.osDisk.encryptionKey.kmsKey.location For compute machines, the GCP location in which the key ring exists. For more information about KMS locations, see Google's documentation on Cloud KMS locations . The GCP location for the key ring. compute.platform.gcp.osDisk.encryptionKey.kmsKey.projectID For compute machines, the ID of the project in which the KMS key ring exists. This value defaults to the VM project ID if not set. The GCP project ID. compute.platform.gcp.osDisk.encryptionKey.kmsKeyServiceAccount The GCP service account used for the encryption request for compute machines. If this value is not set, the Compute Engine default service account is used. For more information about GCP service accounts, see Google's documentation on service accounts . The GCP service account email, for example <service_account_name>@<project_id>.iam.gserviceaccount.com . compute.platform.gcp.osDisk.diskSizeGB The size of the disk in gigabytes (GB). This value applies to compute machines. Any integer between 16 and 65536. compute.platform.gcp.osDisk.diskType The GCP disk type for compute machines. Either the default pd-ssd or the pd-standard disk type. compute.platform.gcp.osImage.project Optional. By default, the installation program downloads and installs the RHCOS image that is used to boot compute machines. You can override the default behavior by specifying the location of a custom RHCOS image for the installation program to use for compute machines only. String. The name of GCP project where the image is located. 
compute.platform.gcp.osImage.name The name of the custom RHCOS image for the installation program to use to boot compute machines. If you use compute.platform.gcp.osImage.project , this field is required. String. The name of the RHCOS image. compute.platform.gcp.tags Optional. Additional network tags to add to the compute machines. If set, this parameter overrides the platform.gcp.defaultMachinePlatform.tags parameter for compute machines. One or more strings, for example compute-network-tag1 . compute.platform.gcp.type The GCP machine type for compute machines. If set, this parameter overrides the platform.gcp.defaultMachinePlatform.type parameter. The GCP machine type, for example n1-standard-4 . compute.platform.gcp.zones The availability zones where the installation program creates compute machines. A list of valid GCP availability zones , such as us-central1-a , in a YAML sequence . 10.7.2. Minimum resource requirements for cluster installation Each cluster machine must meet the following minimum requirements: Table 10.5. Minimum resource requirements Machine Operating System vCPU [1] Virtual RAM Storage Input/Output Per Second (IOPS) [2] Bootstrap RHCOS 4 16 GB 100 GB 300 Control plane RHCOS 4 16 GB 100 GB 300 Compute RHCOS, RHEL 8.6 and later [3] 2 8 GB 100 GB 300 One vCPU is equivalent to one physical core when simultaneous multithreading (SMT), or hyperthreading, is not enabled. When enabled, use the following formula to calculate the corresponding ratio: (threads per core x cores) x sockets = vCPUs. OpenShift Container Platform and Kubernetes are sensitive to disk performance, and faster storage is recommended, particularly for etcd on the control plane nodes which require a 10 ms p99 fsync duration. Note that on many cloud platforms, storage size and IOPS scale together, so you might need to over-allocate storage volume to obtain sufficient performance. As with all user-provisioned installations, if you choose to use RHEL compute machines in your cluster, you take responsibility for all operating system life cycle management and maintenance, including performing system updates, applying patches, and completing all other required tasks. Use of RHEL 7 compute machines is deprecated and has been removed in OpenShift Container Platform 4.10 and later. If an instance type for your platform meets the minimum requirements for cluster machines, it is supported to use in OpenShift Container Platform. Additional resources Optimizing storage 10.7.3. Tested instance types for GCP The following Google Cloud Platform instance types have been tested with OpenShift Container Platform. Example 10.1. Machine series A2 A3 C2 C2D C3 C3D C4 E2 M1 N1 N2 N2D N4 Tau T2D 10.7.4. Using custom machine types Using a custom machine type to install a OpenShift Container Platform cluster is supported. Consider the following when using a custom machine type: Similar to predefined instance types, custom machine types must meet the minimum resource requirements for control plane and compute machines. For more information, see "Minimum resource requirements for cluster installation". The name of the custom machine type must adhere to the following syntax: custom-<number_of_cpus>-<amount_of_memory_in_mb> For example, custom-6-20480 . As part of the installation process, you specify the custom machine type in the install-config.yaml file. 
Sample install-config.yaml file with a custom machine type compute: - architecture: amd64 hyperthreading: Enabled name: worker platform: gcp: type: custom-6-20480 replicas: 2 controlPlane: architecture: amd64 hyperthreading: Enabled name: master platform: gcp: type: custom-6-20480 replicas: 3 10.7.5. Sample customized install-config.yaml file for GCP You can customize the install-config.yaml file to specify more details about your OpenShift Container Platform cluster's platform or modify the values of the required parameters. Important This sample YAML file is provided for reference only. You must obtain your install-config.yaml file by using the installation program and modify it. apiVersion: v1 baseDomain: example.com 1 controlPlane: 2 3 hyperthreading: Enabled 4 name: master platform: gcp: type: n2-standard-4 zones: - us-central1-a - us-central1-c osDisk: diskType: pd-ssd diskSizeGB: 1024 encryptionKey: 5 kmsKey: name: worker-key keyRing: test-machine-keys location: global projectID: project-id tags: 6 - control-plane-tag1 - control-plane-tag2 osImage: 7 project: example-project-name name: example-image-name replicas: 3 compute: 8 9 - hyperthreading: Enabled 10 name: worker platform: gcp: type: n2-standard-4 zones: - us-central1-a - us-central1-c osDisk: diskType: pd-standard diskSizeGB: 128 encryptionKey: 11 kmsKey: name: worker-key keyRing: test-machine-keys location: global projectID: project-id tags: 12 - compute-tag1 - compute-tag2 osImage: 13 project: example-project-name name: example-image-name replicas: 3 metadata: name: test-cluster 14 networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 10.0.0.0/16 networkType: OVNKubernetes 15 serviceNetwork: - 172.30.0.0/16 platform: gcp: projectID: openshift-production 16 region: us-central1 17 defaultMachinePlatform: tags: 18 - global-tag1 - global-tag2 osImage: 19 project: example-project-name name: example-image-name network: existing_vpc 20 controlPlaneSubnet: control_plane_subnet 21 computeSubnet: compute_subnet 22 pullSecret: '{"auths": ...}' 23 fips: false 24 sshKey: ssh-ed25519 AAAA... 25 publish: Internal 26 1 14 16 17 23 Required. The installation program prompts you for this value. 2 8 If you do not provide these parameters and values, the installation program provides the default value. 3 9 The controlPlane section is a single mapping, but the compute section is a sequence of mappings. To meet the requirements of the different data structures, the first line of the compute section must begin with a hyphen, - , and the first line of the controlPlane section must not. Only one control plane pool is used. 4 10 Whether to enable or disable simultaneous multithreading, or hyperthreading . By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. You can disable it by setting the parameter value to Disabled . If you disable simultaneous multithreading in some cluster machines, you must disable it in all cluster machines. Important If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance. Use larger machine types, such as n1-standard-8 , for your machines if you disable simultaneous multithreading. 5 11 Optional: The custom encryption key section to encrypt both virtual machines and persistent volumes. Your default compute service account must have the permissions granted to use your KMS key and have the correct IAM role assigned. 
The default service account name follows the service-<project_number>@compute-system.iam.gserviceaccount.com pattern. For more information about granting the correct permissions for your service account, see "Machine management" "Creating compute machine sets" "Creating a compute machine set on GCP". 6 12 18 Optional: A set of network tags to apply to the control plane or compute machine sets. The platform.gcp.defaultMachinePlatform.tags parameter will apply to both control plane and compute machines. If the compute.platform.gcp.tags or controlPlane.platform.gcp.tags parameters are set, they override the platform.gcp.defaultMachinePlatform.tags parameter. 7 13 19 Optional: A custom Red Hat Enterprise Linux CoreOS (RHCOS) image for the installation program to use to boot control plane and compute machines. The project and name parameters under platform.gcp.defaultMachinePlatform.osImage apply to both control plane and compute machines. If the project and name parameters under controlPlane.platform.gcp.osImage or compute.platform.gcp.osImage are set, they override the platform.gcp.defaultMachinePlatform.osImage parameters. 15 The cluster network plugin to install. The supported values are OVNKubernetes and OpenShiftSDN . The default value is OVNKubernetes . 20 Specify the name of an existing VPC. 21 Specify the name of the existing subnet to deploy the control plane machines to. The subnet must belong to the VPC that you specified. 22 Specify the name of the existing subnet to deploy the compute machines to. The subnet must belong to the VPC that you specified. 24 Whether to enable or disable FIPS mode. By default, FIPS mode is not enabled. If FIPS mode is enabled, the Red Hat Enterprise Linux CoreOS (RHCOS) machines that OpenShift Container Platform runs on bypass the default Kubernetes cryptography suite and use the cryptography modules that are provided with RHCOS instead. Important To enable FIPS mode for your cluster, you must run the installation program from a Red Hat Enterprise Linux (RHEL) computer configured to operate in FIPS mode. For more information about configuring FIPS mode on RHEL, see Installing the system in FIPS mode . The use of FIPS validated or Modules In Process cryptographic libraries is only supported on OpenShift Container Platform deployments on the x86_64 , ppc64le , and s390x architectures. 25 You can optionally provide the sshKey value that you use to access the machines in your cluster. Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. 26 How to publish the user-facing endpoints of your cluster. Set publish to Internal to deploy a private cluster, which cannot be accessed from the internet. The default value is External . Additional resources Enabling customer-managed encryption keys for a compute machine set 10.7.6. Create an Ingress Controller with global access on GCP You can create an Ingress Controller that has global access to a Google Cloud Platform (GCP) cluster. Global access is only available to Ingress Controllers using internal load balancers. Prerequisites You created the install-config.yaml and complete any modifications to it. Procedure Create an Ingress Controller with global access on a new GCP cluster. 
Change to the directory that contains the installation program and create a manifest file: USD ./openshift-install create manifests --dir <installation_directory> 1 1 For <installation_directory> , specify the name of the directory that contains the install-config.yaml file for your cluster. Create a file that is named cluster-ingress-default-ingresscontroller.yaml in the <installation_directory>/manifests/ directory: USD touch <installation_directory>/manifests/cluster-ingress-default-ingresscontroller.yaml 1 1 For <installation_directory> , specify the directory name that contains the manifests/ directory for your cluster. After creating the file, several network configuration files are in the manifests/ directory, as shown: USD ls <installation_directory>/manifests/cluster-ingress-default-ingresscontroller.yaml Example output cluster-ingress-default-ingresscontroller.yaml Open the cluster-ingress-default-ingresscontroller.yaml file in an editor and enter a custom resource (CR) that describes the Operator configuration you want: Sample clientAccess configuration to Global apiVersion: operator.openshift.io/v1 kind: IngressController metadata: name: default namespace: openshift-ingress-operator spec: endpointPublishingStrategy: loadBalancer: providerParameters: gcp: clientAccess: Global 1 type: GCP scope: Internal 2 type: LoadBalancerService 1 Set gcp.clientAccess to Global . 2 Global access is only available to Ingress Controllers using internal load balancers. 10.7.7. Configuring the cluster-wide proxy during installation Production environments can deny direct access to the internet and instead have an HTTP or HTTPS proxy available. You can configure a new OpenShift Container Platform cluster to use a proxy by configuring the proxy settings in the install-config.yaml file. Prerequisites You have an existing install-config.yaml file. You reviewed the sites that your cluster requires access to and determined whether any of them need to bypass the proxy. By default, all cluster egress traffic is proxied, including calls to hosting cloud provider APIs. You added sites to the Proxy object's spec.noProxy field to bypass the proxy if necessary. Note The Proxy object status.noProxy field is populated with the values of the networking.machineNetwork[].cidr , networking.clusterNetwork[].cidr , and networking.serviceNetwork[] fields from your installation configuration. For installations on Amazon Web Services (AWS), Google Cloud Platform (GCP), Microsoft Azure, and Red Hat OpenStack Platform (RHOSP), the Proxy object status.noProxy field is also populated with the instance metadata endpoint ( 169.254.169.254 ). Procedure Edit your install-config.yaml file and add the proxy settings. For example: apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5 1 A proxy URL to use for creating HTTP connections outside the cluster. The URL scheme must be http . 2 A proxy URL to use for creating HTTPS connections outside the cluster. 3 A comma-separated list of destination domain names, IP addresses, or other network CIDRs to exclude from proxying. Preface a domain with . to match subdomains only. For example, .y.com matches x.y.com , but not y.com . Use * to bypass the proxy for all destinations. 
4 If provided, the installation program generates a config map that is named user-ca-bundle in the openshift-config namespace that contains one or more additional CA certificates that are required for proxying HTTPS connections. The Cluster Network Operator then creates a trusted-ca-bundle config map that merges these contents with the Red Hat Enterprise Linux CoreOS (RHCOS) trust bundle, and this config map is referenced in the trustedCA field of the Proxy object. The additionalTrustBundle field is required unless the proxy's identity certificate is signed by an authority from the RHCOS trust bundle. 5 Optional: The policy to determine the configuration of the Proxy object to reference the user-ca-bundle config map in the trustedCA field. The allowed values are Proxyonly and Always . Use Proxyonly to reference the user-ca-bundle config map only when http/https proxy is configured. Use Always to always reference the user-ca-bundle config map. The default value is Proxyonly . Note The installation program does not support the proxy readinessEndpoints field. Note If the installer times out, restart and then complete the deployment by using the wait-for command of the installer. For example: USD ./openshift-install wait-for install-complete --log-level debug Save the file and reference it when installing OpenShift Container Platform. The installation program creates a cluster-wide proxy that is named cluster that uses the proxy settings in the provided install-config.yaml file. If no proxy settings are provided, a cluster Proxy object is still created, but it will have a nil spec . Note Only the Proxy object named cluster is supported, and no additional proxies can be created. 10.8. Deploying the cluster You can install OpenShift Container Platform on a compatible cloud platform. Important You can run the create cluster command of the installation program only once, during initial installation. Prerequisites Configure an account with the cloud platform that hosts your cluster. Obtain the OpenShift Container Platform installation program and the pull secret for your cluster. Verify the cloud provider account on your host has the correct permissions to deploy the cluster. An account with incorrect permissions causes the installation process to fail with an error message that displays the missing permissions. Procedure Remove any existing GCP credentials that do not use the service account key for the GCP account that you configured for your cluster and that are stored in the following locations: The GOOGLE_CREDENTIALS , GOOGLE_CLOUD_KEYFILE_JSON , or GCLOUD_KEYFILE_JSON environment variables The ~/.gcp/osServiceAccount.json file The gcloud cli default credentials Change to the directory that contains the installation program and initialize the cluster deployment: USD ./openshift-install create cluster --dir <installation_directory> \ 1 --log-level=info 2 1 For <installation_directory> , specify the location of your customized ./install-config.yaml file. 2 To view different installation details, specify warn , debug , or error instead of info . Note If the cloud provider account that you configured on your host does not have sufficient permissions to deploy the cluster, the installation process stops, and the missing permissions are displayed. Optional: You can reduce the number of permissions for the service account that you used to install the cluster. If you assigned the Owner role to your service account, you can remove that role and replace it with the Viewer role. 
If you included the Service Account Key Admin role, you can remove it. Verification When the cluster deployment completes successfully: The terminal displays directions for accessing your cluster, including a link to the web console and credentials for the kubeadmin user. Credential information also outputs to <installation_directory>/.openshift_install.log . Important Do not delete the installation program or the files that the installation program creates. Both are required to delete the cluster. Example output ... INFO Install complete! INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: "kubeadmin", and password: "password" INFO Time elapsed: 36m22s Important The Ignition config files that the installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information. It is recommended that you use Ignition config files within 12 hours after they are generated because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation. 10.9. Installing the OpenShift CLI by downloading the binary You can install the OpenShift CLI ( oc ) to interact with OpenShift Container Platform from a command-line interface. You can install oc on Linux, Windows, or macOS. Important If you installed an earlier version of oc , you cannot use it to complete all of the commands in OpenShift Container Platform 4.12. Download and install the new version of oc . Installing the OpenShift CLI on Linux You can install the OpenShift CLI ( oc ) binary on Linux by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the architecture from the Product Variant drop-down list. Select the appropriate version from the Version drop-down list. Click Download Now next to the OpenShift v4.12 Linux Client entry and save the file. Unpack the archive: USD tar xvf <file> Place the oc binary in a directory that is on your PATH . To check your PATH , execute the following command: USD echo USDPATH Verification After you install the OpenShift CLI, it is available using the oc command: USD oc <command> Installing the OpenShift CLI on Windows You can install the OpenShift CLI ( oc ) binary on Windows by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version from the Version drop-down list. Click Download Now next to the OpenShift v4.12 Windows Client entry and save the file. Unzip the archive with a ZIP program. Move the oc binary to a directory that is on your PATH .
To check your PATH , open the command prompt and execute the following command: C:\> path Verification After you install the OpenShift CLI, it is available using the oc command: C:\> oc <command> Installing the OpenShift CLI on macOS You can install the OpenShift CLI ( oc ) binary on macOS by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version from the Version drop-down list. Click Download Now next to the OpenShift v4.12 macOS Client entry and save the file. Note For macOS arm64, choose the OpenShift v4.12 macOS arm64 Client entry. Unpack and unzip the archive. Move the oc binary to a directory on your PATH. To check your PATH , open a terminal and execute the following command: USD echo USDPATH Verification After you install the OpenShift CLI, it is available using the oc command: USD oc <command> 10.10. Logging in to the cluster by using the CLI You can log in to your cluster as a default system user by exporting the cluster kubeconfig file. The kubeconfig file contains information about the cluster that is used by the CLI to connect a client to the correct cluster and API server. The file is specific to a cluster and is created during OpenShift Container Platform installation. Prerequisites You deployed an OpenShift Container Platform cluster. You installed the oc CLI. Procedure Export the kubeadmin credentials: USD export KUBECONFIG=<installation_directory>/auth/kubeconfig 1 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. Verify you can run oc commands successfully using the exported configuration: USD oc whoami Example output system:admin Additional resources See Accessing the web console for more details about accessing and understanding the OpenShift Container Platform web console. 10.11. Telemetry access for OpenShift Container Platform In OpenShift Container Platform 4.12, the Telemetry service, which runs by default to provide metrics about cluster health and the success of updates, requires internet access. If your cluster is connected to the internet, Telemetry runs automatically, and your cluster is registered to OpenShift Cluster Manager Hybrid Cloud Console . After you confirm that your OpenShift Cluster Manager Hybrid Cloud Console inventory is correct, either maintained automatically by Telemetry or manually by using OpenShift Cluster Manager, use subscription watch to track your OpenShift Container Platform subscriptions at the account or multi-cluster level. Additional resources See About remote health monitoring for more information about the Telemetry service 10.12. Next steps Customize your cluster . If necessary, you can opt out of remote health reporting .
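As a quick recap before moving on, the individual commands in this chapter can be strung together into one short session. The following sketch only restates commands that already appear above; the installation directory name ocp-install and the backup file name are placeholders, not values required by the installation program.

mkdir ocp-install
# customize the provided template and save it as ocp-install/install-config.yaml, then keep a copy,
# because the installation program consumes the file during cluster creation
cp ocp-install/install-config.yaml install-config.yaml.backup
./openshift-install create cluster --dir ocp-install --log-level=info
export KUBECONFIG=ocp-install/auth/kubeconfig
oc whoami    # expected output: system:admin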
|
[
"ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1",
"cat <path>/<file_name>.pub",
"cat ~/.ssh/id_ed25519.pub",
"eval \"USD(ssh-agent -s)\"",
"Agent pid 31874",
"ssh-add <path>/<file_name> 1",
"Identity added: /home/<you>/<path>/<file_name> (<computer_name>)",
"tar -xvf openshift-install-linux.tar.gz",
"mkdir <installation_directory>",
"{ \"auths\":{ \"cloud.openshift.com\":{ \"auth\":\"b3Blb=\", \"email\":\"[email protected]\" }, \"quay.io\":{ \"auth\":\"b3Blb=\", \"email\":\"[email protected]\" } } }",
"networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23",
"networking: serviceNetwork: - 172.30.0.0/16",
"networking: machineNetwork: - cidr: 10.0.0.0/16",
"compute: - architecture: amd64 hyperthreading: Enabled name: worker platform: gcp: type: custom-6-20480 replicas: 2 controlPlane: architecture: amd64 hyperthreading: Enabled name: master platform: gcp: type: custom-6-20480 replicas: 3",
"apiVersion: v1 baseDomain: example.com 1 controlPlane: 2 3 hyperthreading: Enabled 4 name: master platform: gcp: type: n2-standard-4 zones: - us-central1-a - us-central1-c osDisk: diskType: pd-ssd diskSizeGB: 1024 encryptionKey: 5 kmsKey: name: worker-key keyRing: test-machine-keys location: global projectID: project-id tags: 6 - control-plane-tag1 - control-plane-tag2 osImage: 7 project: example-project-name name: example-image-name replicas: 3 compute: 8 9 - hyperthreading: Enabled 10 name: worker platform: gcp: type: n2-standard-4 zones: - us-central1-a - us-central1-c osDisk: diskType: pd-standard diskSizeGB: 128 encryptionKey: 11 kmsKey: name: worker-key keyRing: test-machine-keys location: global projectID: project-id tags: 12 - compute-tag1 - compute-tag2 osImage: 13 project: example-project-name name: example-image-name replicas: 3 metadata: name: test-cluster 14 networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 10.0.0.0/16 networkType: OVNKubernetes 15 serviceNetwork: - 172.30.0.0/16 platform: gcp: projectID: openshift-production 16 region: us-central1 17 defaultMachinePlatform: tags: 18 - global-tag1 - global-tag2 osImage: 19 project: example-project-name name: example-image-name network: existing_vpc 20 controlPlaneSubnet: control_plane_subnet 21 computeSubnet: compute_subnet 22 pullSecret: '{\"auths\": ...}' 23 fips: false 24 sshKey: ssh-ed25519 AAAA... 25 publish: Internal 26",
"./openshift-install create manifests --dir <installation_directory> 1",
"touch <installation_directory>/manifests/cluster-ingress-default-ingresscontroller.yaml 1",
"ls <installation_directory>/manifests/cluster-ingress-default-ingresscontroller.yaml",
"cluster-ingress-default-ingresscontroller.yaml",
"apiVersion: operator.openshift.io/v1 kind: IngressController metadata: name: default namespace: openshift-ingress-operator spec: endpointPublishingStrategy: loadBalancer: providerParameters: gcp: clientAccess: Global 1 type: GCP scope: Internal 2 type: LoadBalancerService",
"apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5",
"./openshift-install wait-for install-complete --log-level debug",
"./openshift-install create cluster --dir <installation_directory> \\ 1 --log-level=info 2",
"INFO Install complete! INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: \"kubeadmin\", and password: \"password\" INFO Time elapsed: 36m22s",
"tar xvf <file>",
"echo USDPATH",
"oc <command>",
"C:\\> path",
"C:\\> oc <command>",
"echo USDPATH",
"oc <command>",
"export KUBECONFIG=<installation_directory>/auth/kubeconfig 1",
"oc whoami",
"system:admin"
] |
https://docs.redhat.com/en/documentation/openshift_container_platform/4.12/html/installing_on_gcp/installing-gcp-private
|
6.7 Release Notes
|
6.7 Release Notes Red Hat Enterprise Linux 6 Release Notes for Red Hat Enterprise Linux 6.7 Edition 7 Red Hat Customer Content Services
| null |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.7_release_notes/index
|
Chapter 2. Bulk importing GitHub repositories
|
Chapter 2. Bulk importing GitHub repositories Important These features are for Technology Preview only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs), might not be functionally complete, and Red Hat does not recommend using them for production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information on Red Hat Technology Preview features, see Technology Preview Features Scope . Red Hat Developer Hub can automate the onboarding of GitHub repositories and track their import status. 2.1. Enabling and giving access to the Bulk Import feature You can enable the Bulk Import feature for users and give them the necessary permissions to access it. Prerequisites You have configured GitHub integration . Procedure The Bulk Import plugins are installed but disabled by default. To enable the ./dynamic-plugins/dist/janus-idp-backstage-plugin-bulk-import-backend-dynamic and ./dynamic-plugins/dist/janus-idp-backstage-plugin-bulk-import plugins, edit your dynamic-plugins.yaml with the following content: dynamic-plugins.yaml fragment plugins: - package: ./dynamic-plugins/dist/janus-idp-backstage-plugin-bulk-import-backend-dynamic disabled: false - package: ./dynamic-plugins/dist/janus-idp-backstage-plugin-bulk-import disabled: false See Installing and viewing dynamic plugins . Configure the required bulk.import RBAC permission for the users who are not administrators as follows: rbac-policy.csv fragment p, role:default/bulk-import, bulk.import, use, allow g, user:default/ <your_user> , role:default/bulk-import Note that only Developer Hub administrators or users with the bulk.import permission can use the Bulk Import feature. See Permission policies in Red Hat Developer Hub . Verification The sidebar displays a Bulk Import option. The Bulk Import page shows a list of Added Repositories . 2.2. Importing multiple GitHub repositories In Red Hat Developer Hub, you can select your GitHub repositories and automate their onboarding to the Developer Hub catalog. Prerequisites You have enabled the Bulk Import feature and given access to it . Procedure Click Bulk Import in the left sidebar. Click the Add button in the top-right corner to see the list of all repositories accessible from the configured GitHub integrations. From the Repositories view, you can select any repository, or search for any accessible repositories. For each repository selected, a catalog-info.yaml is generated. From the Organizations view, you can select any organization by clicking Select in the third column. This option allows you to select one or more repositories from the selected organization. Click Preview file to view or edit the details of the pull request for each repository. Review the pull request description and the catalog-info.yaml file content. Optional: when the repository has a .github/CODEOWNERS file, you can select the Use CODEOWNERS file as Entity Owner checkbox to use it, rather than having the catalog-info.yaml contain a specific entity owner. Click Save . Click Create pull requests .
At this point, a set of dry-run checks runs against the selected repositories to ensure they meet the requirements for import, such as: Verifying that there is no entity in the Developer Hub catalog with the name specified in the repository catalog-info.yaml Verifying that the repository is not empty Verifying that the repository contains a .github/CODEOWNERS file if the Use CODEOWNERS file as Entity Owner checkbox is selected for that repository If any errors occur, the pull requests are not created, and you see a Failed to create PR error message detailing the issues. To view more details about the reasons, click Edit . If there are no errors, the pull requests are created, and you are redirected to the list of added repositories. Review and merge each pull request that creates a catalog-info.yml file. Verification The Added repositories list displays the repositories you imported, each with an appropriate status: either Waiting for approval or Added . For each Waiting for approval import job listed, there is a corresponding pull request adding the catalog-info.yaml file in the corresponding repository. 2.3. Managing the added repositories You can oversee and manage the repositories that are imported to the Developer Hub. Prerequisites You have imported GitHub repositories . Procedure Click Bulk Import in the left sidebar to display all the current repositories that are being tracked as Import jobs, along with their status. Added The repository is added to the Developer Hub catalog after the import pull request is merged or if the repository already contained a catalog-info.yaml file during the bulk import. Note that it may take a few minutes for the entities to be available in the catalog. Waiting for approval There is an open pull request adding a catalog-info.yaml file to the repository. You can: Click the pencil icon on the right to see details about the pull request or edit the pull request content right from Developer Hub. Delete the Import job, this action closes the import PR as well. To transition the Import job to the Added state, merge the import pull request from the Git repository. Empty Developer Hub is unable to determine the import job status because the repository is imported from other sources but does not have a catalog-info.yaml file and lacks any import pull request adding it. Note After an import pull request is merged, the import status is marked as Added in the list of Added Repositories, but it might take a few seconds for the corresponding entities to appear in the Developer Hub Catalog. A location added through other sources (like statically in an app-config.yaml file, dynamically when enabling GitHub discovery , or registered manually using the "Register an existing component" page) might show up in the Bulk Import list of Added Repositories if the following conditions are met: The target repository is accessible from the configured GitHub integrations. The location URL points to a catalog-info.yaml file at the root of the repository default branch. 2.4. Understanding the Bulk Import audit Logs The Bulk Import backend plugin adds the following events to the Developer Hub audit logs. See Audit Logs in Red Hat Developer Hub for more information on how to configure and view audit logs. Bulk Import Events : BulkImportUnknownEndpoint Tracks requests to unknown endpoints. BulkImportPing Tracks GET requests to the /ping endpoint, which allows us to make sure the bulk import backend is up and running. 
BulkImportFindAllOrganizations Tracks GET requests to the /organizations endpoint, which returns the list of organizations accessible from all configured GitHub Integrations. BulkImportFindRepositoriesByOrganization Tracks GET requests to the /organizations/:orgName/repositories endpoint, which returns the list of repositories for the specified organization (accessible from any of the configured GitHub Integrations). BulkImportFindAllRepositories Tracks GET requests to the /repositories endpoint, which returns the list of repositories accessible from all configured GitHub Integrations. BulkImportFindAllImports Tracks GET requests to the /imports endpoint, which returns the list of existing import jobs along with their statuses. BulkImportCreateImportJobs Tracks POST requests to the /imports endpoint, which allows to submit requests to bulk-import one or many repositories into the Developer Hub catalog, by eventually creating import pull requests in the target repositories. BulkImportFindImportStatusByRepo Tracks GET requests to the /import/by-repo endpoint, which fetches details about the import job for the specified repository. BulkImportDeleteImportByRepo Tracks DELETE requests to the /import/by-repo endpoint, which deletes any existing import job for the specified repository, by closing any open import pull request that could have been created. Example bulk import audit logs { "actor": { "actorId": "user:default/myuser", "hostname": "localhost", "ip": "::1", "userAgent": "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/128.0.0.0 Safari/537.36" }, "eventName": "BulkImportFindAllOrganizations", "isAuditLog": true, "level": "info", "message": "'get /organizations' endpoint hit by user:default/myuser", "meta": {}, "plugin": "bulk-import", "request": { "body": {}, "method": "GET", "params": {}, "query": { "pagePerIntegration": "1", "sizePerIntegration": "5" }, "url": "/api/bulk-import/organizations?pagePerIntegration=1&sizePerIntegration=5" }, "response": { "status": 200 }, "service": "backstage", "stage": "completion", "status": "succeeded", "timestamp": "2024-08-26 16:41:02" }
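The REST endpoints listed above can also be called directly, for example when scripting the onboarding of many repositories or troubleshooting a stuck import job. The following curl sketch is an illustration only: the Developer Hub URL and the token are hypothetical placeholders, the /api/bulk-import prefix is taken from the audit log example above, and an identity with the bulk.import permission is assumed.

RHDH_URL=https://developer-hub.example.com     # hypothetical route to your Developer Hub instance
TOKEN=<api_token>                              # hypothetical token for a user or service account with the bulk.import permission
# List the organizations visible to the configured GitHub integrations
curl -s -H "Authorization: Bearer $TOKEN" "$RHDH_URL/api/bulk-import/organizations?pagePerIntegration=1&sizePerIntegration=5"
# List the existing import jobs and their statuses
curl -s -H "Authorization: Bearer $TOKEN" "$RHDH_URL/api/bulk-import/imports"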
|
[
"plugins: - package: ./dynamic-plugins/dist/janus-idp-backstage-plugin-bulk-import-backend-dynamic disabled: false - package: ./dynamic-plugins/dist/janus-idp-backstage-plugin-bulk-import disabled: false",
"p, role:default/bulk-import, bulk.import, use, allow g, user:default/ <your_user> , role:default/bulk-import",
"{ \"actor\": { \"actorId\": \"user:default/myuser\", \"hostname\": \"localhost\", \"ip\": \"::1\", \"userAgent\": \"Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/128.0.0.0 Safari/537.36\" }, \"eventName\": \"BulkImportFindAllOrganizations\", \"isAuditLog\": true, \"level\": \"info\", \"message\": \"'get /organizations' endpoint hit by user:default/myuser\", \"meta\": {}, \"plugin\": \"bulk-import\", \"request\": { \"body\": {}, \"method\": \"GET\", \"params\": {}, \"query\": { \"pagePerIntegration\": \"1\", \"sizePerIntegration\": \"5\" }, \"url\": \"/api/bulk-import/organizations?pagePerIntegration=1&sizePerIntegration=5\" }, \"response\": { \"status\": 200 }, \"service\": \"backstage\", \"stage\": \"completion\", \"status\": \"succeeded\", \"timestamp\": \"2024-08-26 16:41:02\" }"
] |
https://docs.redhat.com/en/documentation/red_hat_developer_hub/1.3/html/getting_started_with_red_hat_developer_hub/bulk-importing-github-repositories
|
Chapter 1. Overview
|
Chapter 1. Overview 1.1. Major changes in RHEL 9.5 Security With the new sudo RHEL system role , you can consistently manage sudo configuration at scale across your RHEL systems. The OpenSSL TLS toolkit is upgraded to version 3.2.2. OpenSSL now supports certificate compression extension (RFC 8879) and Brainpool curves have been added to the TLS 1.3 protocol (RFC 8734). The ca-certificates program now provides trusted CA roots in the OpenSSL directory format. The crypto-policies packages have been updated to extend its control to algorithm selection in Java. The SELinux policy now provides a boolean that allows QEMU Guest Agent to execute confined commands. The NSS cryptographic toolkit packages have been rebased to upstream version 3.101. See New features - Security for more information. Dynamic programming languages, web and database servers Later versions of the following Application Streams are now available: Apache HTTP Server 2.4.62 Node.js 22 See New features - Dynamic programming languages, web and database servers for more information. Compilers and development tools Updated system toolchain The following system toolchain components have been updated: GCC 11.5 Annobin 12.70 Updated performance tools and debuggers The following performance tools and debuggers have been updated: GDB 14.2 Valgrind 3.23.0 SystemTap 5.1 elfutils 0.191 libabigail 2.5 Updated performance monitoring tools The following performance monitoring tools have been updated: PCP 6.2.2 Grafana 10.2.6 Updated compiler toolsets The following compiler toolsets have been updated: GCC Toolset 14 (new) LLVM Toolset 18.1.8 Rust Toolset 1.79.0 Go Toolset 1.22 See New features - Compilers and development tools for more information. The web console With the new File browser provided by the cockpit-files package, you can manage files and directories in the RHEL web console. See New features - The web console for more information. RHEL in cloud environments You can now use the OpenTelemetry framework to collect telemetry data, such as logs, metrics, and traces, from RHEL cloud instances, and to send the data to external analytics services, such as AWS CloudWatch. See New features - RHEL in cloud environments for more information. 1.2. In-place upgrade In-place upgrade from RHEL 8 to RHEL 9 The supported in-place upgrade paths currently are: From RHEL 8.10 to RHEL 9.5 on the following architectures: 64-bit Intel and AMD IBM POWER 9 (little endian) and later IBM Z architectures, excluding z13 From RHEL 8.8 to RHEL 9.2, and RHEL 8.10 to RHEL 9.4 on the following architectures: 64-bit Intel, AMD, and ARM IBM POWER 9 (little endian) and later IBM Z architectures, excluding z13 From RHEL 8.6 to RHEL 9.0 and RHEL 8.8 to RHEL 9.2 on systems with SAP HANA For more information, see Supported in-place upgrade paths for Red Hat Enterprise Linux . For instructions on performing an in-place upgrade, see Upgrading from RHEL 8 to RHEL 9 . If you are upgrading to RHEL 9.2 with SAP HANA, ensure that the system is certified for SAP before the upgrade. For instructions on performing an in-place upgrade on systems with SAP environments, see How to in-place upgrade SAP environments from RHEL 8 to RHEL 9 . Notable enhancements include: Properly close file descriptors for executed shell commands to prevent the common Too many opened files error. Introduce in-place upgrade for systems with the Satellite Server version 6.16. Target the GA channel repositories by default unless a different channel is specified by using the --channel leapp option. 
Update the default kernel command line during the upgrade process so that kernels installed later automatically contain expected parameters. In-place upgrade from RHEL 7 to RHEL 9 It is not possible to perform an in-place upgrade directly from RHEL 7 to RHEL 9. However, you can perform an in-place upgrade from RHEL 7 to RHEL 8 and then perform a second in-place upgrade to RHEL 9. For more information, see Upgrading from RHEL 7 to RHEL 8 . 1.3. Red Hat Customer Portal Labs Red Hat Customer Portal Labs is a set of tools in a section of the Customer Portal available at https://access.redhat.com/labs/ . The applications in Red Hat Customer Portal Labs can help you improve performance, quickly troubleshoot issues, identify security problems, and quickly deploy and configure complex applications. Some of the most popular applications are: Registration Assistant Kickstart Generator Red Hat Product Certificates Red Hat CVE Checker Kernel Oops Analyzer Red Hat Code Browser VNC Configurator Red Hat OpenShift Container Platform Update Graph Red Hat Satellite Upgrade Helper JVM Options Configuration Tool Load Balancer Configuration Tool Red Hat OpenShift Data Foundation Supportability and Interoperability Checker Ansible Automation Platform Upgrade Assistant Ceph Placement Groups (PGs) per Pool Calculator Yum Repository Configuration Helper Red Hat Out of Memory Analyzer 1.4. Additional resources Capabilities and limits of Red Hat Enterprise Linux 9 as compared to other versions of the system are available in the Knowledgebase article Red Hat Enterprise Linux technology capabilities and limits . Information regarding the Red Hat Enterprise Linux life cycle is provided in the Red Hat Enterprise Linux Life Cycle document. The Package manifest document provides a package listing for RHEL 9, including licenses and application compatibility levels. Application compatibility levels are explained in the Red Hat Enterprise Linux 9: Application Compatibility Guide document. Major differences between RHEL 8 and RHEL 9 , including removed functionality, are documented in Considerations in adopting RHEL 9 . Instructions on how to perform an in-place upgrade from RHEL 8 to RHEL 9 are provided by the document Upgrading from RHEL 8 to RHEL 9 . The Red Hat Insights service, which enables you to proactively identify, examine, and resolve known technical issues, is available with all RHEL subscriptions. For instructions on how to install the Red Hat Insights client and register your system to the service, see the Red Hat Insights page. Note Release notes include a reference to their tracking ticket. If the ticket is not public, the reference is not linked.
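To illustrate the in-place upgrade workflow referenced in Section 1.2, the following is a minimal sketch of the Leapp commands that are typically involved; run them on the registered RHEL 8 system, and note that repository selection and any --channel value depend on your subscription and target release:
# Generate the pre-upgrade report and resolve any reported inhibitors
leapp preupgrade
# Start the upgrade; add the --channel option if you need repositories other than the default GA channel
leapp upgrade
# Reboot into the upgrade initramfs to complete the process
reboot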
| null |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/9/html/9.5_release_notes/overview
|
Chapter 3. Customizing the boot menu
|
Chapter 3. Customizing the boot menu This section provides information about what the Boot menu customization is, and how to customize it. Prerequisites: For information about downloading and extracting Boot images, see Extracting Red Hat Enterprise Linux boot images The Boot menu customization involves the following high-level tasks: Complete the prerequisites. Customize the Boot menu. Create a custom Boot image. 3.1. Customizing the boot menu The Boot menu is the menu which appears after you boot your system using an installation image. Normally, this menu allows you to choose between options such as Install Red Hat Enterprise Linux , Boot from local drive or Rescue an installed system . To customize the Boot menu, you can: Customize the default options. Add more options. Change the visual style (color and background). An installation media consists of ISOLINUX and GRUB2 boot loaders. The ISOLINUX boot loader is used on systems with BIOS firmware, and the GRUB2 boot loader is used on systems with UEFI firmware. Both the boot loaders are present on all Red Hat images for AMD64 and Intel 64 systems. Customizing the boot menu options can especially be useful with Kickstart. Kickstart files must be provided to the installer before the installation begins. Normally, this is done by manually editing one of the existing boot options to add the inst.ks= boot option. You can add this option to one of the pre-configured entries, if you edit boot loader configuration files on the media. 3.2. Systems with bios firmware The ISOLINUX boot loader is used on systems with BIOS firmware. Figure 3.1. ISOLINUX Boot Menu The isolinux/isolinux.cfg configuration file on the boot media contains directives for setting the color scheme and the menu structure (entries and submenus). In the configuration file, the default menu entry for Red Hat Enterprise Linux, Test this media & Install Red Hat Enterprise Linux 9 , is defined in the following block: Where: menu label - determines how the entry will be named in the menu. The ^ character determines its keyboard shortcut (the m key). menu default - provides a default selection, even though it is not the first option in the list. kernel - loads the installer kernel. In most cases it should not be changed. append - contains additional kernel options. The initrd= and inst.stage2 options are mandatory; you can add others. For information about the options that are applicable to Anaconda refer to Types of boot options . One of the notable options is inst.ks= , which allows you to specify a location of a Kickstart file. You can place a Kickstart file on the boot ISO image and use the inst.ks= option to specify its location; for example, you can place a kickstart.ks file into the image's root directory and use inst.ks=hd:LABEL=RHEL-9-BaseOS-x86_64:/kickstart.ks . You can also use dracut options which are listed on the dracut.cmdline(7) man page on your system. Important When using a disk label to refer to a certain drive (as seen in the inst.stage2=hd:LABEL=RHEL-9-BaseOS-x86_64 option above), replace all spaces with \x20 . Other important options which are not included in the menu entry definition are: timeout - determines the time for which the boot menu is displayed before the default menu entry is automatically used. The default value is 600 , which means the menu is displayed for 60 seconds. Setting this value to 0 disables the timeout option. Note Setting the timeout to a low value such as 1 is useful when performing a headless installation. 
This helps to avoid the default timeout to finish. menu begin and menu end - determines a start and end of a submenu block, allowing you to add additional options such as troubleshooting and grouping them in a submenu. A simple submenu with two options (one to continue and one to go back to the main menu) looks similar to the following: The submenu entry definitions are similar to normal menu entries, but grouped between menu begin and menu end statements. The menu exit line in the second option exits the submenu and returns to the main menu. menu background - the menu background can either be a solid color (see menu color below), or an image in a PNG, JPEG or LSS16 format. When using an image, make sure that its dimensions correspond to the resolution set using the set resolution statement. Default dimensions are 640x480. menu color - determines the color of a menu element. The full format is: Most important parts of this command are: element - determines which element the color will apply to. foreground and background - determine the actual colors. The colors are described using an # AARRGGBB notation in hexadecimal format determines opacity: 00 for fully transparent. ff for fully opaque. menu help textfile - creates a menu entry which, when selected, displays a help text file. Additional resources For a complete list of ISOLINUX configuration file options, see the Syslinux Wiki . 3.3. Systems with uefi firmware The GRUB2 boot loader is used on systems with UEFI firmware. The EFI/BOOT/grub.cfg configuration file on the boot media contains a list of preconfigured menu entries and other directives which controls the appearance and the Boot menu functionality. In the configuration file, the default menu entry for Red Hat Enterprise Linux ( Test this media & install Red Hat Enterprise Linux 9 ) is defined in the following block: Where: menuentry - Defines the title of the entry. It is specified in single or double quotes ( ' or " ). You can use the --class option to group menu entries into different classes , which can then be styled differently using GRUB2 themes. Note As shown in the above example, you must enclose each menu entry definition in curly braces ( {} ). linuxefi - Defines the kernel that boots ( /images/pxeboot/vmlinuz in the above example) and the other additional options, if any. You can customize these options to change the behavior of the boot entry. For details about the options that are applicable to Anaconda , see Kickstart boot options . One of the notable options is inst.ks= , which allows you to specify a location of a Kickstart file. You can place a Kickstart file on the boot ISO image and use the inst.ks= option to specify its location; for example, you can place a kickstart.ks file into the image's root directory and use inst.ks=hd:LABEL=RHEL-9-BaseOS-x86_64:/kickstart.ks . You can also use dracut options which are listed on the dracut.cmdline(7) man page on your system. Important When using a disk label to refer to a certain drive (as seen in the inst.stage2=hd:LABEL=RHEL-9-BaseOS-x86_64 option above), replace all spaces with \x20 . initrdefi - location of the initial RAM disk (initrd) image to be loaded. Other options used in the grub.cfg configuration file are: set timeout - determines how long is the boot menu displayed before the default menu entry is automatically used. The default value is 60 , which means the menu is displayed for 60 seconds. Setting this value to -1 disables the timeout completely. 
Note Setting the timeout to 0 is useful when performing a headless installation, because this setting immediately activates the default boot entry. submenu - A submenu block allows you to create a sub-menu and group some entries under it, instead of displaying them in the main menu. The Troubleshooting submenu in the default configuration contains entries for rescuing an existing system. The title of the entry is in single or double quotes ( ' or " ). The submenu block contains one or more menuentry definitions as described above, and the entire block is enclosed in curly braces ( {} ). For example: set default - Determines the default entry. The entry numbers start from 0 . If you want to make the third entry the default one, use set default=2 and so on. theme - determines the directory which contains GRUB2 theme files. You can use the themes to customize visual aspects of the boot loader - background, fonts, and colors of specific elements. Additional resources For additional information about customizing the boot menu, see GNU GRUB Manual 2.00 . For more general information about GRUB2 , see Managing, monitoring and updating the kernel .
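For example, building on the inst.ks= usage and the default UEFI entry described in Section 3.3, a minimal sketch of a modified EFI/BOOT/grub.cfg menu entry that loads a Kickstart file from the image's root directory could look as follows; the entry title and the kickstart.ks file name are illustrative:
menuentry 'Install Red Hat Enterprise Linux 9 with Kickstart' --class fedora --class gnu-linux --class gnu --class os {
    linuxefi /images/pxeboot/vmlinuz inst.stage2=hd:LABEL=RHEL-9-BaseOS-x86_64 inst.ks=hd:LABEL=RHEL-9-BaseOS-x86_64:/kickstart.ks quiet
    initrdefi /images/pxeboot/initrd.img
}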
|
[
"label check menu label Test this ^media & install Red Hat Enterprise Linux 9. menu default kernel vmlinuz append initrd=initrd.img inst.stage2=hd:LABEL=RHEL-9-BaseOS-x86_64 rd.live.check quiet",
"menu begin ^Troubleshooting menu title Troubleshooting label rescue menu label ^Rescue a Red Hat Enterprise Linux system kernel vmlinuz append initrd=initrd.img inst.stage2=hd:LABEL=RHEL-9-BaseOS-x86_64 rescue quiet menu separator label returntomain menu label Return to ^main menu menu exit menu end",
"menu color element ansi foreground background shadow",
"menuentry 'Test this media & install Red Hat Enterprise Linux 9' --class fedora --class gnu-linux --class gnu --class os { linuxefi /images/pxeboot/vmlinuz inst.stage2=hd:LABEL=RHEL-9-BaseOS-x86_64 rd.live.check quiet initrdefi /images/pxeboot/initrd.img }",
"submenu 'Submenu title' { menuentry 'Submenu option 1' { linuxefi /images/vmlinuz inst.stage2=hd:LABEL=RHEL-9-BaseOS-x86_64 xdriver=vesa nomodeset quiet initrdefi /images/pxeboot/initrd.img } menuentry 'Submenu option 2' { linuxefi /images/vmlinuz inst.stage2=hd:LABEL=RHEL-9-BaseOS-x86_64 rescue quiet initrdefi /images/initrd.img } }"
] |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/9/html/customizing_anaconda/customizing-the-boot-menu_customizing-anaconda
|
Chapter 42. Google Pubsub
|
Chapter 42. Google Pubsub Since Camel 2.19 Both producer and consumer are supported. The Google Pubsub component provides access to the Cloud Pub/Sub Infrastructure via the Google Cloud Java Client for Google Cloud Pub/Sub . 42.1. Dependencies When using google-pubsub with Red Hat build of Camel Spring Boot make sure to use the following Maven dependency to have support for auto configuration: <dependency> <groupId>org.apache.camel.springboot</groupId> <artifactId>camel-google-pubsub-starter</artifactId> </dependency> 42.2. URI Format The Google Pubsub Component uses the following URI format: Destination Name can be either a topic or a subscription name. 42.3. Configuring Options Camel components are configured on two separate levels: component level endpoint level 42.3.1. Configuring Component Options At the component level, you set general and shared configurations that are, then, inherited by the endpoints. It is the highest configuration level. For example, a component may have security settings, credentials for authentication, urls for network connection and so forth. Some components only have a few options, and others may have many. Because components typically have pre-configured defaults that are commonly used, then you may often only need to configure a few options on a component; or none at all. You can configure components using: the Component DSL . in a configuration file (application.properties, *.yaml files, etc). directly in the Java code. 42.3.2. Configuring Endpoint Options You usually spend more time setting up endpoints because they have many options. These options help you customize what you want the endpoint to do. The options are also categorized into whether the endpoint is used as a consumer (from), as a producer (to), or both. Configuring endpoints is most often done directly in the endpoint URI as path and query parameters. You can also use the Endpoint DSL and DataFormat DSL as a type safe way of configuring endpoints and data formats in Java. A good practice when configuring options is to use Property Placeholders . Property placeholders provide a few benefits: They help prevent using hardcoded urls, port numbers, sensitive information, and other settings. They allow externalizing the configuration from the code. They help the code to become more flexible and reusable. The following two sections list all the options, firstly for the component followed by the endpoint. 42.4. Component Options The Google Pubsub component supports 10 options, which are listed below. Name Description Default Type authenticate (common) Use Credentials when interacting with PubSub service (no authentication is required when using emulator). true boolean endpoint (common) Endpoint to use with local Pub/Sub emulator. String serviceAccountKey (common) The Service account key that can be used as credentials for the PubSub publisher/subscriber. It can be loaded by default from classpath, but you can prefix with classpath:, file:, or http: to load the resource from different systems. String bridgeErrorHandler (consumer) Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. 
false boolean synchronousPullRetryableCodes (consumer) Comma-separated list of additional retryable error codes for synchronous pull. By default the PubSub client library retries ABORTED, UNAVAILABLE, UNKNOWN. String lazyStartProducer (producer) Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false boolean publisherCacheSize (producer) Maximum number of producers to cache. This could be increased if you have producers for lots of different topics. int publisherCacheTimeout (producer) How many milliseconds should each producer stay alive in the cache. int autowiredEnabled (advanced) Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true boolean publisherTerminationTimeout (advanced) How many milliseconds should a producer be allowed to terminate. int 42.5. Endpoint Options The Google Pubsub endpoint is configured using URI syntax: with the following path and query parameters: 42.5.1. Path Parameters (2 parameters) Name Description Default Type projectId (common) Required The Google Cloud PubSub Project Id. String destinationName (common) Required The Destination Name. For the consumer this will be the subscription name, while for the producer this will be the topic name. String 42.5.2. Query Parameters (15 parameters) Name Description Default Type authenticate (common) Use Credentials when interacting with PubSub service (no authentication is required when using emulator). true boolean loggerId (common) Logger ID to use when a match to the parent route required. String serviceAccountKey (common) The Service account key that can be used as credentials for the PubSub publisher/subscriber. It can be loaded by default from classpath, but you can prefix with classpath:, file:, or http: to load the resource from different systems. String ackMode (consumer) AUTO = exchange gets ack'ed/nack'ed on completion. NONE = downstream process has to ack/nack explicitly. Enum values: AUTO NONE AUTO AckMode concurrentConsumers (consumer) The number of parallel streams consuming from the subscription. 1 Integer maxAckExtensionPeriod (consumer) Set the maximum period a message ack deadline will be extended. Value in seconds. 3600 int maxMessagesPerPoll (consumer) The max number of messages to receive from the server in a single API call. 1 Integer synchronousPull (consumer) Synchronously pull batches of messages. false boolean bridgeErrorHandler (consumer (advanced)) Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. 
By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false boolean exceptionHandler (consumer (advanced)) To let the consumer use a custom ExceptionHandler. Notice if the option bridgeErrorHandler is enabled then this option is not in use. By default the consumer will deal with exceptions, that will be logged at WARN or ERROR level and ignored. ExceptionHandler exchangePattern (consumer (advanced)) Sets the exchange pattern when the consumer creates an exchange. Enum values: InOnly InOut InOptionalOut ExchangePattern lazyStartProducer (producer (advanced)) Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false boolean messageOrderingEnabled (producer (advanced)) Should message ordering be enabled. false boolean pubsubEndpoint (producer (advanced)) Pub/Sub endpoint to use. Required when using message ordering, and ensures that messages are received in order even when multiple publishers are used. String serializer (producer (advanced)) Autowired A custom GooglePubsubSerializer to use for serializing message payloads in the producer. GooglePubsubSerializer 42.6. Message Headers The Google Pubsub component supports 5 message header(s), which are listed below: Name Description Default Type CamelGooglePubsubMessageId (common) Constant: MESSAGE_ID The ID of the message, assigned by the server when the message is published. String CamelGooglePubsubMsgAckId (consumer) Constant: ACK_ID The ID used to acknowledge the received message. String CamelGooglePubsubPublishTime (consumer) Constant: PUBLISH_TIME The time at which the message was published. Timestamp CamelGooglePubsubAttributes (common) Constant: ATTRIBUTES The attributes of the message. Map CamelGooglePubsubOrderingKey (producer) Constant: ORDERING_KEY If non-empty, identifies related messages for which publish order should be respected. String 42.7. Producer Endpoints Producer endpoints can accept and deliver to PubSub individual and grouped exchanges alike. Grouped exchanges have Exchange.GROUPED_EXCHANGE property set. Google PubSub expects the payload to be byte[] array, Producer endpoints will send: String body as byte[] encoded as UTF-8 byte[] body as is Everything else will be serialised into byte[] array A Map set as message header GooglePubsubConstants.ATTRIBUTES will be sent as PubSub attributes. Google PubSub supports ordered message delivery. To enable this set set the options messageOrderingEnabled to true, and the pubsubEndpoint to a GCP region. When producing messages set the message header GooglePubsubConstants.ORDERING_KEY . This will be set as the PubSub orderingKey for the message. More information in Ordering messages . Once exchange has been delivered to PubSub the PubSub Message ID will be assigned to the header GooglePubsubConstants.MESSAGE_ID . 42.8. Consumer Endpoints Google PubSub will redeliver the message if it has not been acknowledged within the time period set as a configuration option on the subscription. 
The component will acknowledge the message once exchange processing has been completed. If the route throws an exception, the exchange is marked as failed and the component will NACK the message - it will be redelivered immediately. To ack/nack the message the component uses Acknowledgement ID stored as header GooglePubsubConstants.ACK_ID . If the header is removed or tampered with, the ack will fail and the message will be redelivered again after the ack deadline. 42.9. Message Body The consumer endpoint returns the content of the message as byte[] - exactly as the underlying system sends it. It is up to the route to convert/unmarshall the contents. 42.10. Authentication Configuration By default this component acquires credentials using GoogleCredentials.getApplicationDefault() . This behavior can be disabled by setting the authenticate option to false , in which case requests to Google API will be made without authentication details. This is only desirable when developing against an emulator. This behavior can be altered by supplying a path to a service account key file. 42.11. Rollback and Redelivery The rollback for Google PubSub relies on the idea of the Acknowledgement Deadline - the time period where Google PubSub expects to receive the acknowledgement. If the acknowledgement has not been received, the message is redelivered. Google provides an API to extend the deadline for a message. More information in Google PubSub Documentation . So, rollback is effectively a deadline extension API call with zero value - i.e. deadline is reached now and message can be redelivered to the consumer. It is possible to delay the message redelivery by setting the acknowledgement deadline explicitly for the rollback by setting the message header GooglePubsubConstants.ACK_DEADLINE to the value in seconds. 42.12. Spring Boot Auto-Configuration The component supports 11 options, which are listed below. Name Description Default Type camel.component.google-pubsub.authenticate Use Credentials when interacting with PubSub service (no authentication is required when using emulator). true Boolean camel.component.google-pubsub.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.google-pubsub.bridge-error-handler Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false Boolean camel.component.google-pubsub.enabled Whether to enable auto configuration of the google-pubsub component. This is enabled by default. Boolean camel.component.google-pubsub.endpoint Endpoint to use with local Pub/Sub emulator. String camel.component.google-pubsub.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started.
By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.google-pubsub.publisher-cache-size Maximum number of producers to cache. This could be increased if you have producers for lots of different topics. Integer camel.component.google-pubsub.publisher-cache-timeout How many milliseconds should each producer stay alive in the cache. Integer camel.component.google-pubsub.publisher-termination-timeout How many milliseconds should a producer be allowed to terminate. Integer camel.component.google-pubsub.service-account-key The Service account key that can be used as credentials for the PubSub publisher/subscriber. It can be loaded by default from classpath, but you can prefix with classpath:, file:, or http: to load the resource from different systems. String camel.component.google-pubsub.synchronous-pull-retryable-codes Comma-separated list of additional retryable error codes for synchronous pull. By default the PubSub client library retries ABORTED, UNAVAILABLE, UNKNOWN. String
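As a minimal sketch of how these options fit together in a Red Hat build of Camel Spring Boot application, the following application.properties entries and endpoint URIs combine options documented in the tables above; the project ID, key file path, topic, subscription, and regional endpoint values are placeholders:
# application.properties
camel.component.google-pubsub.service-account-key=file:/etc/camel/my-service-account.json
camel.component.google-pubsub.authenticate=true
# Producer endpoint URI: publish to a topic with message ordering enabled
google-pubsub:my-project:my-topic?messageOrderingEnabled=true&pubsubEndpoint=us-east1-pubsub.googleapis.com:443
# Consumer endpoint URI: consume from a subscription with two parallel streams
google-pubsub:my-project:my-subscription?concurrentConsumers=2&maxMessagesPerPoll=5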
|
[
"<dependency> <groupId>org.apache.camel.springboot</groupId> <artifactId>camel-google-pubsub-starter</artifactId> </dependency>",
"google-pubsub://project-id:destinationName?[options]",
"google-pubsub:projectId:destinationName"
] |
https://docs.redhat.com/en/documentation/red_hat_build_of_apache_camel/4.8/html/red_hat_build_of_apache_camel_for_spring_boot_reference/csb-camel-google-pubsub-component-starter
|
Chapter 20. Modifying user and group attributes in IdM
|
Chapter 20. Modifying user and group attributes in IdM In Identity Management (IdM), information is stored as LDAP attributes. When you create a user entry in IdM, the entry is automatically assigned certain LDAP object classes. These object classes define what attributes are available to the user entry. For more information about the default user objects classes and how they are organized, see the table below . Table 20.1. Default IdM user object classes Object classes Description ipaobject, ipasshuser IdM object classes person, organizationalperson, inetorgperson, inetuser, posixAccount Person object classes krbprincipalaux, krbticketpolicyaux Kerberos object classes mepOriginEntry Managed entries (template) object classes As an administrator, you can modify the list of user object classes as well as the format of the attributes. For example, you can specify how many characters are allowed in a user name. The way that user and group object classes and attributes are organized in IdM is called the IdM user and group schema. 20.1. The default IdM user attributes A user entry contains attributes. The values of certain attributes are set automatically, based on defaults, unless you set a specific value yourself. For other attributes, you have to set the values manually. Certain attributes, such as First name , require a value, whereas others, such as Street address , do not. As an administrator, you can configure the values generated or used by the default attributes. For more information, see the Default IdM user attributes table below. Table 20.2. Default IdM user attributes Web UI field Command-line option Required, optional, or default? User login username Required First name --first Required Last name --last Required Full name --cn Optional Display name --displayname Optional Initials --initials Default Home directory --homedir Default GECOS field --gecos Default Shell --shell Default Kerberos principal --principal Default Email address --email Optional Password --password Optional. Note that the script prompts for a new password, rather than accepting a value with the argument. User ID number --uid Default Group ID number --gidnumber Default Street address --street Optional City --city Optional State/Province --state Optional Zip code --postalcode Optional Telephone number --phone Optional Mobile telephone number --mobile Optional Pager number --pager Optional Fax number --fax Optional Organizational unit --orgunit Optional Job title --title Optional Manager --manager Optional Car license --carlicense Optional --noprivate Optional SSH Keys --sshpubkey Optional Additional attributes --addattr Optional Department Number --departmentnumber Optional Employee Number --employeenumber Optional Employee Type --employeetype Optional Preferred Language --preferredlanguage Optional You can also add any attributes available in the Default IdM user object classes , even if no Web UI or command-line argument for that attribute exists. 20.2. Considerations in changing the default user and group schema User and group accounts are created with a predefined set of LDAP object classes applied to them. While the standard IdM-specific LDAP object classes and attributes cover most deployment scenarios, you can create custom object classes with custom attributes for user and group entries. When you modify object classes, IdM provides the following validation: All of the object classes and their specified attributes must be known to the LDAP server. 
All default attributes that are configured for the entry must be supported by the configured object classes. However, the IdM schema validation has limitations. The IdM server does not check that the defined user or group object classes contain all of the required object classes for IdM entries. For example, all IdM entries require the ipaobject object class. However, if the user or group schema is changed, the server does not check if this object class is included. If the object class is accidentally deleted and you then try to add a new user, the attempt fails. Also, all object class changes are atomic, not incremental. You must define the entire list of default object classes every time a change occurs. For example, you may decide to create a custom object class to store employee information such as birthdays and employment start dates. In this scenario, you cannot simply add the custom object class to the list. Instead, you must set the entire list of current default object classes plus the new object class. If you do not include the existing default object classes when you update the configuration, the current settings are overwritten. This causes serious performance problems. Note After you modify the list of default object classes, new user and group entries will contain the custom object classes but the old entries are not modified. 20.3. Modifying user object classes in the IdM Web UI This procedure describes how you can use the IdM Web UI to modify object classes for future Identity Management (IdM) user entries. As a result, these entries will have different attributes than the current user entries do. Prerequisites You are logged in as the IdM administrator. Procedure Open the IPA Server tab. Select the Configuration subtab. Scroll to the User Options area. Keep all the object classes listed in the Default IdM user object classes table. Important If any object classes required by IdM are not included, then subsequent attempts to add a user entry will fail with object class violations. At the bottom of the users area, click Add for a new field to appear. Enter the name of the user object class you want to add. Click Save at the top of the Configuration page. 20.4. Modifying user object classes in the IdM CLI This procedure describes how you can use the Identity Management (IdM) CLI to modify user object classes for future IdM user entries. As a result, these entries will have different attributes than the current user entries do. Prerequisites You have enabled the brace expansion feature: You are logged in as the IdM administrator. Procedure Use the ipa config-mod command to modify the current schema. For example, to add top and mailRecipient object classes to the future user entries: The command adds all the ten user object classes that are native to IdM as well as the two new ones, top and mailRecipient . Important The information passed with the config-mod command overwrites the values. If any user object classes required by IdM are not included, then subsequent attempts to add a user entry will fail with object class violations. Note Alternatively, you can add a user object class by using the ipa config-mod --addattr ipauserobjectclasses= <user object class> command. In this way, you do not risk forgetting a native IdM class in the list. For example, to add the mailRecipient user object class without overwriting the current configuration, enter ipa config-mod --addattr ipauserobjectclasses=mailRecipient . 
Analogously, to remove only the mailRecipient object class, enter ipa config-mod --delattr ipauserobjectclasses=mailRecipient . 20.5. Modifying group object classes in the IdM Web UI Identity Management (IdM) has the following default group object classes: top groupofnames nestedgroup ipausergroup ipaobject This procedure describes how you can use the IdM Web UI to add additional group object classes for future Identity Management (IdM) user group entries. As a result, these entries will have different attributes than the current group entries do. Prerequisites You are logged in as the IdM administrator. Procedure Open the IPA Server tab. Select the Configuration subtab. Locate the Group Options area. Keep the default IdM group object classes. Important If any group object classes required by IdM are not included, then subsequent attempts to add a group entry will fail with object class violations. Click Add for a new field to appear. Enter the name of the group object class you want to add. Click Save at the top of the Configuration page. 20.6. Modifying group object classes in the IdM CLI Identity Management (IdM) has the following default group object classes: top groupofnames nestedgroup ipausergroup ipaobject This procedure describes how you can use the IdM CLI to add additional group object classes for future Identity Management (IdM) user group entries. As a result, these entries will have different attributes than the current group entries do. Prerequisites You have enabled the brace expansion feature: You are logged in as the IdM administrator. Procedure Use the ipa config-mod command to modify the current schema. For example, to add ipasshuser and employeegroup object classes to the future group entries: The command adds all the default group object classes as well as the two new ones, ipasshuser and employeegroup . Important If any group object classes required by IdM are not included, then subsequent attempts to add a group entry will fail with object class violations. Note Instead of the comma-separated list inside curly braces with no spaces allowed that is used in the example above, you can use the --groupobjectclasses argument repeatedly. 20.7. Default user and group attributes in IdM Identity Management (IdM) uses a template when it creates new entries. The template for users is more specific than the template for groups. IdM uses default values for several core attributes for IdM user accounts. These defaults can define actual values for user account attributes, such as the home directory location, or they can define the formats of attribute values, such as the user name length. The template also defines the object classes assigned to users. For groups, the template only defines the assigned object classes. In the IdM LDAP directory, these default definitions are all contained in a single configuration entry for the IdM server, cn=ipaconfig,cn=etc,dc=example,dc=com . You can modify the configuration of default user parameters in IdM by using the ipa config-mod command. The table below summarizes some of the key parameters, the command-line options that you can use with ipa config-mod to modify them, and the parameter descriptions. Table 20.3. Default user parameters Web UI field Command-line option Description Maximum user name length --maxusername Sets the maximum number of characters for user names. Default: 32. Root for home directories --homedirectory Sets the default directory for user home directories. Default: /home .
Default shell --defaultshell Sets the default shell for users. Default: /bin/sh . Default user group --defaultgroup Sets the default group for newly created accounts. Default: ipausers . Default e-mail domain --emaildomain Sets the email domain for creating addresses based on user accounts. Default: server domain. Search time limit --searchtimelimit Sets the maximum time in seconds for a search before returning results. Search size limit --searchrecordslimit Sets the maximum number of records to return in a search. User search fields --usersearch Defines searchable fields in user entries, impacting server performance if too many attributes are set. Group search fields --groupsearch Defines searchable fields in group entries. Certificate subject base Sets the base DN for creating subject DNs for client certificates during setup. Default user object classes --userobjectclasses Defines object classes for creating user accounts. Must provide a complete list as it overwrites the existing one. Default group object classes --groupobjectclasses Defines object classes for creating group accounts. Must provide a complete list. Password expiration notification --pwdexpnotify Defines the number of days before a password expires for sending a notification. Password plug-in features Sets the format of allowable passwords for users. 20.8. Viewing and modifying user and group configuration in the IdM Web UI You can view and modify the configuration of the default user and group attributes in the Identity Management (IdM) Web UI. Prerequisites You are logged in as IdM admin . Procedure Open the IPA Server tab. Select the Configuration subtab. The User Options section has multiple fields you can review and edit. For example, to change the default shell for future IdM users from /bin/sh to /bin/bash , locate the Default shell field, and replace /bin/sh with /bin/bash . In the Group Options section, you can only review and edit the Group search fields field. Click the Save button at the top of the screen. The newly saved configuration will be applied to future IdM user and group accounts. The current accounts remain unchanged. 20.9. Viewing and modifying user and group configuration in the IdM CLI You can view and modify the configuration of the current or default user and group attributes in the Identity Management (IdM) CLI. Prerequisites You have the IdM admin credentials. Procedure The ipa config-show command displays the most common attribute settings. Use the --all option for a complete list: Use the ipa config-mod command to modify an attribute. For example, to change the default shell for future IdM users from /bin/sh to /bin/bash , enter: For more ipa config-mod options, see the Default user parameters table. The new configuration will be applied to future IdM user and group accounts. The current accounts remain unchanged. 20.10. Additional resources Managing Directory Server attributes and values
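For example, a short sketch combining several of the ipa config-mod options from Table 20.3 to adjust the defaults for future user accounts; the values shown are only examples:
# Raise the maximum user name length and change the default shell and e-mail domain
ipa config-mod --maxusername 64 --defaultshell /bin/bash --emaildomain example.com
# Verify the new defaults
ipa config-show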
|
[
"set -o braceexpand",
"[bjensen@server ~]USD ipa config-mod --userobjectclasses={person,organizationalperson,inetorgperson,inetuser,posixaccount,krbprincipalaux,krbticketpolicyaux,ipaobject,ipasshuser,mepOriginEntry,top,mailRecipient}",
"set -o braceexpand",
"[bjensen@server ~]USD ipa config-mod --groupobjectclasses={top,groupofnames,nestedgroup,ipausergroup,ipaobject,ipasshuser,employeegroup}",
"[bjensen@server ~]USD ipa config-show --all dn: cn=ipaConfig,cn=etc,dc=example,dc=com Maximum username length: 32 Home directory base: /home Default shell: /bin/sh Default users group: ipausers Default e-mail domain: example.com Search time limit: 2 Search size limit: 100 User search fields: uid,givenname,sn,telephonenumber,ou,title Group search fields: cn,description Enable migration mode: FALSE Certificate Subject base: O=EXAMPLE.COM Default group objectclasses: top, groupofnames, nestedgroup, ipausergroup, ipaobject Default user objectclasses: top, person, organizationalperson, inetorgperson, inetuser, posixaccount, krbprincipalaux, krbticketpolicyaux, ipaobject, ipasshuser Password Expiration Notification (days): 4 Password plugin features: AllowNThash SELinux user map order: guest_u:s0USDxguest_u:s0USDuser_u:s0USDstaff_u:s0-s0:c0.c1023USDunconfined_u:s0-s0:c0.c1023 Default SELinux user: unconfined_u:s0-s0:c0.c1023 Default PAC types: MS-PAC, nfs:NONE cn: ipaConfig objectclass: nsContainer, top, ipaGuiConfig, ipaConfigObject",
"[bjensen@server ~]USD ipa config-mod --defaultshell \"/bin/bash\""
] |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/configuring_and_managing_identity_management/modifying-user-and-group-attributes-in-idm_configuring-and-managing-idm
|
Chapter 78. OpenTelemetryTracing schema reference
|
Chapter 78. OpenTelemetryTracing schema reference Used in: KafkaBridgeSpec , KafkaConnectSpec , KafkaMirrorMaker2Spec , KafkaMirrorMakerSpec The type property is a discriminator that distinguishes use of the OpenTelemetryTracing type from JaegerTracing . It must have the value opentelemetry for the type OpenTelemetryTracing . Property Description type Must be opentelemetry . string
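For illustration, a minimal sketch of how this type is typically selected in one of the listed resources, for example a KafkaBridge; the surrounding spec fields and the tracing property name are assumed rather than defined by this schema reference:
# Fragment of a KafkaBridge resource enabling OpenTelemetry tracing
spec:
  tracing:
    type: opentelemetry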
| null |
https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.5/html/amq_streams_api_reference/type-OpenTelemetryTracing-reference
|
Chapter 4. The ext3 File System
|
Chapter 4. The ext3 File System The ext3 file system is essentially an enhanced version of the ext2 file system. These improvements provide the following advantages: Availability After an unexpected power failure or system crash (also called an unclean system shutdown ), each mounted ext2 file system on the machine must be checked for consistency by the e2fsck program. This is a time-consuming process that can delay system boot time significantly, especially with large volumes containing a large number of files. During this time, any data on the volumes is unreachable. It is possible to run fsck -n on a live filesystem. However, it will not make any changes and may give misleading results if partially written metadata is encountered. If LVM is used in the stack, another option is to take an LVM snapshot of the filesystem and run fsck on it instead. Finally, there is the option to remount the filesystem as read only. All pending metadata updates (and writes) are then forced to the disk prior to the remount. This ensures the filesystem is in a consistent state, provided there is no corruption. It is now possible to run fsck -n . The journaling provided by the ext3 file system means that this sort of file system check is no longer necessary after an unclean system shutdown. The only time a consistency check occurs using ext3 is in certain rare hardware failure cases, such as hard drive failures. The time to recover an ext3 file system after an unclean system shutdown does not depend on the size of the file system or the number of files; rather, it depends on the size of the journal used to maintain consistency. The default journal size takes about a second to recover, depending on the speed of the hardware. Note The only journaling mode in ext3 supported by Red Hat is data=ordered (default). Data Integrity The ext3 file system prevents loss of data integrity in the event that an unclean system shutdown occurs. The ext3 file system allows you to choose the type and level of protection that your data receives. With regard to the state of the file system, ext3 volumes are configured to keep a high level of data consistency by default. Speed Despite writing some data more than once, ext3 has a higher throughput in most cases than ext2 because ext3's journaling optimizes hard drive head motion. You can choose from three journaling modes to optimize speed, but doing so means trade-offs in regards to data integrity if the system was to fail. Note The only journaling mode in ext3 supported by Red Hat is data=ordered (default). Easy Transition It is easy to migrate from ext2 to ext3 and gain the benefits of a robust journaling file system without reformatting. For more information on performing this task, see Section 4.2, "Converting to an ext3 File System" . Note Red Hat Enterprise Linux 7 provides a unified extN driver. It does this by disabling the ext2 and ext3 configurations and instead uses ext4.ko for these on-disk formats. This means that kernel messages will always refer to ext4 regardless of the ext file system used. 4.1. Creating an ext3 File System After installation, it is sometimes necessary to create a new ext3 file system. For example, if a new disk drive is added to the system, you may want to partition the drive and use the ext3 file system. Format the partition or LVM volume with the ext3 file system using the mkfs.ext3 utility: Replace block_device with the path to a block device. 
For example, /dev/sdb1 , /dev/disk/by-uuid/05e99ec8-def1-4a5e-8a9d-5945339ceb2a , or /dev/my-volgroup/my-lv . Label the file system using the e2label utility: Configuring UUID It is also possible to set a specific UUID for a file system. To specify a UUID when creating a file system, use the -U option: Replace UUID with the UUID you want to set: for example, 7cd65de3-e0be-41d9-b66d-96d749c02da7 . Replace device with the path to an ext3 file system to have the UUID added to it: for example, /dev/sda8 . To change the UUID of an existing file system, see Section 25.8.3.2, "Modifying Persistent Naming Attributes" Additional Resources The mkfs.ext3 (8) man page The e2label (8) man page
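As a brief sketch of the commands above applied to hypothetical devices, together with the ext2-to-ext3 conversion mentioned under Easy Transition, which is typically performed by adding a journal with tune2fs:
# Create an ext3 file system with a specific UUID, then label it
mkfs.ext3 -U 7cd65de3-e0be-41d9-b66d-96d749c02da7 /dev/sdb1
e2label /dev/sdb1 backup_data
# Convert an existing ext2 file system to ext3 by adding a journal (see Section 4.2)
tune2fs -j /dev/sdc1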
|
[
"mkfs.ext3 block_device",
"e2label block_device volume_label",
"mkfs.ext3 -U UUID device"
] |
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/storage_administration_guide/ch-ext3
|
Chapter 9. Redundant Array of Independent Disks (RAID)
|
Chapter 9. Redundant Array of Independent Disks (RAID) 9.1. What is RAID? The basic idea behind RAID is to combine multiple small, inexpensive disk drives into an array to accomplish performance or redundancy goals not attainable with one large and expensive drive. This array of drives appears to the computer as a single logical storage unit or drive. RAID is a method in which information is spread across several disks. RAID uses techniques such as disk striping (RAID Level 0), disk mirroring (RAID level 1), and disk striping with parity (RAID Level 5) to achieve redundancy, lower latency and/or to increase bandwidth for reading or writing to disks, and to maximize the ability to recover from hard disk crashes. The underlying concept of RAID is that data may be distributed across each drive in the array in a consistent manner. To do this, the data must first be broken into consistently-sized chunks (often 32K or 64K in size, although different sizes can be used). Each chunk is then written to a hard drive in the RAID array according to the RAID level used. When the data is to be read, the process is reversed, giving the illusion that the multiple drives in the array are actually one large drive.
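Although this chapter is conceptual, a short sketch with the mdadm utility (not covered here) shows how a striping-with-parity array using a 64K chunk size, as described above, might be created from three drives; the device names are examples:
# Create a RAID level 5 array from three partitions with a 64K chunk size
mdadm --create /dev/md0 --level=5 --raid-devices=3 --chunk=64 /dev/sdb1 /dev/sdc1 /dev/sdd1
# Check the state of the array
cat /proc/mdstat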
| null |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/system_administration_guide/Redundant_Array_of_Independent_Disks_RAID
|
Making open source more inclusive
|
Making open source more inclusive Red Hat is committed to replacing problematic language in our code, documentation, and web properties. We are beginning with these four terms: master, slave, blacklist, and whitelist. Because of the enormity of this endeavor, these changes will be implemented gradually over several upcoming releases. For more details, see our CTO Chris Wright's message .
| null |
https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.17/html/troubleshooting_openshift_data_foundation/making-open-source-more-inclusive
|
14.2. CS.cfg Files
|
14.2. CS.cfg Files The runtime properties of a Certificate System subsystem are governed by a set of configuration parameters. These parameters are stored in a file that is read by the server during startup, CS.cfg . The CS.cfg , an ASCII file, is created and populated with the appropriate configuration parameters when a subsystem is first installed. The recommended way to modify how the instance functions is to make changes through the subsystem console. The changes made in the administrative console are reflected in the configuration file. It is also possible to edit the CS.cfg configuration file directly, and in some cases this is the easiest way to manage the subsystem. 14.2.1. Locating the CS.cfg File Each instance of a Certificate System subsystem has its own configuration file, CS.cfg . The contents of the file for each subsystem instance are different depending on the way the subsystem was configured, additional settings and configuration (like adding new profiles or enabling self-tests), and the type of subsystem. The CS.cfg file is located in the configuration directory for the instance. For example: 14.2.2. Editing the Configuration File Warning Do not edit the configuration file directly without being familiar with the configuration parameters or without being sure that the changes are acceptable to the server. The Certificate System fails to start if the configuration file is modified incorrectly. Incorrect configuration can also result in data loss. To modify the CS.cfg file: Stop the subsystem instance. OR (if using nuxwdog watchdog ) The configuration file is stored in the cache when the instance is started. Any changes made to the instance through the Console are changed in the cached version of the file. When the server is stopped or restarted, the configuration file stored in the cache is written to disk. Stop the server before editing the configuration file or the changes will be overwritten by the cached version when the server is stopped. Open the /var/lib/pki/ instance_name / subsystem_type /conf directory. Open the CS.cfg file in a text editor. Edit the parameters in the file, and save the changes. Start the subsystem instance. OR (if using nuxwdog watchdog ) 14.2.3. Overview of the CS.cfg Configuration File Each subsystem instance has its own main configuration file, CS.cfg , which contains all of the settings for the instance, such as plug-ins and Java classes for configuration. The parameters and specific settings are different depending on the type of subsystem, but, in a general sense, the CS.cfg file defines these parts of the subsystem instance: Basic subsystem instance information, like its name, port assignments, instance directory, and hostname Logging Plug-ins and methods to authenticate to the instance's user directory (authorization) The security domain to which the instance belongs Subsystem certificates Other subsystems used by the subsystem instance Database types and instances used by the subsystem Settings for PKI-related tasks, like the key profiles in the TKS, the certificate profiles in the CA, and the required agents for key recovery in the KRA Many of the configuration parameters (aside from the ones for PKI tasks) are very much the same between the CA, OCSP, KRA, and TKS because they all use a Java-based console, so configuration settings which can be managed in the console have similar parameters. The CS.cfg file uses a basic parameter=value format.
In the CS.cfg file, many of the parameter blocks have descriptive comments, commented out with a pound (#) character. Comments, blank lines, unknown parameters, or misspelled parameters are ignored by the server. Note A bug in the TPS prevents it from ignoring lines which are commented out in the CS.cfg file. Rather than commenting out lines in the TPS CS.cfg file, simply delete those lines. Parameters that configure the same area of the instance tend to be grouped together into the same block. Some areas of functionality are implemented through plug-ins, such as self-tests, jobs, and authorization to access the subsystem. For those parameters, the plug-in instance has a unique identifier (since there can be multiple instances of even the same plug-in called for a subsystem), the implementation plug-in name, and the Java class. Example 14.1. Subsystem Authorization Settings Note The values for configuration parameters must be properly formatted, so they must obey two rules: The values that need to be localized must be in UTF8 characters. The CS.cfg file supports forward slashes (/) in parameter values. If a back slash (\) is required in a value, it must be escaped with a back slash, meaning that two back slashes in a row must be used. The following sections are snapshots of CS.cfg file settings and parameters. These are not exhaustive references or examples of CS.cfg file parameters. Also, the parameters available and used in each subsystem configuration file is very different, although there are similarities. 14.2.3.1. Basic Subsystem Settings Basic settings are specific to the instance itself, without directly relating to the functionality or behavior of the subsystem. This includes settings for the instance name, root directory, the user ID for the process, and port numbers. Many of the settings assigned when the instance is first installed or configured are prefaced with pkispawn . Example 14.2. Basic Instance Parameters for the CA: pkispawn file ca.cfg Important While information like the port settings is included in the CS.cfg file, it is not set in the CS.cfg . The server configuration is set in the server.xml file. The ports in CS.cfg and server.xml must match for a working RHCS instance. 14.2.3.2. Logging Settings There are several different types of logs that can be configured, depending on the type of subsystem. Each type of log has its own configuration entry in the CS.cfg file. For more information about Logging Settings, see Section 18.1, "Certificate System Log Settings" . 14.2.3.3. Authentication and Authorization Settings The CS.cfg file sets how users are identified to access a subsystem instance (authentication) and what actions are approved (authorization) for each authenticated user. A CS subsystem uses authentication plug-ins to define the method for logging into the subsystem. The following example shows an authentication instance named SharedToken that instantiates a JAVA plug-in named SharedSecret . For some authorization settings, it is possible to select an authorization method that uses an LDAP database to store user entries, in which case the database settings are configured along with the plug-in as shown below. For more information on securely configuring LDAP and an explanation of parameters, refer to Section 7.10.3, "Enabling TLS Client Authentication" . The parameters paths differ than what is shown there, but the same names and values are allowed in both places. The CA also has to have a mechanism for approving user requests. 
As with configuring authorization, this is done by identifying the appropriate authentication plug-in and configuring an instance for it: 14.2.3.4. Subsystem Certificate Settings Several of the subsystems have entries for each subsystem certificate in the configuration file. 14.2.3.5. Settings for Required Subsystems At a minimum, each subsystem depends on a CA, which means that the CA (and any other required subsystem) has to be configured in the subsystem's settings. Any connection to another subsystem is prefaced by conn. and then the subsystem type and number. 14.2.3.6. Database Settings All of the subsystems use an LDAP directory to store their information. This internal database is configured in the internaldb parameters, except for the TPS, which configures it in the tokendb parameters along with a number of other configuration settings. For further information on securely configuring LDAP and an explanation of parameters, refer to Section 7.10.3, "Enabling TLS Client Authentication" . No additional configuration is necessary outside of what is done as part of Section 7.10.3, "Enabling TLS Client Authentication" . 14.2.3.7. Enabling and Configuring a Publishing Queue Part of the enrollment process includes publishing the issued certificate to any directories or files. This, essentially, closes out the initial certificate request. However, publishing a certificate to an external network can significantly slow down the issuance process, which leaves the request open. To avoid this situation, administrators can enable a publishing queue . The publishing queue separates the publishing operation (which may involve an external LDAP directory) from the request and enrollment operations, which use a separate request queue. The request queue is updated immediately to show that the enrollment process is complete, while the publishing queue sends the information at the pace of the network traffic. The publishing queue sets a defined, limited number of threads that publish generated certificates, rather than opening a new thread for each approved certificate. The publishing queue is disabled by default. It can be enabled in the CA Console, along with enabling publishing. Note While the publishing queue is disabled by default, the queue is automatically enabled if LDAP publishing is enabled in the Console . Otherwise, the queue can be enabled manually. Figure 14.1. Enabling the Publishing Queue 14.2.3.7.1. Enabling and Configuring a Publishing Queue by editing the CS.cfg file Enabling the publishing queue by editing the CS.cfg file allows administrators to set other options for publishing, like the number of threads to use for publishing operations and the queue page size. Stop the CA server, so that you can edit the configuration files. Open the CA's CS.cfg file. Set the ca.publish.queue.enable parameter to true. If the parameter is not present, then add a line with the parameter. Set other related publishing queue parameters: ca.publish.queue.maxNumberOfThreads sets the maximum number of threads that can be opened for publishing operations. The default is 3. ca.publish.queue.priorityLevel sets the priority for publishing operations. The priority value ranges from -2 (lowest priority) to 2 (highest priority). Zero (0) is normal priority and is also the default. ca.publish.queue.pageSize sets the maximum number of requests that can be stored in the publishing queue page. The default is 40. ca.publish.queue.saveStatus sets how often the publishing queue saves its status, specified as a number of publishing operations between saves.
This allows the publishing queue to be recovered if the CA is restarted or crashes. The default is 200, but any non-zero number will recover the queue when the CA restarts. Setting this parameter to 0 disables queue recovery. Note Setting ca.publish.queue.enable to false and ca.publish.queue.maxNumberOfThreads to 0 disables both the publishing queue and using separate threads for publishing issued certificates. Restart the CA server. 14.2.3.8. Settings for PKI Tasks The CS.cfg file is used to configure the PKI tasks for every subsystem. The parameters are different for every single subsystem, without any overlap. For example, the KRA has settings for a required number of agents to recover a key. Review the CS.cfg file for each subsystem to become familiar with its PKI task settings; the comments in the file are a decent guide for learning what the different parameters are. The CA configuration file lists all of the certificate profiles and policy settings, as well as rules for generating CRLs. The TPS configures different token operations. The TKS lists profiles for deriving keys from different key types. The OCSP sets key information for different key sets. 14.2.3.9. Changing DN Attributes in CA-Issued Certificates In certificates issued by the Certificate System, DNs identify the entity that owns the certificate. In all cases, if the Certificate System is connected with a Directory Server, the format of the DNs in the certificates should match the format of the DNs in the directory. It is not necessary that the names match exactly; certificate mapping allows the subject DN in a certificate to be different from the one in the directory. In the Certificate System, the DN is based on the components, or attributes, defined in the X.509 standard. Table 14.8, "Allowed Characters for Value Types" lists the attributes supported by default. The set of attributes is extensible. Table 14.8. Allowed Characters for Value Types Attribute Value Type Object Identifier cn DirectoryString 2.5.4.3 ou DirectoryString 2.5.4.11 o DirectoryString 2.5.4.10 c PrintableString , two-character 2.5.4.6 l DirectoryString 2.5.4.7 st DirectoryString 2.5.4.8 street DirectoryString 2.5.4.9 title DirectoryString 2.5.4.12 uid DirectoryString 0.9.2342.19200300.100.1.1 mail IA5String 1.2.840.113549.1.9.1 dc IA5String 0.9.2342.19200300.100.1.2.25 serialnumber PrintableString 2.5.4.5 unstructuredname IA5String 1.2.840.113549.1.9.2 unstructuredaddress PrintableString 1.2.840.113549.1.9.8 By default, the Certificate System supports the attributes identified in Table 14.8, "Allowed Characters for Value Types" . This list of supported attributes can be extended by creating or adding new attributes. The syntax for adding additional X.500Name attributes, or components, is as follows: The value converter class converts a string to an ASN.1 value; this class must implement the netscape.security.x509.AVAValueConverter interface. The string-to-value converter class can be one of the following: netscape.security.x509.PrintableConverter converts a string to a PrintableString value. The string must have only printable characters. netscape.security.x509.IA5StringConverter converts a string to an IA5String value. The string must have only IA5String characters. netscape.security.x509.DirStrConverter converts a string to a DirectoryString . The string is expected to be in DirectoryString format according to RFC 2253. 
netscape.security.x509.GenericValueConverter converts a string character by character in the following order, from the smallest character set to the largest: PrintableString IA5String BMPString UniversalString An attribute entry looks like the following: 14.2.3.9.1. Adding New or Custom Attributes To add a new or proprietary attribute to the Certificate System schema, do the following: Stop the Certificate Manager. Open the /var/lib/pki/ cs_instance /conf/ directory. Open the configuration file, CS.cfg . Add the new attributes to the configuration file. For example, to add three proprietary attributes, MYATTR1 that is a DirectoryString , MYATTR2 that is an IA5String , and MYATTR3 that is a PrintableString , add the following lines at the end of the configuration file: Save the changes, and close the file. Restart the Certificate Manager. Reload the enrollment page and verify the changes; the new attributes should show up in the form. To verify that the new attributes are in effect, request a certificate using the manual enrollment form. Enter values for the new attributes so that you can verify that they appear in the certificate subject names. For example, enter the following values for the new attributes and look for them in the subject name: Open the agent services page, and approve the request. When the certificate is issued, check the subject name. The certificate should show the new attribute values in the subject name. 14.2.3.9.2. Changing the DER-Encoding Order It is possible to change the DER-encoding order of a DirectoryString , so that the string encoding is configurable, because different clients support different encodings. The syntax for changing the DER-encoding order of a DirectoryString is as follows: The possible encoding values are as follows: PrintableString IA5String UniversalString BMPString UTF8String For example, the DER-encoding order can be listed as follows: To change the DirectoryString encoding, do the following: Stop the Certificate Manager. Open the /var/lib/pki/ cs_instance /conf/ directory. Open the CS.cfg configuration file. Add the encoding order to the configuration file. For example, to specify two encoding values, PrintableString and UniversalString , with PrintableString first and UniversalString second, add the following line at the end of the configuration file: Save the changes, and close the file. Start the Certificate Manager. To verify that the encoding orders are in effect, enroll for a certificate using the manual enrollment form. Use John_Doe for the cn . Open the agent services page, and approve the request. When the certificate is issued, use the dumpasn1 tool to examine the encoding of the certificate. The cn component of the subject name should be encoded as a UniversalString . Create and submit a new request using John Smith for the cn . The cn component of the subject name should be encoded as a PrintableString . 14.2.3.10. Setting a CA to Use a Different Certificate to Sign CRLs A Certificate Manager uses the key pair corresponding to its CA signing certificate for signing the certificates and certificate revocation lists (CRLs) it issues. To use a different key pair to sign the CRLs that the Certificate Manager generates, a CRL signing certificate can be created. The Certificate Manager's CRL signing certificate must be signed or issued by the Certificate Manager itself. To enable a Certificate Manager to sign CRLs with a different key pair, do the following: Request a CRL signing certificate for the Certificate Manager.
Alternatively, use a tool that is capable of generating key pairs, such as the certutil tool, to generate a key pair, request a certificate for the key pair, and install the certificate in the Certificate Manager's certificate database. For more information about the certutil tool, see http://www.mozilla.org/projects/security/pki/nss/tools/ . When the certificate request has been created, submit it through the Certificate Manager end-entities page, selecting the right profile, such as the "Manual OCSP Manager Signing Certificate Enrollment" profile. The page has a URL in the following format: After the request is submitted, log into the agent services page. Check the request for required extensions. The CRL signing certificate must contain the Key Usage extension with the crlSigning bit set. Approve the request. After the CRL signing certificate is generated, install the certificate in the Certificate Manager's database through System Keys and Certificates in the console. Stop the Certificate Manager. Update the Certificate Manager's configuration to recognize the new key pair and certificate. Change to the Certificate Manager instance configuration directory. Open the CS.cfg file and add the following lines: nickname is the name assigned to the CRL signing certificate. instance_ID is the name of the Certificate Manager instance. If the installed CA is an RSA-based CA, signing_algorithm can be SHA256withRSA , SHA384withRSA , or SHA512withRSA . If the installed CA is an EC-based CA, signing_algorithm can be SHA256withEC , SHA384withEC , or SHA512withEC . token_name is the name of the token used for generating the key pair and the certificate. If the internal/software token is used, use Internal Key Storage Token as the value. For example, the entries might look like this: Save the changes, and close the file. Restart the Certificate Manager. Now the Certificate Manager is ready to use the CRL signing certificate to sign the CRLs it generates. 14.2.3.11. Configuring CRL Generation from Cache in CS.cfg The CRL cache is a simple mechanism that allows certificate revocation information to be taken from a collection of revocation information maintained in memory. For best performance, it is recommended that this feature be enabled, which is already the default behavior. The following configuration information (which is the default) is presented for information purposes or if changes are desired. Stop the CA server. Open the CA configuration directory. Edit the CS.cfg file, setting the enableCRLCache and enableCacheRecovery parameters to true: Start the CA server. 14.2.3.12. Configuring Update Intervals for CRLs in CS.cfg The following describes how to configure the CRL system to reflect the desired update behavior. CRL updates can be scheduled in two ways: one type allows for a list of explicit times, and the other uses a time interval between updates. There is also a hybrid scenario where both are enabled to account for drift. The Note entry just below represents the default out-of-the-box scenario. The default scenario is listed as follows: Deviate from this only when a more detailed and specific update schedule is desired. The rest of the section describes how that is accomplished. Configuring the settings for full and delta CRLs in the CS.cfg file involves editing the parameters described in Table 14.9.
CRL Extended Interval Parameters Parameter Description Accepted Values updateSchema Sets the ratio for how many delta CRLs are generated per full CRL An integer value enableDailyUpdates Enables and disables setting CRL updates based on set times true or false enableUpdateInterval Enables and disables setting CRL updates based on set intervals true or false dailyUpdates Sets the times the CRLs should be updated A comma-delimited list of times autoUpdateInterval Sets the interval in minutes to update the CRLs An integer value autoUpdateInterval.effectiveAtStart Allows the system to attempt to use the new value of auto update immediately instead of waiting for the currently scheduled nextUpdate time true or false nextUpdateGracePeriod Adds the time in minutes to the CRL validity period to ensure that CRLs remain valid throughout the publishing or replication period An integer value refreshInSec Sets the periodicity in seconds of the thread on the clone OCSP to check LDAP for any updates of the CRL An integer value Important The autoUpdateInterval.effectiveAtStart parameter requires a system restart in order for a new value to apply. The default value of this parameter is false; it should only be changed by users who are sure of what they are doing. Procedure 14.1. How to configure CRL update intervals in CS.cfg Stop the CA server. Change to the CA configuration directory. Edit the CS.cfg file, and add the following line to set the update interval: The default interval is 1, meaning a full CRL is generated every time a CRL is generated. The updateSchema interval can be set to any integer. Set the update frequency, either by specifying a cyclical interval or by setting times for the updates to occur: Specify set times by enabling the enableDailyUpdates parameter, and add the desired times to the dailyUpdates parameter: This field sets a daily time when the CRL should be updated. To specify multiple times, enter a comma-separated list of times, such as 01:50,04:55,06:55 . To enter a schedule for multiple days, enter a comma-separated list to set the times within the same day, and then a semicolon-separated list to identify times for different days. For example, set 01:50,04:55,06:55;02:00,05:00,17:00 to configure revocation on Day 1 of the cycle at 1:50am, 4:55am, and 6:55am and then Day 2 at 2am, 5am, and 5pm. Specify intervals by enabling the enableUpdateInterval parameter, and add the required interval in minutes to the autoUpdateInterval parameter: Set the following parameters depending on your environment: If you run a CA without an OCSP subsystem, set: If you run a CA with an OCSP subsystem, set: The ca.crl.MasterCRL.nextUpdateGracePeriod parameter defines the time in minutes, and the value must be large enough to enable the CA to propagate the new CRL to the OCSP. You must set the parameter to a non-zero value. If you additionally have OCSP clones in your environment, also set: The ocsp.store.defStore.refreshInSec parameter sets the frequency in seconds with which the clone OCSP instances are informed of CRL updates through LDAP replication updates from the master OCSP instance. See Table 14.9, "CRL Extended Interval Parameters" for details on the parameters. Restart the CA server. Note Schedule drift can occur when updating CRLs by interval. Typically, drift occurs as a result of manual updates and CA restarts.
To prevent schedule drift, set both the enableDailyUpdates and enableUpdateInterval parameters to true, and add the required values to autoUpdateInterval and dailyUpdates : Only one dailyUpdates value will be accepted when updating CRLs by interval. The interval updates will resynchronize with the dailyUpdates value every 24 hours, preventing schedule drift. 14.2.3.13. Changing the Access Control Settings for the Subsystem By default, access control rules are applied by evaluating deny rules first and then by evaluating allow rules. To change the order, change the authz.evaluateOrder parameter in the CS.cfg file. Additionally, access control rules can be evaluated from the local web.xml file (basic ACLs), or more complex ACLs can be accessed by checking the LDAP database. The authz.sourceType parameter identifies what type of authorization to use. Note Always restart the subsystem after editing the CS.cfg file to load the updated settings. 14.2.3.14. Configuring Ranges for Requests and Serial Numbers When random serial numbers are not used, administrators can specify, for cloned systems, the ranges that Certificate System will use for requests and serial numbers in the /etc/pki/ instance_name / subsystem /CS.cfg file: Note Certificate System supports BigInteger values for the ranges. 14.2.3.15. Setting Requirement for pkiconsole to use TLS Client Certificate Authentication Note pkiconsole is being deprecated. Edit the CS.cfg file of each subsystem, search for the authType parameter, and set it as follows:
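In practice this is a single line in each subsystem's CS.cfg file, for example:
authType=sslclientauth
After saving the change, restart the instance so that the updated setting is loaded; for example (the instance name below is a placeholder):
pki-server restart instance_name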
|
[
"/var/lib/pki/ instance_name / subsystem_type /conf",
"/var/lib/pki/ instance_name /ca/conf",
"pki-server stop instance_name",
"systemctl stop pki-tomcatd-nuxwdog@ instance_name .service",
"pki-server start instance_name",
"systemctl start pki-tomcatd-nuxwdog@ instance_name .service",
"#comment parameter=value",
"authz.impl._000=## authz.impl._001=## authorization manager implementations authz.impl._002=## authz.impl.BasicAclAuthz.class=com.netscape.cms.authorization.BasicAclAuthz authz.instance.BasicAclAuthz.pluginName=BasicAclAuthz",
"[DEFAULT] pki_admin_password=Secret.123 pki_client_pkcs12_password=Secret.123 pki_ds_password=Secret.123 Optionally keep client databases pki_client_database_purge=False Separated CA instance name and ports pki_instance_name=pki-ca pki_http_port=18080 pki_https_port=18443 This Separated CA instance will be its own security domain pki_security_domain_https_port=18443 Separated CA Tomcat ports pki_ajp_port=18009 pki_tomcat_server_port=18005",
"auths.impl.SharedToken.class=com.netscape.cms.authentication.SharedSecret auths.instance.SharedToken.pluginName=SharedToken auths.instance.SharedToken.dnpattern= auths.instance.SharedToken.ldap.basedn=ou=People,dc=example,dc=org auths.instance.SharedToken.ldap.ldapauth.authtype=BasicAuth auths.instance.SharedToken.ldap.ldapauth.bindDN=cn=Directory Manager auths.instance.SharedToken.ldap.ldapauth.bindPWPrompt=Rule SharedToken auths.instance.SharedToken.ldap.ldapauth.clientCertNickname= auths.instance.SharedToken.ldap.ldapconn.host=server.example.com auths.instance.SharedToken.ldap.ldapconn.port=636 auths.instance.SharedToken.ldap.ldapconn.secureConn=true auths.instance.SharedToken.ldap.ldapconn.version=3 auths.instance.SharedToken.ldap.maxConns= auths.instance.SharedToken.ldap.minConns= auths.instance.SharedToken.ldapByteAttributes= auths.instance.SharedToken.ldapStringAttributes= auths.instance.SharedToken.shrTokAttr=shrTok",
"authz.impl.DirAclAuthz.class=com.netscape.cms.authorization.DirAclAuthz authz.instance.DirAclAuthz.ldap=internaldb authz.instance.DirAclAuthz.pluginName=DirAclAuthz authz.instance.DirAclAuthz.ldap._000=## authz.instance.DirAclAuthz.ldap._001=## Internal Database authz.instance.DirAclAuthz.ldap._002=## authz.instance.DirAclAuthz.ldap.basedn=dc=server.example.com-pki-ca authz.instance.DirAclAuthz.ldap.database=server.example.com-pki-ca authz.instance.DirAclAuthz.ldap.maxConns=15 authz.instance.DirAclAuthz.ldap.minConns=3 authz.instance.DirAclAuthz.ldap.ldapauth.authtype=SslClientAuth authz.instance.DirAclAuthz.ldap.ldapauth.bindDN=cn=Directory Manager authz.instance.DirAclAuthz.ldap.ldapauth.bindPWPrompt=Internal LDAP Database authz.instance.DirAclAuthz.ldap.ldapauth.clientCertNickname= authz.instance.DirAclAuthz.ldap.ldapconn.host=localhost authz.instance.DirAclAuthz.ldap.ldapconn.port=11636 authz.instance.DirAclAuthz.ldap.ldapconn.secureConn=true authz.instance.DirAclAuthz.ldap.multipleSuffix.enable=false",
"auths.impl.AgentCertAuth.class=com.netscape.cms.authentication.AgentCertAuthentication auths.instance.AgentCertAuth.agentGroup=Certificate Manager Agents auths.instance.AgentCertAuth.pluginName=AgentCertAuth",
"ca.sslserver.cert=MIIDmDCCAoCgAwIBAgIBAzANBgkqhkiG9w0BAQUFADBAMR4wHAYDVQQKExVSZWR ca.sslserver.certreq=MIICizCCAXMCAQAwRjEeMBwGA1UEChMVUmVkYnVkY29tcHV0ZXIgRG9tYWluMSQwIgYDV ca.sslserver.nickname=Server-Cert cert-pki-ca ca.sslserver.tokenname=Internal Key Storage Token",
"conn.ca1.clientNickname=subsystemCert cert-pki-tps conn.ca1.hostadminport=server.example.com:8443 conn.ca1.hostagentport=server.example.com:8443 conn.ca1.hostport=server.example.com:9443 conn.ca1.keepAlive=true conn.ca1.retryConnect=3 conn.ca1.servlet.enrollment=/ca/ee/ca/profileSubmitSSLClient conn.ca1.servlet.renewal=/ca/ee/ca/profileSubmitSSLClient conn.ca1.servlet.revoke=/ca/subsystem/ca/doRevoke conn.ca1.servlet.unrevoke=/ca/subsystem/ca/doUnrevoke conn.ca1.timeout=100",
"internaldb._000=## internaldb._000=## internaldb._001=## Internal Database internaldb._002=## internaldb.basedn=o=pki-tomcat-ca-SD internaldb.database=pki-tomcat-ca internaldb.maxConns=15 internaldb.minConns=3 internaldb.ldapauth.authtype=SslClientAuth internaldb.ldapauth.clientCertNickname=HSM-A:subsystemCert pki-tomcat-ca internaldb.ldapconn.host=example.com internaldb.ldapconn.port=11636 internaldb.ldapconn.secureConn=true internaldb.multipleSuffix.enable=false",
"systemctl stop pki-tomcatd-nuxwdog@ instance_name .service",
"vim /var/lib/pki/ instance_name /ca/conf/CS.cfg",
"ca.publish.queue.enable=true",
"ca.publish.queue.maxNumberOfThreads=1 ca.publish.queue.priorityLevel=0 ca.publish.queue.pageSize=100 ca.publish.queue.saveStatus=200",
"systemctl start pki-tomcatd-nuxwdog@ instance_name .service",
"kra.noOfRequiredRecoveryAgents=1",
"X500Name. NEW_ATTRNAME .oid= n.n.n.n X500Name. NEW_ATTRNAME .class= string_to_DER_value_converter_class",
"X500Name.MY_ATTR.oid=1.2.3.4.5.6 X500Name.MY_ATTR.class=netscape.security.x509.DirStrConverter",
"systemctl stop pki-tomcatd-nuxwdog@ instance_name .service",
"X500Name.attr.MYATTR1.oid=1.2.3.4.5.6 X500Name.attr.MYATTR1.class=netscape.security.x509.DirStrConverter X500Name.attr.MYATTR2.oid=11.22.33.44.55.66 X500Name.attr.MYATTR2.class=netscape.security.x509.IA5StringConverter X500Name.attr.MYATTR3.oid=111.222.333.444.555.666 X500Name.attr.MYATTR3.class=netscape.security.x509.PrintableConverter",
"systemctl start pki-tomcatd-nuxwdog@ instance_name .service",
"MYATTR1: a_value MYATTR2: a.Value MYATTR3: aValue cn: John Doe o: Example Corporation",
"X500Name.directoryStringEncodingOrder= encoding_list_separated_by_commas",
"X500Name.directoryStringEncodingOrder=PrintableString,BMPString",
"systemctl stop pki-tomcatd-nuxwdog@ instance_name .service",
"X500Name.directoryStringEncodingOrder=PrintableString,UniversalString",
"systemctl start pki-tomcatd-nuxwdog@ instance_name .service",
"https:// hostname:port /ca/ee/ca",
"pki-server stop instance_name",
"cd /var/lib/pki/ instance-name /ca/conf/",
"ca.crl_signing.cacertnickname= nickname cert- instance_ID ca.crl_signing.defaultSigningAlgorithm= signing_algorithm ca.crl_signing.tokenname= token_name",
"ca.crl_signing.cacertnickname=crlSigningCert cert-pki-ca ca.crl_signing.defaultSigningAlgorithm=SHAMD512withRSA ca.crl_signing.tokenname=Internal Key Storage Token",
"pki-server restart instance_name",
"systemctl stop pki-tomcatd-nuxwdog@ instance_name .service",
"cd /var/lib/ instance_name /conf/",
"ca.crl.MasterCRL.enableCRLCache=true ca.crl.MasterCRL.enableCacheRecovery=true",
"systemctl start pki-tomcatd-nuxwdog@ instance_name .service",
"ca.crl.MasterCRL.updateSchema=3 ca.crl.MasterCRL.enableDailyUpdates=true ca.crl.MasterCRL.enableUpdateInterval=true ca.crl.MasterCRL.autoUpdateInterval=240 ca.crl.MasterCRL.dailyUpdates=1:00 ca.crl.MasterCRL.nextUpdateGracePeriod=0",
"systemctl stop pki-tomcatd-nuxwdog@ instance_name .service",
"cd /var/lib/ instance_name /conf/",
"ca.crl.MasterCRL.updateSchema=3",
"ca.crl.MasterCRL.enableDailyUpdates=true ca.crl.MasterCRL.enableUpdateInterval=false ca.crl.MasterCRL.dailyUpdates=0:50,04:55,06:55",
"ca.crl.MasterCRL.enableDailyUpdates=false ca.crl.MasterCRL.enableUpdateInterval=true ca.crl.MasterCRL.autoUpdateInterval=240",
"ca.crl.MasterCRL.nextUpdateGracePeriod=0",
"ca.crl.MasterCRL.nextUpdateGracePeriod= time_in_minutes",
"ocsp.store.defStore.refreshInSec= time_in_seconds",
"systemctl start pki-tomcatd-nuxwdog@ instance_name .service",
"ca.crl.MasterCRL.enableDailyUpdates=true ca.crl.MasterCRL.enableUpdateInterval=true ca.crl.MasterCRL.autoUpdateInterval=240 ca.crl.MasterCRL.dailyUpdates=1:00",
"authz.evaluateOrder=deny,allow",
"authz.sourceType=web.xml",
"dbs.beginRequestNumber= 1001001007001 dbs.endRequestNumber= 11001001007000 dbs.requestIncrement= 10000000000000 dbs.requestLowWaterMark= 2000000000000 dbs.requestCloneTransferNumber= 10000 dbs.requestDN=ou=ca, ou=requests dbs.requestRangeDN= ou=requests , ou= ranges dbs.beginSerialNumber= 1001001007001 dbs.endSerialNumber= 11001001007000 dbs.serialIncrement= 10000000000000 dbs.serialLowWaterMark= 2000000000000 dbs.serialCloneTransferNumber= 10000 dbs.serialDN= ou=certificateRepository , ou= ca dbs.serialRangeDN= ou=certificateRepository, ou=ranges dbs.beginReplicaNumber= 1 dbs.endReplicaNumber= 100 dbs.replicaIncrement= 100 dbs.replicaLowWaterMark= 20 dbs.replicaCloneTransferNumber= 5 dbs.replicaDN= ou=replica dbs.replicaRangeDN= ou=replica, ou=ranges dbs.ldap= internaldb dbs.newSchemaEntryAdded=true",
"authType=sslclientauth"
] |
https://docs.redhat.com/en/documentation/red_hat_certificate_system/10/html/planning_installation_and_deployment_guide/configuration_files
|
Tooling Guide for Red Hat Build of Apache Camel
|
Tooling Guide for Red Hat Build of Apache Camel Red Hat build of Apache Camel 4.8 Tooling Guide provided by Red Hat
| null |
https://docs.redhat.com/en/documentation/red_hat_build_of_apache_camel/4.8/html/tooling_guide_for_red_hat_build_of_apache_camel/index
|
12.11. Turning Schema Checking On and Off
|
12.11. Turning Schema Checking On and Off When schema checking is on, the Directory Server ensures three things: The object classes and attributes being used are defined in the directory schema. The attributes required for an object class are contained in the entry. Only attributes allowed by the object class are contained in the entry. Important Red Hat recommends not disabling schema checking. Schema checking is turned on by default in the Directory Server, and the Directory Server should always run with schema checking turned on. The only situation where it may be beneficial to turn schema checking off is to accelerate LDAP import operations. However, there is a risk of importing entries that do not conform to the schema. Consequently, it is impossible to update these entries. 12.11.1. Turning Schema Checking On and Off Using the Command Line To turn schema checking on and off, set the value of the nsslapd-schemacheck parameter. For example, to disable schema checking: For details about the nsslapd-schemacheck parameter, see the description of the parameter in the Red Hat Directory Server Configuration, Command, and File Reference . 12.11.2. Turning Schema Checking On and Off Using the Web Console To enable or disable schema checking using the web console: Open the Directory Server user interface in the web console. See Section 1.4, "Logging Into Directory Server Using the Web Console" . Select the instance. Open the Server Settings , and select the Server Settings entry. Open the Advanced Settings tab. To enable schema checking, select the Enable Schema Checking check box. To disable the feature, clear the check box. Click Save .
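To turn schema checking back on from the command line, the same dsconf syntax applies with the value on ; the connection details below are the same placeholders used in the preceding example:
dsconf -D "cn=Directory Manager" ldap://server.example.com config replace nsslapd-schemacheck=on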
|
[
"dsconf -D \"cn=Directory Manager\" ldap://server.example.com config replace nsslapd-schemacheck=off Successfully replaced \"nsslapd-schemacheck\""
] |
https://docs.redhat.com/en/documentation/red_hat_directory_server/11/html/administration_guide/turning_schema_checking_on_and_off
|
2.6. File System Backups
|
2.6. File System Backups It is important to make regular backups of your GFS2 file system in case of emergency, regardless of the size of your file system. Many system administrators feel safe because they are protected by RAID, multipath, mirroring, snapshots, and other forms of redundancy, but there is no such thing as safe enough. It can be a problem to create a backup since the process of backing up a node or set of nodes usually involves reading the entire file system in sequence. If this is done from a single node, that node will retain all the information in cache until other nodes in the cluster start requesting locks. Running this type of backup program while the cluster is in operation will negatively impact performance. Dropping the caches once the backup is complete reduces the time required by other nodes to regain ownership of their cluster locks/caches. This is still not ideal, however, because the other nodes will have stopped caching the data that they were caching before the backup process began. You can drop caches using the following command after the backup is complete: It is faster if each node in the cluster backs up its own files so that the task is split between the nodes. You might be able to accomplish this with a script that uses the rsync command on node-specific directories. Red Hat recommends making a GFS2 backup by creating a hardware snapshot on the SAN, presenting the snapshot to another system, and backing it up there. The backup system should mount the snapshot with -o lockproto=lock_nolock since it will not be in a cluster.
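As a sketch of the recommended snapshot-based approach, the backup system might mount the presented snapshot with the nolock protocol before reading it; the device path and mount point below are placeholders, not names from the original procedure:
mount -o lockproto=lock_nolock /dev/mapper/san_snapshot /mnt/gfs2backup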
|
[
"echo -n 3 > /proc/sys/vm/drop_caches"
] |
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/global_file_system_2/s1-backups-gfs2
|
Chapter 28. Network Driver Updates
|
Chapter 28. Network Driver Updates The bna driver has been upgraded to version 3.2.23.0r. The cxgb3 driver has been upgraded to version 1.1.5-ko. The cxgb3i driver has been upgraded to version 2.0.0. The iw_cxgb3 driver has been upgraded to version 1.1. The cxgb4 driver has been upgraded to version 2.0.0-ko. The cxgb4vf driver has been upgraded to version 2.0.0-ko. The cxgb4i driver has been upgraded to version 0.9.4. The iw_cxgb4 driver has been upgraded to version 0.1. The e1000e driver has been upgraded to version 2.3.2-k. The igb driver has been upgraded to version 5.2.13-k. The igbvf driver has been upgraded to version 2.0.2-k. The ixgbe driver has been upgraded to version 3.19.1-k. The ixgbevf driver has been upgraded to version 2.12.1-k. The i40e driver has been upgraded to version 1.0.11-k. The i40evf driver has been upgraded to version 1.0.1. The e1000 driver has been upgraded to version 7.3.21-k8-NAPI. The mlx4_en driver has been upgraded to version 2.2-1. The mlx4_ib driver has been upgraded to version 2.2-1. The mlx5_core driver has been upgraded to version 2.2-1. The mlx5_ib driver has been upgraded to version 2.2-1. The ocrdma driver has been upgraded to version 10.2.287.0u. The ib_ipoib driver has been upgraded to version 1.0.0. The ib_qib driver has been upgraded to version 1.11. The enic driver has been upgraded to version 2.1.1.67. The be2net driver has been upgraded to version 10.4r. The tg3 driver has been upgraded to version 3.137. The r8169 driver has been upgraded to version 2.3LK-NAPI.
| null |
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/7.1_release_notes/ch28
|
Chapter 5. Installing a three-node cluster on AWS
|
Chapter 5. Installing a three-node cluster on AWS In OpenShift Container Platform version 4.16, you can install a three-node cluster on Amazon Web Services (AWS). A three-node cluster consists of three control plane machines, which also act as compute machines. This type of cluster provides a smaller, more resource-efficient cluster for cluster administrators and developers to use for testing, development, and production. You can install a three-node cluster using either installer-provisioned or user-provisioned infrastructure. Note Deploying a three-node cluster using an AWS Marketplace image is not supported. 5.1. Configuring a three-node cluster You configure a three-node cluster by setting the number of worker nodes to 0 in the install-config.yaml file before deploying the cluster. Setting the number of worker nodes to 0 ensures that the control plane machines are schedulable. This allows application workloads to be scheduled to run from the control plane nodes. Note Because application workloads run from control plane nodes, additional subscriptions are required, as the control plane nodes are considered to be compute nodes. Prerequisites You have an existing install-config.yaml file. Procedure Set the number of compute replicas to 0 in your install-config.yaml file, as shown in the following compute stanza: Example install-config.yaml file for a three-node cluster apiVersion: v1 baseDomain: example.com compute: - name: worker platform: {} replicas: 0 # ... If you are deploying a cluster with user-provisioned infrastructure: After you create the Kubernetes manifest files, make sure that the spec.mastersSchedulable parameter is set to true in the cluster-scheduler-02-config.yml file. You can locate this file in <installation_directory>/manifests . For more information, see "Creating the Kubernetes manifest and Ignition config files" in "Installing a cluster on user-provisioned infrastructure in AWS by using CloudFormation templates". Do not create additional worker nodes. Example cluster-scheduler-02-config.yml file for a three-node cluster apiVersion: config.openshift.io/v1 kind: Scheduler metadata: creationTimestamp: null name: cluster spec: mastersSchedulable: true policy: name: "" status: {} 5.2. Next steps Installing a cluster on AWS with customizations Installing a cluster on user-provisioned infrastructure in AWS by using CloudFormation templates
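Although not part of the procedure above, a quick way to confirm the configuration after installation is to list the cluster nodes; in a three-node cluster each of the three nodes is expected to report both control plane and worker roles:
oc get nodes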
|
[
"apiVersion: v1 baseDomain: example.com compute: - name: worker platform: {} replicas: 0",
"apiVersion: config.openshift.io/v1 kind: Scheduler metadata: creationTimestamp: null name: cluster spec: mastersSchedulable: true policy: name: \"\" status: {}"
] |
https://docs.redhat.com/en/documentation/openshift_container_platform_installation/4.16/html/installing_on_aws/installing-aws-three-node
|
Chapter 12. Azure ServiceBus
|
Chapter 12. Azure ServiceBus Since Camel 3.12 Both producer and consumer are supported The azure-servicebus component that integrates Azure ServiceBus . Azure ServiceBus is a fully managed enterprise integration message broker. Service Bus can decouple applications and services. Service Bus offers a reliable and secure platform for asynchronous transfer of data and state. Data is transferred between different applications and services using messages. Prerequisites You must have a valid Windows Azure Storage account. More information is available at Azure Documentation Portal . 12.1. Dependencies When using azure-servicebus with Red Hat build of Camel Spring Boot, add the following Maven dependency to your pom.xml to have support for auto configuration: <dependency> <groupId>org.apache.camel.springboot</groupId> <artifactId>camel-azure-servicebus-starter</artifactId> </dependency> 12.2. Configuring Options Camel components are configured on two levels: Component level Endpoint level 12.2.1. Component Level Options The component level is the highest level. The configurations you define at this level are inherited by all the endpoints. For example, a component can have security settings, credentials for authentication, urls for network connection, and so on. Since components typically have pre-configured defaults for the most common cases, you may need to only configure a few component options, or maybe none at all. You can configure components with Component DSL in a configuration file (application.properties|yaml), or directly with Java code. 12.2.2. Endpoint Level Options At the Endpoint level you have many options, which you can use to configure what you want the endpoint to do. The options are categorized according to whether the endpoint is used as a consumer (from) or as a producer (to) or used for both. You can configure endpoints directly in the endpoint URI as path and query parameters. You can also use Endpoint DSL and DataFormat DSL as type safe ways of configuring endpoints and data formats in Java. When configuring options, use Property Placeholders for urls, port numbers, sensitive information, and other settings. Placeholders allows you to externalize the configuration from your code, giving you more flexible and reusable code. 12.3. Component Options The Azure ServiceBus component supports 25 options, which are listed below. Name Description Default Type amqpRetryOptions (common) Sets the retry options for Service Bus clients. If not specified, the default retry options are used. AmqpRetryOptions amqpTransportType (common) Sets the transport type by which all the communication with Azure Service Bus occurs. Default value is AmqpTransportType#AMQP. Enum values: Amqp AmqpWebSockets AMQP AmqpTransportType clientOptions (common) Sets the ClientOptions to be sent from the client built from this builder, enabling customization of certain properties, as well as support the addition of custom header information. Refer to the ClientOptions documentation for more information. ClientOptions configuration (common) The component configurations. ServiceBusConfiguration proxyOptions (common) Sets the proxy configuration to use for ServiceBusSenderAsyncClient. When a proxy is configured, AmqpTransportType#AMQP_WEB_SOCKETS must be used for the transport type. ProxyOptions serviceBusType (common) Required The service bus type of connection to execute. Queue is for typical queue option and topic for subscription based model. 
Enum values: queue topic queue ServiceBusType bridgeErrorHandler (consumer) Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false boolean consumerOperation (consumer) Sets the desired operation to be used in the consumer. Enum values: receiveMessages peekMessages receiveMessages ServiceBusConsumerOperationDefinition disableAutoComplete (consumer) Disables auto-complete and auto-abandon of received messages. By default, a successfully processed message is \\{link ServiceBusReceiverAsyncClient#complete(ServiceBusReceivedMessage) completed}. If an error happens when the message is processed, it is \\{link ServiceBusReceiverAsyncClient#abandon(ServiceBusReceivedMessage) abandoned}. false boolean maxAutoLockRenewDuration (consumer) Sets the amount of time to continue auto-renewing the lock. Setting Duration#ZERO or null disables auto-renewal. For \\{link ServiceBusReceiveMode#RECEIVE_AND_DELETE RECEIVE_AND_DELETE} mode, auto-renewal is disabled. 5m Duration peekNumMaxMessages (consumer) Set the max number of messages to be peeked during the peek operation. Integer prefetchCount (consumer) Sets the prefetch count of the receiver. For both \\{link ServiceBusReceiveMode#PEEK_LOCK PEEK_LOCK} and \\{link ServiceBusReceiveMode#RECEIVE_AND_DELETE RECEIVE_AND_DELETE} modes the default value is 1. Prefetch speeds up the message flow by aiming to have a message readily available for local retrieval when and before the application asks for one using ServiceBusReceiverAsyncClient#receiveMessages(). Setting a non-zero value will prefetch that number of messages. Setting the value to zero turns prefetch off. int receiverAsyncClient (consumer) Autowired Sets the receiverAsyncClient in order to consume messages by the consumer. ServiceBusReceiverAsyncClient serviceBusReceiveMode (consumer) Sets the receive mode for the receiver. Enum values: PEEK_LOCK RECEIVE_AND_DELETE PEEK_LOCK ServiceBusReceiveMode subQueue (consumer) Sets the type of the SubQueue to connect to. Enum values: NONE DEAD_LETTER_QUEUE TRANSFER_DEAD_LETTER_QUEUE SubQueue subscriptionName (consumer) Sets the name of the subscription in the topic to listen to. topicOrQueueName and serviceBusType=topic must also be set. This property is required if serviceBusType=topic and the consumer is in use. String lazyStartProducer (producer) Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false boolean producerOperation (producer) Sets the desired operation to be used in the producer. Enum values: sendMessages scheduleMessages sendMessages ServiceBusProducerOperationDefinition scheduledEnqueueTime (producer) Sets OffsetDateTime at which the message should appear in the Service Bus queue or topic. 
OffsetDateTime senderAsyncClient (producer) Autowired Sets SenderAsyncClient to be used in the producer. ServiceBusSenderAsyncClient serviceBusTransactionContext (producer) Represents transaction in service. This object just contains transaction id. ServiceBusTransactionContext autowiredEnabled (advanced) Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true boolean connectionString (security) Sets the connection string for a Service Bus namespace or a specific Service Bus resource. String fullyQualifiedNamespace (security) Fully Qualified Namespace of the service bus. String tokenCredential (security) A TokenCredential for Azure AD authentication, implemented in com.azure.identity. TokenCredential 12.4. Endpoint Options The Azure ServiceBus endpoint is configured using URI syntax: with the following path and query parameters: 12.4.1. Path Parameters (1 parameters) Name Description Default Type topicOrQueueName (common) Selected topic name or the queue name, that is depending on serviceBusType config. For example if serviceBusType=queue, then this will be the queue name and if serviceBusType=topic, this will be the topic name. String 12.4.2. Query Parameters (25 parameters) Name Description Default Type amqpRetryOptions (common) Sets the retry options for Service Bus clients. If not specified, the default retry options are used. AmqpRetryOptions amqpTransportType (common) Sets the transport type by which all the communication with Azure Service Bus occurs. Default value is AmqpTransportType#AMQP. Enum values: Amqp AmqpWebSockets AMQP AmqpTransportType clientOptions (common) Sets the ClientOptions to be sent from the client built from this builder, enabling customization of certain properties, as well as support the addition of custom header information. Refer to the ClientOptions documentation for more information. ClientOptions proxyOptions (common) Sets the proxy configuration to use for ServiceBusSenderAsyncClient. When a proxy is configured, AmqpTransportType#AMQP_WEB_SOCKETS must be used for the transport type. ProxyOptions serviceBusType (common) Required The service bus type of connection to execute. Queue is for typical queue option and topic for subscription based model. Enum values: queue topic queue ServiceBusType consumerOperation (consumer) Sets the desired operation to be used in the consumer. Enum values: receiveMessages peekMessages receiveMessages ServiceBusConsumerOperationDefinition disableAutoComplete (consumer) Disables auto-complete and auto-abandon of received messages. By default, a successfully processed message is \\{link ServiceBusReceiverAsyncClient#complete(ServiceBusReceivedMessage) completed}. If an error happens when the message is processed, it is \\{link ServiceBusReceiverAsyncClient#abandon(ServiceBusReceivedMessage) abandoned}. false boolean maxAutoLockRenewDuration (consumer) Sets the amount of time to continue auto-renewing the lock. Setting Duration#ZERO or null disables auto-renewal. For \\{link ServiceBusReceiveMode#RECEIVE_AND_DELETE RECEIVE_AND_DELETE} mode, auto-renewal is disabled. 5m Duration peekNumMaxMessages (consumer) Set the max number of messages to be peeked during the peek operation. 
Integer prefetchCount (consumer) Sets the prefetch count of the receiver. For both \\{link ServiceBusReceiveMode#PEEK_LOCK PEEK_LOCK} and \\{link ServiceBusReceiveMode#RECEIVE_AND_DELETE RECEIVE_AND_DELETE} modes the default value is 1. Prefetch speeds up the message flow by aiming to have a message readily available for local retrieval when and before the application asks for one using ServiceBusReceiverAsyncClient#receiveMessages(). Setting a non-zero value will prefetch that number of messages. Setting the value to zero turns prefetch off. int receiverAsyncClient (consumer) Autowired Sets the receiverAsyncClient in order to consume messages by the consumer. ServiceBusReceiverAsyncClient serviceBusReceiveMode (consumer) Sets the receive mode for the receiver. Enum values: PEEK_LOCK RECEIVE_AND_DELETE PEEK_LOCK ServiceBusReceiveMode subQueue (consumer) Sets the type of the SubQueue to connect to. Enum values: NONE DEAD_LETTER_QUEUE TRANSFER_DEAD_LETTER_QUEUE SubQueue subscriptionName (consumer) Sets the name of the subscription in the topic to listen to. topicOrQueueName and serviceBusType=topic must also be set. This property is required if serviceBusType=topic and the consumer is in use. String bridgeErrorHandler (consumer (advanced)) Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false boolean exceptionHandler (consumer (advanced)) To let the consumer use a custom ExceptionHandler. Notice if the option bridgeErrorHandler is enabled then this option is not in use. By default the consumer will deal with exceptions, that will be logged at WARN or ERROR level and ignored. ExceptionHandler exchangePattern (consumer (advanced)) Sets the exchange pattern when the consumer creates an exchange. Enum values: InOnly InOut InOptionalOut ExchangePattern producerOperation (producer) Sets the desired operation to be used in the producer. Enum values: sendMessages scheduleMessages sendMessages ServiceBusProducerOperationDefinition scheduledEnqueueTime (producer) Sets OffsetDateTime at which the message should appear in the Service Bus queue or topic. OffsetDateTime senderAsyncClient (producer) Autowired Sets SenderAsyncClient to be used in the producer. ServiceBusSenderAsyncClient serviceBusTransactionContext (producer) Represents transaction in service. This object just contains transaction id. ServiceBusTransactionContext lazyStartProducer (producer (advanced)) Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false boolean connectionString (security) Sets the connection string for a Service Bus namespace or a specific Service Bus resource. String fullyQualifiedNamespace (security) Fully Qualified Namespace of the service bus. 
String tokenCredential (security) A TokenCredential for Azure AD authentication, implemented in com.azure.identity. TokenCredential 12.5. Async Consumer and Producer This component implements the async Consumer and producer. This allows camel route to consume and produce events asynchronously without blocking any threads. 12.6. Message Headers The Azure ServiceBus component supports 25 message header(s), which is/are listed below: Name Description Default Type CamelAzureServiceBusApplicationProperties (common) Constant: APPLICATION_PROPERTIES The application properties (also known as custom properties) on messages sent and received by the producer and consumer, respectively. Map CamelAzureServiceBusContentType (consumer) Constant: CONTENT_TYPE Gets the content type of the message. String CamelAzureServiceBusCorrelationId (consumer) Constant: CORRELATION_ID Gets a correlation identifier. String CamelAzureServiceBusDeadLetterErrorDescription (consumer) Constant: DEAD_LETTER_ERROR_DESCRIPTION Gets the description for a message that has been dead-lettered. String CamelAzureServiceBusDeadLetterReason (consumer) Constant: DEAD_LETTER_REASON Gets the reason a message was dead-lettered. String CamelAzureServiceBusDeadLetterSource (consumer) Constant: DEAD_LETTER_SOURCE Gets the name of the queue or subscription that this message was enqueued on, before it was dead-lettered. String CamelAzureServiceBusDeliveryCount (consumer) Constant: DELIVERY_COUNT Gets the number of the times this message was delivered to clients. long CamelAzureServiceBusEnqueuedSequenceNumber (consumer) Constant: ENQUEUED_SEQUENCE_NUMBER Gets the enqueued sequence number assigned to a message by Service Bus. long CamelAzureServiceBusEnqueuedTime (consumer) Constant: ENQUEUED_TIME Gets the datetime at which this message was enqueued in Azure Service Bus. OffsetDateTime CamelAzureServiceBusExpiresAt (consumer) Constant: EXPIRES_AT Gets the datetime at which this message will expire. OffsetDateTime CamelAzureServiceBusLockToken (consumer) Constant: LOCK_TOKEN Gets the lock token for the current message. String CamelAzureServiceBusLockedUntil (consumer) Constant: LOCKED_UNTIL Gets the datetime at which the lock of this message expires. OffsetDateTime CamelAzureServiceBusMessageId (consumer) Constant: MESSAGE_ID Gets the identifier for the message. String CamelAzureServiceBusPartitionKey (consumer) Constant: PARTITION_KEY Gets the partition key for sending a message to a partitioned entity. String CamelAzureServiceBusRawAmqpMessage (consumer) Constant: RAW_AMQP_MESSAGE The representation of message as defined by AMQP protocol. AmqpAnnotatedMessage CamelAzureServiceBusReplyTo (consumer) Constant: REPLY_TO Gets the address of an entity to send replies to. String CamelAzureServiceBusReplyToSessionId (consumer) Constant: REPLY_TO_SESSION_ID Gets or sets a session identifier augmenting the ReplyTo address. String CamelAzureServiceBusSequenceNumber (consumer) Constant: SEQUENCE_NUMBER Gets the unique number assigned to a message by Service Bus. long CamelAzureServiceBusSessionId (consumer) Constant: SESSION_ID Gets the session id of the message. String CamelAzureServiceBusSubject (consumer) Constant: SUBJECT Gets the subject for the message. String CamelAzureServiceBusTimeToLive (consumer) Constant: TIME_TO_LIVE Gets the duration before this message expires. Duration CamelAzureServiceBusTo (consumer) Constant: TO Gets the to address. 
String CamelAzureServiceBusScheduledEnqueueTime (common) Constant: SCHEDULED_ENQUEUE_TIME (producer)Overrides the OffsetDateTime at which the message should appear in the Service Bus queue or topic. (consumer) Gets the scheduled enqueue time of this message. OffsetDateTime CamelAzureServiceBusServiceBusTransactionContext (producer) Constant: SERVICE_BUS_TRANSACTION_CONTEXT Overrides the transaction in service. This object just contains transaction id. ServiceBusTransactionContext CamelAzureServiceBusProducerOperation (producer) Constant: PRODUCER_OPERATION Overrides the desired operation to be used in the producer. Enum values: sendMessages scheduleMessages ServiceBusProducerOperationDefinition 12.6.1. Message Body In the producer, this component accepts message body of String type or List<String> to send batch messages. In the consumer, the returned message body will be of type `String. 12.6.2. Azure ServiceBus Producer operations Operation Description sendMessages Sends a set of messages to a Service Bus queue or topic using a batched approach. scheduleMessages Sends a scheduled message to the Azure Service Bus entity this sender is connected to. A scheduled message is enqueued and made available to receivers only at the scheduled enqueue time. 12.6.3. Azure ServiceBus Consumer operations Operation Description receiveMessages Receives an <b>infinite</b> stream of messages from the Service Bus entity. peekMessages Reads the batch of active messages without changing the state of the receiver or the message source. 12.6.3.1. Examples sendMessages from("direct:start") .process(exchange -> { final List<Object> inputBatch = new LinkedList<>(); inputBatch.add("test batch 1"); inputBatch.add("test batch 2"); inputBatch.add("test batch 3"); inputBatch.add(123456); exchange.getIn().setBody(inputBatch); }) .to("azure-servicebus:test//?connectionString=test") .to("mock:result"); scheduleMessages from("direct:start") .process(exchange -> { final List<Object> inputBatch = new LinkedList<>(); inputBatch.add("test batch 1"); inputBatch.add("test batch 2"); inputBatch.add("test batch 3"); inputBatch.add(123456); exchange.getIn().setHeader(ServiceBusConstants.SCHEDULED_ENQUEUE_TIME, OffsetDateTime.now()); exchange.getIn().setBody(inputBatch); }) .to("azure-servicebus:test//?connectionString=test&producerOperation=scheduleMessages") .to("mock:result"); receiveMessages from("azure-servicebus:test//?connectionString=test") .log("USD{body}") .to("mock:result"); peekMessages from("azure-servicebus:test//?connectionString=test&consumerOperation=peekMessages&peekNumMaxMessages=3") .log("USD{body}") .to("mock:result"); 12.7. Spring Boot Auto-Configuration The component supports 26 options, which are listed below. Name Description Default Type camel.component.azure-servicebus.amqp-retry-options Sets the retry options for Service Bus clients. If not specified, the default retry options are used. The option is a com.azure.core.amqp.AmqpRetryOptions type. AmqpRetryOptions camel.component.azure-servicebus.amqp-transport-type Sets the transport type by which all the communication with Azure Service Bus occurs. Default value is AmqpTransportType#AMQP. AmqpTransportType camel.component.azure-servicebus.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. 
This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.azure-servicebus.bridge-error-handler Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false Boolean camel.component.azure-servicebus.client-options Sets the ClientOptions to be sent from the client built from this builder, enabling customization of certain properties, as well as support the addition of custom header information. Refer to the ClientOptions documentation for more information. The option is a com.azure.core.util.ClientOptions type. ClientOptions camel.component.azure-servicebus.configuration The component configurations. The option is a org.apache.camel.component.azure.servicebus.ServiceBusConfiguration type. ServiceBusConfiguration camel.component.azure-servicebus.connection-string Sets the connection string for a Service Bus namespace or a specific Service Bus resource. String camel.component.azure-servicebus.consumer-operation Sets the desired operation to be used in the consumer. ServiceBusConsumerOperationDefinition camel.component.azure-servicebus.disable-auto-complete Disables auto-complete and auto-abandon of received messages. By default, a successfully processed message is \\{link ServiceBusReceiverAsyncClient#complete(ServiceBusReceivedMessage) completed}. If an error happens when the message is processed, it is \\{link ServiceBusReceiverAsyncClient#abandon(ServiceBusReceivedMessage) abandoned}. false Boolean camel.component.azure-servicebus.enabled Whether to enable auto configuration of the azure-servicebus component. This is enabled by default. Boolean camel.component.azure-servicebus.fully-qualified-namespace Fully Qualified Namespace of the service bus. String camel.component.azure-servicebus.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.azure-servicebus.max-auto-lock-renew-duration Sets the amount of time to continue auto-renewing the lock. Setting Duration#ZERO or null disables auto-renewal. For \\{link ServiceBusReceiveMode#RECEIVE_AND_DELETE RECEIVE_AND_DELETE} mode, auto-renewal is disabled. The option is a java.time.Duration type. Duration camel.component.azure-servicebus.peek-num-max-messages Set the max number of messages to be peeked during the peek operation. Integer camel.component.azure-servicebus.prefetch-count Sets the prefetch count of the receiver. For both \\{link ServiceBusReceiveMode#PEEK_LOCK PEEK_LOCK} and \\{link ServiceBusReceiveMode#RECEIVE_AND_DELETE RECEIVE_AND_DELETE} modes the default value is 1. 
Prefetch speeds up the message flow by aiming to have a message readily available for local retrieval when and before the application asks for one using ServiceBusReceiverAsyncClient#receiveMessages(). Setting a non-zero value will prefetch that number of messages. Setting the value to zero turns prefetch off. Integer camel.component.azure-servicebus.producer-operation Sets the desired operation to be used in the producer. ServiceBusProducerOperationDefinition camel.component.azure-servicebus.proxy-options Sets the proxy configuration to use for ServiceBusSenderAsyncClient. When a proxy is configured, AmqpTransportType#AMQP_WEB_SOCKETS must be used for the transport type. The option is a com.azure.core.amqp.ProxyOptions type. ProxyOptions camel.component.azure-servicebus.receiver-async-client Sets the receiverAsyncClient in order to consume messages by the consumer. The option is a com.azure.messaging.servicebus.ServiceBusReceiverAsyncClient type. ServiceBusReceiverAsyncClient camel.component.azure-servicebus.scheduled-enqueue-time Sets OffsetDateTime at which the message should appear in the Service Bus queue or topic. The option is a java.time.OffsetDateTime type. OffsetDateTime camel.component.azure-servicebus.sender-async-client Sets SenderAsyncClient to be used in the producer. The option is a com.azure.messaging.servicebus.ServiceBusSenderAsyncClient type. ServiceBusSenderAsyncClient camel.component.azure-servicebus.service-bus-receive-mode Sets the receive mode for the receiver. ServiceBusReceiveMode camel.component.azure-servicebus.service-bus-transaction-context Represents transaction in service. This object just contains transaction id. The option is a com.azure.messaging.servicebus.ServiceBusTransactionContext type. ServiceBusTransactionContext camel.component.azure-servicebus.service-bus-type The service bus type of connection to execute. Queue is for typical queue option and topic for subscription based model. ServiceBusType camel.component.azure-servicebus.sub-queue Sets the type of the SubQueue to connect to. SubQueue camel.component.azure-servicebus.subscription-name Sets the name of the subscription in the topic to listen to. topicOrQueueName and serviceBusType=topic must also be set. This property is required if serviceBusType=topic and the consumer is in use. String camel.component.azure-servicebus.token-credential A TokenCredential for Azure AD authentication, implemented in com.azure.identity. The option is a com.azure.core.credential.TokenCredential type. TokenCredential
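As an illustration of the auto-configuration options listed above, the following is a minimal application.properties sketch; it is not taken from the component documentation, and the namespace, access key, and subscription values are placeholders to replace with your own.
# Hypothetical Service Bus connection details -- replace with real values
camel.component.azure-servicebus.connection-string=Endpoint=sb://my-namespace.servicebus.windows.net/;SharedAccessKeyName=my-policy;SharedAccessKey=<key>
# Consume from a topic subscription instead of a queue
camel.component.azure-servicebus.service-bus-type=topic
camel.component.azure-servicebus.subscription-name=my-subscription
# Optional tuning: keep a few messages prefetched locally
camel.component.azure-servicebus.prefetch-count=10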
|
[
"<dependency> <groupId>org.apache.camel.springboot</groupId> <artifactId>camel-azure-servicebus-starter</artifactId> </dependency>",
"azure-servicebus:topicOrQueueName",
"from(\"direct:start\") .process(exchange -> { final List<Object> inputBatch = new LinkedList<>(); inputBatch.add(\"test batch 1\"); inputBatch.add(\"test batch 2\"); inputBatch.add(\"test batch 3\"); inputBatch.add(123456); exchange.getIn().setBody(inputBatch); }) .to(\"azure-servicebus:test//?connectionString=test\") .to(\"mock:result\");",
"from(\"direct:start\") .process(exchange -> { final List<Object> inputBatch = new LinkedList<>(); inputBatch.add(\"test batch 1\"); inputBatch.add(\"test batch 2\"); inputBatch.add(\"test batch 3\"); inputBatch.add(123456); exchange.getIn().setHeader(ServiceBusConstants.SCHEDULED_ENQUEUE_TIME, OffsetDateTime.now()); exchange.getIn().setBody(inputBatch); }) .to(\"azure-servicebus:test//?connectionString=test&producerOperation=scheduleMessages\") .to(\"mock:result\");",
"from(\"azure-servicebus:test//?connectionString=test\") .log(\"USD{body}\") .to(\"mock:result\");",
"from(\"azure-servicebus:test//?connectionString=test&consumerOperation=peekMessages&peekNumMaxMessages=3\") .log(\"USD{body}\") .to(\"mock:result\");"
] |
https://docs.redhat.com/en/documentation/red_hat_build_of_apache_camel/4.4/html/red_hat_build_of_apache_camel_for_spring_boot_reference/csb-camel-azure-servicebus-component-starter
|
Chapter 1. Creating an Oracle Cloud integration
|
Chapter 1. Creating an Oracle Cloud integration To add an Oracle Cloud account to cost management, you must add your Oracle Cloud account as an integration from the Red Hat Hybrid Cloud Console user interface and configure Oracle Cloud to provide metrics. After you add your Oracle Cloud account to cost management as a data integration, you must configure a function script to copy the cost and usage reports to a bucket that cost management can access. Prerequisites Red Hat account user with Cloud Administrator entitlements Access to Oracle Cloud Console with access to the compartment you want to add to cost management A service on Oracle Cloud generating service usage As you will complete some of the following steps in Oracle Cloud, and some steps in the Red Hat Hybrid Cloud Console , log in to both applications and keep them open in a web browser. To begin, add your Oracle Cloud integration to cost management from the Integrations page using the Add a cloud integration dialog. 1.1. Adding an Oracle Cloud Infrastructure account and naming your integration Add your Oracle Cloud account as a integration. After adding an Oracle Cloud integration, the cost management application processes the cost and usage data from your Oracle Cloud account and makes it viewable. Procedure From Red Hat Hybrid Cloud Console , click Settings Menu > Integrations . On the Settings page, click Integrations . In the Cloud tab, click Add integration . In the Add a cloud integration wizard, select Oracle Cloud Infrastructure as the integration type. Click . Enter a name for your integration and click . In the Select application step, select Cost management . Click . 1.2. Collecting and storing your global compartment-id Continue in the Add a cloud integration wizard by collecting your global compartment-id , which is also known as your tenant-id in Microsoft Azure, so cost management can access your Oracle Cloud compartment. Procedure In the Add a cloud integration wizard, on the Global compartment-id step, copy the command in step one oci iam compartment list . In a new tab, log in to your Oracle Cloud account. In the menu bar, click Developer tools Cloud Shell . Paste the command you copied from the Add a cloud integration wizard in the Cloud Shell window. In the response, copy the value pair for the compartment-id key. In the following example, the ID starts with ocid1.tenancy.oc1 . Example response { "data": [ { "compartment-id": "ocid1.tenancy.oc1..00000000000000000000000000000000000000000000", "defined-tags": { "Oracle-Tags": { ... } }, ... } ] } Return to the Global compartment-id step in the Add a cloud integration wizard, and paste your tenant-id in the Global compartment-id field. Click . 1.3. Creating a policy to create cost and usage reports Continue in the Add a cloud integration wizard by creating a custom policy and compartment for Oracle Cloud to create and store cost and usage reports. Procedure In the Add a cloud integration wizard, on the Create new policy and compartment page, copy the oci iam policy create command. Paste the command that you copied into the Cloud Shell in your Oracle Cloud tab to create a cost and usage reports policy. You can also add a policy description. Return to the Create new policy and compartment step in the Add a cloud integration wizard and copy the oci iam compartment create command. Paste the command that you copied into the Cloud Shell in your Oracle Cloud tab to create a cost management compartment. In the response, copy the value for the id key. 
In the following example, copy the id that includes ocid1.compartment.oc1 . Example response { "data": [ { "compartment-id": "tenant-id", "defined-tags": { "Oracle-Tags": { ... } }, "description": "Cost management compartment for cost and usage data", "freeform-tags": {}, "id": "ocid1.compartment.oc1..0000000000000000000000000000000000000000000", ... }, ... ] } Return to the Create new policy and compartment step in the Add a cloud integration wizard and paste the id value you copied from the response in the last step into the New compartment-id field. Click Next. 1.4. Creating a bucket for accessible cost and usage reports Create a bucket to store cost and usage reports that cost management can access. Procedure In the Create bucket step, create a bucket to store cost and usage data so that cost management can access it. Copy the command from the step and paste it into the Cloud Shell in your Oracle Cloud tab to create a bucket. Refer to the example response for the steps that follow. Example response { "data": { ... "name": "cost-management", "namespace": "cost-management-namespace", ... } } Copy the value pair for the name key. In the example, this value is cost-management . Return to the Create bucket step in the Add a cloud integration wizard. Paste the value you copied into New data bucket name . Return to your Cloud Shell and copy the value for the namespace key. In the example, copy cost-management-namespace . Return to the Create bucket step in the Add a cloud integration wizard and check your shell prompt for your region. For example, your shell prompt might be user@cloudshell:~ (uk-london-1)$ . In this example, uk-london-1 is your region. Copy your region and return to the Create bucket step in the Add a cloud integration wizard. In the Create bucket step in the Add a cloud integration wizard, paste your region in New bucket region . Click Next. 1.5. Replicating reports to a bucket Schedule a task to regularly move the cost information to the bucket you created by creating a function and then a virtual machine to trigger it. In the Populate bucket step, visit the link to the script you can use to create a function that must be paired with a virtual machine or CronJob to run daily. The Oracle Cloud documentation provides the following example of how to schedule a recurring job to run the cost transfer script . Note As non-Red Hat products and documentation can change, instructions for configuring the third-party processes provided in this guide are general and correct at the time of publishing. Contact Oracle Cloud for support. Procedure In the Oracle Cloud console , open the Navigation menu and click Developer Services Functions . Use the following Python script to create a function application:
|
[
"{ \"data\": [ { \"compartment-id\": \"ocid1.tenancy.oc1..00000000000000000000000000000000000000000000\", \"defined-tags\": { \"Oracle-Tags\": { } }, } ] }",
"{ \"data\": [ { \"compartment-id\": \"tenant-id\", \"defined-tags\": { \"Oracle-Tags\": { } }, \"description\": \"Cost management compartment for cost and usage data\", \"freeform-tags\": {}, \"id\": \"ocid1.compartment.oc1..0000000000000000000000000000000000000000000\", }, ] }",
"{ \"data\": { \"name\": \"cost-management\", \"namespace\": \"cost-management-namespace\", } }",
"# Copyright 2022 Red Hat Inc. SPDX-License-Identifier: Apache-2.0 # ########################################################################################## Script to collect cost/usage reports from OCI and replicate them to another bucket # Pre-req's you must have a service account or other for this script to gain access to oci # NOTE! You must update the vars below for this script to work correctly # user: ocid of user that has correct permissions for bucket objects key_file: Location of auth file for defind user fingerprint: Users fingerprint tenancy: Tenancy for collecting/copying cost/usage reports region: Home Region of your tenancy bucket: Name of Bucket reports will be replicated to namespace: Object Storage Namespace filename: Name of json file to store last report downloaded default hre is fine ########################################################################################## import datetime import io import json import logging import oci from fdk import response def connect_oci_storage_client(config): # Connect to OCI SDK try: object_storage = oci.object_storage.ObjectStorageClient(config) return object_storage except (Exception, ValueError) as ex: logging.getLogger().info(\"Error connecting to OCI SDK CLIENT please check credentials: \" + str(ex)) def fetch_reports_file(object_storage, namespace, bucket, filename): # Fetch last download report file from bucket last_reports_file = None try: last_reports_file = object_storage.get_object(namespace, bucket, filename) except (Exception, ValueError) as ex: logging.getLogger().info(\"Object file does not exist, will attempt to create it: \" + str(ex)) if last_reports_file: json_acceptable_string = last_reports_file.data.text.replace(\"'\", '\"') try: last_reports = json.loads(json_acceptable_string) except (Exception, ValueError) as ex: logging.getLogger().info( \"Json string file not formatted correctly and cannont be parsed, creating fresh file. \" + str(ex) ) last_reports = {\"cost\": \"\", \"usage\": \"\"} else: last_reports = {\"cost\": \"\", \"usage\": \"\"} return last_reports def get_report_list(object_storage, reporting_namespace, reporting_bucket, prefix, last_file): # Create a list of reports report_list = object_storage.list_objects( reporting_namespace, reporting_bucket, prefix=prefix, start_after=last_file, fields=\"timeCreated\" ) logging.getLogger().info(\"Fetching list of cost csv files\") return report_list def copy_reports_to_bucket( object_storage, report_type, report_list, bucket, namespace, region, reporting_namespace, reporting_bucket, last_reports, ): # Iterate through cost reports list and copy them to new bucket # Start from current month start_from = datetime.date.today().replace(day=1) if report_list.data.objects != []: for report in report_list.data.objects: if report.time_created.date() > start_from: try: copy_object_details = oci.object_storage.models.CopyObjectDetails( destination_bucket=bucket, destination_namespace=namespace, destination_object_name=report.name, destination_region=region, source_object_name=report.name, ) object_storage.copy_object( namespace_name=reporting_namespace, bucket_name=reporting_bucket, copy_object_details=copy_object_details, ) except (Exception, ValueError) as ex: logging.getLogger().info(f\"Failed to copy {report.name} to bucket: {bucket}. 
\" + str(ex)) last_reports[report_type] = report.name else: logging.getLogger().info(f\"No new {report_type} reports to copy to bucket: {bucket}.\") return last_reports def handler(ctx, data: io.BytesIO = None): name = \"OCI-cost-mgmt-report-replication-function\" try: body = json.loads(data.getvalue()) name = body.get(\"name\") except (Exception, ValueError) as ex: logging.getLogger().info(\"Error parsing json payload: \" + str(ex)) logging.getLogger().info(\"Inside Python OCI reporting copy function\") # PLEASE CHANGE THIS!!!! # user = \"ocid1.user.oc1..aaaaaa\" # CHANGEME key_file = \"auth_files/service-account.pem\" # CHANGEME fingerprint = \"00.00.00\" # CHANGEME tenancy = \"ocid1.tenancy.oc1..aaaaaaa\" # CHANGEME region = \"region\" # CHANGEME bucket = \"cost-mgmt-bucket\" # CHANGEME namespace = \"namespace\" # CHANGEME filename = \"last_reports.json\" # Get the list of reports # https://docs.oracle.com/en-us/iaas/Content/API/SDKDocs/clienvironmentvariables.htm!!! config = { \"user\": user, \"key_file\": key_file, \"fingerprint\": fingerprint, \"tenancy\": tenancy, \"region\": region, } # The Object Storage namespace used for OCI reports is bling; the bucket name is the tenancy OCID. reporting_namespace = \"bling\" reporting_bucket = config[\"tenancy\"] region = config[\"region\"] # Connect to OCI object_storage = connect_oci_storage_client(config) # Grab reports json and set previously downloaded file values last_reports = fetch_reports_file(object_storage, namespace, bucket, filename) last_cost_file = last_reports.get(\"cost\") last_usage_file = last_reports.get(\"usage\") # Get list of cost/usage files cost_report_list = get_report_list( object_storage, reporting_namespace, reporting_bucket, \"reports/cost-csv\", last_cost_file ) usage_report_list = get_report_list( object_storage, reporting_namespace, reporting_bucket, \"reports/usage-csv\", last_usage_file ) # Copy cost/usage files to new bucket last_reports = copy_reports_to_bucket( object_storage, \"cost\", cost_report_list, bucket, namespace, region, reporting_namespace, reporting_bucket, last_reports, ) last_reports = copy_reports_to_bucket( object_storage, \"usage\", usage_report_list, bucket, namespace, region, reporting_namespace, reporting_bucket, last_reports, ) # Save updated filenames to bucket object as string object_storage.put_object(namespace, bucket, filename, str(last_reports)) return response.Response( ctx, response_data=json.dumps( { \"message\": \"Last reports saved from {}, Cost: {}, Usage: {}\".format( name, last_reports[\"cost\"], last_reports[\"usage\"] ) } ), headers={\"Content-Type\": \"application/json\"}, )",
"user = \"ocid1.user.oc1..aaaaaa\" # CHANGEME key_file = \"auth_files/service-account.pem\" # CHANGEME fingerprint = \"00.00.00\" # CHANGEME tenancy = \"ocid1.tenancy.oc1..aaaaaaa\" # CHANGEME region = \"region\" # CHANGEME bucket = \"cost-mgmt-bucket\" # CHANGEME namespace = \"namespace\" # CHANGEME filename = \"last_reports.json\""
] |
https://docs.redhat.com/en/documentation/cost_management_service/1-latest/html/integrating_oracle_cloud_data_into_cost_management/assembly-adding-oci-int
|
Chapter 6. Networking dashboards
|
Chapter 6. Networking dashboards Networking metrics are viewable in dashboards within the OpenShift Container Platform web console, under Observe Dashboards . 6.1. Network Observability Operator If you have the Network Observability Operator installed, you can view network traffic metrics dashboards by selecting the Netobserv dashboard from the Dashboards drop-down list. For more information about metrics available in this Dashboard , see Network Observability metrics dashboards . 6.2. Networking and OVN-Kubernetes dashboard You can view both general networking metrics as well as OVN-Kubernetes metrics from the dashboard. To view general networking metrics, select Networking/Linux Subsystem Stats from the Dashboards drop-down list. You can view the following networking metrics from the dashboard: Network Utilisation , Network Saturation , and Network Errors . To view OVN-Kubernetes metrics, select Networking/Infrastructure from the Dashboards drop-down list. You can view the following OVN-Kubernetes metrics: Networking Configuration , TCP Latency Probes , Control Plane Resources , and Worker Resources . 6.3. Ingress Operator dashboard You can view networking metrics handled by the Ingress Operator from the dashboard. This includes metrics like the following: Incoming and outgoing bandwidth HTTP error rates HTTP server response latency To view these Ingress metrics, select Networking/Ingress from the Dashboards drop-down list. You can view Ingress metrics for the following categories: Top 10 Per Route , Top 10 Per Namespace , and Top 10 Per Shard .
| null |
https://docs.redhat.com/en/documentation/openshift_container_platform/4.15/html/networking/networking-dashboards_networking-operators-overview
|
D.9. Description View
|
D.9. Description View The Description View provides a means to display and edit (add, change or remove) a description for any model or model object. To show the Description View , click Window > Show View > Other... to display the Eclipse Show View dialog. Click Teiid Designer > Description view and then click OK . Figure D.18. Description View You can click the edit description action in the toolbar or right-click select Edit in the context menu to bring up the Edit Description dialog. Figure D.19. Description View Context Menu Figure D.20. Edit Description Dialog
| null |
https://docs.redhat.com/en/documentation/red_hat_jboss_data_virtualization/6.4/html/user_guide_volume_1_teiid_designer/description_view
|
A.3. Explanation of Settings in the New Network Interface and Edit Network Interface Windows
|
A.3. Explanation of Settings in the New Network Interface and Edit Network Interface Windows These settings apply when you are adding or editing a virtual machine network interface. If you have more than one network interface attached to a virtual machine, you can put the virtual machine on more than one logical network. Table A.20. Network Interface Settings Field Name Description Power cycle required? Name The name of the network interface. This text field has a 21-character limit and must be a unique name with any combination of uppercase and lowercase letters, numbers, hyphens, and underscores. No. Profile The vNIC profile and logical network that the network interface is placed on. By default, all network interfaces are put on the ovirtmgmt management network. No. Type The virtual interface the network interface presents to virtual machines. rtl8139 and e1000 device drivers are included in most operating systems. VirtIO is faster but requires VirtIO drivers. Red Hat Enterprise Linux 5 and later include VirtIO drivers. Windows does not include VirtIO drivers, but they can be installed from the guest tools ISO or virtual floppy disk. PCI Passthrough enables the vNIC to be directly connected to a virtual function (VF) of an SR-IOV-enabled NIC. The vNIC will then bypass the software network virtualization and connect directly to the VF for direct device assignment. The selected vNIC profile must also have Passthrough enabled. Yes. Custom MAC address Choose this option to set a custom MAC address. The Red Hat Virtualization Manager automatically generates a MAC address that is unique to the environment to identify the network interface. Having two devices with the same MAC address online in the same network causes networking conflicts. Yes. Link State Whether or not the network interface is connected to the logical network. Up : The network interface is located on its slot. When the Card Status is Plugged , it means the network interface is connected to a network cable, and is active. When the Card Status is Unplugged , the network interface will automatically be connected to the network and become active once plugged. Down : The network interface is located on its slot, but it is not connected to any network. Virtual machines will not be able to run in this state. No. Card Status Whether or not the network interface is defined on the virtual machine. Plugged : The network interface has been defined on the virtual machine. If its Link State is Up , it means the network interface is connected to a network cable, and is active. If its Link State is Down , the network interface is not connected to a network cable. Unplugged : The network interface is only defined on the Manager, and is not associated with a virtual machine. If its Link State is Up , when the network interface is plugged it will automatically be connected to a network and become active. If its Link State is Down , the network interface is not connected to any network until it is defined on a virtual machine. No.
| null |
https://docs.redhat.com/en/documentation/red_hat_virtualization/4.4/html/virtual_machine_management_guide/virtual_machine_network_interface_dialogue_entries
|
Chapter 29. Upgrading Your Current System
|
Chapter 29. Upgrading Your Current System The procedure for performing an in-place upgrade on your current system is handled by the following utilities: The Preupgrade Assistant , which is a diagnostics utility that assesses your current system and identifies potential problems you might encounter during or after the upgrade. The Red Hat Upgrade Tool utility, which is used to upgrade a system from Red Hat Enterprise Linux version 6 to version 7. Note In-place upgrades are currently only supported on AMD64 and Intel 64 ( x86_64 ) systems and on IBM Z ( s390x ). Additionally, only the Server variant can be upgraded with Red Hat Upgrade Tool . Full documentation covering the process of upgrading from an earlier release of Red Hat Enterprise Linux to Red Hat Enterprise Linux 7 is available in the Red Hat Enterprise Linux 7 Migration Planning Guide . You can also use the Red Hat Enterprise Linux Upgrade Helper to guide you through migration from Red Hat Enterprise Linux 6 to 7.
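As a hedged sketch of how these two utilities are typically driven from a shell (the content package name and installation repository URL below are illustrative; follow the Migration Planning Guide for the authoritative procedure):
# Assess the system and review the report the Preupgrade Assistant generates
yum -y install preupgrade-assistant preupgrade-assistant-el6toel7   # exact content package name can vary
preupg
# Upgrade against a Red Hat Enterprise Linux 7 installation source, then reboot into the upgrade
yum -y install redhat-upgrade-tool
redhat-upgrade-tool --network 7.0 --instrepo http://example.com/rhel7-install-media/
reboot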
| null |
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/installation_guide/chap-upgrading-your-current-system
|
7.23. cifs-utils
|
7.23. cifs-utils 7.23.1. RHBA-2013:0408 - cifs-utils bug fix and enhancement update Updated cifs-utils packages that fix several bugs and add various enhancements are now available for Red Hat Enterprise Linux 6. The SMB/CIFS protocol is a standard file sharing protocol widely deployed on Microsoft Windows machines. This package contains tools for mounting shares on Linux using the SMB/CIFS protocol. The tools in this package work in conjunction with support in the kernel to allow one to mount a SMB/CIFS share onto a client and use it as if it were a standard Linux file system. Bug Fixes BZ#856729 When the mount.cifs utility ran out of addresses to try, it returned the "System error" error code (EX_SYSERR) to the caller service. The utility has been modified and it now correctly returns the "Mount failure" error code (EX_FAIL). BZ#826825 Typically, "/" characters are not allowed in user names for Microsoft Windows systems, but they are common in certain types of kerberos principal names. However, mount.cifs previously allowed the use of "/" in user names, which caused attempts to mount CIFS file systems to fail. With this package, "/" characters are now allowed in user names if the "sec=krb5" or "sec=krb5i" mount options are specified, thus CIFS file systems can now be mounted as expected. BZ# 838606 Previously, the cifs-utils packages were compiled without the RELRO (read-only relocations) and PIE (Position Independent Executables) flags. Programs provided by this package could be vulnerable to various attacks based on overwriting the ELF section of a program. The "-pie" and "-fpie" options enable the building of position-independent executables, and the "-Wl","-z","relro" turns on read-only relocation support in gcc. These options are important for security purposes to guard against possible buffer overflows that lead to exploits. The cifs-utils binaries are now built with PIE and full RELRO support. The cifs-utils binary is now more secured against "return-to-text" and memory corruption attacks and also against attacks based on the program's ELF section overwriting. Enhancements BZ#843596 With this update, the "strictcache", "actimeo", "cache=" and "rwpidforward" mount options are now documented in the mount.cifs(8) manual page. BZ#843612 The "getcifsacl", "setcifsacl" and "cifs.idmap" programs have been added to the package. These utilities allow users to manipulate ACLs on CIFS shares and allow the mapping of Windows security IDs to POSIX user and group IDs. BZ#843617 With this update, the cifs.idmap helper, which allows SID to UID and SID to GID mapping, has been added to the package. Also, the manual page cifs.upcall(8) has been updated and cifs.idmap(8) has been added. Users of cifs-utils are advised to upgrade to these updated packages, which fix these bugs and add these enhancements.
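To illustrate the behavior covered by these updates, a hedged example of mounting a share with Kerberos security and working with ACLs using the newly added tools follows; the server, share, principal, and file names are placeholders.
# A "/" in the user name is now accepted when sec=krb5 or sec=krb5i is specified
mount -t cifs //server.example.com/share /mnt/share -o sec=krb5,username=host/client.example.com
# Display the CIFS ACL on a file in the mounted share
getcifsacl /mnt/share/file.txt
# Add an access control entry (syntax is indicative; see the setcifsacl manual page for details)
setcifsacl -a "ACL:EXAMPLE\user:ALLOWED/0x0/FULL" /mnt/share/file.txt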
| null |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.4_technical_notes/cifs-utils
|
3.4. Renewing certificates before they expire
|
3.4. Renewing certificates before they expire In Red Hat Virtualization earlier than version 4.4 SP1, all certificates followed a 398 day lifetime. Starting in Red Hat Virtualization version 4.4 SP1, the self-signed internal certificates between hypervisors and the Manager follow a five year lifetime. Certificates visible to web browsers still follow the standard 398 day lifetime and must be renewed once per year. Warning Do not let certificates expire. If they expire, the host and Manager stop responding, and recovery is an error-prone and time-consuming process. Procedure Renew the host certificates: In the Administration Portal, click Compute Hosts . Click Management Maintenance and then click OK . The virtual machines should automatically migrate away from the host. If they are pinned or otherwise cannot be migrated, you must shut them down. When the host is in maintenance mode and there are no more virtual machines remaining on this host, click Installation Enroll Certificate . When enrollment is complete, click Management Activate . Renew the Manager certificates: Self-hosted engine only: log in to the host and put it in global maintenance mode. Self-hosted engine and standalone Manager: log in to the Manager and run engine-setup . The engine-setup script prompts you with configuration questions. Respond to the questions as appropriate or use an answers file. Enter Yes after the following engine-setup prompt: Self-hosted engine only: log in to the host and disable global maintenance mode: Additional resources How to manually renew RHV host SSL certificate if expired?
|
[
"hosted-engine --set-maintenance --mode=global",
"engine-setup --offline",
"Renew certificates? (Yes, No) [Yes]:",
"hosted-engine --set-maintenance --mode=none"
] |
https://docs.redhat.com/en/documentation/red_hat_virtualization/4.4/html/administration_guide/chap-renewing_certificates_rhv_backup_restore
|
Chapter 1. Introduction to developing applications with Red Hat build of Apache Camel for Quarkus
|
Chapter 1. Introduction to developing applications with Red Hat build of Apache Camel for Quarkus This guide is for developers writing Camel applications on top of Red Hat build of Apache Camel for Quarkus. Camel components which are supported in Red Hat build of Apache Camel for Quarkus have an associated Red Hat build of Apache Camel for Quarkus extension. For more information about the Red Hat build of Apache Camel for Quarkus extensions supported in this distribution, see the Red Hat build of Apache Camel for Quarkus Reference guide.
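For example, to use a supported component you add its corresponding extension to your project. A hedged Maven dependency sketch follows; the artifact name follows the usual camel-quarkus-<component> pattern, and the log component is used purely as an illustration.
<dependency>
    <groupId>org.apache.camel.quarkus</groupId>
    <artifactId>camel-quarkus-log</artifactId>
</dependency>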
| null |
https://docs.redhat.com/en/documentation/red_hat_build_of_apache_camel/4.0/html/developing_applications_with_red_hat_build_of_apache_camel_for_quarkus/introduction_to_developing_applications_with_red_hat_build_of_apache_camel_for_quarkus
|
Chapter 11. Managing User Accounts
|
Chapter 11. Managing User Accounts This chapter covers general management and configuration of user accounts. 11.1. Setting up User Home Directories It is recommended that every user has a home directory configured. The default expected location for user home directories is in the /home/ directory. For example, IdM expects a user with the user_login login to have a home directory set up at /home/ user_login . Note You can change the default expected location for user home directories using the ipa config-mod command. IdM does not automatically create home directories for users. However, you can configure a PAM home directory module to create a home directory automatically when a user logs in. Alternatively, you can add home directories manually using NFS shares and the automount utility. 11.1.1. Mounting Home Directories Automatically Using the PAM Home Directory Module Supported PAM Home Directory Modules To configure a PAM home directory module to create home directories for users automatically when they log in to the IdM domain, use one of the following PAM modules: pam_oddjob_mkhomedir pam_mkhomedir IdM first attempts to use pam_oddjob_mkhomedir . If this module is not installed, IdM attempts to use pam_mkhomedir instead. Note Auto-creating home directories for new users on an NFS share is not supported. Configuring the PAM Home Directory Module Enabling the PAM home directory module has local effect. Therefore, you must enable the module individually on each client and server where it is required. To configure the module during the installation of the server or client, use the --mkhomedir option with the ipa-server-install or ipa-client-install utility when installing the machine. To configure the module on an already installed server or client, use the authconfig utility. For example: For more information on using authconfig to create home directories, see the System-Level Authentication Guide . 11.1.2. Mounting Home Directories Manually You can use an NFS file server to provide a /home/ directory that will be available to all machines in the IdM domain, and then mount the directory on an IdM machine using the automount utility. Potential Problems When Using NFS Using NFS can potentially have negative impact on performance and security. For example, using NFS can lead to security vulnerabilities resulting from granting root access to the NFS user, performance issues with loading the entire /home/ directory tree, or network performance issues for using remote servers for home directories. To reduce the effect of these problems, it is recommended to follow these guidelines: Use automount to mount only the user's home directory and only when the user logs in. Do not use it to load the entire /home/ tree. Use a remote user who has limited permissions to create home directories, and mount the share on the IdM server as this user. Because the IdM server runs as an httpd process, it is possible to use sudo or a similar program to grant limited access to the IdM server to create home directories on the NFS server. Configuring Home Directories Using NFS and automount To manually add home directories to the IdM server from separate locations using NFS shares and automount : Create a new location for the user directory maps. Add a direct mapping to the new location's auto.direct file. The auto.direct file is the automount map automatically created by the ipa-server-install utility. 
In the following example, the mount point is /share : For more details on using automount with IdM, see Chapter 34, Using Automount .
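For reference, a hedged sketch of the two ways to enable automatic home directory creation described above; the domain and server names are placeholders.
# During enrollment of a new client
ipa-client-install --mkhomedir --domain=example.com --server=server.example.com
# On a server or client that is already installed
authconfig --enablemkhomedir --update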
|
[
"authconfig --enablemkhomedir --update",
"ipa automountlocation-add userdirs Location: userdirs",
"ipa automountkey-add userdirs auto.direct --key=/share --info=\"-ro,soft, server.example.com:/home/share\" Key: /share Mount information: -ro,soft, server.example.com:/home/share"
] |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/linux_domain_identity_authentication_and_policy_guide/users
|
function::task_cwd_path
|
function::task_cwd_path Name function::task_cwd_path - get the path struct pointer for a task's current working directory Synopsis Arguments task task_struct pointer.
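A hedged usage sketch follows; it assumes the task_current and fullpath_struct_path tapset functions are available alongside this one, which is typical of the task and dentry tapsets.
probe kernel.function("do_sys_open")
{
  # Print each process name together with its current working directory
  printf("%s cwd=%s\n", execname(),
         fullpath_struct_path(task_cwd_path(task_current())))
}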
|
[
"task_cwd_path:long(task:long)"
] |
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/systemtap_tapset_reference/api-task-cwd-path
|
Providing feedback on Red Hat documentation
|
Providing feedback on Red Hat documentation We appreciate and prioritize your feedback regarding our documentation. Provide as much detail as possible, so that your request can be quickly addressed. Prerequisites You are logged in to the Red Hat Customer Portal. Procedure To provide feedback, perform the following steps: Click the following link: Create Issue Describe the issue or enhancement in the Summary text box. Provide details about the issue or requested enhancement in the Description text box. Type your name in the Reporter text box. Click the Create button. This action creates a documentation ticket and routes it to the appropriate documentation team. Thank you for taking the time to provide feedback.
| null |
https://docs.redhat.com/en/documentation/red_hat_insights/1-latest/html/generating_vulnerability_service_reports_with_fedramp/proc-providing-feedback-on-redhat-documentation
|
Chapter 1. Template APIs
|
Chapter 1. Template APIs 1.1. BrokerTemplateInstance [template.openshift.io/v1] Description BrokerTemplateInstance holds the service broker-related state associated with a TemplateInstance. BrokerTemplateInstance is part of an experimental API. Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). Type object 1.2. PodTemplate [v1] Description PodTemplate describes a template for creating copies of a predefined pod. Type object 1.3. Template [template.openshift.io/v1] Description Template contains the inputs needed to produce a Config. Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). Type object 1.4. TemplateInstance [template.openshift.io/v1] Description TemplateInstance requests and records the instantiation of a Template. TemplateInstance is part of an experimental API. Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). Type object
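To make the Template type more concrete, a small hypothetical Template object is shown below; the parameter, pod, and image names are placeholders rather than values from this API reference.
apiVersion: template.openshift.io/v1
kind: Template
metadata:
  name: example-template
parameters:
- name: APP_NAME
  value: example
objects:
- apiVersion: v1
  kind: Pod
  metadata:
    name: ${APP_NAME}-pod
  spec:
    containers:
    - name: app
      image: registry.example.com/example/app:latest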
| null |
https://docs.redhat.com/en/documentation/openshift_container_platform/4.15/html/template_apis/template-apis
|
A.2. Explanation of Settings in the Run Once Window
|
A.2. Explanation of Settings in the Run Once Window The Run Once window defines one-off boot options for a virtual machine. For persistent boot options, use the Boot Options tab in the New Virtual Machine window. The Run Once window contains multiple sections that can be configured. The standalone Rollback this configuration during reboots check box specifies whether reboots (initiated by the Manager, or from within the guest) will be warm (soft) or cold (hard). Select this check box to configure a cold reboot that restarts the virtual machine with regular (non- Run Once ) configuration. Clear this check box to configure a warm reboot that retains the virtual machine's Run Once configuration. The Boot Options section defines the virtual machine's boot sequence, running options, and source images for installing the operating system and required drivers. Note The following tables do not include information on whether a power cycle is required because these one-off boot options apply only when you reboot the virtual machine. Table A.13. Boot Options Section Field Name Description Attach Floppy Attaches a diskette image to the virtual machine. Use this option to install Windows drivers. The diskette image must reside in the ISO domain. Attach CD Attaches an ISO image to the virtual machine. Use this option to install the virtual machine's operating system and applications. The CD image must reside in the ISO domain. Enable menu to select boot device Enables a menu to select the boot device. After the virtual machine starts and connects to the console, but before the virtual machine starts booting, a menu displays that allows you to select the boot device. This option should be enabled before the initial boot to allow you to select the required installation media. Start in Pause Mode Starts and then pauses the virtual machine to enable connection to the console. Suitable for virtual machines in remote locations. Predefined Boot Sequence Determines the order in which the boot devices are used to boot the virtual machine. Select Hard Disk , CD-ROM , or Network (PXE) , and use Up and Down to move the option up or down in the list. Run Stateless Deletes all data and configuration changes to the virtual machine upon shutdown. This option is only available if a virtual disk is attached to the virtual machine. The Linux Boot Options section contains fields to boot a Linux kernel directly instead of through the BIOS bootloader. Table A.14. Linux Boot Options Section Field Name Description kernel path A fully qualified path to a kernel image to boot the virtual machine. The kernel image must be stored on either the ISO domain (path name in the format of iso://path-to-image ) or on the host's local storage domain (path name in the format of /data/images ). initrd path A fully qualified path to a ramdisk image to be used with the previously specified kernel. The ramdisk image must be stored on the ISO domain (path name in the format of iso://path-to-image ) or on the host's local storage domain (path name in the format of /data/images ). kernel parameters Kernel command line parameter strings to be used with the defined kernel on boot. The Initial Run section is used to specify whether to use Cloud-Init or Sysprep to initialize the virtual machine. For Linux-based virtual machines, you must select the Use Cloud-Init check box in the Initial Run tab to view the available options. 
For Windows-based virtual machines, you must attach the [sysprep] floppy by selecting the Attach Floppy check box in the Boot Options tab and selecting the floppy from the list. The options that are available in the Initial Run section differ depending on the operating system that the virtual machine is based on. Table A.15. Initial Run Section (Linux-based Virtual Machines) Field Name Description VM Hostname The host name of the virtual machine. Configure Time Zone The time zone for the virtual machine. Select this check box and select a time zone from the Time Zone list. Authentication The authentication details for the virtual machine. Click the disclosure arrow to display the settings for this option. Authentication User Name Creates a new user account on the virtual machine. If this field is not filled in, the default user is root . Authentication Use already configured password This check box is automatically selected after you specify an initial root password. You must clear this check box to enable the Password and Verify Password fields and specify a new password. Authentication Password The root password for the virtual machine. Enter the password in this text field and the Verify Password text field to verify the password. Authentication SSH Authorized Keys SSH keys to be added to the authorized keys file of the virtual machine. Authentication Regenerate SSH Keys Regenerates SSH keys for the virtual machine. Networks Network-related settings for the virtual machine. Click the disclosure arrow to display the settings for this option. Networks DNS Servers The DNS servers to be used by the virtual machine. Networks DNS Search Domains The DNS search domains to be used by the virtual machine. Networks Network Configures network interfaces for the virtual machine. Select this check box and click + or - to add or remove network interfaces to or from the virtual machine. When you click + , a set of fields becomes visible that can specify whether to use DHCP, and configure an IP address, netmask, and gateway, and specify whether the network interface will start on boot. Custom Script Custom scripts that will be run on the virtual machine when it starts. The scripts entered in this field are custom YAML sections that are added to those produced by the Manager, and allow you to automate tasks such as creating users and files, configuring yum repositories and running commands. For more information on the format of scripts that can be entered in this field, see the Custom Script documentation. Table A.16. Initial Run Section (Windows-based Virtual Machines) Field Name Description VM Hostname The host name of the virtual machine. Domain The Active Directory domain to which the virtual machine belongs. Organization Name The name of the organization to which the virtual machine belongs. This option corresponds to the text field for setting the organization name displayed when a machine running Windows is started for the first time. Active Directory OU The organizational unit in the Active Directory domain to which the virtual machine belongs. The distinguished name must be provided. For example CN=Users,DC=lab,DC=local Configure Time Zone The time zone for the virtual machine. Select this check box and select a time zone from the Time Zone list. Admin Password The administrative user password for the virtual machine. Click the disclosure arrow to display the settings for this option. 
Admin Password Use already configured password This check box is automatically selected after you specify an initial administrative user password. You must clear this check box to enable the Admin Password and Verify Admin Password fields and specify a new password. Admin Password Admin Password The administrative user password for the virtual machine. Enter the password in this text field and the Verify Admin Password text field to verify the password. Custom Locale Locales must be in a format such as en-US . Click the disclosure arrow to display the settings for this option. Custom Locale Input Locale The locale for user input. Custom Locale UI Language The language used for user interface elements such as buttons and menus. Custom Locale System Locale The locale for the overall system. Custom Locale User Locale The locale for users. Sysprep A custom Sysprep definition. The definition must be in the format of a complete unattended installation answer file. You can copy and paste the default answer files in the /usr/share/ovirt-engine/conf/sysprep/ directory on the machine on which the Red Hat Virtualization Manager is installed and alter the fields as required. The definition will overwrite any values entered in the Initial Run fields. See Chapter 7, Templates for more information. Domain The Active Directory domain to which the virtual machine belongs. If left blank, the value of the Domain field is used. Alternate Credentials Selecting this check box allows you to set a User Name and Password as alternative credentials. The System section enables you to define the supported machine type or CPU type. Table A.17. System Section Field Name Description Custom Emulated Machine This option allows you to specify the machine type. If changed, the virtual machine will only run on hosts that support this machine type. Defaults to the cluster's default machine type. Custom CPU Type This option allows you to specify a CPU type. If changed, the virtual machine will only run on hosts that support this CPU type. Defaults to the cluster's default CPU type. The Host section is used to define the virtual machine's host. Table A.18. Host Section Field Name Description Any host in cluster Allocates the virtual machine to any available host. Specific Host(s) Specifies a user-defined host for the virtual machine. The Console section defines the protocol to connect to virtual machines. Table A.19. Console Section Field Name Description Headless Mode Select this option if you do not require a graphical console when running the machine for the first time. See Section 4.9, "Configuring Headless Virtual Machines" for more information. VNC Requires a VNC client to connect to a virtual machine using VNC. Optionally, specify VNC Keyboard Layout from the drop-down list. SPICE Recommended protocol for Linux and Windows virtual machines. Using SPICE protocol without QXL drivers is supported for Windows 8 and Server 2012 virtual machines; however, support for multiple monitors and graphics acceleration is not available for this configuration. Enable SPICE file transfer Determines whether you can drag and drop files from an external host into the virtual machine's SPICE console. This option is only available for virtual machines using the SPICE protocol. This check box is selected by default. Enable SPICE clipboard copy and paste Defines whether you can copy and paste content from an external host into the virtual machine's SPICE console. This option is only available for virtual machines using the SPICE protocol. 
This check box is selected by default. The Custom Properties section contains additional VDSM options for running virtual machines. See Table A.10, "Virtual Machine Custom Properties Settings" for details.
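For the Custom Script field described in the Initial Run section above, the following is a hedged cloud-init style sketch of the kind of YAML it accepts; the user, file, repository, and command values are placeholders.
users:
  - name: exampleuser
    groups: wheel
write_files:
  - path: /etc/example.conf
    content: |
      setting=value
yum_repos:
  example-repo:
    baseurl: http://repo.example.com/el7/
    enabled: true
    gpgcheck: false
runcmd:
  - systemctl enable --now httpd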
| null |
https://docs.redhat.com/en/documentation/red_hat_virtualization/4.3/html/virtual_machine_management_guide/Virtual_Machine_Run_Once_settings_explained
|
Preface
|
Preface Important The following proof of concept deployment method is unsupported for production purposes. This deployment type uses local storage. Local storage is not guaranteed to provide the required read-after-write consistency and data integrity guarantees during parallel access that a storage registry like Red Hat Quay requires. Do not use this deployment type for production purposes. Use it for testing purposes only. Red Hat Quay is an enterprise-quality registry for building, securing and serving container images. The documents in this section detail how to deploy Red Hat Quay for proof of concept , or non-production, purposes. The primary objectives of this document include the following: How to deploy Red Hat Quay for basic non-production purposes. Assess Red Hat Quay's container image management, including how to push, pull, tag, and organize images. Explore availability and scalability. How to deploy an advanced Red Hat Quay proof of concept deployment using SSL/TLS certificates. Beyond the primary objectives of this document, a proof of concept deployment can be used to test various features offered by Red Hat Quay, such as establishing superusers, setting repository quota limitations, enabling Splunk for action log storage, enabling Clair for vulnerability reporting, and more. See the "Next steps" section for a list of some of the features available after you have followed this guide. This proof of concept deployment procedure can be followed on a single machine, either physical or virtual.
| null |
https://docs.redhat.com/en/documentation/red_hat_quay/3.13/html/proof_of_concept_-_deploying_red_hat_quay/pr01
|
Chapter 2. Preparing to install a cluster that uses SR-IOV or OVS-DPDK on OpenStack
|
Chapter 2. Preparing to install a cluster that uses SR-IOV or OVS-DPDK on OpenStack Before you install a OpenShift Container Platform cluster that uses single-root I/O virtualization (SR-IOV) or Open vSwitch with the Data Plane Development Kit (OVS-DPDK) on Red Hat OpenStack Platform (RHOSP), you must understand the requirements for each technology and then perform preparatory tasks. 2.1. Requirements for clusters on RHOSP that use either SR-IOV or OVS-DPDK If you use SR-IOV or OVS-DPDK with your deployment, you must meet the following requirements: RHOSP compute nodes must use a flavor that supports huge pages. 2.1.1. Requirements for clusters on RHOSP that use SR-IOV To use single-root I/O virtualization (SR-IOV) with your deployment, you must meet the following requirements: Plan your Red Hat OpenStack Platform (RHOSP) SR-IOV deployment . OpenShift Container Platform must support the NICs that you use. For a list of supported NICs, see "About Single Root I/O Virtualization (SR-IOV) hardware networks" in the "Hardware networks" subsection of the "Networking" documentation. For each node that will have an attached SR-IOV NIC, your RHOSP cluster must have: One instance from the RHOSP quota One port attached to the machines subnet One port for each SR-IOV Virtual Function A flavor with at least 16 GB memory, 4 vCPUs, and 25 GB storage space SR-IOV deployments often employ performance optimizations, such as dedicated or isolated CPUs. For maximum performance, configure your underlying RHOSP deployment to use these optimizations, and then run OpenShift Container Platform compute machines on the optimized infrastructure. For more information about configuring performant RHOSP compute nodes, see Configuring Compute nodes for performance . 2.1.2. Requirements for clusters on RHOSP that use OVS-DPDK To use Open vSwitch with the Data Plane Development Kit (OVS-DPDK) with your deployment, you must meet the following requirements: Plan your Red Hat OpenStack Platform (RHOSP) OVS-DPDK deployment by referring to Planning your OVS-DPDK deployment in the Network Functions Virtualization Planning and Configuration Guide. Configure your RHOSP OVS-DPDK deployment according to Configuring an OVS-DPDK deployment in the Network Functions Virtualization Planning and Configuration Guide. 2.2. Preparing to install a cluster that uses SR-IOV You must configure RHOSP before you install a cluster that uses SR-IOV on it. When installing a cluster using SR-IOV, you must deploy clusters using cgroup v1. For more information, Enabling Linux control group version 1 (cgroup v1) . 2.2.1. Creating SR-IOV networks for compute machines If your Red Hat OpenStack Platform (RHOSP) deployment supports single root I/O virtualization (SR-IOV) , you can provision SR-IOV networks that compute machines run on. Note The following instructions entail creating an external flat network and an external, VLAN-based network that can be attached to a compute machine. Depending on your RHOSP deployment, other network types might be required. Prerequisites Your cluster supports SR-IOV. Note If you are unsure about what your cluster supports, review the OpenShift Container Platform SR-IOV hardware networks documentation. You created radio and uplink provider networks as part of your RHOSP deployment. The names radio and uplink are used in all example commands to represent these networks. 
Procedure On a command line, create a radio RHOSP network: $ openstack network create radio --provider-physical-network radio --provider-network-type flat --external Create an uplink RHOSP network: $ openstack network create uplink --provider-physical-network uplink --provider-network-type vlan --external Create a subnet for the radio network: $ openstack subnet create --network radio --subnet-range <radio_network_subnet_range> radio Create a subnet for the uplink network: $ openstack subnet create --network uplink --subnet-range <uplink_network_subnet_range> uplink 2.3. Preparing to install a cluster that uses OVS-DPDK You must configure RHOSP before you install a cluster that uses OVS-DPDK on it. Complete Creating a flavor and deploying an instance for OVS-DPDK before you install a cluster on RHOSP. After you perform preinstallation tasks, install your cluster by following the most relevant OpenShift Container Platform on RHOSP installation instructions. Then, perform the tasks under "Next steps" on this page. 2.4. Next steps For either type of deployment: Configure the Node Tuning Operator with huge pages support . To complete SR-IOV configuration after you deploy your cluster: Install the SR-IOV Operator . Configure your SR-IOV network device . Create SR-IOV compute machines . Consult the following references after you deploy your cluster to improve its performance: A test pod template for clusters that use OVS-DPDK on OpenStack . A test pod template for clusters that use SR-IOV on OpenStack . A performance profile template for clusters that use OVS-DPDK on OpenStack .
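Relating back to the flavor requirements in section 2.1, a hedged sketch of creating a suitably sized flavor with huge pages enabled follows; the flavor name and page-size value are illustrative and should be aligned with your RHOSP deployment.
$ openstack flavor create --ram 16384 --vcpus 4 --disk 25 ocp-sriov-worker
$ openstack flavor set --property hw:mem_page_size=large ocp-sriov-worker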
|
[
"openstack network create radio --provider-physical-network radio --provider-network-type flat --external",
"openstack network create uplink --provider-physical-network uplink --provider-network-type vlan --external",
"openstack subnet create --network radio --subnet-range <radio_network_subnet_range> radio",
"openstack subnet create --network uplink --subnet-range <uplink_network_subnet_range> uplink"
] |
https://docs.redhat.com/en/documentation/openshift_container_platform/4.14/html/installing_on_openstack/installing-openstack-nfv-preparing
|
Chapter 16. Cluster Observability Operator
|
Chapter 16. Cluster Observability Operator 16.1. Cluster Observability Operator release notes Important The Cluster Observability Operator is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . The Cluster Observability Operator (COO) is an optional OpenShift Container Platform Operator that enables administrators to create standalone monitoring stacks that are independently configurable for use by different services and users. The COO complements the built-in monitoring capabilities of OpenShift Container Platform. You can deploy it in parallel with the default platform and user workload monitoring stacks managed by the Cluster Monitoring Operator (CMO). These release notes track the development of the Cluster Observability Operator in OpenShift Container Platform. 16.1.1. Cluster Observability Operator 0.1.1 This release updates the Cluster Observability Operator to support installing the Operator in restricted networks or disconnected environments. 16.1.2. Cluster Observability Operator 0.1 This release makes a Technology Preview version of the Cluster Observability Operator available on OperatorHub. 16.2. Cluster Observability Operator overview Important The Cluster Observability Operator is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . The Cluster Observability Operator (COO) is an optional component of the OpenShift Container Platform. You can deploy it to create standalone monitoring stacks that are independently configurable for use by different services and users. The COO deploys the following monitoring components: Prometheus Thanos Querier (optional) Alertmanager (optional) The COO components function independently of the default in-cluster monitoring stack, which is deployed and managed by the Cluster Monitoring Operator (CMO). Monitoring stacks deployed by the two Operators do not conflict. You can use a COO monitoring stack in addition to the default platform monitoring components deployed by the CMO. 16.2.1. Understanding the Cluster Observability Operator A default monitoring stack created by the Cluster Observability Operator (COO) includes a highly available Prometheus instance capable of sending metrics to an external endpoint by using remote write. Each COO stack also includes an optional Thanos Querier component, which you can use to query a highly available Prometheus instance from a central location, and an optional Alertmanager component, which you can use to set up alert configurations for different services. 16.2.1.1. 
Advantages of using the Cluster Observability Operator The MonitoringStack CRD used by the COO offers an opinionated default monitoring configuration for COO-deployed monitoring components, but you can customize it to suit more complex requirements. Deploying a COO-managed monitoring stack can help meet monitoring needs that are difficult or impossible to address by using the core platform monitoring stack deployed by the Cluster Monitoring Operator (CMO). A monitoring stack deployed using COO has the following advantages over core platform and user workload monitoring: Extendability Users can add more metrics to a COO-deployed monitoring stack, which is not possible with core platform monitoring without losing support. In addition, COO-managed stacks can receive certain cluster-specific metrics from core platform monitoring by using federation. Multi-tenancy support The COO can create a monitoring stack per user namespace. You can also deploy multiple stacks per namespace or a single stack for multiple namespaces. For example, cluster administrators, SRE teams, and development teams can all deploy their own monitoring stacks on a single cluster, rather than having to use a single shared stack of monitoring components. Users on different teams can then independently configure features such as separate alerts, alert routing, and alert receivers for their applications and services. Scalability You can create COO-managed monitoring stacks as needed. Multiple monitoring stacks can run on a single cluster, which can facilitate the monitoring of very large clusters by using manual sharding. This ability addresses cases where the number of metrics exceeds the monitoring capabilities of a single Prometheus instance. Flexibility Deploying the COO with Operator Lifecycle Manager (OLM) decouples COO releases from OpenShift Container Platform release cycles. This method of deployment enables faster release iterations and the ability to respond rapidly to changing requirements and issues. Additionally, by deploying a COO-managed monitoring stack, users can manage alerting rules independently of OpenShift Container Platform release cycles. Highly customizable The COO can delegate ownership of single configurable fields in custom resources to users by using Server-Side Apply (SSA), which enhances customization. Additional resources Kubernetes documentation for Server-Side Apply (SSA) 16.3. Installing the Cluster Observability Operator Important The Cluster Observability Operator is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . As a cluster administrator, you can install the Cluster Observability Operator (COO) from OperatorHub by using the OpenShift Container Platform web console or CLI. OperatorHub is a user interface that works in conjunction with Operator Lifecycle Manager (OLM), which installs and manages Operators on a cluster. To install the COO using OperatorHub, follow the procedure described in Adding Operators to a cluster . 16.3.1. 
Uninstalling the Cluster Observability Operator using the web console If you have installed the Cluster Observability Operator (COO) by using OperatorHub, you can uninstall it in the OpenShift Container Platform web console. Prerequisites You have access to the cluster as a user with the cluster-admin cluster role. You have logged in to the OpenShift Container Platform web console. Procedure Go to Operators Installed Operators . Locate the Cluster Observability Operator entry in the list. Click the Options menu for this entry and select Uninstall Operator . 16.4. Configuring the Cluster Observability Operator to monitor a service Important The Cluster Observability Operator is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . You can monitor metrics for a service by configuring monitoring stacks managed by the Cluster Observability Operator (COO). To test monitoring a service, follow these steps: Deploy a sample service that defines a service endpoint. Create a ServiceMonitor object that specifies how the service is to be monitored by the COO. Create a MonitoringStack object to discover the ServiceMonitor object. 16.4.1. Deploying a sample service for Cluster Observability Operator This configuration deploys a sample service named prometheus-coo-example-app in the user-defined ns1-coo project. The service exposes the custom version metric. Prerequisites You have access to the cluster as a user with the cluster-admin cluster role or as a user with administrative permissions for the namespace. Procedure Create a YAML file named prometheus-coo-example-app.yaml that contains the following configuration details for a namespace, deployment, and service: apiVersion: v1 kind: Namespace metadata: name: ns1-coo --- apiVersion: apps/v1 kind: Deployment metadata: labels: app: prometheus-coo-example-app name: prometheus-coo-example-app namespace: ns1-coo spec: replicas: 1 selector: matchLabels: app: prometheus-coo-example-app template: metadata: labels: app: prometheus-coo-example-app spec: containers: - image: ghcr.io/rhobs/prometheus-example-app:0.4.2 imagePullPolicy: IfNotPresent name: prometheus-coo-example-app --- apiVersion: v1 kind: Service metadata: labels: app: prometheus-coo-example-app name: prometheus-coo-example-app namespace: ns1-coo spec: ports: - port: 8080 protocol: TCP targetPort: 8080 name: web selector: app: prometheus-coo-example-app type: ClusterIP Save the file. Apply the configuration to the cluster by running the following command: USD oc apply -f prometheus-coo-example-app.yaml Verify that the pod is running by running the following command and observing the output: USD oc -n ns1-coo get pod Example output NAME READY STATUS RESTARTS AGE prometheus-coo-example-app-0927545cb7-anskj 1/1 Running 0 81m 16.4.2. Specifying how a service is monitored by Cluster Observability Operator To use the metrics exposed by the sample service you created in the "Deploying a sample service for Cluster Observability Operator" section, you must configure monitoring components to scrape metrics from the /metrics endpoint.
You can create this configuration by using a ServiceMonitor object that specifies how the service is to be monitored, or a PodMonitor object that specifies how a pod is to be monitored. The ServiceMonitor object requires a Service object. The PodMonitor object does not, which enables the MonitoringStack object to scrape metrics directly from the metrics endpoint exposed by a pod. This procedure shows how to create a ServiceMonitor object for a sample service named prometheus-coo-example-app in the ns1-coo namespace. Prerequisites You have access to the cluster as a user with the cluster-admin cluster role or as a user with administrative permissions for the namespace. You have installed the Cluster Observability Operator. You have deployed the prometheus-coo-example-app sample service in the ns1-coo namespace. Note The prometheus-coo-example-app sample service does not support TLS authentication. Procedure Create a YAML file named example-coo-app-service-monitor.yaml that contains the following ServiceMonitor object configuration details: apiVersion: monitoring.rhobs/v1alpha1 kind: ServiceMonitor metadata: labels: k8s-app: prometheus-coo-example-monitor name: prometheus-coo-example-monitor namespace: ns1-coo spec: endpoints: - interval: 30s port: web scheme: http selector: matchLabels: app: prometheus-coo-example-app This configuration defines a ServiceMonitor object that the MonitoringStack object will reference to scrape the metrics data exposed by the prometheus-coo-example-app sample service. Apply the configuration to the cluster by running the following command: USD oc apply -f example-coo-app-service-monitor.yaml Verify that the ServiceMonitor resource is created by running the following command and observing the output: USD oc -n ns1-coo get servicemonitor Example output NAME AGE prometheus-coo-example-monitor 81m 16.4.3. Creating a MonitoringStack object for the Cluster Observability Operator To scrape the metrics data exposed by the target prometheus-coo-example-app service, create a MonitoringStack object that references the ServiceMonitor object you created in the "Specifying how a service is monitored by Cluster Observability Operator" section. This MonitoringStack object can then discover the service and scrape the exposed metrics data from it. Prerequisites You have access to the cluster as a user with the cluster-admin cluster role or as a user with administrative permissions for the namespace. You have installed the Cluster Observability Operator. You have deployed the prometheus-coo-example-app sample service in the ns1-coo namespace. You have created a ServiceMonitor object named prometheus-coo-example-monitor in the ns1-coo namespace. Procedure Create a YAML file for the MonitoringStack object configuration. For this example, name the file example-coo-monitoring-stack.yaml . Add the following MonitoringStack object configuration details: Example MonitoringStack object apiVersion: monitoring.rhobs/v1alpha1 kind: MonitoringStack metadata: name: example-coo-monitoring-stack namespace: ns1-coo spec: logLevel: debug retention: 1d resourceSelector: matchLabels: k8s-app: prometheus-coo-example-monitor Apply the MonitoringStack object by running the following command: USD oc apply -f example-coo-monitoring-stack.yaml Verify that the MonitoringStack object is available by running the following command and inspecting the output: USD oc -n ns1-coo get monitoringstack Example output NAME AGE example-coo-monitoring-stack 81m
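As noted in "Specifying how a service is monitored by Cluster Observability Operator", a PodMonitor object can be used instead of a ServiceMonitor object when you want to scrape pods directly. The following sketch assumes that the monitoring.rhobs/v1alpha1 API group exposes a PodMonitor kind with fields analogous to the upstream Prometheus Operator schema, and that the container port 8080 of the sample deployment is named web (a PodMonitor selects a named container port); verify the exact schema for your Cluster Observability Operator version. The k8s-app label keeps the object discoverable by the resourceSelector of the MonitoringStack object shown above.
    apiVersion: monitoring.rhobs/v1alpha1
    kind: PodMonitor
    metadata:
      labels:
        k8s-app: prometheus-coo-example-monitor
      name: prometheus-coo-example-podmonitor
      namespace: ns1-coo
    spec:
      podMetricsEndpoints:
      - interval: 30s
        port: web   # assumes the container port is named "web"
        scheme: http
      selector:
        matchLabels:
          app: prometheus-coo-example-app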
|
[
"apiVersion: v1 kind: Namespace metadata: name: ns1-coo --- apiVersion: apps/v1 kind: Deployment metadata: labels: app: prometheus-coo-example-app name: prometheus-coo-example-app namespace: ns1-coo spec: replicas: 1 selector: matchLabels: app: prometheus-coo-example-app template: metadata: labels: app: prometheus-coo-example-app spec: containers: - image: ghcr.io/rhobs/prometheus-example-app:0.4.2 imagePullPolicy: IfNotPresent name: prometheus-coo-example-app --- apiVersion: v1 kind: Service metadata: labels: app: prometheus-coo-example-app name: prometheus-coo-example-app namespace: ns1-coo spec: ports: - port: 8080 protocol: TCP targetPort: 8080 name: web selector: app: prometheus-coo-example-app type: ClusterIP",
"oc apply -f prometheus-coo-example-app.yaml",
"oc -n -ns1-coo get pod",
"NAME READY STATUS RESTARTS AGE prometheus-coo-example-app-0927545cb7-anskj 1/1 Running 0 81m",
"apiVersion: monitoring.rhobs/v1alpha1 kind: ServiceMonitor metadata: labels: k8s-app: prometheus-coo-example-monitor name: prometheus-coo-example-monitor namespace: ns1-coo spec: endpoints: - interval: 30s port: web scheme: http selector: matchLabels: app: prometheus-coo-example-app",
"oc apply -f example-app-service-monitor.yaml",
"oc -n ns1-coo get servicemonitor",
"NAME AGE prometheus-coo-example-monitor 81m",
"apiVersion: monitoring.rhobs/v1alpha1 kind: MonitoringStack metadata: name: example-coo-monitoring-stack namespace: ns1-coo spec: logLevel: debug retention: 1d resourceSelector: matchLabels: k8s-app: prometheus-coo-example-monitor",
"oc apply -f example-coo-monitoring-stack.yaml",
"oc -n ns1-coo get monitoringstack",
"NAME AGE example-coo-monitoring-stack 81m"
] |
https://docs.redhat.com/en/documentation/openshift_container_platform/4.11/html/monitoring/cluster-observability-operator
|
Chapter 3. Building blocks active-passive deployments
|
Chapter 3. Building blocks active-passive deployments The following building blocks are needed to set up an active-passive deployment with synchronous replication. The building blocks link to a blueprint with an example configuration. They are listed in the order in which they need to be installed. Note We provide these blueprints to show a minimal functionally complete example with a good baseline performance for regular installations. You would still need to adapt it to your environment and your organization's standards and security best practices. 3.1. Prerequisites Understanding the concepts laid out in the Concepts for active-passive deployments chapter. 3.2. Two sites with low-latency connection Ensures that synchronous replication is available for both the database and the external Data Grid. Suggested setup: Two AWS Availability Zones within the same AWS Region. Not considered: Two regions on the same or different continents, as it would increase the latency and the likelihood of network failures. Synchronous replication of databases as a service with Aurora Regional Deployments on AWS is only available within the same region. 3.3. Environment for Red Hat build of Keycloak and Data Grid Ensures that the instances are deployed and restarted as needed. Suggested setup: Red Hat OpenShift Service on AWS (ROSA) deployed in each availability zone. Not considered: A stretched ROSA cluster which spans multiple availability zones, as this could be a single point of failure if misconfigured. 3.4. Database A synchronously replicated database across two sites. Blueprint: Deploy AWS Aurora in multiple availability zones . 3.5. Data Grid A deployment of Data Grid that leverages the Data Grid's Cross-DC functionality. Blueprint: Deploy Data Grid for HA with the Data Grid Operator using the Data Grid Operator, and connect the two sites using Data Grid's Gossip Router. Not considered: Direct interconnections between the Kubernetes clusters on the network layer. It might be considered in the future. 3.6. Red Hat build of Keycloak A clustered deployment of Red Hat build of Keycloak in each site, connected to an external Data Grid. Blueprint: Deploy Red Hat build of Keycloak for HA with the Red Hat build of Keycloak Operator together with Connect Red Hat build of Keycloak with an external Data Grid and the Aurora database. 3.7. Load balancer A load balancer which checks the /lb-check URL of the Red Hat build of Keycloak deployment in each site. Blueprint: Deploy an AWS Route 53 loadbalancer . Not considered: AWS Global Accelerator as it supports only weighted traffic routing and not active-passive failover. To support active-passive failover, additional logic using, for example, AWS CloudWatch and AWS Lambda would be necessary to simulate the active-passive handling by adjusting the weights when the probes fail.
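A quick way to verify a site against the health probe described in section 3.7 is to request the /lb-check endpoint directly. The hostname below is a placeholder for the Red Hat build of Keycloak hostname of the site you want to check; a 200 response code indicates that the site would pass the load balancer health check.
    # print only the HTTP status code returned by the health endpoint
    curl -s -o /dev/null -w "%{http_code}\n" https://keycloak-site-a.example.com/lb-check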
| null |
https://docs.redhat.com/en/documentation/red_hat_build_of_keycloak/24.0/html/high_availability_guide/bblocks-active-passive-sync-
|
Chapter 10. Running certification tests by using CLI and downloading the results file
|
Chapter 10. Running certification tests by using CLI and downloading the results file To run the certification tests by using CLI, you must prepare the host and download the test plan to the SUT. After running the tests, download the results and review them. 10.1. Using the test plan to prepare the host under test for testing Running the provision command performs a number of operations, such as setting up passwordless SSH communication with the test server, installing the required packages on your system based on the certification type, and creating a final test plan to run, which is a list of common tests taken from both the test plan provided by Red Hat and tests generated on discovering the system requirements. For instance, required hardware or software packages will be installed if the test plan is designed for certifying a hardware or a software product. Prerequisites You have the hostname or the IP address of the test server. Procedure Run the provision command in either of the following ways. The test plan will automatically get downloaded to your system. If you have already downloaded the test plan: Replace <path_to_test_plan_document> with the test plan file saved on your system. Follow the on-screen instructions. If you have not downloaded the test plan: Follow the on-screen instructions and enter your Certification ID when prompted. When prompted, provide the hostname or the IP address of the test server to set up passwordless SSH. You are prompted only the first time you add a new system. 10.2. Running the certification tests using CLI Procedure Run the following command: When prompted, choose whether to run each test by typing yes or no . You can also run particular tests from the list by typing select . Note After a test reboot, rhcert is running in the background to verify the image. Use tail -f /var/log/rhcert/RedHatCertDaemon.log to see the current progress and status of the verification. 10.3. Reviewing and downloading the results file of the executed test plan Procedure Download the test results file to your local system by using the rhcert-save command. Additional resources For more details on setting up and using cockpit for running the certification tests, see the Appendix .
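Assuming a downloaded test plan saved at ~/test_plan.xml (an illustrative path and file name), an end-to-end CLI session that follows the sections above might look like this sketch:
    rhcert-provision ~/test_plan.xml                 # prepare the host by using the downloaded test plan
    rhcert-run                                       # run the certification tests
    tail -f /var/log/rhcert/RedHatCertDaemon.log     # optional: watch verification progress after a test reboot
    rhcert-save                                      # download the results file to your local system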
|
[
"rhcert-provision <path_to_test_plan_document>",
"rhcert-provision",
"rhcert-run",
"rhcert-save"
] |
https://docs.redhat.com/en/documentation/red_hat_software_certification/2025/html/red_hat_software_certification_workflow_guide/assembly_running-certification-tests-by-using-cli-and-downloading-the-results-file_openshift-sw-cert-workflow-setting-up-the-test-environment-for-non-containerized-application-testing
|
Getting Started
|
Getting Started Red Hat Enterprise Linux AI 1.1 Introduction to RHEL AI with product architecture and hardware requirements Red Hat RHEL AI Documentation Team
| null |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux_ai/1.1/html/getting_started/index
|
Chapter 3. Managing secured clusters
|
Chapter 3. Managing secured clusters To secure a Kubernetes or an OpenShift Container Platform cluster, you must deploy Red Hat Advanced Cluster Security for Kubernetes (RHACS) services into the cluster. You can generate deployment files in the RHACS portal by navigating to the Platform Configuration Clusters view, or you can use the roxctl CLI. 3.1. Prerequisites You have configured the ROX_ENDPOINT environment variable using the following command: USD export ROX_ENDPOINT= <host:port> 1 1 The host and port information that you want to store in the ROX_ENDPOINT environment variable. 3.2. Generating Sensor deployment files Generating files for Kubernetes systems Procedure Generate the required sensor configuration for your Kubernetes cluster and associate it with your Central instance by running the following command: USD roxctl sensor generate k8s --name <cluster_name> --central "USDROX_ENDPOINT" Generating files for OpenShift Container Platform systems Procedure Generate the required sensor configuration for your OpenShift Container Platform cluster and associate it with your Central instance by running the following command: USD roxctl sensor generate openshift --openshift-version <ocp_version> --name <cluster_name> --central "USDROX_ENDPOINT" 1 1 For the --openshift-version option, specify the major OpenShift Container Platform version number for your cluster. For example, specify 3 for OpenShift Container Platform version 3.x and specify 4 for OpenShift Container Platform version 4.x . Read the --help output to see other options that you might need to use depending on your system architecture. Verify that the endpoint you provide for --central can be reached from the cluster where you are deploying Red Hat Advanced Cluster Security for Kubernetes services. Important If you are using a non-gRPC capable load balancer, such as HAProxy, AWS Application Load Balancer (ALB), or AWS Elastic Load Balancing (ELB), follow these guidelines: Use the WebSocket Secure ( wss ) protocol. To use wss , prefix the address with wss:// , and Add the port number after the address, for example: USD roxctl sensor generate k8s --central wss://stackrox-central.example.com:443 3.3. Installing Sensor by using the sensor.sh script When you generate the Sensor deployment files, roxctl creates a directory called sensor-<cluster_name> in your working directory. The script to install Sensor is located in this directory. Procedure Run the sensor installation script to install Sensor: USD ./sensor- <cluster_name> /sensor.sh If you get a warning that you do not have the required permissions to install Sensor, follow the on-screen instructions, or contact your cluster administrator for help. 3.4. Downloading Sensor bundles for existing clusters Procedure Run the following command to download Sensor bundles for existing clusters by specifying a cluster name or ID : USD roxctl sensor get-bundle <cluster_name_or_id> 3.5. Deleting cluster integration Procedure Before deleting the cluster, ensure you have the correct cluster name that you want to remove from Central: USD roxctl cluster delete --name= <cluster_name> Important Deleting the cluster integration does not remove the RHACS services running in the cluster, depending on the installation method. You can remove the services by running the delete-sensor.sh script from the Sensor installation bundle.
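After the sensor.sh script finishes, you can confirm that the Sensor components are running before you continue. The stackrox namespace in this sketch is the namespace that generated deployment files typically target; adjust it if your bundle installs into a different namespace.
    oc get pods -n stackrox        # OpenShift Container Platform clusters
    kubectl get pods -n stackrox   # other Kubernetes clusters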
|
[
"export ROX_ENDPOINT= <host:port> 1",
"roxctl sensor generate k8s --name <cluster_name> --central \"USDROX_ENDPOINT\"",
"roxctl sensor generate openshift --openshift-version <ocp_version> --name <cluster_name> --central \"USDROX_ENDPOINT\" 1",
"roxctl sensor generate k8s --central wss://stackrox-central.example.com:443",
"./sensor- <cluster_name> /sensor.sh",
"roxctl sensor get-bundle <cluster_name_or_id>",
"roxctl cluster delete --name= <cluster_name>"
] |
https://docs.redhat.com/en/documentation/red_hat_advanced_cluster_security_for_kubernetes/4.5/html/roxctl_cli/managing-secured-clusters-1
|
Observability overview
|
Observability overview OpenShift Container Platform 4.14 Contains information about CI/CD for OpenShift Container Platform Red Hat OpenShift Documentation Team
| null |
https://docs.redhat.com/en/documentation/openshift_container_platform/4.14/html-single/observability_overview/index
|
Installing OpenShift Serverless
|
Installing OpenShift Serverless Red Hat OpenShift Serverless 1.35 Installing the Serverless Operator, Knative CLI, Knative Serving, and Knative Eventing Red Hat OpenShift Documentation Team
| null |
https://docs.redhat.com/en/documentation/red_hat_openshift_serverless/1.35/html/installing_openshift_serverless/index
|
Storage
|
Storage OpenShift Container Platform 4.11 Configuring and managing storage in OpenShift Container Platform Red Hat OpenShift Documentation Team
| null |
https://docs.redhat.com/en/documentation/openshift_container_platform/4.11/html/storage/index
|
Chapter 1. Overview
|
Chapter 1. Overview Red Hat(R) Enterprise Linux(R) for SAP Solutions combines the reliability, scalability, and performance of Linux with technologies that meet the specific requirements of SAP workloads. It is certified for integration with SAP S/4HANA(R) and built on the same foundation as the world's leading enterprise Linux platform, Red Hat Enterprise Linux (RHEL). For more information on RHEL for SAP Solutions, see the Red Hat Enterprise Linux for SAP Solutions product page.
| null |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux_for_sap_solutions/8/html/8.x_release_notes/con_overview_8.x_release_notes
|
12.3. Configuring the Log Format
|
12.3. Configuring the Log Format You can configure the Red Hat Gluster Storage Server to generate log messages either with message IDs or without them. To know more about these options, see topic Configuring Volume Options in the Red Hat Gluster Storage Administration Guide . To configure the log-format for bricks of a volume: Example 12.1. Generate log files with with-msg-id : Example 12.2. Generate log files with no-msg-id : To configure the log-format for clients of a volume: Example 12.3. Generate log files with with-msg-id : Example 12.4. Generate log files with no-msg-id : To configure the log format for glusterd : Example 12.5. Generate log files with with-msg-id : Example 12.6. Generate log files with no-msg-id : See Also: Section 11.1, "Configuring Volume Options"
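To confirm that the option took effect on a volume, you can set it and then read it back. This sketch reuses the testvol volume from the examples above and assumes that the gluster volume get command is available in your Red Hat Gluster Storage version.
    gluster volume set testvol diagnostics.brick-log-format with-msg-id
    gluster volume get testvol diagnostics.brick-log-format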
|
[
"gluster volume set VOLNAME diagnostics.brick-log-format <value>",
"gluster volume set testvol diagnostics.brick-log-format with-msg-id",
"gluster volume set testvol diagnostics.brick-log-format no-msg-id",
"gluster volume set VOLNAME diagnostics.client-log-format <value>",
"gluster volume set testvol diagnostics.client-log-format with-msg-id",
"gluster volume set testvol diagnostics.client-log-format no-msg-id",
"glusterd --log-format=<value>",
"glusterd --log-format=with-msg-id",
"glusterd --log-format=no-msg-id"
] |
https://docs.redhat.com/en/documentation/red_hat_gluster_storage/3.5/html/administration_guide/configuring_the_log_format
|
Chapter 4. Configuring the Bare Metal Provisioning service after deployment
|
Chapter 4. Configuring the Bare Metal Provisioning service after deployment When you have deployed your overcloud with the Bare Metal Provisioning service (ironic), you must prepare your overcloud for bare-metal workloads. To prepare your overcloud for bare-metal workloads and enable your cloud users to create bare-metal instances, complete the following tasks: Configure the Networking service (neutron) to integrate with the Bare Metal Provisioning service. Configure node cleaning. Create the bare-metal flavor and resource class. Optional: Create the bare-metal images. Add physical machines as bare-metal nodes. Optional: Configure Redfish virtual media boot. Optional: Create host aggregates to separate physical and virtual machine provisioning. 4.1. Configuring the Networking service for bare metal provisioning You can configure the Networking service (neutron) to integrate with the Bare Metal Provisioning service (ironic). You can configure the bare-metal network by using one of the following methods: Create a single flat bare-metal network for the Bare Metal Provisioning conductor services, ironic-conductor . This network must route to the Bare Metal Provisioning services on the control plane network. Create a custom composable network to implement Bare Metal Provisioning services in the overcloud. 4.1.1. Configuring the Networking service to integrate with the Bare Metal Provisioning service on a flat network You can configure the Networking service (neutron) to integrate with the Bare Metal Provisioning service (ironic) by creating a single flat bare-metal network for the Bare Metal Provisioning conductor services, ironic-conductor . This network must route to the Bare Metal Provisioning services on the control plane network. Procedure Log in to the node that hosts the Networking service (neutron) as the root user. Source your overcloud credentials file: Replace <credentials_file> with the name of your credentials file, for example, overcloudrc . Create the flat network over which to provision bare-metal instances: Replace <provider_physical_network> with the name of the physical network over which you implement the virtual network, which is configured with the parameter NeutronBridgeMappings in your network-environment.yaml file. Replace <network_name> with a name for this network. Create the subnet on the flat network: Replace <network_name> with the name of the provisioning network that you created in the step. Replace <network_cidr> with the Classless Inter-Domain Routing (CIDR) representation of the block of IP addresses that the subnet represents. The block of IP addresses that you specify in the range starting with <start_ip> and ending with <end_ip> must be within the block of IP addresses specified by <network_cidr> . Replace <gateway_ip> with the IP address or host name of the router interface that acts as the gateway for the new subnet. This address must be within the block of IP addresses specified by <network_cidr> , but outside of the block of IP addresses specified by the range that starts with <start_ip> and ends with <end_ip> . Replace <start_ip> with the IP address that denotes the start of the range of IP addresses within the new subnet from which floating IP addresses are allocated. Replace <end_ip> with the IP address that denotes the end of the range of IP addresses within the new subnet from which floating IP addresses are allocated. Replace <subnet_name> with a name for the subnet. 
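Using the placeholder names defined in the procedure above, the flat network and subnet creation steps might look like the following sketch; the exact option set can vary with your Networking service configuration. The router steps that follow continue the procedure.
    openstack network create --provider-network-type flat --provider-physical-network <provider_physical_network> <network_name>
    openstack subnet create --network <network_name> --subnet-range <network_cidr> --gateway <gateway_ip> --allocation-pool start=<start_ip>,end=<end_ip> <subnet_name>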
Create a router for the network and subnet to ensure that the Networking service serves metadata requests: Replace <router_name> with a name for the router. Attach the subnet to the new router to enable the metadata requests from cloud-init to be served and the node to be configured: : Replace <router_name> with the name of your router. Replace <subnet> with the ID or name of the bare-metal subnet that you created in the step 4. 4.1.2. Configuring the Networking service to integrate with the Bare Metal Provisioning service on a custom composable network You can configure the Networking service (neutron) to integrate with the Bare Metal Provisioning service (ironic) by creating a custom composable network to implement Bare Metal Provisioning services in the overcloud. Procedure Log in to the undercloud host. Source your overcloud credentials file: Replace <credentials_file> with the name of your credentials file, for example, overcloudrc . Retrieve the UUID for the provider network that hosts the Bare Metal Provisioning service: Replace <network_name> with the name of the provider network that you want to use for the bare-metal instance provisioning network. Open your local environment file that configures the Bare Metal Provisioning service for your deployment, for example, ironic-overrides.yaml . Configure the network to use as the bare-metal instance provisioning network: Replace <network_uuid> with the UUID of the provider network retrieved in step 3. Source the stackrc undercloud credentials file: To apply the bare-metal instance provisioning network configuration, add your Bare Metal Provisioning environment files to the stack with your other environment files and deploy the overcloud: Replace <default_ironic_template> with either ironic.yaml or ironic-overcloud.yaml , depending on the Networking service mechanism driver for your deployment. 4.2. Cleaning bare-metal nodes The Bare Metal Provisioning service cleans nodes to prepare them for provisioning. You can clean bare-metal nodes by using one of the following methods: Automatic: You can configure your overcloud to automatically perform node cleaning when you unprovision a node. Manual: You can manually clean individual nodes when required. 4.2.1. Configuring automatic node cleaning Automatic bare-metal node cleaning runs after you enroll a node, and before the node reaches the available provisioning state. Automatic cleaning is run each time the node is unprovisioned. By default, the Bare Metal Provisioning service uses a network named provisioning for node cleaning. However, network names are not unique in the Networking service (neutron), so it is possible for a project to create a network with the same name, which causes a conflict with the Bare Metal Provisioning service. To avoid the conflict, use the network UUID to configure the node cleaning network. Procedure Log in to the undercloud host. Source your overcloud credentials file: Replace <credentials_file> with the name of your credentials file, for example, overcloudrc . Retrieve the UUID for the provider network that hosts the Bare Metal Provisioning service: Replace <network_name> with the name of the network that you want to use for the bare-metal node cleaning network. Open your local environment file that configures the Bare Metal Provisioning service for your deployment, for example, ironic-overrides.yaml . Configure the network to use as the node cleaning network: Replace <network_uuid> with the UUID of the provider network that you retrieved in step 3. 
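As a sketch of the node cleaning network configuration described above: retrieve the network UUID, then set it in your local environment file. The IronicCleaningNetwork parameter name is an assumption based on the Bare Metal Provisioning heat parameters for this release; confirm it against your environment files before deploying.
    openstack network show <network_name> -f value -c id
    # ironic-overrides.yaml
    parameter_defaults:
      IronicCleaningNetwork: <network_uuid>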
Source the stackrc undercloud credentials file: To apply the node cleaning network configuration, add your Bare Metal Provisioning environment files to the stack with your other environment files and deploy the overcloud: Replace <default_ironic_template> with either ironic.yaml or ironic-overcloud.yaml , depending on the Networking service mechanism driver for your deployment. 4.2.2. Cleaning nodes manually You can clean specific nodes manually as required. Node cleaning has two modes: Metadata only clean: Removes partitions from all disks on the node. The metadata only mode of cleaning is faster than a full clean, but less secure because it erases only partition tables. Use this mode only on trusted tenant environments. Full clean: Removes all data from all disks, using either ATA secure erase or by shredding. A full clean can take several hours to complete. Procedure Source your overcloud credentials file: Replace <credentials_file> with the name of your credentials file, for example, overcloudrc . Check the current state of the node: Replace <node> with the name or UUID of the node to clean. If the node is not in the manageable state, then set it to manageable : Clean the node: Replace <node> with the name or UUID of the node to clean. Replace <clean_mode> with the type of cleaning to perform on the node: erase_devices : Performs a full clean. erase_devices_metadata : Performs a metadata only clean. Wait for the clean to complete, then check the status of the node: manageable : The clean was successful, and the node is ready to provision. clean failed : The clean was unsuccessful. Inspect the last_error field for the cause of failure. 4.3. Creating flavors for launching bare-metal instances You must create flavors that your cloud users can use to request bare-metal instances. You can specify which bare-metal nodes should be used for bare-metal instances launched with a particular flavor by using a resource class. You can tag bare-metal nodes with resource classes that identify the hardware resources on the node, for example, GPUs. The cloud user can select a flavor with the GPU resource class to create an instance for a vGPU workload. The Compute scheduler uses the resource class to identify suitable host bare-metal nodes for instances. Procedure Source the overcloud credentials file: Create a flavor for bare-metal instances: Replace <ram_size_mb> with the RAM of the bare metal node, in MB. Replace <disk_size_gb> with the size of the disk on the bare metal node, in GB. Replace <no_vcpus> with the number of CPUs on the bare metal node. Note These properties are not used for scheduling instances. However, the Compute scheduler does use the disk size to determine the root partition size. Retrieve a list of your nodes to identify their UUIDs: Tag each bare-metal node with a custom bare-metal resource class: Replace <CUSTOM> with a string that identifies the purpose of the resource class. For example, set to GPU to create a custom GPU resource class that you can use to tag bare metal nodes that you want to designate for GPU workloads. Replace <node> with the ID of the bare metal node. Associate the flavor for bare-metal instances with the custom resource class: To determine the name of a custom resource class that corresponds to a resource class of a bare-metal node, convert the resource class to uppercase, replace each punctuation mark with an underscore, and prefix with CUSTOM_ . Note A flavor can request only one instance of a bare-metal resource class. 
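The flavor and resource class steps above might look like the following sketch, in which the flavor name baremetal and the GPU resource class are illustrative assumptions; the remaining flavor properties are set in the next step.
    # create the bare-metal flavor with the documented sizing placeholders
    openstack flavor create --id auto --ram <ram_size_mb> --disk <disk_size_gb> --vcpus <no_vcpus> baremetal
    # tag a node with a custom resource class
    openstack baremetal node set <node> --resource-class baremetal.GPU
    # associate the flavor with the corresponding CUSTOM_ resource class
    openstack flavor set baremetal --property resources:CUSTOM_BAREMETAL_GPU=1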
Set the following flavor properties to prevent the Compute scheduler from using the bare-metal flavor properties to schedule instances: Verify that the new flavor has the correct values: 4.4. Creating images for launching bare-metal instances An overcloud that includes the Bare Metal Provisioning service (ironic) requires two sets of images: Deploy images: The deploy images are the agent.ramdisk and agent.kernel images that the Bare Metal Provisioning agent ( ironic-python-agent ) requires to boot the RAM disk over the network and copy the user image for the overcloud nodes to the disk. You install the deploy images as part of the undercloud installation. For more information, see Obtaining images for overcloud nodes . User images: The images the cloud user uses to provision their bare-metal instances. The user image consists of a kernel image, a ramdisk image, and a main image. The main image is either a root partition, or a whole-disk image: Whole-disk image: An image that contains the partition table and boot loader. Root partition image: Contains only the root partition of the operating system. Compatible whole-disk RHEL guest images should work without modification. To create your own custom disk image, see Creating images in the Creating and Managing Images guide. 4.4.1. Uploading the deploy images to the Image service You must upload the deploy images installed by director to the Image service. The deploy image consists of the following two images: The kernel image: /tftpboot/agent.kernel The ramdisk image: /tftpboot/agent.ramdisk These images are installed in the home directory. For more information on how the deploy images were installed, see Obtaining images for overcloud nodes . Procedure Extract the images and upload them to the Image service: 4.5. Configuring deploy interfaces When you provision bare metal nodes, the Bare Metal Provisioning service (ironic) on the overcloud writes a base operating system image to the disk on the bare metal node. By default, the deploy interface mounts the image on an iSCSI mount and then copies the image to disk on each node. Alternatively, you can use direct deploy, which writes disk images from a HTTP location directly to disk on bare metal nodes. Note Support for the iSCSI deploy interface will be deprecated in Red Hat OpenStack Platform (RHOSP) version 17.0, and will be removed in RHOSP 18.0. Direct deploy will be the default deploy interface from RHOSP 17.0. Deploy interfaces have a critical role in the provisioning process. Deploy interfaces orchestrate the deployment and define the mechanism for transferring the image to the target disk. Prerequisites Dependent packages configured on the bare metal service nodes that run ironic-conductor . Configure OpenStack Compute (nova) to use the bare metal service endpoint. Create flavors for the available hardware, and nova must boot the new node from the correct flavor. Images must be available in the Image service (glance): bm-deploy-kernel bm-deploy-ramdisk user-image user-image-vmlinuz user-image-initrd Hardware to enroll with the Ironic API service. Workflow Use the following example workflow to understand the standard deploy process. Depending on the ironic driver interfaces that you use, some of the steps might differ: The Nova scheduler receives a boot instance request from the Nova API. The Nova scheduler identifies the relevant hypervisor and identifies the target physical node. The Nova compute manager claims the resources of the selected hypervisor. 
The Nova compute manager creates unbound tenant virtual interfaces (VIFs) in the Networking service according to the network interfaces that the nova boot request specifies. Nova compute invokes driver.spawn from the Nova compute virt layer to create a spawn task that contains all of the necessary information. During the spawn process, the virt driver completes the following steps. Updates the target ironic node with information about the deploy image, instance UUID, requested capabilities, and flavor properties. Calls the ironic API to validate the power and deploy interfaces of the target node. Attaches the VIFs to the node. Each neutron port can be attached to any ironic port or group. Port groups have higher priority than ports. Generates config drive. The Nova ironic virt driver issues a deploy request with the Ironic API to the Ironic conductor that services the bare metal node. Virtual interfaces are plugged in and the Neutron API updates DHCP to configure PXE/TFTP options. The ironic node boot interface prepares (i)PXE configuration and caches the deploy kernel and ramdisk. The ironic node management interface issues commands to enable network boot of the node. The ironic node deploy interface caches the instance image, kernel, and ramdisk, if necessary. The ironic node power interface instructs the node to power on. The node boots the deploy ramdisk. With iSCSI deployment, the conductor copies the image over iSCSI to the physical node. With direct deployment, the deploy ramdisk downloads the image from a temporary URL. This URL must be a Swift API compatible object store or a HTTP URL. The node boot interface switches PXE configuration to refer to instance images and instructs the ramdisk agent to soft power off the node. If the soft power off fails, the bare metal node is powered off with IPMI/BMC. The deploy interface instructs the network interface to remove any provisioning ports, binds the tenant ports to the node, and powers the node on. The provisioning state of the new bare metal node is now active . 4.5.1. Configuring the direct deploy interface on the overcloud The iSCSI deploy interface is the default deploy interface. However, you can enable the direct deploy interface to download an image from a HTTP location to the target disk. Note Support for the iSCSI deploy interface will be deprecated in Red Hat OpenStack Platform (RHOSP) version 17.0, and will be removed in RHOSP 18.0. Direct deploy will be the default deploy interface from RHOSP 17.0. Prerequisites Your overcloud node memory tmpfs must have at least 8GB of RAM. Procedure Create or modify a custom environment file /home/stack/templates/direct_deploy.yaml and specify the IronicEnabledDeployInterfaces and the IronicDefaultDeployInterface parameters. If you register your nodes with iSCSI, retain the iscsi value in the IronicEnabledDeployInterfaces parameter: By default, the Bare Metal Provisioning service (ironic) agent on each node obtains the image stored in the Object Storage Service (swift) through a HTTP link. Alternatively, ironic can stream this image directly to the node through the ironic-conductor HTTP server. To change the service that provides the image, set the IronicImageDownloadSource to http in the /home/stack/templates/direct_deploy.yaml file: Include the custom environment with your overcloud deployment: Wait until deployment completes. 
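Based on the parameters named in this section, the custom environment file and deployment command might look like the following sketch. The file path and the <other_environment_files> placeholder are illustrative; include every environment file that your overcloud deployment already uses.
    # /home/stack/templates/direct_deploy.yaml
    parameter_defaults:
      IronicEnabledDeployInterfaces: direct,iscsi
      IronicDefaultDeployInterface: direct
      IronicImageDownloadSource: http

    openstack overcloud deploy --templates -e <other_environment_files> -e /home/stack/templates/direct_deploy.yaml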
Note If you did not specify IronicDefaultDeployInterface or want to use a different deploy interface, specify the deploy interface when you create or update a node: 4.6. Adding physical machines as bare metal nodes Use one of the following methods to enroll a bare metal node: Prepare an inventory file with the node details, import the file into the Bare Metal Provisioning service, and make the nodes available. Register a physical machine as a bare metal node, and then manually add its hardware details and create ports for each of its Ethernet MAC addresses. You can perform these steps on any node that has your overcloudrc file. 4.6.1. Enrolling a bare metal node with an inventory file Prepare an inventory file with the node details, import the file into the Bare Metal Provisioning service (ironic), and make the nodes available. Prerequisites An overcloud deployment that includes the Bare Metal Provisioning service. For more information, see Deploying an overcloud with the Bare Metal Provisioning service . Procedure Create an inventory file, overcloud-nodes.yaml , that includes the node details. You can enroll multiple nodes with one file. Replace <ipmi_ip> with the address of the Bare Metal controller. Replace <user> with your username. Replace <password> with your password. Optional: Replace <property>: <value> with an IPMI property that you want to configure, and the property value. For information on the available properties, see Intelligent Platform Management Interface (IPMI) power management driver . Replace <cpu_count> with the number of CPUs. Replace <cpu_arch> with the type of architecture of the CPUs. Replace <memory> with the amount of memory in MiB. Replace <root_disk> with the size of the root disk in GiB. Only required when the machine has multiple disks. Replace <serial> with the serial number of the disk that you want to use for deployment. Replace <mac_address> with the MAC address of the NIC used to PXE boot. --driver-info <property>=<value> Source the overcloudrc file: Import the inventory file into the Bare Metal Provisioning service: The nodes are now in the enroll state. Specify the deploy kernel and deploy ramdisk on each node: Replace <node> with the name or ID of the node. Replace <kernel_file> with the path to the .kernel image, for example, file:///var/lib/ironic/httpboot/agent.kernel . Replace <initramfs_file> with the path to the .initramfs image, for example, file:///var/lib/ironic/httpboot/agent.ramdisk . Optional: Specify the IPMI cipher suite for each node: Replace <node> with the name or ID of the node. Replace <version> with the cipher suite version to use on the node. Set to one of the following valid values: 3 - The node uses the AES-128 with SHA1 cipher suite. 17 - The node uses the AES-128 with SHA256 cipher suite. Set the provisioning state of the node to available : The Bare Metal Provisioning service cleans the node if you enabled node cleaning. Set the local boot option on the node: Check that the nodes are enrolled: There might be a delay between enrolling a node and its state being shown. 4.6.2. Enrolling a bare-metal node manually Register a physical machine as a bare metal node, then manually add its hardware details and create ports for each of its Ethernet MAC addresses. You can perform these steps on any node that has your overcloudrc file. Prerequisites An overcloud deployment that includes the Bare Metal Provisioning service. For more information, see Deploying an overcloud with the Bare Metal Provisioning service . 
The driver for the new node must be enabled by using the IronicEnabledHardwareTypes parameter. For more information about supported drivers, see Bare metal drivers . Procedure Log in to the undercloud host as the stack user. Source the overcloud credentials file: Add a new node: Replace <driver_name> with the name of the driver, for example, ipmi . Replace <node_name> with the name of your new bare-metal node. Note the UUID assigned to the node when it is created. Set the boot option to local for each registered node: Replace <node> with the UUID of the bare metal node. Specify the deploy kernel and deploy ramdisk for the node driver: Replace <node> with the ID of the bare metal node. Replace <kernel_file> with the path to the .kernel image, for example, file:///var/lib/ironic/httpboot/agent.kernel . Replace <initramfs_file> with the path to the .initramfs image, for example, file:///var/lib/ironic/httpboot/agent.ramdisk . Update the node properties to match the hardware specifications on the node: Replace <node> with the ID of the bare metal node. Replace <cpu> with the number of CPUs. Replace <ram> with the RAM in MB. Replace <disk> with the disk size in GB. Replace <arch> with the architecture type. Optional: Specify the IPMI cipher suite for each node: Replace <node> with the ID of the bare metal node. Replace <version> with the cipher suite version to use on the node. Set to one of the following valid values: 3 - The node uses the AES-128 with SHA1 cipher suite. 17 - The node uses the AES-128 with SHA256 cipher suite. Optional: Specify the IPMI details for each node: Replace <node> with the ID of the bare metal node. Replace <property> with the IPMI property that you want to configure. For information on the available properties, see Intelligent Platform Management Interface (IPMI) power management driver . Replace <value> with the property value. Optional: If you have multiple disks, set the root device hints to inform the deploy ramdisk which disk to use for deployment: Replace <node> with the ID of the bare metal node. Replace <property> and <value> with details about the disk that you want to use for deployment, for example root_device='{"size": "128"}' RHOSP supports the following properties: model (String): Device identifier. vendor (String): Device vendor. serial (String): Disk serial number. hctl (String): Host:Channel:Target:Lun for SCSI. size (Integer): Size of the device in GB. wwn (String): Unique storage identifier. wwn_with_extension (String): Unique storage identifier with the vendor extension appended. wwn_vendor_extension (String): Unique vendor storage identifier. rotational (Boolean): True for a rotational device (HDD), otherwise false (SSD). name (String): The name of the device, for example: /dev/sdb1 Use this property only for devices with persistent names. Note If you specify more than one property, the device must match all of those properties. Inform the Bare Metal Provisioning service of the node network card by creating a port with the MAC address of the NIC on the provisioning network: Replace <node> with the unique ID of the bare metal node. Replace <mac_address> with the MAC address of the NIC used to PXE boot. Validate the configuration of the node: The validation output Result indicates the following: False : The interface has failed validation. 
If the reason provided includes missing the instance_info parameters [\'ramdisk', \'kernel', and \'image_source'] , this might be because the Compute service populates those missing parameters at the beginning of the deployment process, therefore they have not been set at this point. If you are using a whole disk image, then you might need to only set image_source to pass the validation. True : The interface has passed validation. None : The interface is not supported for your driver. 4.6.3. Bare-metal node provisioning states A bare-metal node transitions through several provisioning states during its lifetime. API requests and conductor events performed on the node initiate the transitions. There are two categories of provisioning states: "stable" and "in transition". Use the following table to understand the provisioning states a node can be in, and the actions that are available for you to use to transition the node from one provisioning state to another. Table 4.1. Provisioning states State Category Description enroll Stable The initial state of each node. For information on enrolling a node, see Adding physical machines as bare metal nodes . verifying In transition The Bare Metal Provisioning service validates that it can manage the node by using the driver_info configuration provided during the node enrollment. manageable Stable The node is transitioned to the manageable state when the Bare Metal Provisioning service has verified that it can manage the node. You can transition the node from the manageable state to one of the following states by using the following commands: openstack baremetal node adopt adopting active openstack baremetal node provide cleaning available openstack baremetal node clean cleaning available openstack baremetal node inspect inspecting manageable You must move a node to the manageable state after it is transitioned to one of the following failed states: adopt failed clean failed inspect failed Move a node into the manageable state when you need to update the node. inspecting In transition The Bare Metal Provisioning service uses node introspection to update the hardware-derived node properties to reflect the current state of the hardware. The node transitions to manageable for synchronous inspection, and inspect wait for asynchronous inspection. The node transitions to inspect failed if an error occurs. inspect wait In transition The provision state that indicates that an asynchronous inspection is in progress. If the node inspection is successful, the node transitions to the manageable state. inspect failed Stable The provisioning state that indicates that the node inspection failed. You can transition the node from the inspect failed state to one of the following states by using the following commands: openstack baremetal node inspect inspecting manageable openstack baremetal node manage manageable cleaning In transition Nodes in the cleaning state are being scrubbed and reprogrammed into a known configuration. When a node is in the cleaning state, depending on the network management, the conductor performs the following tasks: Out-of-band: The conductor performs the clean step. In-band: The conductor prepares the environment to boot the ramdisk for running the in-band clean steps. The preparation tasks include building the PXE configuration files, and configuring the DHCP. clean wait In transition Nodes in the clean wait state are being scrubbed and reprogrammed into a known configuration. 
This state is similar to the cleaning state except that in the clean wait state, the conductor is waiting for the ramdisk to boot or the clean step to finish. You can interrupt the cleaning process of a node in the clean wait state by running openstack baremetal node abort . available Stable After nodes have been successfully preconfigured and cleaned, they are moved into the available state and are ready to be provisioned. You can transition the node from the available state to one of the following states by using the following commands: openstack baremetal node deploy deploying active openstack baremetal node manage manageable deploying In transition Nodes in the deploying state are being prepared for a workload, which involves performing the following tasks: Setting appropriate BIOS options for the node deployment. Partitioning drives and creating file systems. Creating any additional resources that may be required by additional subsystems, such as the node-specific network configuration, and a configuratin drive partition. wait call-back In transition Nodes in the wait call-back state are being prepared for a workload. This state is similar to the deploying state except that in the wait call-back state, the conductor is waiting for a task to complete before preparing the node. For example, the following tasks must be completed before the conductor can prepare the node: The ramdisk has booted. The bootloader is installed. The image is written to the disk. You can interrupt the deployment of a node in the wait call-back state by running openstack baremetal node delete or openstack baremetal node undeploy . deploy failed Stable The provisioning state that indicates that the node deployment failed. You can transition the node from the deploy failed state to one of the following states by using the following commands: openstack baremetal node deploy deploying active openstack baremetal node rebuild deploying active openstack baremetal node delete deleting cleaning clean wait cleaning available openstack baremetal node undeploy deleting cleaning clean wait cleaning available active Stable Nodes in the active state have a workload running on them. The Bare Metal Provisioning service may regularly collect out-of-band sensor information, including the power state. You can transition the node from the active state to one of the following states by using the following commands: openstack baremetal node delete deleting available openstack baremetal node undeploy cleaning available openstack baremetal node rebuild deploying active openstack baremetal node rescue rescuing rescue deleting In transition When a node is in the deleting state, the Bare Metal Provisioning service disassembles the active workload and removes any configuration and resources it added to the node during the node deployment or rescue. Nodes transition quickly from the deleting state to the cleaning state, and then to the clean wait state. error Stable If a node deletion is unsuccessful, the node is moved into the error state. You can transition the node from the error state to one of the following states by using the following commands: openstack baremetal node delete deleting available openstack baremetal node undeploy cleaning available adopting In transition You can use the openstack baremetal node adopt command to transition a node with an existing workload directly from manageable to active state without first cleaning and deploying the node. 
When a node is in the adopting state the Bare Metal Provisioning service has taken over management of the node with its existing workload. rescuing In transition Nodes in the rescuing state are being prepared to perform the following rescue operations: Setting appropriate BIOS options for the node deployment. Creating any additional resources that may be required by additional subsystems, such as node-specific network configurations. rescue wait In transition Nodes in the rescue wait state are being rescued. This state is similar to the rescuing state except that in the rescue wait state, the conductor is waiting for the ramdisk to boot, or to execute parts of the rescue which need to run in-band on the node, such as setting the password for user named rescue. You can interrupt the rescue operation of a node in the rescue wait state by running openstack baremetal node abort . rescue failed Stable The provisioning state that indicates that the node rescue failed. You can transition the node from the rescue failed state to one of the following states by using the following commands: openstack baremetal node rescue rescuing rescue openstack baremetal node unrescue unrescuing active openstack baremetal node delete deleting available rescue Stable Nodes in the rescue state are running a rescue ramdisk. The Bare Metal Provisioning service may regularly collect out-of-band sensor information, including the power state. You can transition the node from the rescue state to one of the following states by using the following commands: openstack baremetal node unrescue unrescuing active openstack baremetal node delete deleting available unrescuing In transition Nodes in the unrescuing state are being prepared to transition from the rescue state to the active state. unrescue failed Stable The provisioning state that indicates that the node unrescue operation failed. You can transition the node from the unrescue failed state to one of the following states by using the following commands: openstack baremetal node rescue rescuing rescue openstack baremetal node unrescue unrescuing active openstack baremetal node delete deleting available 4.7. Configuring Redfish virtual media boot Important This feature is available in this release as a Technology Preview , and therefore is not fully supported by Red Hat. It should only be used for testing, and should not be deployed in a production environment. For more information about Technology Preview features, see Scope of Coverage Details . You can use Redfish virtual media boot to supply a boot image to the Baseboard Management Controller (BMC) of a node so that the BMC can insert the image into one of the virtual drives. The node can then boot from the virtual drive into the operating system that exists in the image. Redfish hardware types support booting deploy, rescue, and user images over virtual media. The Bare Metal Provisioning service (ironic) uses kernel and ramdisk images associated with a node to build bootable ISO images for UEFI or BIOS boot modes at the moment of node deployment. The major advantage of virtual media boot is that you can eliminate the TFTP image transfer phase of PXE and use HTTP GET, or other methods, instead. 4.7.1. Deploying a bare metal server with Redfish virtual media boot Important This feature is available in this release as a Technology Preview , and therefore is not fully supported by Red Hat. It should only be used for testing, and should not be deployed in a production environment. 
For more information about Technology Preview features, see Scope of Coverage Details . To boot a node with the redfish hardware type over virtual media, set the boot interface to redfish-virtual-media and, for UEFI nodes, define the EFI System Partition (ESP) image. Then configure an enrolled node to use Redfish virtual media boot. Prerequisites Redfish driver enabled in the enabled_hardware_types parameter in the undercloud.conf file. A bare metal node registered and enrolled. IPA and instance images in the Image Service (glance). For UEFI nodes, you must also have an EFI system partition image (ESP) available in the Image Service (glance). A bare metal flavor. A network for cleaning and provisioning. Sushy library installed: Procedure Set the Bare Metal service (ironic) boot interface to redfish-virtual-media : Replace USDNODE_NAME with the name of the node. For UEFI nodes, set the boot mode to uefi : Replace USDNODE_NAME with the name of the node. Note For BIOS nodes, do not complete this step. For UEFI nodes, define the EFI System Partition (ESP) image: Replace USDESP with the glance image UUID or URL for the ESP image, and replace USDNODE_NAME with the name of the node. Note For BIOS nodes, do not complete this step. Create a port on the bare metal node and associate the port with the MAC address of the NIC on the bare metal node: Replace USDUUID with the UUID of the bare metal node, and replace USDMAC_ADDRESS with the MAC address of the NIC on the bare metal node. Create the new bare metal server: Replace USDIMAGE and USDNETWORK with the names of the image and network that you want to use. 4.8. Using host aggregates to separate physical and virtual machine provisioning OpenStack Compute uses host aggregates to partition availability zones, and group together nodes that have specific shared properties. When an instance is provisioned, the Compute scheduler compares properties on the flavor with the properties assigned to host aggregates, and ensures that the instance is provisioned in the correct aggregate and on the correct host: either on a physical machine or as a virtual machine. Complete the steps in this section to perform the following operations: Add the property baremetal to your flavors and set it to either true or false . Create separate host aggregates for bare metal hosts and compute nodes with a matching baremetal property. Nodes grouped into an aggregate inherit this property. Prerequisites A successful overcloud deployment that includes the Bare Metal Provisioning service. For more information, see Deploying an overcloud with the Bare Metal Provisioning service . Procedure Set the baremetal property to true on the baremetal flavor. Set the baremetal property to false on the flavors that virtual instances use: Create a host aggregate called baremetal-hosts : Add each Controller node to the baremetal-hosts aggregate: Note If you have created a composable role with the NovaIronic service, add all the nodes with this service to the baremetal-hosts aggregate. By default, only the Controller nodes have the NovaIronic service. Create a host aggregate called virtual-hosts : Add each Compute node to the virtual-hosts aggregate: If you did not add the following Compute filter scheduler when you deployed the overcloud, add it now to the existing list under scheduler_default_filters in the _/etc/nova/nova.conf_ file:
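For illustration only, the scheduler filter entry and the aggregate workflow described above might look like the following sketch. The hostnames, the example flavor m1.small, and the surrounding filter list are placeholders rather than values taken from this guide; your existing scheduler_default_filters entries might differ.

# /etc/nova/nova.conf -- ensure the aggregate filter is part of the existing list (illustrative):
#   scheduler_default_filters = RetryFilter,AvailabilityZoneFilter,ComputeFilter,AggregateInstanceExtraSpecsFilter

# Tag the flavors so that the scheduler can distinguish bare metal from virtual workloads
openstack flavor set baremetal --property baremetal=true
openstack flavor set m1.small --property baremetal=false

# Group the hosts into matching aggregates
openstack aggregate create --property baremetal=true baremetal-hosts
openstack aggregate add host baremetal-hosts overcloud-controller-0.localdomain

openstack aggregate create --property baremetal=false virtual-hosts
openstack aggregate add host virtual-hosts overcloud-novacompute-0.localdomain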
|
[
"source ~/<credentials_file>",
"openstack network create --provider-network-type flat --provider-physical-network <provider_physical_network> --share <network_name>",
"openstack subnet create --network <network_name> --subnet-range <network_cidr> --ip-version 4 --gateway <gateway_ip> --allocation-pool start=<start_ip>,end=<end_ip> --dhcp <subnet_name>",
"openstack router create <router_name>",
"openstack router add subnet <router_name> <subnet>",
"source ~/<credentials_file>",
"(overcloud)USD openstack network show <network_name> -f value -c id",
"parameter_defaults: IronicProvisioningNetwork: <network_uuid>",
"source ~/stackrc",
"(undercloud)USD openstack overcloud deploy --templates -e [your environment files] -e /home/stack/templates/node-info.yaml -r /home/stack/templates/roles_data.yaml -e /usr/share/openstack-tripleo-heat-templates/network-environment.yaml -e /usr/share/openstack-tripleo-heat-templates/environments/services/<default_ironic_template> -e /usr/share/openstack-tripleo-heat-templates/environments/services/ironic-inspector.yaml -e /home/stack/templates/network_environment_overrides.yaml -n /home/stack/templates/network_data.yaml -e /home/stack/templates/ironic-overrides.yaml",
"source ~/<credentials_file>",
"(overcloud)USD openstack network show <network_name> -f value -c id",
"parameter_defaults: IronicCleaningNetwork: <network_uuid>",
"source ~/stackrc",
"(undercloud)USD openstack overcloud deploy --templates -e [your environment files] -e /home/stack/templates/node-info.yaml -r /home/stack/templates/roles_data.yaml -e /usr/share/openstack-tripleo-heat-templates/network-environment.yaml -e /usr/share/openstack-tripleo-heat-templates/environments/services/<default_ironic_template> -e /usr/share/openstack-tripleo-heat-templates/environments/services/ironic-inspector.yaml -e /home/stack/templates/network_environment_overrides.yaml -n /home/stack/templates/network_data.yaml -e /home/stack/templates/ironic-overrides.yaml",
"source ~/<credentials_file>",
"openstack baremetal node show -f value -c provision_state <node>",
"openstack baremetal node manage <node>",
"openstack baremetal node clean <node> --clean-steps '[{\"interface\": \"deploy\", \"step\": \"<clean_mode>\"}]'",
"source ~/overcloudrc",
"(overcloud)USD openstack flavor create --id auto --ram <ram_size_mb> --disk <disk_size_gb> --vcpus <no_vcpus> baremetal",
"(overcloud)USD openstack baremetal node list",
"(overcloud)USD openstack baremetal node set --resource-class baremetal.<CUSTOM> <node>",
"(overcloud)USD openstack flavor set --property resources:CUSTOM_BAREMETAL_<CUSTOM>=1 baremetal",
"(overcloud)USD openstack flavor set --property resources:VCPU=0 --property resources:MEMORY_MB=0 --property resources:DISK_GB=0 baremetal",
"(overcloud)USD openstack flavor list",
"openstack image create --container-format aki --disk-format aki --public --file ./tftpboot/agent.kernel bm-deploy-kernel openstack image create --container-format ari --disk-format ari --public --file ./tftpboot/agent.ramdisk bm-deploy-ramdisk",
"parameter_defaults: IronicEnabledDeployInterfaces: direct IronicDefaultDeployInterface: direct",
"parameter_defaults: IronicEnabledDeployInterfaces: direct,iscsi IronicDefaultDeployInterface: direct",
"parameter_defaults: IronicEnabledDeployInterfaces: direct IronicDefaultDeployInterface: direct IronicImageDownloadSource: http",
"openstack overcloud deploy --templates -e /usr/share/openstack-tripleo-heat-templates/environments/services/ironic.yaml -e /home/stack/templates/direct_deploy.yaml",
"openstack baremetal node create --driver ipmi --deploy-interface direct openstack baremetal node set <NODE> --deploy-interface direct",
"nodes: - name: node0 driver: ipmi driver_info: ipmi_address: <ipmi_ip> ipmi_username: <user> ipmi_password: <password> [<property>: <value>] properties: cpus: <cpu_count> cpu_arch: <cpu_arch> memory_mb: <memory> local_gb: <root_disk> root_device: serial: <serial> ports: - address: <mac_address>",
"source ~/overcloudrc",
"openstack baremetal create overcloud-nodes.yaml",
"openstack baremetal node set <node> --driver-info deploy_kernel=<kernel_file> --driver-info deploy_ramdisk=<initramfs_file>",
"openstack baremetal node set <node> --driver-info ipmi_cipher_suite=<version>",
"openstack baremetal node manage <node> openstack baremetal node provide <node>",
"openstack baremetal node set <node> --property capabilities=\"boot_option:local\"",
"openstack baremetal node list",
"(undercloud)USD source ~/overcloudrc",
"openstack baremetal node create --driver <driver_name> --name <node_name>",
"openstack baremetal node set --property capabilities=\"boot_option:local\" <node>",
"openstack baremetal node set <node> --driver-info deploy_kernel=<kernel_file> --driver-info deploy_ramdisk=<initramfs_file>",
"openstack baremetal node set <node> --property cpus=<cpu> --property memory_mb=<ram> --property local_gb=<disk> --property cpu_arch=<arch>",
"openstack baremetal node set <node> --driver-info ipmi_cipher_suite=<version>",
"openstack baremetal node set <node> --driver-info <property>=<value>",
"openstack baremetal node set <node> --property root_device='{\"<property>\": \"<value>\"}'",
"openstack baremetal port create --node <node_uuid> <mac_address>",
"openstack baremetal node validate <node> +------------+--------+---------------------------------------------+ | Interface | Result | Reason | +------------+--------+---------------------------------------------+ | boot | False | Cannot validate image information for node | | | | a02178db-1550-4244-a2b7-d7035c743a9b | | | | because one or more parameters are missing | | | | from its instance_info. Missing are: | | | | ['ramdisk', 'kernel', 'image_source'] | | console | None | not supported | | deploy | False | Cannot validate image information for node | | | | a02178db-1550-4244-a2b7-d7035c743a9b | | | | because one or more parameters are missing | | | | from its instance_info. Missing are: | | | | ['ramdisk', 'kernel', 'image_source'] | | inspect | None | not supported | | management | True | | | network | True | | | power | True | | | raid | True | | | storage | True | | +------------+--------+---------------------------------------------+",
"sudo yum install sushy",
"openstack baremetal node set --boot-interface redfish-virtual-media USDNODE_NAME",
"openstack baremetal node set --property capabilities=\"boot_mode:uefi\" USDNODE_NAME",
"openstack baremetal node set --driver-info bootloader=USDESP USDNODE_NAME",
"openstack baremetal port create --pxe-enabled True --node USDUUID USDMAC_ADDRESS",
"openstack server create --flavor baremetal --image USDIMAGE --network USDNETWORK test_instance",
"openstack flavor set baremetal --property baremetal=true",
"openstack flavor set FLAVOR_NAME --property baremetal=false",
"openstack aggregate create --property baremetal=true baremetal-hosts",
"openstack aggregate add host baremetal-hosts HOSTNAME",
"openstack aggregate create --property baremetal=false virtual-hosts",
"openstack aggregate add host virtual-hosts HOSTNAME",
"AggregateInstanceExtraSpecsFilter"
] |
https://docs.redhat.com/en/documentation/red_hat_openstack_platform/16.2/html/bare_metal_provisioning/assembly_configuring-the-bare-metal-provisioning-service-after-deployment
|
Chapter 35. NamespaceService
|
Chapter 35. NamespaceService 35.1. GetNamespaces GET /v1/namespaces 35.1.1. Description 35.1.2. Parameters 35.1.2.1. Query Parameters Name Description Required Default Pattern query.query - null query.pagination.limit - null query.pagination.offset - null query.pagination.sortOption.field - null query.pagination.sortOption.reversed - null query.pagination.sortOption.aggregateBy.aggrFunc - UNSET query.pagination.sortOption.aggregateBy.distinct - null 35.1.3. Return Type V1GetNamespacesResponse 35.1.4. Content Type application/json 35.1.5. Responses Table 35.1. HTTP Response Codes Code Message Datatype 200 A successful response. V1GetNamespacesResponse 0 An unexpected error response. GooglerpcStatus 35.1.6. Samples 35.1.7. Common object reference 35.1.7.1. GooglerpcStatus Field Name Required Nullable Type Description Format code Integer int32 message String details List of ProtobufAny 35.1.7.2. ProtobufAny Any contains an arbitrary serialized protocol buffer message along with a URL that describes the type of the serialized message. Protobuf library provides support to pack/unpack Any values in the form of utility functions or additional generated methods of the Any type. Example 1: Pack and unpack a message in C++. Example 2: Pack and unpack a message in Java. The pack methods provided by protobuf library will by default use 'type.googleapis.com/full.type.name' as the type URL and the unpack methods only use the fully qualified type name after the last '/' in the type URL, for example "foo.bar.com/x/y.z" will yield type name "y.z". 35.1.7.2.1. JSON representation The JSON representation of an Any value uses the regular representation of the deserialized, embedded message, with an additional field @type which contains the type URL. Example: If the embedded message type is well-known and has a custom JSON representation, that representation will be embedded adding a field value which holds the custom JSON in addition to the @type field. Example (for message [google.protobuf.Duration][]): Field Name Required Nullable Type Description Format @type String A URL/resource name that uniquely identifies the type of the serialized protocol buffer message. This string must contain at least one \"/\" character. The last segment of the URL's path must represent the fully qualified name of the type (as in path/google.protobuf.Duration ). The name should be in a canonical form (e.g., leading \".\" is not accepted). In practice, teams usually precompile into the binary all types that they expect it to use in the context of Any. However, for URLs which use the scheme http , https , or no scheme, one can optionally set up a type server that maps type URLs to message definitions as follows: * If no scheme is provided, https is assumed. * An HTTP GET on the URL must yield a [google.protobuf.Type][] value in binary format, or produce an error. * Applications are allowed to cache lookup results based on the URL, or have them precompiled into a binary to avoid any lookup. Therefore, binary compatibility needs to be preserved on changes to types. (Use versioned type names to manage breaking changes.) Note: this functionality is not currently available in the official protobuf release, and it is not used for type URLs beginning with type.googleapis.com. As of May 2023, there are no widely used type server implementations and no plans to implement one. Schemes other than http , https (or the empty scheme) might be used with implementation specific semantics. 35.1.7.3. 
StorageNamespaceMetadata Field Name Required Nullable Type Description Format id String name String clusterId String clusterName String labels Map of string creationTime Date date-time priority String int64 annotations Map of string 35.1.7.4. V1GetNamespacesResponse Field Name Required Nullable Type Description Format namespaces List of V1Namespace 35.1.7.5. V1Namespace Field Name Required Nullable Type Description Format metadata StorageNamespaceMetadata numDeployments Integer int32 numSecrets Integer int32 numNetworkPolicies Integer int32 35.2. GetNamespace GET /v1/namespaces/{id} 35.2.1. Description 35.2.2. Parameters 35.2.2.1. Path Parameters Name Description Required Default Pattern id X null 35.2.3. Return Type V1Namespace 35.2.4. Content Type application/json 35.2.5. Responses Table 35.2. HTTP Response Codes Code Message Datatype 200 A successful response. V1Namespace 0 An unexpected error response. GooglerpcStatus 35.2.6. Samples 35.2.7. Common object reference 35.2.7.1. GooglerpcStatus Field Name Required Nullable Type Description Format code Integer int32 message String details List of ProtobufAny 35.2.7.2. ProtobufAny Any contains an arbitrary serialized protocol buffer message along with a URL that describes the type of the serialized message. Protobuf library provides support to pack/unpack Any values in the form of utility functions or additional generated methods of the Any type. Example 1: Pack and unpack a message in C++. Example 2: Pack and unpack a message in Java. The pack methods provided by protobuf library will by default use 'type.googleapis.com/full.type.name' as the type URL and the unpack methods only use the fully qualified type name after the last '/' in the type URL, for example "foo.bar.com/x/y.z" will yield type name "y.z". 35.2.7.2.1. JSON representation The JSON representation of an Any value uses the regular representation of the deserialized, embedded message, with an additional field @type which contains the type URL. Example: If the embedded message type is well-known and has a custom JSON representation, that representation will be embedded adding a field value which holds the custom JSON in addition to the @type field. Example (for message [google.protobuf.Duration][]): Field Name Required Nullable Type Description Format @type String A URL/resource name that uniquely identifies the type of the serialized protocol buffer message. This string must contain at least one \"/\" character. The last segment of the URL's path must represent the fully qualified name of the type (as in path/google.protobuf.Duration ). The name should be in a canonical form (e.g., leading \".\" is not accepted). In practice, teams usually precompile into the binary all types that they expect it to use in the context of Any. However, for URLs which use the scheme http , https , or no scheme, one can optionally set up a type server that maps type URLs to message definitions as follows: * If no scheme is provided, https is assumed. * An HTTP GET on the URL must yield a [google.protobuf.Type][] value in binary format, or produce an error. * Applications are allowed to cache lookup results based on the URL, or have them precompiled into a binary to avoid any lookup. Therefore, binary compatibility needs to be preserved on changes to types. (Use versioned type names to manage breaking changes.) Note: this functionality is not currently available in the official protobuf release, and it is not used for type URLs beginning with type.googleapis.com. 
As of May 2023, there are no widely used type server implementations and no plans to implement one. Schemes other than http , https (or the empty scheme) might be used with implementation specific semantics. 35.2.7.3. StorageNamespaceMetadata Field Name Required Nullable Type Description Format id String name String clusterId String clusterName String labels Map of string creationTime Date date-time priority String int64 annotations Map of string 35.2.7.4. V1Namespace Field Name Required Nullable Type Description Format metadata StorageNamespaceMetadata numDeployments Integer int32 numSecrets Integer int32 numNetworkPolicies Integer int32
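As a minimal usage sketch, not part of the generated reference, the endpoints above can be exercised with curl. The Central address central.example.com and the ROX_API_TOKEN environment variable are placeholders for your own deployment:

# List namespaces, limiting the page size with the documented pagination parameter
curl -sk -H "Authorization: Bearer $ROX_API_TOKEN" \
  "https://central.example.com/v1/namespaces?query.pagination.limit=10"

# Fetch a single namespace by its ID
curl -sk -H "Authorization: Bearer $ROX_API_TOKEN" \
  "https://central.example.com/v1/namespaces/<namespace_id>"

The list call returns a V1GetNamespacesResponse object; the per-ID call returns a single V1Namespace object as described above.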
|
[
"Foo foo = ...; Any any; any.PackFrom(foo); if (any.UnpackTo(&foo)) { }",
"Foo foo = ...; Any any = Any.pack(foo); if (any.is(Foo.class)) { foo = any.unpack(Foo.class); } // or if (any.isSameTypeAs(Foo.getDefaultInstance())) { foo = any.unpack(Foo.getDefaultInstance()); }",
"Example 3: Pack and unpack a message in Python.",
"foo = Foo(...) any = Any() any.Pack(foo) if any.Is(Foo.DESCRIPTOR): any.Unpack(foo)",
"Example 4: Pack and unpack a message in Go",
"foo := &pb.Foo{...} any, err := anypb.New(foo) if err != nil { } foo := &pb.Foo{} if err := any.UnmarshalTo(foo); err != nil { }",
"package google.profile; message Person { string first_name = 1; string last_name = 2; }",
"{ \"@type\": \"type.googleapis.com/google.profile.Person\", \"firstName\": <string>, \"lastName\": <string> }",
"{ \"@type\": \"type.googleapis.com/google.protobuf.Duration\", \"value\": \"1.212s\" }",
"Foo foo = ...; Any any; any.PackFrom(foo); if (any.UnpackTo(&foo)) { }",
"Foo foo = ...; Any any = Any.pack(foo); if (any.is(Foo.class)) { foo = any.unpack(Foo.class); } // or if (any.isSameTypeAs(Foo.getDefaultInstance())) { foo = any.unpack(Foo.getDefaultInstance()); }",
"Example 3: Pack and unpack a message in Python.",
"foo = Foo(...) any = Any() any.Pack(foo) if any.Is(Foo.DESCRIPTOR): any.Unpack(foo)",
"Example 4: Pack and unpack a message in Go",
"foo := &pb.Foo{...} any, err := anypb.New(foo) if err != nil { } foo := &pb.Foo{} if err := any.UnmarshalTo(foo); err != nil { }",
"package google.profile; message Person { string first_name = 1; string last_name = 2; }",
"{ \"@type\": \"type.googleapis.com/google.profile.Person\", \"firstName\": <string>, \"lastName\": <string> }",
"{ \"@type\": \"type.googleapis.com/google.protobuf.Duration\", \"value\": \"1.212s\" }"
] |
https://docs.redhat.com/en/documentation/red_hat_advanced_cluster_security_for_kubernetes/4.6/html/api_reference/namespaceservice
|
Chapter 5. Network connections
|
Chapter 5. Network connections 5.1. Connection Options This section describes how to configure connections. The ConnectionOptions object can be provided to a Client instance when creating a new connection and allows configuration of several different aspects of the resulting Connection instance. ConnectionOptions can be passed to the connect method on IClient and are used to configure the resulting connection. Example: Configuring authentication final ConnectionOptions connectionOptions = new ConnectionOptions(); connectionOptions.user(System.getProperty("USER")); connectionOptions.password(System.getProperty("PASSWORD")); Connection connection = client.connect(serverHost, serverPort, connectionOptions); For a definitive list of options, refer to ConnectionOptions . 5.1.1. Connection Transport Options The ConnectionOptions object exposes a set of configuration options for the underlying I/O transport layer, known as the TransportOptions , which allows for fine-grained configuration of network-level options. Example: Configuring transport options final ConnectionOptions connectionOptions = new ConnectionOptions(); connectionOptions.transportOptions().tcpNoDelay(false); Connection connection = client.connect(serverHost, serverPort, connectionOptions); For a definitive list of options, refer to TransportOptions . 5.2. Reconnect and failover When creating a new connection, it is possible to configure that connection to perform automatic connection recovery. Example: Configuring transport reconnection and failover final ConnectionOptions connectionOptions = new ConnectionOptions(); connectionOptions.reconnectOptions().reconnectEnabled(true); connectionOptions.reconnectOptions().reconnectDelay(30000); connectionOptions.reconnectOptions().addReconnectLocation(hostname, port); Connection connection = client.connect(serverHost, serverPort, connectionOptions); For a definitive list of options, refer to ReconnectionOptions .
|
[
"final ConnectionOptions connectionOptions = new ConnectionOptions(); connectionOptions.user(System.getProperty(\"USER\")); connectionOptions.password(System.getProperty(\"PASSWORD\")); Connection connection = client.connect(serverHost, serverPort, connectionOptions);",
"final ConnectionOptions connectionOptions = new ConnectionOptions(); connectionOptions.transportOptions().tcpNoDelay(false); Connection connection = client.connect(serverHost, serverPort, connectionOptions);",
"final ConnectionOptions connectionOptions = new ConnectionOptions(); connectionOptions.reconnectOptions().reconnectEnabled(true); connectionOptions.reconnectOptions().reconnectDelay(30000); connectionOptions.reconnectOptions().addReconnectLocation(hostname, port); Connection connection = client.connect(serverHost, serverPort, connectionOptions);"
] |
https://docs.redhat.com/en/documentation/red_hat_build_of_apache_qpid_protonj2/1.0/html/using_qpid_protonj2/network_connections
|
11.3. Removing Swap Space
|
11.3. Removing Swap Space Sometimes it can be prudent to reduce swap space after installation. For example, say you downgraded the amount of RAM in your system from 1 GB to 512 MB, but there is 2 GB of swap space still assigned. It might be advantageous to reduce the amount of swap space to 1 GB, since the larger 2 GB could be wasting disk space. You have three options: remove an entire LVM2 logical volume used for swap, remove a swap file, or reduce swap space on an existing LVM2 logical volume. 11.3.1. Reducing Swap on an LVM2 Logical Volume To reduce an LVM2 swap logical volume (assuming /dev/VolGroup00/LogVol01 is the volume you want to reduce): Disable swapping for the associated logical volume: Reduce the LVM2 logical volume by 512 MB: Format the new swap space: Enable the reduced logical volume: Test that the logical volume has been reduced properly:
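A consolidated sketch of the procedure, assuming the same /dev/VolGroup00/LogVol01 volume used above, might look like this:

swapoff -v /dev/VolGroup00/LogVol01              # disable swapping on the volume
lvm lvreduce /dev/VolGroup00/LogVol01 -L -512M   # shrink the logical volume by 512 MB
mkswap /dev/VolGroup00/LogVol01                  # format the new, smaller swap space
swapon -va                                       # enable the reduced logical volume
cat /proc/swaps                                  # confirm that the swap size has shrunk
free                                             # cross-check the total swap reported by the kernel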
|
[
"swapoff -v /dev/VolGroup00/LogVol01",
"lvm lvreduce /dev/VolGroup00/LogVol01 -L -512M",
"mkswap /dev/VolGroup00/LogVol01",
"swapon -va",
"cat /proc/swaps # free"
] |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/system_administration_guide/swap_space-removing_swap_space
|
Chapter 8. Clair in disconnected environments
|
Chapter 8. Clair in disconnected environments Note Currently, deploying Clair in disconnected environments is not supported on IBM Power and IBM Z. Clair uses a set of components called updaters to handle the fetching and parsing of data from various vulnerability databases. Updaters are set up by default to pull vulnerability data directly from the internet and work for immediate use. However, some users might require Red Hat Quay to run in a disconnected environment, or an environment without direct access to the internet. Clair supports disconnected environments by working with different types of update workflows that take network isolation into consideration. This works by using the clairctl command line interface tool, which obtains updater data from the internet by using an open host, securely transfers the data to an isolated host, and then imports the updater data on the isolated host into Clair (a consolidated sketch of this workflow appears at the end of this chapter). Use this guide to deploy Clair in a disconnected environment. Note Currently, Clair enrichment data is CVSS data. Enrichment data is currently unsupported in disconnected environments. For more information about Clair updaters, see "Clair updaters". 8.1. Setting up Clair in a disconnected OpenShift Container Platform cluster Use the following procedures to set up an OpenShift Container Platform provisioned Clair pod in a disconnected OpenShift Container Platform cluster. 8.1.1. Installing the clairctl command line utility tool for OpenShift Container Platform deployments Use the following procedure to install the clairctl CLI tool for OpenShift Container Platform deployments. Procedure Install the clairctl program for a Clair deployment in an OpenShift Container Platform cluster by entering the following command: USD oc -n quay-enterprise exec example-registry-clair-app-64dd48f866-6ptgw -- cat /usr/bin/clairctl > clairctl Note Unofficially, the clairctl tool can be downloaded. Set the permissions of the clairctl file so that it can be executed and run by the user, for example: USD chmod u+x ./clairctl 8.1.2. Retrieving and decoding the Clair configuration secret for Clair deployments on OpenShift Container Platform Use the following procedure to retrieve and decode the configuration secret for an OpenShift Container Platform provisioned Clair instance on OpenShift Container Platform. Prerequisites You have installed the clairctl command line utility tool. Procedure Enter the following command to retrieve and decode the configuration secret, and then save it to a Clair configuration YAML: USD oc get secret -n quay-enterprise example-registry-clair-config-secret -o "jsonpath={USD.data['config\.yaml']}" | base64 -d > clair-config.yaml Update the clair-config.yaml file so that the disable_updaters and airgap parameters are set to true , for example: --- indexer: airgap: true --- matcher: disable_updaters: true --- 8.1.3. Exporting the updaters bundle from a connected Clair instance Use the following procedure to export the updaters bundle from a Clair instance that has access to the internet. Prerequisites You have installed the clairctl command line utility tool. You have retrieved and decoded the Clair configuration secret, and saved it to a Clair config.yaml file. The disable_updaters and airgap parameters are set to true in your Clair config.yaml file. Procedure From a Clair instance that has access to the internet, use the clairctl CLI tool with your configuration file to export the updaters bundle. For example: USD ./clairctl --config ./config.yaml export-updaters updates.gz 8.1.4.
Configuring access to the Clair database in the disconnected OpenShift Container Platform cluster Use the following procedure to configure access to the Clair database in your disconnected OpenShift Container Platform cluster. Prerequisites You have installed the clairctl command line utility tool. You have retrieved and decoded the Clair configuration secret, and saved it to a Clair config.yaml file. The disable_updaters and airgap parameters are set to true in your Clair config.yaml file. You have exported the updaters bundle from a Clair instance that has access to the internet. Procedure Determine your Clair database service by using the oc CLI tool, for example: USD oc get svc -n quay-enterprise Example output NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE example-registry-clair-app ClusterIP 172.30.224.93 <none> 80/TCP,8089/TCP 4d21h example-registry-clair-postgres ClusterIP 172.30.246.88 <none> 5432/TCP 4d21h ... Forward the Clair database port so that it is accessible from the local machine. For example: USD oc port-forward -n quay-enterprise service/example-registry-clair-postgres 5432:5432 Update your Clair config.yaml file, for example: indexer: connstring: host=localhost port=5432 dbname=postgres user=postgres password=postgres sslmode=disable 1 scanlock_retry: 10 layer_scan_concurrency: 5 migrations: true scanner: repo: rhel-repository-scanner: 2 repo2cpe_mapping_file: /data/cpe-map.json package: rhel_containerscanner: 3 name2repos_mapping_file: /data/repo-map.json 1 Replace the value of the host in the multiple connstring fields with localhost . 2 For more information about the rhel-repository-scanner parameter, see "Mapping repositories to Common Product Enumeration information". 3 For more information about the rhel_containerscanner parameter, see "Mapping repositories to Common Product Enumeration information". 8.1.5. Importing the updaters bundle into the disconnected OpenShift Container Platform cluster Use the following procedure to import the updaters bundle into your disconnected OpenShift Container Platform cluster. Prerequisites You have installed the clairctl command line utility tool. You have retrieved and decoded the Clair configuration secret, and saved it to a Clair config.yaml file. The disable_updaters and airgap parameters are set to true in your Clair config.yaml file. You have exported the updaters bundle from a Clair instance that has access to the internet. You have transferred the updaters bundle into your disconnected environment. Procedure Use the clairctl CLI tool to import the updaters bundle into the Clair database that is deployed by OpenShift Container Platform. For example: USD ./clairctl --config ./clair-config.yaml import-updaters updates.gz 8.2. Setting up a self-managed deployment of Clair for a disconnected OpenShift Container Platform cluster Use the following procedures to set up a self-managed deployment of Clair for a disconnected OpenShift Container Platform cluster. 8.2.1. Installing the clairctl command line utility tool for a self-managed Clair deployment on OpenShift Container Platform Use the following procedure to install the clairctl CLI tool for self-managed Clair deployments on OpenShift Container Platform. Procedure Install the clairctl program for a self-managed Clair deployment by using the podman cp command, for example: USD sudo podman cp clairv4:/usr/bin/clairctl ./clairctl Set the permissions of the clairctl file so that it can be executed and run by the user, for example: USD chmod u+x ./clairctl 8.2.2. 
Deploying a self-managed Clair container for disconnected OpenShift Container Platform clusters Use the following procedure to deploy a self-managed Clair container for disconnected OpenShift Container Platform clusters. Prerequisites You have installed the clairctl command line utility tool. Procedure Create a folder for your Clair configuration file, for example: USD mkdir /etc/clairv4/config/ Create a Clair configuration file with the disable_updaters parameter set to true , for example: --- indexer: airgap: true --- matcher: disable_updaters: true --- Start Clair by using the container image, mounting in the configuration from the file you created: 8.2.3. Exporting the updaters bundle from a connected Clair instance Use the following procedure to export the updaters bundle from a Clair instance that has access to the internet. Prerequisites You have installed the clairctl command line utility tool. You have deployed Clair. The disable_updaters and airgap parameters are set to true in your Clair config.yaml file. Procedure From a Clair instance that has access to the internet, use the clairctl CLI tool with your configuration file to export the updaters bundle. For example: USD ./clairctl --config ./config.yaml export-updaters updates.gz 8.2.4. Configuring access to the Clair database in the disconnected OpenShift Container Platform cluster Use the following procedure to configure access to the Clair database in your disconnected OpenShift Container Platform cluster. Prerequisites You have installed the clairctl command line utility tool. You have deployed Clair. The disable_updaters and airgap parameters are set to true in your Clair config.yaml file. You have exported the updaters bundle from a Clair instance that has access to the internet. Procedure Determine your Clair database service by using the oc CLI tool, for example: USD oc get svc -n quay-enterprise Example output NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE example-registry-clair-app ClusterIP 172.30.224.93 <none> 80/TCP,8089/TCP 4d21h example-registry-clair-postgres ClusterIP 172.30.246.88 <none> 5432/TCP 4d21h ... Forward the Clair database port so that it is accessible from the local machine. For example: USD oc port-forward -n quay-enterprise service/example-registry-clair-postgres 5432:5432 Update your Clair config.yaml file, for example: indexer: connstring: host=localhost port=5432 dbname=postgres user=postgres password=postgres sslmode=disable 1 scanlock_retry: 10 layer_scan_concurrency: 5 migrations: true scanner: repo: rhel-repository-scanner: 2 repo2cpe_mapping_file: /data/cpe-map.json package: rhel_containerscanner: 3 name2repos_mapping_file: /data/repo-map.json 1 Replace the value of the host in the multiple connstring fields with localhost . 2 For more information about the rhel-repository-scanner parameter, see "Mapping repositories to Common Product Enumeration information". 3 For more information about the rhel_containerscanner parameter, see "Mapping repositories to Common Product Enumeration information". 8.2.5. Importing the updaters bundle into the disconnected OpenShift Container Platform cluster Use the following procedure to import the updaters bundle into your disconnected OpenShift Container Platform cluster. Prerequisites You have installed the clairctl command line utility tool. You have deployed Clair. The disable_updaters and airgap parameters are set to true in your Clair config.yaml file. You have exported the updaters bundle from a Clair instance that has access to the internet. 
You have transferred the updaters bundle into your disconnected environment. Procedure Use the clairctl CLI tool to import the updaters bundle into the Clair database that is deployed by OpenShift Container Platform: USD ./clairctl --config ./clair-config.yaml import-updaters updates.gz 8.3. Mapping repositories to Common Product Enumeration information Note Currently, mapping repositories to Common Product Enumeration information is not supported on IBM Power and IBM Z. Clair's Red Hat Enterprise Linux (RHEL) scanner relies on a Common Product Enumeration (CPE) file to map RPM packages to the corresponding security data to produce matching results. These files are owned by product security and updated daily. The CPE file must be present, or access to the file must be allowed, for the scanner to properly process RPM packages. If the file is not present, RPM packages installed in the container image will not be scanned. Table 8.1. Clair CPE mapping files CPE Link to JSON mapping file repos2cpe Red Hat Repository-to-CPE JSON names2repos Red Hat Name-to-Repos JSON . In addition to uploading CVE information to the database for disconnected Clair installations, you must also make the mapping file available locally: For standalone Red Hat Quay and Clair deployments, the mapping file must be loaded into the Clair pod. For Red Hat Quay on OpenShift Container Platform deployments, you must set the Clair component to unmanaged . Then, Clair must be deployed manually, setting the configuration to load a local copy of the mapping file. 8.3.1. Mapping repositories to Common Product Enumeration example configuration Use the repo2cpe_mapping_file and name2repos_mapping_file fields in your Clair configuration to include the CPE JSON mapping files. For example: indexer: scanner: repo: rhel-repository-scanner: repo2cpe_mapping_file: /data/cpe-map.json package: rhel_containerscanner: name2repos_mapping_file: /data/repo-map.json For more information, see How to accurately match OVAL security data to installed RPMs .
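Bringing the chapter together, the following is a rough end-to-end sketch of the disconnected update workflow rather than a definitive procedure. The transfer host and file paths are placeholders, and the individual commands mirror the steps described earlier in this chapter:

# On the connected host: export the updaters bundle
./clairctl --config ./config.yaml export-updaters updates.gz

# Transfer updates.gz into the disconnected environment by an approved method
# (placeholder shown; use whatever transfer mechanism your site allows)
scp updates.gz transfer-host.example.com:/tmp/updates.gz

# On the disconnected host: expose the Clair database locally, then import the bundle
oc port-forward -n quay-enterprise service/example-registry-clair-postgres 5432:5432 &
./clairctl --config ./clair-config.yaml import-updaters updates.gz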
|
[
"oc -n quay-enterprise exec example-registry-clair-app-64dd48f866-6ptgw -- cat /usr/bin/clairctl > clairctl",
"chmod u+x ./clairctl",
"oc get secret -n quay-enterprise example-registry-clair-config-secret -o \"jsonpath={USD.data['config\\.yaml']}\" | base64 -d > clair-config.yaml",
"--- indexer: airgap: true --- matcher: disable_updaters: true ---",
"./clairctl --config ./config.yaml export-updaters updates.gz",
"oc get svc -n quay-enterprise",
"NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE example-registry-clair-app ClusterIP 172.30.224.93 <none> 80/TCP,8089/TCP 4d21h example-registry-clair-postgres ClusterIP 172.30.246.88 <none> 5432/TCP 4d21h",
"oc port-forward -n quay-enterprise service/example-registry-clair-postgres 5432:5432",
"indexer: connstring: host=localhost port=5432 dbname=postgres user=postgres password=postgres sslmode=disable 1 scanlock_retry: 10 layer_scan_concurrency: 5 migrations: true scanner: repo: rhel-repository-scanner: 2 repo2cpe_mapping_file: /data/cpe-map.json package: rhel_containerscanner: 3 name2repos_mapping_file: /data/repo-map.json",
"./clairctl --config ./clair-config.yaml import-updaters updates.gz",
"sudo podman cp clairv4:/usr/bin/clairctl ./clairctl",
"chmod u+x ./clairctl",
"mkdir /etc/clairv4/config/",
"--- indexer: airgap: true --- matcher: disable_updaters: true ---",
"sudo podman run -it --rm --name clairv4 -p 8081:8081 -p 8088:8088 -e CLAIR_CONF=/clair/config.yaml -e CLAIR_MODE=combo -v /etc/clairv4/config:/clair:Z registry.redhat.io/quay/clair-rhel8:v3.13.3",
"./clairctl --config ./config.yaml export-updaters updates.gz",
"oc get svc -n quay-enterprise",
"NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE example-registry-clair-app ClusterIP 172.30.224.93 <none> 80/TCP,8089/TCP 4d21h example-registry-clair-postgres ClusterIP 172.30.246.88 <none> 5432/TCP 4d21h",
"oc port-forward -n quay-enterprise service/example-registry-clair-postgres 5432:5432",
"indexer: connstring: host=localhost port=5432 dbname=postgres user=postgres password=postgres sslmode=disable 1 scanlock_retry: 10 layer_scan_concurrency: 5 migrations: true scanner: repo: rhel-repository-scanner: 2 repo2cpe_mapping_file: /data/cpe-map.json package: rhel_containerscanner: 3 name2repos_mapping_file: /data/repo-map.json",
"./clairctl --config ./clair-config.yaml import-updaters updates.gz",
"indexer: scanner: repo: rhel-repository-scanner: repo2cpe_mapping_file: /data/cpe-map.json package: rhel_containerscanner: name2repos_mapping_file: /data/repo-map.json"
] |
https://docs.redhat.com/en/documentation/red_hat_quay/3/html/vulnerability_reporting_with_clair_on_red_hat_quay/clair-disconnected-environments
|
Chapter 4. Deprecated features
|
Chapter 4. Deprecated features The features deprecated in this release, and that were supported in previous releases of AMQ Streams, are outlined below. 4.1. Java 8 Support for Java 8 was deprecated in Kafka 3.0.0 and AMQ Streams 2.0. Java 8 will be unsupported for all AMQ Streams components, including clients, in the future. AMQ Streams supports Java 11. Use Java 11 when developing new applications. Plan to migrate any applications that currently use Java 8 to Java 11. 4.2. Kafka MirrorMaker 1 Kafka MirrorMaker replicates data between two or more active Kafka clusters, within or across data centers. Kafka MirrorMaker 1 is deprecated for Kafka 3.0.0 and will be removed in Kafka 4.0.0. MirrorMaker 2.0 will be the only version available. MirrorMaker 2.0 is based on the Kafka Connect framework, with connectors managing the transfer of data between clusters. As a consequence, the AMQ Streams KafkaMirrorMaker custom resource, which is used to deploy Kafka MirrorMaker 1, has been deprecated. The KafkaMirrorMaker resource will be removed from AMQ Streams when Kafka 4.0.0 is adopted. If you are using MirrorMaker 1 (referred to as just MirrorMaker in the AMQ Streams documentation), use the KafkaMirrorMaker2 custom resource with the IdentityReplicationPolicy (a configuration sketch appears at the end of this chapter). MirrorMaker 2.0 renames topics replicated to a target cluster. IdentityReplicationPolicy configuration overrides the automatic renaming. Use it to produce the same active/passive unidirectional replication as MirrorMaker 1. See Kafka MirrorMaker 2.0 cluster configuration . 4.3. Identity replication policy Identity replication policy is used with MirrorMaker 2.0 to override the automatic renaming of remote topics. Instead of prepending the name with the name of the source cluster, the topic retains its original name. This optional setting is useful for active/passive backups and data migration. The AMQ Streams Identity Replication Policy class ( io.strimzi.kafka.connect.mirror.IdentityReplicationPolicy ) is now deprecated and will be removed in the future. You can update to use Kafka's own Identity Replication Policy ( org.apache.kafka.connect.mirror.IdentityReplicationPolicy ). See Kafka MirrorMaker 2.0 cluster configuration . 4.4. ListenerStatus type property The type property of ListenerStatus has been deprecated and will be removed in the future. ListenerStatus is used to specify the addresses of internal and external listeners. Instead of using the type , the addresses are now specified by name . See ListenerStatus schema reference . 4.5. Cruise Control capacity configuration The disk and cpuUtilization capacity configuration properties have been deprecated, are ignored, and will be removed in the future. The properties were used in setting capacity limits in optimization proposals to determine if resource-based optimization goals are being broken. Disk and CPU capacity limits are now automatically generated by AMQ Streams. See Cruise Control configuration .
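For reference, the KafkaMirrorMaker2 alternative mentioned in sections 4.2 and 4.3 might be configured along the following lines. This is a minimal sketch only: the cluster names, bootstrap addresses, and replication factors are placeholders, and the resource is applied through a shell heredoc for brevity.

oc apply -f - <<'EOF'
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaMirrorMaker2
metadata:
  name: my-mirror-maker-2
spec:
  version: 3.1.0
  replicas: 1
  connectCluster: "target-cluster"
  clusters:
  - alias: "source-cluster"
    bootstrapServers: source-cluster-kafka-bootstrap:9092
  - alias: "target-cluster"
    bootstrapServers: target-cluster-kafka-bootstrap:9092
  mirrors:
  - sourceCluster: "source-cluster"
    targetCluster: "target-cluster"
    sourceConnector:
      config:
        replication.factor: 1
        offset-syncs.topic.replication.factor: 1
        # Keep the original topic names instead of prefixing them with the source cluster name,
        # reproducing the active/passive unidirectional replication of MirrorMaker 1
        replication.policy.class: "org.apache.kafka.connect.mirror.IdentityReplicationPolicy"
    checkpointConnector:
      config:
        checkpoints.topic.replication.factor: 1
        replication.policy.class: "org.apache.kafka.connect.mirror.IdentityReplicationPolicy"
    topicsPattern: ".*"
    groupsPattern: ".*"
EOF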
| null |
https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.1/html/release_notes_for_amq_streams_2.1_on_openshift/deprecated-features-str
|
Chapter 3. Automatically scaling pods with the Custom Metrics Autoscaler Operator
|
Chapter 3. Automatically scaling pods with the Custom Metrics Autoscaler Operator 3.1. Release notes 3.1.1. Custom Metrics Autoscaler Operator release notes The release notes for the Custom Metrics Autoscaler Operator for Red Hat OpenShift describe new features and enhancements, deprecated features, and known issues. The Custom Metrics Autoscaler Operator uses the Kubernetes-based Event Driven Autoscaler (KEDA) and is built on top of the OpenShift Container Platform horizontal pod autoscaler (HPA). Note The Custom Metrics Autoscaler Operator for Red Hat OpenShift is provided as an installable component, with a distinct release cycle from the core OpenShift Container Platform. The Red Hat OpenShift Container Platform Life Cycle Policy outlines release compatibility. 3.1.1.1. Supported versions The following table defines the Custom Metrics Autoscaler Operator versions for each OpenShift Container Platform version. Version OpenShift Container Platform version General availability 2.14.1 4.16 General availability 2.14.1 4.15 General availability 2.14.1 4.14 General availability 2.14.1 4.13 General availability 2.14.1 4.12 General availability 3.1.1.2. Custom Metrics Autoscaler Operator 2.14.1-467 release notes This release of the Custom Metrics Autoscaler Operator 2.14.1-467 provides a CVE and a bug fix for running the Operator in an OpenShift Container Platform cluster. The following advisory is available for the RHSA-2024:7348 . Important Before installing this version of the Custom Metrics Autoscaler Operator, remove any previously installed Technology Preview versions or the community-supported version of Kubernetes-based Event Driven Autoscaler (KEDA). 3.1.1.2.1. Bug fixes Previously, the root file system of the Custom Metrics Autoscaler Operator pod was writable, which is unnecessary and could present security issues. This update makes the pod root file system read-only, which addresses the potential security issue. ( OCPBUGS-37989 ) 3.1.2. Release notes for past releases of the Custom Metrics Autoscaler Operator The following release notes are for versions of the Custom Metrics Autoscaler Operator. For the current version, see Custom Metrics Autoscaler Operator release notes . 3.1.2.1. Custom Metrics Autoscaler Operator 2.14.1-454 release notes This release of the Custom Metrics Autoscaler Operator 2.14.1-454 provides a CVE, a new feature, and bug fixes for running the Operator in an OpenShift Container Platform cluster. The following advisory is available for the RHBA-2024:5865 . Important Before installing this version of the Custom Metrics Autoscaler Operator, remove any previously installed Technology Preview versions or the community-supported version of Kubernetes-based Event Driven Autoscaler (KEDA). 3.1.2.1.1. New features and enhancements 3.1.2.1.1.1. Support for the Cron trigger with the Custom Metrics Autoscaler Operator The Custom Metrics Autoscaler Operator can now use the Cron trigger to scale pods based on an hourly schedule. When your specified time frame starts, the Custom Metrics Autoscaler Operator scales pods to your desired amount. When the time frame ends, the Operator scales back down to the level. For more information, see Understanding the Cron trigger . 3.1.2.1.2. Bug fixes Previously, if you made changes to audit configuration parameters in the KedaController custom resource, the keda-metrics-server-audit-policy config map would not get updated. 
As a consequence, you could not change the audit configuration parameters after the initial deployment of the Custom Metrics Autoscaler. With this fix, changes to the audit configuration now render properly in the config map, allowing you to change the audit configuration any time after installation. ( OCPBUGS-32521 ) 3.1.2.2. Custom Metrics Autoscaler Operator 2.13.1 release notes This release of the Custom Metrics Autoscaler Operator 2.13.1-421 provides a new feature and a bug fix for running the Operator in an OpenShift Container Platform cluster. The following advisory is available for the RHBA-2024:4837 . Important Before installing this version of the Custom Metrics Autoscaler Operator, remove any previously installed Technology Preview versions or the community-supported version of Kubernetes-based Event Driven Autoscaler (KEDA). 3.1.2.2.1. New features and enhancements 3.1.2.2.1.1. Support for custom certificates with the Custom Metrics Autoscaler Operator The Custom Metrics Autoscaler Operator can now use custom service CA certificates to connect securely to TLS-enabled metrics sources, such as an external Kafka cluster or an external Prometheus service. By default, the Operator uses automatically-generated service certificates to connect to on-cluster services only. There is a new field in the KedaController object that allows you to load custom server CA certificates for connecting to external services by using config maps. For more information, see Custom CA certificates for the Custom Metrics Autoscaler . 3.1.2.2.2. Bug fixes Previously, the custom-metrics-autoscaler and custom-metrics-autoscaler-adapter images were missing time zone information. As a consequence, scaled objects with cron triggers failed to work because the controllers were unable to find time zone information. With this fix, the image builds are updated to include time zone information. As a result, scaled objects containing cron triggers now function properly. Scaled objects containing cron triggers are currently not supported for the custom metrics autoscaler. ( OCPBUGS-34018 ) 3.1.2.3. Custom Metrics Autoscaler Operator 2.12.1-394 release notes This release of the Custom Metrics Autoscaler Operator 2.12.1-394 provides a bug fix for running the Operator in an OpenShift Container Platform cluster. The following advisory is available for the RHSA-2024:2901 . Important Before installing this version of the Custom Metrics Autoscaler Operator, remove any previously installed Technology Preview versions or the community-supported version of Kubernetes-based Event Driven Autoscaler (KEDA). 3.1.2.3.1. Bug fixes Previously, the protojson.Unmarshal function entered into an infinite loop when unmarshaling certain forms of invalid JSON. This condition could occur when unmarshaling into a message that contains a google.protobuf.Any value or when the UnmarshalOptions.DiscardUnknown option is set. This release fixes this issue. ( OCPBUGS-30305 ) Previously, when parsing a multipart form, either explicitly with the Request.ParseMultipartForm method or implicitly with the Request.FormValue , Request.PostFormValue , or Request.FormFile method, the limits on the total size of the parsed form were not applied to the memory consumed. This could cause memory exhaustion. With this fix, the parsing process now correctly limits the maximum size of form lines while reading a single form line. 
( OCPBUGS-30360 ) Previously, when following an HTTP redirect to a domain that is not on a matching subdomain or on an exact match of the initial domain, an HTTP client would not forward sensitive headers, such as Authorization or Cookie . For example, a redirect from example.com to www.example.com would forward the Authorization header, but a redirect to www.example.org would not forward the header. This release fixes this issue. ( OCPBUGS-30365 ) Previously, verifying a certificate chain that contains a certificate with an unknown public key algorithm caused the certificate verification process to panic. This condition affected all crypto and Transport Layer Security (TLS) clients and servers that set the Config.ClientAuth parameter to the VerifyClientCertIfGiven or RequireAndVerifyClientCert value. The default behavior is for TLS servers to not verify client certificates. This release fixes this issue. ( OCPBUGS-30370 ) Previously, if errors returned from the MarshalJSON method contained user-controlled data, an attacker could have used the data to break the contextual auto-escaping behavior of the HTML template package. This condition would allow for subsequent actions to inject unexpected content into the templates. This release fixes this issue. ( OCPBUGS-30397 ) Previously, the net/http and golang.org/x/net/http2 Go packages did not limit the number of CONTINUATION frames for an HTTP/2 request. This condition could result in excessive CPU consumption. This release fixes this issue. ( OCPBUGS-30894 ) 3.1.2.4. Custom Metrics Autoscaler Operator 2.12.1-384 release notes This release of the Custom Metrics Autoscaler Operator 2.12.1-384 provides a bug fix for running the Operator in an OpenShift Container Platform cluster. The following advisory is available for the RHBA-2024:2043 . Important Before installing this version of the Custom Metrics Autoscaler Operator, remove any previously installed Technology Preview versions or the community-supported version of KEDA. 3.1.2.4.1. Bug fixes Previously, the custom-metrics-autoscaler and custom-metrics-autoscaler-adapter images were missing time zone information. As a consequence, scaled objects with cron triggers failed to work because the controllers were unable to find time zone information. With this fix, the image builds are updated to include time zone information. As a result, scaled objects containing cron triggers now function properly. ( OCPBUGS-32395 ) 3.1.2.5. Custom Metrics Autoscaler Operator 2.12.1-376 release notes This release of the Custom Metrics Autoscaler Operator 2.12.1-376 provides security updates and bug fixes for running the Operator in an OpenShift Container Platform cluster. The following advisory is available for the RHSA-2024:1812 . Important Before installing this version of the Custom Metrics Autoscaler Operator, remove any previously installed Technology Preview versions or the community-supported version of KEDA. 3.1.2.5.1. Bug fixes Previously, if invalid values such as nonexistent namespaces were specified in scaled object metadata, the underlying scaler clients would not free, or close, their client descriptors, resulting in a slow memory leak. This fix properly closes the underlying client descriptors when there are errors, preventing memory from leaking. ( OCPBUGS-30145 ) Previously the ServiceMonitor custom resource (CR) for the keda-metrics-apiserver pod was not functioning, because the CR referenced an incorrect metrics port name of http . 
This fix corrects the ServiceMonitor CR to reference the proper port name of metrics . As a result, the Service Monitor functions properly. ( OCPBUGS-25806 ) 3.1.2.6. Custom Metrics Autoscaler Operator 2.11.2-322 release notes This release of the Custom Metrics Autoscaler Operator 2.11.2-322 provides security updates and bug fixes for running the Operator in an OpenShift Container Platform cluster. The following advisory is available for the RHSA-2023:6144 . Important Before installing this version of the Custom Metrics Autoscaler Operator, remove any previously installed Technology Preview versions or the community-supported version of KEDA. 3.1.2.6.1. Bug fixes Because the Custom Metrics Autoscaler Operator version 2.11.2-311 was released without a required volume mount in the Operator deployment, the Custom Metrics Autoscaler Operator pod would restart every 15 minutes. This fix adds the required volume mount to the Operator deployment. As a result, the Operator no longer restarts every 15 minutes. ( OCPBUGS-22361 ) 3.1.2.7. Custom Metrics Autoscaler Operator 2.11.2-311 release notes This release of the Custom Metrics Autoscaler Operator 2.11.2-311 provides new features and bug fixes for running the Operator in an OpenShift Container Platform cluster. The components of the Custom Metrics Autoscaler Operator 2.11.2-311 were released in RHBA-2023:5981 . Important Before installing this version of the Custom Metrics Autoscaler Operator, remove any previously installed Technology Preview versions or the community-supported version of KEDA. 3.1.2.7.1. New features and enhancements 3.1.2.7.1.1. Red Hat OpenShift Service on AWS (ROSA) and OpenShift Dedicated are now supported The Custom Metrics Autoscaler Operator 2.11.2-311 can be installed on OpenShift ROSA and OpenShift Dedicated managed clusters. Previous versions of the Custom Metrics Autoscaler Operator could be installed only in the openshift-keda namespace, which prevented the Operator from being installed on OpenShift ROSA and OpenShift Dedicated clusters. This version of the Custom Metrics Autoscaler Operator allows installation in other namespaces, such as openshift-operators or keda , enabling installation on ROSA and OpenShift Dedicated clusters. 3.1.2.7.2. Bug fixes Previously, if the Custom Metrics Autoscaler Operator was installed and configured, but not in use, the OpenShift CLI reported the couldn't get resource list for external.metrics.k8s.io/v1beta1: Got empty response for: external.metrics.k8s.io/v1beta1 error after any oc command was entered. The message, although harmless, could have caused confusion. With this fix, the Got empty response for: external.metrics... error no longer appears inappropriately. ( OCPBUGS-15779 ) Previously, any annotation or label change to objects managed by the Custom Metrics Autoscaler was reverted by the Custom Metrics Autoscaler Operator any time the KedaController resource was modified, for example after a configuration change. This caused the labels in your objects to change continuously. The Custom Metrics Autoscaler now uses its own annotation to manage labels and annotations, and annotations or labels are no longer inappropriately reverted. ( OCPBUGS-15590 ) 3.1.2.8. Custom Metrics Autoscaler Operator 2.10.1-267 release notes This release of the Custom Metrics Autoscaler Operator 2.10.1-267 provides new features and bug fixes for running the Operator in an OpenShift Container Platform cluster. The components of the Custom Metrics Autoscaler Operator 2.10.1-267 were released in RHBA-2023:4089 .
Important Before installing this version of the Custom Metrics Autoscaler Operator, remove any previously installed Technology Preview versions or the community-supported version of KEDA. 3.1.2.8.1. Bug fixes Previously, the custom-metrics-autoscaler and custom-metrics-autoscaler-adapter images did not contain time zone information. Because of this, scaled objects with cron triggers failed to work because the controllers were unable to find time zone information. With this fix, the image builds now include time zone information. As a result, scaled objects containing cron triggers now function properly. ( OCPBUGS-15264 ) Previously, the Custom Metrics Autoscaler Operator would attempt to take ownership of all managed objects, including objects in other namespaces and cluster-scoped objects. Because of this, the Custom Metrics Autoscaler Operator was unable to create the role binding for reading the credentials necessary to function as an API server. This caused errors in the kube-system namespace. With this fix, the Custom Metrics Autoscaler Operator skips adding the ownerReference field to any object in another namespace or any cluster-scoped object. As a result, the role binding is now created without any errors. ( OCPBUGS-15038 ) Previously, the Custom Metrics Autoscaler Operator added an ownerReferences field to the openshift-keda namespace. While this did not cause functionality problems, the presence of this field could have caused confusion for cluster administrators. With this fix, the Custom Metrics Autoscaler Operator does not add the ownerReference field to the openshift-keda namespace. As a result, the openshift-keda namespace no longer has a superfluous ownerReference field. ( OCPBUGS-15293 ) Previously, if you used a Prometheus trigger configured with an authentication method other than pod identity, and the podIdentity parameter was set to none , the trigger would fail to scale. With this fix, the Custom Metrics Autoscaler for OpenShift now properly handles the none pod identity provider type. As a result, a Prometheus trigger configured with an authentication method other than pod identity, and the podIdentity parameter set to none , now properly scales. ( OCPBUGS-15274 ) 3.1.2.9. Custom Metrics Autoscaler Operator 2.10.1 release notes This release of the Custom Metrics Autoscaler Operator 2.10.1 provides new features and bug fixes for running the Operator in an OpenShift Container Platform cluster. The components of the Custom Metrics Autoscaler Operator 2.10.1 were released in RHEA-2023:3199 . Important Before installing this version of the Custom Metrics Autoscaler Operator, remove any previously installed Technology Preview versions or the community-supported version of KEDA. 3.1.2.9.1. New features and enhancements 3.1.2.9.1.1. Custom Metrics Autoscaler Operator general availability The Custom Metrics Autoscaler Operator is now generally available as of Custom Metrics Autoscaler Operator version 2.10.1. Important Scaling by using a scaled job is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope .
3.1.2.9.1.2. Performance metrics You can now use the Prometheus Query Language (PromQL) to query metrics on the Custom Metrics Autoscaler Operator. 3.1.2.9.1.3. Pausing the custom metrics autoscaling for scaled objects You can now pause the autoscaling of a scaled object, as needed, and resume autoscaling when ready. 3.1.2.9.1.4. Replica fallback for scaled objects You can now specify the number of replicas to fall back to if a scaled object fails to get metrics from the source. 3.1.2.9.1.5. Customizable HPA naming for scaled objects You can now specify a custom name for the horizontal pod autoscaler in scaled objects. 3.1.2.9.1.6. Activation and scaling thresholds Because the horizontal pod autoscaler (HPA) cannot scale to or from 0 replicas, the Custom Metrics Autoscaler Operator performs that scaling, after which the HPA takes over. You can now specify when the HPA takes over autoscaling, based on the number of replicas. This allows for more flexibility with your scaling policies. 3.1.2.10. Custom Metrics Autoscaler Operator 2.8.2-174 release notes This release of the Custom Metrics Autoscaler Operator 2.8.2-174 provides new features and bug fixes for running the Operator in an OpenShift Container Platform cluster. The components of the Custom Metrics Autoscaler Operator 2.8.2-174 were released in RHEA-2023:1683 . Important The Custom Metrics Autoscaler Operator version 2.8.2-174 is a Technology Preview feature. 3.1.2.10.1. New features and enhancements 3.1.2.10.1.1. Operator upgrade support You can now upgrade from a prior version of the Custom Metrics Autoscaler Operator. See "Changing the update channel for an Operator" in the "Additional resources" section for information on upgrading an Operator. 3.1.2.10.1.2. must-gather support You can now collect data about the Custom Metrics Autoscaler Operator and its components by using the OpenShift Container Platform must-gather tool. Currently, the process for using the must-gather tool with the Custom Metrics Autoscaler is different from the process for other operators. See "Gathering debugging data" in the "Additional resources" section for more information. 3.1.2.11. Custom Metrics Autoscaler Operator 2.8.2 release notes This release of the Custom Metrics Autoscaler Operator 2.8.2 provides new features and bug fixes for running the Operator in an OpenShift Container Platform cluster. The components of the Custom Metrics Autoscaler Operator 2.8.2 were released in RHSA-2023:1042 . Important The Custom Metrics Autoscaler Operator version 2.8.2 is a Technology Preview feature. 3.1.2.11.1. New features and enhancements 3.1.2.11.1.1. Audit Logging You can now gather and view audit logs for the Custom Metrics Autoscaler Operator and its associated components. Audit logs are security-relevant chronological sets of records that document the sequence of activities that have affected the system by individual users, administrators, or other components of the system. 3.1.2.11.1.2. Scale applications based on Apache Kafka metrics You can now use the KEDA Apache Kafka trigger/scaler to scale deployments based on an Apache Kafka topic. 3.1.2.11.1.3. Scale applications based on CPU metrics You can now use the KEDA CPU trigger/scaler to scale deployments based on CPU metrics. 3.1.2.11.1.4. Scale applications based on memory metrics You can now use the KEDA memory trigger/scaler to scale deployments based on memory metrics.
3.2. Custom Metrics Autoscaler Operator overview As a developer, you can use the Custom Metrics Autoscaler Operator for Red Hat OpenShift to specify how OpenShift Container Platform should automatically increase or decrease the number of pods for a deployment, stateful set, custom resource, or job based on custom metrics that are not based only on CPU or memory. The Custom Metrics Autoscaler Operator is an optional Operator, based on the Kubernetes Event Driven Autoscaler (KEDA), that allows workloads to be scaled by using additional metrics sources other than pod metrics. The custom metrics autoscaler currently supports only the Prometheus, CPU, memory, and Apache Kafka metrics. The Custom Metrics Autoscaler Operator scales your pods up and down based on custom, external metrics from specific applications. Your other applications continue to use other scaling methods. You configure triggers , also known as scalers, which are the source of events and metrics that the custom metrics autoscaler uses to determine how to scale. The custom metrics autoscaler uses a metrics API to convert the external metrics to a form that OpenShift Container Platform can use. The custom metrics autoscaler creates a horizontal pod autoscaler (HPA) that performs the actual scaling. To use the custom metrics autoscaler, you create a ScaledObject or ScaledJob object for a workload, which is a custom resource (CR) that defines the scaling metadata. You specify the deployment or job to scale, the source of the metrics to scale on (trigger), and other parameters such as the minimum and maximum replica counts allowed. Note You can create only one scaled object or scaled job for each workload that you want to scale. Also, you cannot use a scaled object or scaled job and the horizontal pod autoscaler (HPA) on the same workload. The custom metrics autoscaler, unlike the HPA, can scale to zero. If you set the minReplicaCount value in the custom metrics autoscaler CR to 0 , the custom metrics autoscaler scales the workload down from 1 to 0 replicas or up from 0 replicas to 1. This is known as the activation phase . After scaling up to 1 replica, the HPA takes control of the scaling. This is known as the scaling phase . Some triggers allow you to change the number of replicas that are scaled by the custom metrics autoscaler. In all cases, the parameter that configures the activation phase uses the same name as the corresponding scaling parameter, prefixed with activation . For example, if the threshold parameter configures scaling, activationThreshold configures activation. Configuring the activation and scaling phases allows you more flexibility with your scaling policies. For example, you can configure a higher activation value to prevent scaling up or down if the metric is particularly low. The activation value takes priority over the scaling value if the two lead to different decisions. For example, if threshold is set to 10 and activationThreshold is set to 50 , and the metric reports 40 , the scaler is not active and the pods are scaled to zero even if the HPA would require 4 instances.
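The following trigger sketch illustrates that example. It is for illustration only; the server address, metric name, and query mirror the Prometheus trigger example later in this document, and the specific threshold values are assumptions:

triggers:
- type: prometheus
  metadata:
    serverAddress: https://thanos-querier.openshift-monitoring.svc.cluster.local:9092
    namespace: kedatest
    metricName: http_requests_total
    query: sum(rate(http_requests_total{job="test-app"}[1m]))
    threshold: '10'            # scaling phase: the HPA scales to hold the metric near this value
    activationThreshold: '50'  # activation phase: while the metric is below this value, the workload can stay at 0 replicas

With this configuration and a reported metric value of 40, the activation check fails, so the workload stays at zero replicas regardless of what the threshold alone would suggest.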
Figure 3.1. Custom metrics autoscaler workflow You create or modify a scaled object custom resource for a workload on a cluster. The object contains the scaling configuration for that workload. Prior to accepting the new object, the OpenShift API server sends it to the custom metrics autoscaler admission webhooks process to ensure that the object is valid. If validation succeeds, the API server persists the object. The custom metrics autoscaler controller watches for new or modified scaled objects. When the OpenShift API server notifies the controller of a change, the controller monitors any external trigger sources, also known as data sources, that are specified in the object for changes to the metrics data. One or more scalers request scaling data from the external trigger source. For example, for a Kafka trigger type, the controller uses the Kafka scaler to communicate with a Kafka instance to obtain the data requested by the trigger. The controller creates a horizontal pod autoscaler object for the scaled object. As a result, the Horizontal Pod Autoscaler (HPA) Operator starts monitoring the scaling data associated with the trigger. The HPA requests scaling data from the cluster OpenShift API server endpoint. The OpenShift API server endpoint is served by the custom metrics autoscaler metrics adapter. When the metrics adapter receives a request for custom metrics, it uses a gRPC connection to the controller to request the most recent trigger data received from the scaler. The HPA makes scaling decisions based upon the data received from the metrics adapter and scales the workload up or down by increasing or decreasing the replicas. As it operates, a workload can affect the scaling metrics. For example, if a workload is scaled up to handle work in a Kafka queue, the queue size decreases after the workload processes all the work. As a result, the workload is scaled down. If the metrics are in a range specified by the minReplicaCount value, the custom metrics autoscaler controller disables all scaling, and leaves the replica count at a fixed level. If the metrics exceed that range, the custom metrics autoscaler controller enables scaling and allows the HPA to scale the workload. While scaling is disabled, the HPA does not take any action. 3.2.1. Custom CA certificates for the Custom Metrics Autoscaler By default, the Custom Metrics Autoscaler Operator uses automatically-generated service CA certificates to connect to on-cluster services. If you want to use off-cluster services that require custom CA certificates, you can add the required certificates to a config map. Then, add the config map to the KedaController custom resource as described in Installing the custom metrics autoscaler . The Operator loads those certificates on start-up and registers them as trusted. The config maps can contain one or more certificate files that contain one or more PEM-encoded CA certificates. Or, you can use separate config maps for each certificate file. Note If you later update the config map to add additional certificates, you must restart the keda-operator-* pod for the changes to take effect.
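For example, assuming the default openshift-keda installation namespace and that the pod is managed by a deployment named keda-operator (as suggested by the pod names shown later in this document), one way to trigger that restart is a rollout restart; this is a sketch rather than a documented procedure:

USD oc rollout restart -n openshift-keda deployment/keda-operator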
3.3. Installing the custom metrics autoscaler You can use the OpenShift Container Platform web console to install the Custom Metrics Autoscaler Operator. The installation creates the following five CRDs: ClusterTriggerAuthentication KedaController ScaledJob ScaledObject TriggerAuthentication 3.3.1. Installing the custom metrics autoscaler You can use the following procedure to install the Custom Metrics Autoscaler Operator. Prerequisites Remove any previously-installed Technology Preview versions of the Custom Metrics Autoscaler Operator. Remove any versions of the community-based KEDA. Also, remove the KEDA 1.x custom resource definitions by running the following commands: USD oc delete crd scaledobjects.keda.k8s.io USD oc delete crd triggerauthentications.keda.k8s.io Optional: If you need the Custom Metrics Autoscaler Operator to connect to off-cluster services, such as an external Kafka cluster or an external Prometheus service, put any required service CA certificates into a config map. The config map must exist in the same namespace where the Operator is installed. For example: USD oc create configmap -n openshift-keda thanos-cert --from-file=ca-cert.pem Procedure In the OpenShift Container Platform web console, click Operators OperatorHub . Choose Custom Metrics Autoscaler from the list of available Operators, and click Install . On the Install Operator page, ensure that the All namespaces on the cluster (default) option is selected for Installation Mode . This installs the Operator in all namespaces. Ensure that the openshift-keda namespace is selected for Installed Namespace . OpenShift Container Platform creates the namespace, if not present in your cluster. Click Install . Verify the installation by listing the Custom Metrics Autoscaler Operator components: Navigate to Workloads Pods . Select the openshift-keda project from the drop-down menu and verify that the custom-metrics-autoscaler-operator-* pod is running. Navigate to Workloads Deployments to verify that the custom-metrics-autoscaler-operator deployment is running. Optional: Verify the installation in the OpenShift CLI using the following commands: USD oc get all -n openshift-keda The output appears similar to the following: Example output NAME READY STATUS RESTARTS AGE pod/custom-metrics-autoscaler-operator-5fd8d9ffd8-xt4xp 1/1 Running 0 18m NAME READY UP-TO-DATE AVAILABLE AGE deployment.apps/custom-metrics-autoscaler-operator 1/1 1 1 18m NAME DESIRED CURRENT READY AGE replicaset.apps/custom-metrics-autoscaler-operator-5fd8d9ffd8 1 1 1 18m Install the KedaController custom resource, which creates the required CRDs: In the OpenShift Container Platform web console, click Operators Installed Operators . Click Custom Metrics Autoscaler . On the Operator Details page, click the KedaController tab. On the KedaController tab, click Create KedaController and edit the file. kind: KedaController apiVersion: keda.sh/v1alpha1 metadata: name: keda namespace: openshift-keda spec: watchNamespace: '' 1 operator: logLevel: info 2 logEncoder: console 3 caConfigMaps: 4 - thanos-cert - kafka-cert metricsServer: logLevel: '0' 5 auditConfig: 6 logFormat: "json" logOutputVolumeClaim: "persistentVolumeClaimName" policy: rules: - level: Metadata omitStages: ["RequestReceived"] omitManagedFields: false lifetime: maxAge: "2" maxBackup: "1" maxSize: "50" serviceAccount: {} 1 Specifies a single namespace in which the Custom Metrics Autoscaler Operator should scale applications. Leave this field blank or empty to scale applications in all namespaces. The default value is empty. 2 Specifies the level of verbosity for the Custom Metrics Autoscaler Operator log messages. The allowed values are debug , info , error . The default is info . 3 Specifies the logging format for the Custom Metrics Autoscaler Operator log messages. The allowed values are console or json . The default is console . 4 Optional: Specifies one or more config maps with CA certificates, which the Custom Metrics Autoscaler Operator can use to connect securely to TLS-enabled metrics sources.
5 Specifies the logging level for the Custom Metrics Autoscaler Metrics Server. The allowed values are 0 for info and 4 for debug . The default is 0 . 6 Activates audit logging for the Custom Metrics Autoscaler Operator and specifies the audit policy to use, as described in the "Configuring audit logging" section. Click Create to create the KEDA controller. 3.4. Understanding custom metrics autoscaler triggers Triggers, also known as scalers, provide the metrics that the Custom Metrics Autoscaler Operator uses to scale your pods. The custom metrics autoscaler currently supports the Prometheus, CPU, memory, Apache Kafka, and cron triggers. You use a ScaledObject or ScaledJob custom resource to configure triggers for specific objects, as described in the sections that follow. You can configure a certificate authority to use with your scaled objects or for all scalers in the cluster . 3.4.1. Understanding the Prometheus trigger You can scale pods based on Prometheus metrics, which can use the installed OpenShift Container Platform monitoring or an external Prometheus server as the metrics source. See "Configuring the custom metrics autoscaler to use OpenShift Container Platform monitoring" for information on the configurations required to use the OpenShift Container Platform monitoring as a source for metrics. Note If Prometheus is collecting metrics from the application that the custom metrics autoscaler is scaling, do not set the minimum replicas to 0 in the custom resource. If there are no application pods, the custom metrics autoscaler does not have any metrics to scale on. Example scaled object with a Prometheus target apiVersion: keda.sh/v1alpha1 kind: ScaledObject metadata: name: prom-scaledobject namespace: my-namespace spec: # ... triggers: - type: prometheus 1 metadata: serverAddress: https://thanos-querier.openshift-monitoring.svc.cluster.local:9092 2 namespace: kedatest 3 metricName: http_requests_total 4 threshold: '5' 5 query: sum(rate(http_requests_total{job="test-app"}[1m])) 6 authModes: basic 7 cortexOrgID: my-org 8 ignoreNullValues: "false" 9 unsafeSsl: "false" 10 1 Specifies Prometheus as the trigger type. 2 Specifies the address of the Prometheus server. This example uses OpenShift Container Platform monitoring. 3 Optional: Specifies the namespace of the object you want to scale. This parameter is mandatory if using OpenShift Container Platform monitoring as a source for the metrics. 4 Specifies the name to identify the metric in the external.metrics.k8s.io API. If you are using more than one trigger, all metric names must be unique. 5 Specifies the value that triggers scaling. Must be specified as a quoted string value. 6 Specifies the Prometheus query to use. 7 Specifies the authentication method to use. Prometheus scalers support bearer authentication ( bearer ), basic authentication ( basic ), or TLS authentication ( tls ). You configure the specific authentication parameters in a trigger authentication, as discussed in a following section. As needed, you can also use a secret. 8 Optional: Passes the X-Scope-OrgID header to multi-tenant Cortex or Mimir storage for Prometheus. This parameter is required only with multi-tenant Prometheus storage, to indicate which data Prometheus should return. 9 Optional: Specifies how the trigger should proceed if the Prometheus target is lost. If true , the trigger continues to operate if the Prometheus target is lost. This is the default behavior. If false , the trigger returns an error if the Prometheus target is lost. 
10 Optional: Specifies whether the certificate check should be skipped. For example, you might skip the check if you are running in a test environment and using self-signed certificates at the Prometheus endpoint. If false , the certificate check is performed. This is the default behavior. If true , the certificate check is not performed. Important Skipping the check is not recommended. 3.4.1.1. Configuring the custom metrics autoscaler to use OpenShift Container Platform monitoring You can use the installed OpenShift Container Platform Prometheus monitoring as a source for the metrics used by the custom metrics autoscaler. However, there are some additional configurations you must perform. Note These steps are not required for an external Prometheus source. You must perform the following tasks, as described in this section: Create a service account to get a token. Create a role. Add that role to the service account. Reference the token in the trigger authentication object used by Prometheus. Prerequisites OpenShift Container Platform monitoring must be installed. Monitoring of user-defined workloads must be enabled in OpenShift Container Platform monitoring, as described in the Creating a user-defined workload monitoring config map section. The Custom Metrics Autoscaler Operator must be installed. Procedure Change to the project with the object you want to scale: USD oc project my-project Use the following command to create a service account, if your cluster does not have one: USD oc create serviceaccount <service_account> where: <service_account> Specifies the name of the service account. Use the following command to locate the token assigned to the service account: USD oc describe serviceaccount <service_account> where: <service_account> Specifies the name of the service account. Example output Name: thanos Namespace: my-project Labels: <none> Annotations: <none> Image pull secrets: thanos-dockercfg-nnwgj Mountable secrets: thanos-dockercfg-nnwgj Tokens: thanos-token-9g4n5 1 Events: <none> 1 Use this token in the trigger authentication. Create a trigger authentication with the service account token: Create a YAML file similar to the following: apiVersion: keda.sh/v1alpha1 kind: TriggerAuthentication metadata: name: keda-trigger-auth-prometheus spec: secretTargetRef: 1 - parameter: bearerToken 2 name: thanos-token-9g4n5 3 key: token 4 - parameter: ca name: thanos-token-9g4n5 key: ca.crt 1 Specifies that this object uses a secret for authorization. 2 Specifies the authentication parameter to supply by using the token. 3 Specifies the name of the token to use. 4 Specifies the key in the token to use with the specified parameter. Create the CR object: USD oc create -f <file-name>.yaml Create a role for reading Thanos metrics: Create a YAML file with the following parameters: apiVersion: rbac.authorization.k8s.io/v1 kind: Role metadata: name: thanos-metrics-reader rules: - apiGroups: - "" resources: - pods verbs: - get - apiGroups: - metrics.k8s.io resources: - pods - nodes verbs: - get - list - watch Create the CR object: USD oc create -f <file-name>.yaml Create a role binding for reading Thanos metrics: Create a YAML file similar to the following: apiVersion: rbac.authorization.k8s.io/v1 kind: RoleBinding metadata: name: thanos-metrics-reader 1 namespace: my-project 2 roleRef: apiGroup: rbac.authorization.k8s.io kind: Role name: thanos-metrics-reader subjects: - kind: ServiceAccount name: thanos 3 namespace: my-project 4 1 Specifies the name of the role you created. 
2 Specifies the namespace of the object you want to scale. 3 Specifies the name of the service account to bind to the role. 4 Specifies the namespace of the object you want to scale. Create the CR object: USD oc create -f <file-name>.yaml You can now deploy a scaled object or scaled job to enable autoscaling for your application, as described in "Understanding how to add custom metrics autoscalers". To use OpenShift Container Platform monitoring as the metrics source, you must include the following parameters in the trigger, or scaler: triggers.type must be prometheus triggers.metadata.serverAddress must be https://thanos-querier.openshift-monitoring.svc.cluster.local:9092 triggers.metadata.authModes must be bearer triggers.metadata.namespace must be set to the namespace of the object to scale triggers.authenticationRef must point to the trigger authentication resource specified in the previous step 3.4.2. Understanding the CPU trigger You can scale pods based on CPU metrics. This trigger uses cluster metrics as the source for metrics. The custom metrics autoscaler scales the pods associated with an object to maintain the CPU usage that you specify. The autoscaler increases or decreases the number of replicas between the minimum and maximum numbers to maintain the specified CPU utilization across all pods. The CPU trigger considers the CPU utilization of the entire pod. If the pod has multiple containers, the trigger considers the total CPU utilization of all containers in the pod. Note This trigger cannot be used with the ScaledJob custom resource. When using a CPU trigger to scale an object, the object does not scale to 0 , even if you are using multiple triggers. Example scaled object with a CPU target apiVersion: keda.sh/v1alpha1 kind: ScaledObject metadata: name: cpu-scaledobject namespace: my-namespace spec: # ... triggers: - type: cpu 1 metricType: Utilization 2 metadata: value: '60' 3 minReplicaCount: 1 4 1 Specifies CPU as the trigger type. 2 Specifies the type of metric to use, either Utilization or AverageValue . 3 Specifies the value that triggers scaling. Must be specified as a quoted string value. When using Utilization , the target value is the average of the resource metrics across all relevant pods, represented as a percentage of the requested value of the resource for the pods. When using AverageValue , the target value is the average of the metrics across all relevant pods. 4 Specifies the minimum number of replicas when scaling down. For a CPU trigger, enter a value of 1 or greater, because the HPA cannot scale to zero if you are using only CPU metrics.
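For comparison, a trigger that uses AverageValue instead of Utilization targets an absolute average CPU amount per pod rather than a percentage of the requested CPU. The following sketch is illustrative only and the value shown is an assumption:

triggers:
- type: cpu
  metricType: AverageValue
  metadata:
    value: '500m'   # target an average of 500 millicores per pod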
3.4.3. Understanding the memory trigger You can scale pods based on memory metrics. This trigger uses cluster metrics as the source for metrics. The custom metrics autoscaler scales the pods associated with an object to maintain the average memory usage that you specify. The autoscaler increases and decreases the number of replicas between the minimum and maximum numbers to maintain the specified memory utilization across all pods. The memory trigger considers the memory utilization of the entire pod. If the pod has multiple containers, the memory utilization is the sum of the memory utilization of all of the containers. Note This trigger cannot be used with the ScaledJob custom resource. When using a memory trigger to scale an object, the object does not scale to 0 , even if you are using multiple triggers. Example scaled object with a memory target apiVersion: keda.sh/v1alpha1 kind: ScaledObject metadata: name: memory-scaledobject namespace: my-namespace spec: # ... triggers: - type: memory 1 metricType: Utilization 2 metadata: value: '60' 3 containerName: api 4 1 Specifies memory as the trigger type. 2 Specifies the type of metric to use, either Utilization or AverageValue . 3 Specifies the value that triggers scaling. Must be specified as a quoted string value. When using Utilization , the target value is the average of the resource metrics across all relevant pods, represented as a percentage of the requested value of the resource for the pods. When using AverageValue , the target value is the average of the metrics across all relevant pods. 4 Optional: Specifies an individual container to scale, based on the memory utilization of only that container, rather than the entire pod. In this example, only the container named api is to be scaled. 3.4.4. Understanding the Kafka trigger You can scale pods based on an Apache Kafka topic or other services that support the Kafka protocol. The custom metrics autoscaler does not scale higher than the number of Kafka partitions, unless you set the allowIdleConsumers parameter to true in the scaled object or scaled job. Note If the number of consumers exceeds the number of partitions in a topic, the extra consumers remain idle. To avoid this, by default the number of replicas does not exceed: The number of partitions on a topic, if a topic is specified The number of partitions of all topics in the consumer group, if no topic is specified The maxReplicaCount specified in the scaled object or scaled job CR You can use the allowIdleConsumers parameter to disable these default behaviors. Example scaled object with a Kafka target apiVersion: keda.sh/v1alpha1 kind: ScaledObject metadata: name: kafka-scaledobject namespace: my-namespace spec: # ... triggers: - type: kafka 1 metadata: topic: my-topic 2 bootstrapServers: my-cluster-kafka-bootstrap.openshift-operators.svc:9092 3 consumerGroup: my-group 4 lagThreshold: '10' 5 activationLagThreshold: '5' 6 offsetResetPolicy: latest 7 allowIdleConsumers: true 8 scaleToZeroOnInvalidOffset: false 9 excludePersistentLag: false 10 version: '1.0.0' 11 partitionLimitation: '1,2,10-20,31' 12 tls: enable 13 1 Specifies Kafka as the trigger type. 2 Specifies the name of the Kafka topic on which Kafka is processing the offset lag. 3 Specifies a comma-separated list of Kafka brokers to connect to. 4 Specifies the name of the Kafka consumer group used for checking the offset on the topic and processing the related lag. 5 Optional: Specifies the average target value that triggers scaling. Must be specified as a quoted string value. The default is 5 . 6 Optional: Specifies the target value for the activation phase. Must be specified as a quoted string value. 7 Optional: Specifies the Kafka offset reset policy for the Kafka consumer. The available values are: latest and earliest . The default is latest . 8 Optional: Specifies whether the number of Kafka replicas can exceed the number of partitions on a topic. If true , the number of Kafka replicas can exceed the number of partitions on a topic. This allows for idle Kafka consumers. If false , the number of Kafka replicas cannot exceed the number of partitions on a topic. This is the default. 9 Specifies how the trigger behaves when a Kafka partition does not have a valid offset. If true , the consumers are scaled to zero for that partition.
If false , the scaler keeps a single consumer for that partition. This is the default. 10 Optional: Specifies whether the trigger includes or excludes partition lag for partitions whose current offset is the same as the current offset of the previous polling cycle. If true , the scaler excludes partition lag in these partitions. If false , the trigger includes all consumer lag in all partitions. This is the default. 11 Optional: Specifies the version of your Kafka brokers. Must be specified as a quoted string value. The default is 1.0.0 . 12 Optional: Specifies a comma-separated list of partition IDs to scope the scaling on. If set, only the listed IDs are considered when calculating lag. Must be specified as a quoted string value. The default is to consider all partitions. 13 Optional: Specifies whether to use TLS client authentication for Kafka. The default is disable . For information on configuring TLS, see "Understanding custom metrics autoscaler trigger authentications". 3.4.5. Understanding the Cron trigger You can scale pods based on a time range. When the time range starts, the custom metrics autoscaler scales the pods associated with an object from the configured minimum number of pods to the specified number of desired pods. At the end of the time range, the pods are scaled back to the configured minimum. The time period must be configured in cron format . The following example scales the pods associated with this scaled object from 0 to 100 from 6:00 AM to 6:30 PM India Standard Time. Example scaled object with a Cron trigger apiVersion: keda.sh/v1alpha1 kind: ScaledObject metadata: name: cron-scaledobject namespace: default spec: scaleTargetRef: name: my-deployment minReplicaCount: 0 1 maxReplicaCount: 100 2 cooldownPeriod: 300 triggers: - type: cron 3 metadata: timezone: Asia/Kolkata 4 start: "0 6 * * *" 5 end: "30 18 * * *" 6 desiredReplicas: "100" 7 1 Specifies the minimum number of pods to scale down to at the end of the time frame. 2 Specifies the maximum number of replicas when scaling up. This value should be the same as desiredReplicas . The default is 100 . 3 Specifies a Cron trigger. 4 Specifies the timezone for the time frame. This value must be from the IANA Time Zone Database . 5 Specifies the start of the time frame. 6 Specifies the end of the time frame. 7 Specifies the number of pods to scale to between the start and end of the time frame. This value should be the same as maxReplicaCount . 3.5. Understanding custom metrics autoscaler trigger authentications A trigger authentication allows you to include authentication information in a scaled object or a scaled job that can be used by the associated containers. You can use trigger authentications to pass OpenShift Container Platform secrets, platform-native pod authentication mechanisms, environment variables, and so on. You define a TriggerAuthentication object in the same namespace as the object that you want to scale. That trigger authentication can be used only by objects in that namespace. Alternatively, to share credentials between objects in multiple namespaces, you can create a ClusterTriggerAuthentication object that can be used across all namespaces. Trigger authentications and cluster trigger authentications use the same configuration. However, a cluster trigger authentication requires an additional kind parameter in the authentication reference of the scaled object.
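The credential values in the data stanzas of the secret examples that follow must be base64 encoded. Assuming a shell with the base64 utility available, you can produce such values as shown in this sketch; the plaintext credentials are placeholders:

USD echo -n 'username' | base64
dXNlcm5hbWU=
USD echo -n 'password' | base64
cGFzc3dvcmQ=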
Example secret for Basic authentication apiVersion: v1 kind: Secret metadata: name: my-basic-secret namespace: default data: username: "dXNlcm5hbWU=" 1 password: "cGFzc3dvcmQ=" 1 User name and password to supply to the trigger authentication. The values in a data stanza must be base-64 encoded. Example trigger authentication using a secret for Basic authentication kind: TriggerAuthentication apiVersion: keda.sh/v1alpha1 metadata: name: secret-triggerauthentication namespace: my-namespace 1 spec: secretTargetRef: 2 - parameter: username 3 name: my-basic-secret 4 key: username 5 - parameter: password name: my-basic-secret key: password 1 Specifies the namespace of the object you want to scale. 2 Specifies that this trigger authentication uses a secret for authorization when connecting to the metrics endpoint. 3 Specifies the authentication parameter to supply by using the secret. 4 Specifies the name of the secret to use. 5 Specifies the key in the secret to use with the specified parameter. Example cluster trigger authentication with a secret for Basic authentication kind: ClusterTriggerAuthentication apiVersion: keda.sh/v1alpha1 metadata: 1 name: secret-cluster-triggerauthentication spec: secretTargetRef: 2 - parameter: username 3 name: my-basic-secret 4 key: username 5 - parameter: password name: my-basic-secret key: password 1 Note that no namespace is used with a cluster trigger authentication. 2 Specifies that this trigger authentication uses a secret for authorization when connecting to the metrics endpoint. 3 Specifies the authentication parameter to supply by using the secret. 4 Specifies the name of the secret to use. 5 Specifies the key in the secret to use with the specified parameter. Example secret with certificate authority (CA) details apiVersion: v1 kind: Secret metadata: name: my-secret namespace: my-namespace data: ca-cert.pem: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0... 1 client-cert.pem: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0... 2 client-key.pem: LS0tLS1CRUdJTiBQUklWQVRFIEtFWS0t... 1 Specifies the TLS CA Certificate for authentication of the metrics endpoint. The value must be base-64 encoded. 2 Specifies the TLS certificates and key for TLS client authentication. The values must be base-64 encoded. Example trigger authentication using a secret for CA details kind: TriggerAuthentication apiVersion: keda.sh/v1alpha1 metadata: name: secret-triggerauthentication namespace: my-namespace 1 spec: secretTargetRef: 2 - parameter: key 3 name: my-secret 4 key: client-key.pem 5 - parameter: ca 6 name: my-secret 7 key: ca-cert.pem 8 1 Specifies the namespace of the object you want to scale. 2 Specifies that this trigger authentication uses a secret for authorization when connecting to the metrics endpoint. 3 Specifies the type of authentication to use. 4 Specifies the name of the secret to use. 5 Specifies the key in the secret to use with the specified parameter. 6 Specifies the authentication parameter for a custom CA when connecting to the metrics endpoint. 7 Specifies the name of the secret to use. 8 Specifies the key in the secret to use with the specified parameter. Example secret with a bearer token apiVersion: v1 kind: Secret metadata: name: my-secret namespace: my-namespace data: bearerToken: "eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXV" 1 1 Specifies a bearer token to use with bearer authentication. The value in a data stanza must be base-64 encoded. 
Example trigger authentication with a bearer token kind: TriggerAuthentication apiVersion: keda.sh/v1alpha1 metadata: name: token-triggerauthentication namespace: my-namespace 1 spec: secretTargetRef: 2 - parameter: bearerToken 3 name: my-secret 4 key: bearerToken 5 1 Specifies the namespace of the object you want to scale. 2 Specifies that this trigger authentication uses a secret for authorization when connecting to the metrics endpoint. 3 Specifies the type of authentication to use. 4 Specifies the name of the secret to use. 5 Specifies the key in the token to use with the specified parameter. Example trigger authentication with an environment variable kind: TriggerAuthentication apiVersion: keda.sh/v1alpha1 metadata: name: env-var-triggerauthentication namespace: my-namespace 1 spec: env: 2 - parameter: access_key 3 name: ACCESS_KEY 4 containerName: my-container 5 1 Specifies the namespace of the object you want to scale. 2 Specifies that this trigger authentication uses environment variables for authorization when connecting to the metrics endpoint. 3 Specify the parameter to set with this variable. 4 Specify the name of the environment variable. 5 Optional: Specify a container that requires authentication. The container must be in the same resource as referenced by scaleTargetRef in the scaled object. Example trigger authentication with pod authentication providers kind: TriggerAuthentication apiVersion: keda.sh/v1alpha1 metadata: name: pod-id-triggerauthentication namespace: my-namespace 1 spec: podIdentity: 2 provider: aws-eks 3 1 Specifies the namespace of the object you want to scale. 2 Specifies that this trigger authentication uses a platform-native pod authentication when connecting to the metrics endpoint. 3 Specifies a pod identity. Supported values are none , azure , gcp , aws-eks , or aws-kiam . The default is none . Additional resources For information about OpenShift Container Platform secrets, see Providing sensitive data to pods . 3.5.1. Using trigger authentications You use trigger authentications and cluster trigger authentications by using a custom resource to create the authentication, then add a reference to a scaled object or scaled job. Prerequisites The Custom Metrics Autoscaler Operator must be installed. If you are using a secret, the Secret object must exist, for example: Example secret apiVersion: v1 kind: Secret metadata: name: my-secret data: user-name: <base64_USER_NAME> password: <base64_USER_PASSWORD> Procedure Create the TriggerAuthentication or ClusterTriggerAuthentication object. 
Create a YAML file that defines the object: Example trigger authentication with a secret kind: TriggerAuthentication apiVersion: keda.sh/v1alpha1 metadata: name: prom-triggerauthentication namespace: my-namespace spec: secretTargetRef: - parameter: user-name name: my-secret key: USER_NAME - parameter: password name: my-secret key: USER_PASSWORD Create the TriggerAuthentication object: USD oc create -f <filename>.yaml Create or edit a ScaledObject YAML file that uses the trigger authentication: Create a YAML file that defines the object by running the following command: Example scaled object with a trigger authentication apiVersion: keda.sh/v1alpha1 kind: ScaledObject metadata: name: scaledobject namespace: my-namespace spec: scaleTargetRef: name: example-deployment maxReplicaCount: 100 minReplicaCount: 0 pollingInterval: 30 triggers: - type: prometheus metadata: serverAddress: https://thanos-querier.openshift-monitoring.svc.cluster.local:9092 namespace: kedatest # replace <NAMESPACE> metricName: http_requests_total threshold: '5' query: sum(rate(http_requests_total{job="test-app"}[1m])) authModes: "basic" authenticationRef: name: prom-triggerauthentication 1 kind: TriggerAuthentication 2 1 Specify the name of your trigger authentication object. 2 Specify TriggerAuthentication . TriggerAuthentication is the default. Example scaled object with a cluster trigger authentication apiVersion: keda.sh/v1alpha1 kind: ScaledObject metadata: name: scaledobject namespace: my-namespace spec: scaleTargetRef: name: example-deployment maxReplicaCount: 100 minReplicaCount: 0 pollingInterval: 30 triggers: - type: prometheus metadata: serverAddress: https://thanos-querier.openshift-monitoring.svc.cluster.local:9092 namespace: kedatest # replace <NAMESPACE> metricName: http_requests_total threshold: '5' query: sum(rate(http_requests_total{job="test-app"}[1m])) authModes: "basic" authenticationRef: name: prom-cluster-triggerauthentication 1 kind: ClusterTriggerAuthentication 2 1 Specify the name of your trigger authentication object. 2 Specify ClusterTriggerAuthentication . Create the scaled object by running the following command: USD oc apply -f <filename> 3.6. Pausing the custom metrics autoscaler for a scaled object You can pause and restart the autoscaling of a workload, as needed. For example, you might want to pause autoscaling before performing cluster maintenance or to avoid resource starvation by removing non-mission-critical workloads. 3.6.1. Pausing a custom metrics autoscaler You can pause the autoscaling of a scaled object by adding the autoscaling.keda.sh/paused-replicas annotation to the custom metrics autoscaler for that scaled object. The custom metrics autoscaler scales the replicas for that workload to the specified value and pauses autoscaling until the annotation is removed. apiVersion: keda.sh/v1alpha1 kind: ScaledObject metadata: annotations: autoscaling.keda.sh/paused-replicas: "4" # ... Procedure Use the following command to edit the ScaledObject CR for your workload: USD oc edit ScaledObject scaledobject Add the autoscaling.keda.sh/paused-replicas annotation with any value: apiVersion: keda.sh/v1alpha1 kind: ScaledObject metadata: annotations: autoscaling.keda.sh/paused-replicas: "4" 1 creationTimestamp: "2023-02-08T14:41:01Z" generation: 1 name: scaledobject namespace: my-project resourceVersion: '65729' uid: f5aec682-acdf-4232-a783-58b5b82f5dd0 1 Specifies that the Custom Metrics Autoscaler Operator is to scale the replicas to the specified value and stop autoscaling. 3.6.2. 
Restarting the custom metrics autoscaler for a scaled object You can restart a paused custom metrics autoscaler by removing the autoscaling.keda.sh/paused-replicas annotation for that ScaledObject . apiVersion: keda.sh/v1alpha1 kind: ScaledObject metadata: annotations: autoscaling.keda.sh/paused-replicas: "4" # ... Procedure Use the following command to edit the ScaledObject CR for your workload: USD oc edit ScaledObject scaledobject Remove the autoscaling.keda.sh/paused-replicas annotation. apiVersion: keda.sh/v1alpha1 kind: ScaledObject metadata: annotations: autoscaling.keda.sh/paused-replicas: "4" 1 creationTimestamp: "2023-02-08T14:41:01Z" generation: 1 name: scaledobject namespace: my-project resourceVersion: '65729' uid: f5aec682-acdf-4232-a783-58b5b82f5dd0 1 Remove this annotation to restart a paused custom metrics autoscaler. 3.7. Gathering audit logs You can gather audit logs, which are a security-relevant chronological set of records documenting the sequence of activities that have affected the system by individual users, administrators, or other components of the system. For example, audit logs can help you understand where an autoscaling request is coming from. This is key information when backends are getting overloaded by autoscaling requests made by user applications and you need to determine which is the troublesome application. 3.7.1. Configuring audit logging You can configure auditing for the Custom Metrics Autoscaler Operator by editing the KedaController custom resource. The logs are sent to an audit log file on a volume that is secured by using a persistent volume claim in the KedaController CR. Prerequisites The Custom Metrics Autoscaler Operator must be installed. Procedure Edit the KedaController custom resource to add the auditConfig stanza: kind: KedaController apiVersion: keda.sh/v1alpha1 metadata: name: keda namespace: openshift-keda spec: # ... metricsServer: # ... auditConfig: logFormat: "json" 1 logOutputVolumeClaim: "pvc-audit-log" 2 policy: rules: 3 - level: Metadata omitStages: "RequestReceived" 4 omitManagedFields: false 5 lifetime: 6 maxAge: "2" maxBackup: "1" maxSize: "50" 1 Specifies the output format of the audit log, either legacy or json . 2 Specifies an existing persistent volume claim for storing the log data. All requests coming to the API server are logged to this persistent volume claim. If you leave this field empty, the log data is sent to stdout. 3 Specifies which events should be recorded and what data they should include: None : Do not log events. Metadata : Log only the metadata for the request, such as user, timestamp, and so forth. Do not log the request text and the response text. This is the default. Request : Log only the metadata and the request text but not the response text. This option does not apply for non-resource requests. RequestResponse : Log event metadata, request text, and response text. This option does not apply for non-resource requests. 4 Specifies stages for which no event is created. 5 Specifies whether to omit the managed fields of the request and response bodies from being written to the API audit log, either true to omit the fields or false to include the fields. 6 Specifies the size and lifespan of the audit logs. maxAge : The maximum number of days to retain audit log files, based on the timestamp encoded in their filename. maxBackup : The maximum number of audit log files to retain. Set to 0 to retain all audit log files. maxSize : The maximum size in megabytes of an audit log file before it gets rotated. 
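The logOutputVolumeClaim value must name a persistent volume claim that already exists; the claim is not created for you. The following minimal claim is a sketch for illustration only: the name matches the example above, but the namespace, access mode, and size are assumptions (the claim must be reachable by the KEDA metrics server, which runs in the openshift-keda namespace by default):

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-audit-log        # must match the logOutputVolumeClaim value
  namespace: openshift-keda
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi           # illustrative size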
Verification View the audit log file directly: Obtain the name of the keda-metrics-apiserver-* pod: oc get pod -n openshift-keda Example output NAME READY STATUS RESTARTS AGE custom-metrics-autoscaler-operator-5cb44cd75d-9v4lv 1/1 Running 0 8m20s keda-metrics-apiserver-65c7cc44fd-rrl4r 1/1 Running 0 2m55s keda-operator-776cbb6768-zpj5b 1/1 Running 0 2m55s View the log data by using a command similar to the following: USD oc logs keda-metrics-apiserver-<hash>|grep -i metadata 1 1 Optional: You can use the grep command to specify the log level to display: Metadata , Request , RequestResponse . For example: USD oc logs keda-metrics-apiserver-65c7cc44fd-rrl4r|grep -i metadata Example output ... {"kind":"Event","apiVersion":"audit.k8s.io/v1","level":"Metadata","auditID":"4c81d41b-3dab-4675-90ce-20b87ce24013","stage":"ResponseComplete","requestURI":"/healthz","verb":"get","user":{"username":"system:anonymous","groups":["system:unauthenticated"]},"sourceIPs":["10.131.0.1"],"userAgent":"kube-probe/1.26","responseStatus":{"metadata":{},"code":200},"requestReceivedTimestamp":"2023-02-16T13:00:03.554567Z","stageTimestamp":"2023-02-16T13:00:03.555032Z","annotations":{"authorization.k8s.io/decision":"allow","authorization.k8s.io/reason":""}} ... Alternatively, you can view a specific log: Use a command similar to the following to log into the keda-metrics-apiserver-* pod: USD oc rsh pod/keda-metrics-apiserver-<hash> -n openshift-keda For example: USD oc rsh pod/keda-metrics-apiserver-65c7cc44fd-rrl4r -n openshift-keda Change to the /var/audit-policy/ directory: sh-4.4USD cd /var/audit-policy/ List the available logs: sh-4.4USD ls Example output log-2023.02.17-14:50 policy.yaml View the log, as needed: sh-4.4USD cat <log_name>/<pvc_name>|grep -i <log_level> 1 1 Optional: You can use the grep command to specify the log level to display: Metadata , Request , RequestResponse . For example: sh-4.4USD cat log-2023.02.17-14:50/pvc-audit-log|grep -i Request Example output 3.8. Gathering debugging data When opening a support case, it is helpful to provide debugging information about your cluster to Red Hat Support. To help troubleshoot your issue, provide the following information: Data gathered using the must-gather tool. The unique cluster ID. You can use the must-gather tool to collect data about the Custom Metrics Autoscaler Operator and its components, including the following items: The openshift-keda namespace and its child objects. The Custom Metric Autoscaler Operator installation objects. The Custom Metric Autoscaler Operator CRD objects. 3.8.1. Gathering debugging data The following command runs the must-gather tool for the Custom Metrics Autoscaler Operator: USD oc adm must-gather --image="USD(oc get packagemanifests openshift-custom-metrics-autoscaler-operator \ -n openshift-marketplace \ -o jsonpath='{.status.channels[?(@.name=="stable")].currentCSVDesc.annotations.containerImage}')" Note The standard OpenShift Container Platform must-gather command, oc adm must-gather , does not collect Custom Metrics Autoscaler Operator data. Prerequisites You are logged in to OpenShift Container Platform as a user with the cluster-admin role. The OpenShift Container Platform CLI ( oc ) installed. Procedure Navigate to the directory where you want to store the must-gather data. Note If your cluster is using a restricted network, you must take additional steps. If your mirror registry has a trusted CA, you must first add the trusted CA to the cluster. 
For all clusters on restricted networks, you must import the default must-gather image as an image stream by running the following command. USD oc import-image is/must-gather -n openshift Perform one of the following: To get only the Custom Metrics Autoscaler Operator must-gather data, use the following command: USD oc adm must-gather --image="USD(oc get packagemanifests openshift-custom-metrics-autoscaler-operator \ -n openshift-marketplace \ -o jsonpath='{.status.channels[?(@.name=="stable")].currentCSVDesc.annotations.containerImage}')" The custom image for the must-gather command is pulled directly from the Operator package manifests, so that it works on any cluster where the Custom Metric Autoscaler Operator is available. To gather the default must-gather data in addition to the Custom Metric Autoscaler Operator information: Use the following command to obtain the Custom Metrics Autoscaler Operator image and set it as an environment variable: USD IMAGE="USD(oc get packagemanifests openshift-custom-metrics-autoscaler-operator \ -n openshift-marketplace \ -o jsonpath='{.status.channels[?(@.name=="stable")].currentCSVDesc.annotations.containerImage}')" Use the oc adm must-gather with the Custom Metrics Autoscaler Operator image: USD oc adm must-gather --image-stream=openshift/must-gather --image=USD{IMAGE} Example 3.1. Example must-gather output for the Custom Metric Autoscaler: └── openshift-keda ├── apps │ ├── daemonsets.yaml │ ├── deployments.yaml │ ├── replicasets.yaml │ └── statefulsets.yaml ├── apps.openshift.io │ └── deploymentconfigs.yaml ├── autoscaling │ └── horizontalpodautoscalers.yaml ├── batch │ ├── cronjobs.yaml │ └── jobs.yaml ├── build.openshift.io │ ├── buildconfigs.yaml │ └── builds.yaml ├── core │ ├── configmaps.yaml │ ├── endpoints.yaml │ ├── events.yaml │ ├── persistentvolumeclaims.yaml │ ├── pods.yaml │ ├── replicationcontrollers.yaml │ ├── secrets.yaml │ └── services.yaml ├── discovery.k8s.io │ └── endpointslices.yaml ├── image.openshift.io │ └── imagestreams.yaml ├── k8s.ovn.org │ ├── egressfirewalls.yaml │ └── egressqoses.yaml ├── keda.sh │ ├── kedacontrollers │ │ └── keda.yaml │ ├── scaledobjects │ │ └── example-scaledobject.yaml │ └── triggerauthentications │ └── example-triggerauthentication.yaml ├── monitoring.coreos.com │ └── servicemonitors.yaml ├── networking.k8s.io │ └── networkpolicies.yaml ├── openshift-keda.yaml ├── pods │ ├── custom-metrics-autoscaler-operator-58bd9f458-ptgwx │ │ ├── custom-metrics-autoscaler-operator │ │ │ └── custom-metrics-autoscaler-operator │ │ │ └── logs │ │ │ ├── current.log │ │ │ ├── .insecure.log │ │ │ └── .log │ │ └── custom-metrics-autoscaler-operator-58bd9f458-ptgwx.yaml │ ├── custom-metrics-autoscaler-operator-58bd9f458-thbsh │ │ └── custom-metrics-autoscaler-operator │ │ └── custom-metrics-autoscaler-operator │ │ └── logs │ ├── keda-metrics-apiserver-65c7cc44fd-6wq4g │ │ ├── keda-metrics-apiserver │ │ │ └── keda-metrics-apiserver │ │ │ └── logs │ │ │ ├── current.log │ │ │ ├── .insecure.log │ │ │ └── .log │ │ └── keda-metrics-apiserver-65c7cc44fd-6wq4g.yaml │ └── keda-operator-776cbb6768-fb6m5 │ ├── keda-operator │ │ └── keda-operator │ │ └── logs │ │ ├── current.log │ │ ├── .insecure.log │ │ └── .log │ └── keda-operator-776cbb6768-fb6m5.yaml ├── policy │ └── poddisruptionbudgets.yaml └── route.openshift.io └── routes.yaml Create a compressed file from the must-gather directory that was created in your working directory. 
For example, on a computer that uses a Linux operating system, run the following command: USD tar cvaf must-gather.tar.gz must-gather.local.5421342344627712289/ 1 1 Replace must-gather.local.5421342344627712289/ with the actual directory name. Attach the compressed file to your support case on the Red Hat Customer Portal . 3.9. Viewing Operator metrics The Custom Metrics Autoscaler Operator exposes ready-to-use metrics that it pulls from the on-cluster monitoring component. You can query the metrics by using the Prometheus Query Language (PromQL) to analyze and diagnose issues. All metrics are reset when the controller pod restarts. 3.9.1. Accessing performance metrics You can access the metrics and run queries by using the OpenShift Container Platform web console. Procedure Select the Administrator perspective in the OpenShift Container Platform web console. Select Observe Metrics . To create a custom query, add your PromQL query to the Expression field. To add multiple queries, select Add Query . 3.9.1.1. Provided Operator metrics The Custom Metrics Autoscaler Operator exposes the following metrics, which you can view by using the OpenShift Container Platform web console.
Table 3.1. Custom Metric Autoscaler Operator metrics
Metric name Description
keda_scaler_activity Whether the particular scaler is active or inactive. A value of 1 indicates the scaler is active; a value of 0 indicates the scaler is inactive.
keda_scaler_metrics_value The current value for each scaler's metric, which is used by the Horizontal Pod Autoscaler (HPA) in computing the target average.
keda_scaler_metrics_latency The latency of retrieving the current metric from each scaler.
keda_scaler_errors The number of errors that have occurred for each scaler.
keda_scaler_errors_total The total number of errors encountered for all scalers.
keda_scaled_object_errors The number of errors that have occurred for each scaled object.
keda_resource_totals The total number of Custom Metrics Autoscaler custom resources in each namespace for each custom resource type.
keda_trigger_totals The total number of triggers by trigger type.
Custom Metrics Autoscaler Admission webhook metrics The Custom Metrics Autoscaler Admission webhook also exposes the following Prometheus metrics.
Metric name Description
keda_scaled_object_validation_total The number of scaled object validations.
keda_scaled_object_validation_errors The number of validation errors.
3.10. Understanding how to add custom metrics autoscalers To add a custom metrics autoscaler, create a ScaledObject custom resource for a deployment, stateful set, or custom resource. Create a ScaledJob custom resource for a job. You can create only one scaled object for each workload that you want to scale. Also, you cannot use a scaled object and the horizontal pod autoscaler (HPA) on the same workload. 3.10.1. Adding a custom metrics autoscaler to a workload You can create a custom metrics autoscaler for a workload that is created by a Deployment , StatefulSet , or custom resource object. Prerequisites The Custom Metrics Autoscaler Operator must be installed. If you use a custom metrics autoscaler for scaling based on CPU or memory: Your cluster administrator must have properly configured cluster metrics. You can use the oc describe PodMetrics <pod-name> command to determine if metrics are configured. If metrics are configured, the output appears similar to the following, with CPU and Memory displayed under Usage.
USD oc describe PodMetrics openshift-kube-scheduler-ip-10-0-135-131.ec2.internal Example output Name: openshift-kube-scheduler-ip-10-0-135-131.ec2.internal Namespace: openshift-kube-scheduler Labels: <none> Annotations: <none> API Version: metrics.k8s.io/v1beta1 Containers: Name: wait-for-host-port Usage: Memory: 0 Name: scheduler Usage: Cpu: 8m Memory: 45440Ki Kind: PodMetrics Metadata: Creation Timestamp: 2019-05-23T18:47:56Z Self Link: /apis/metrics.k8s.io/v1beta1/namespaces/openshift-kube-scheduler/pods/openshift-kube-scheduler-ip-10-0-135-131.ec2.internal Timestamp: 2019-05-23T18:47:56Z Window: 1m0s Events: <none> The pods associated with the object you want to scale must include specified memory and CPU limits. For example: Example pod spec apiVersion: v1 kind: Pod # ... spec: containers: - name: app image: images.my-company.example/app:v4 resources: limits: memory: "128Mi" cpu: "500m" # ... Procedure Create a YAML file similar to the following. Only the name <2> , object name <4> , and object kind <5> are required: Example scaled object apiVersion: keda.sh/v1alpha1 kind: ScaledObject metadata: annotations: autoscaling.keda.sh/paused-replicas: "0" 1 name: scaledobject 2 namespace: my-namespace spec: scaleTargetRef: apiVersion: apps/v1 3 name: example-deployment 4 kind: Deployment 5 envSourceContainerName: .spec.template.spec.containers[0] 6 cooldownPeriod: 200 7 maxReplicaCount: 100 8 minReplicaCount: 0 9 metricsServer: 10 auditConfig: logFormat: "json" logOutputVolumeClaim: "persistentVolumeClaimName" policy: rules: - level: Metadata omitStages: "RequestReceived" omitManagedFields: false lifetime: maxAge: "2" maxBackup: "1" maxSize: "50" fallback: 11 failureThreshold: 3 replicas: 6 pollingInterval: 30 12 advanced: restoreToOriginalReplicaCount: false 13 horizontalPodAutoscalerConfig: name: keda-hpa-scale-down 14 behavior: 15 scaleDown: stabilizationWindowSeconds: 300 policies: - type: Percent value: 100 periodSeconds: 15 triggers: - type: prometheus 16 metadata: serverAddress: https://thanos-querier.openshift-monitoring.svc.cluster.local:9092 namespace: kedatest metricName: http_requests_total threshold: '5' query: sum(rate(http_requests_total{job="test-app"}[1m])) authModes: basic authenticationRef: 17 name: prom-triggerauthentication kind: TriggerAuthentication 1 Optional: Specifies that the Custom Metrics Autoscaler Operator is to scale the replicas to the specified value and stop autoscaling, as described in the "Pausing the custom metrics autoscaler for a workload" section. 2 Specifies a name for this custom metrics autoscaler. 3 Optional: Specifies the API version of the target resource. The default is apps/v1 . 4 Specifies the name of the object that you want to scale. 5 Specifies the kind as Deployment , StatefulSet or CustomResource . 6 Optional: Specifies the name of the container in the target resource, from which the custom metrics autoscaler gets environment variables holding secrets and so forth. The default is .spec.template.spec.containers[0] . 7 Optional. Specifies the period in seconds to wait after the last trigger is reported before scaling the deployment back to 0 if the minReplicaCount is set to 0 . The default is 300 . 8 Optional: Specifies the maximum number of replicas when scaling up. The default is 100 . 9 Optional: Specifies the minimum number of replicas when scaling down. 10 Optional: Specifies the parameters for audit logs. as described in the "Configuring audit logging" section. 
11 Optional: Specifies the number of replicas to fall back to if a scaler fails to get metrics from the source for the number of times defined by the failureThreshold parameter. For more information on fallback behavior, see the KEDA documentation . 12 Optional: Specifies the interval in seconds to check each trigger on. The default is 30 . 13 Optional: Specifies whether to scale back the target resource to the original replica count after the scaled object is deleted. The default is false , which keeps the replica count as it is when the scaled object is deleted. 14 Optional: Specifies a name for the horizontal pod autoscaler. The default is keda-hpa-{scaled-object-name} . 15 Optional: Specifies a scaling policy to use to control the rate to scale pods up or down, as described in the "Scaling policies" section. 16 Specifies the trigger to use as the basis for scaling, as described in the "Understanding the custom metrics autoscaler triggers" section. This example uses OpenShift Container Platform monitoring. 17 Optional: Specifies a trigger authentication or a cluster trigger authentication. For more information, see Understanding the custom metrics autoscaler trigger authentication in the Additional resources section. Enter TriggerAuthentication to use a trigger authentication. This is the default. Enter ClusterTriggerAuthentication to use a cluster trigger authentication. Create the custom metrics autoscaler by running the following command: USD oc create -f <filename>.yaml Verification View the command output to verify that the custom metrics autoscaler was created: USD oc get scaledobject <scaled_object_name> Example output NAME SCALETARGETKIND SCALETARGETNAME MIN MAX TRIGGERS AUTHENTICATION READY ACTIVE FALLBACK AGE scaledobject apps/v1.Deployment example-deployment 0 50 prometheus prom-triggerauthentication True True True 17s Note the following fields in the output: TRIGGERS : Indicates the trigger, or scaler, that is being used. AUTHENTICATION : Indicates the name of any trigger authentication being used. READY : Indicates whether the scaled object is ready to start scaling: If True , the scaled object is ready. If False , the scaled object is not ready because of a problem in one or more of the objects you created. ACTIVE : Indicates whether scaling is taking place: If True , scaling is taking place. If False , scaling is not taking place because there are no metrics or there is a problem in one or more of the objects you created. FALLBACK : Indicates whether the custom metrics autoscaler is able to get metrics from the source: If False , the custom metrics autoscaler is getting metrics. If True , the custom metrics autoscaler is not getting metrics because there are no metrics or there is a problem in one or more of the objects you created. 3.10.2. Adding a custom metrics autoscaler to a job You can create a custom metrics autoscaler for any Job object. Important Scaling by using a scaled job is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . Prerequisites The Custom Metrics Autoscaler Operator must be installed.
Procedure Create a YAML file similar to the following: kind: ScaledJob apiVersion: keda.sh/v1alpha1 metadata: name: scaledjob namespace: my-namespace spec: failedJobsHistoryLimit: 5 jobTargetRef: activeDeadlineSeconds: 600 1 backoffLimit: 6 2 parallelism: 1 3 completions: 1 4 template: 5 metadata: name: pi spec: containers: - name: pi image: perl command: ["perl", "-Mbignum=bpi", "-wle", "print bpi(2000)"] maxReplicaCount: 100 6 pollingInterval: 30 7 successfulJobsHistoryLimit: 5 8 failedJobsHistoryLimit: 5 9 envSourceContainerName: 10 rolloutStrategy: gradual 11 scalingStrategy: 12 strategy: "custom" customScalingQueueLengthDeduction: 1 customScalingRunningJobPercentage: "0.5" pendingPodConditions: - "Ready" - "PodScheduled" - "AnyOtherCustomPodCondition" multipleScalersCalculation : "max" triggers: - type: prometheus 13 metadata: serverAddress: https://thanos-querier.openshift-monitoring.svc.cluster.local:9092 namespace: kedatest metricName: http_requests_total threshold: '5' query: sum(rate(http_requests_total{job="test-app"}[1m])) authModes: "bearer" authenticationRef: 14 name: prom-cluster-triggerauthentication 1 Specifies the maximum duration the job can run. 2 Specifies the number of retries for a job. The default is 6 . 3 Optional: Specifies how many pod replicas a job should run in parallel; defaults to 1 . For non-parallel jobs, leave unset. When unset, the default is 1 . 4 Optional: Specifies how many successful pod completions are needed to mark a job completed. For non-parallel jobs, leave unset. When unset, the default is 1 . For parallel jobs with a fixed completion count, specify the number of completions. For parallel jobs with a work queue, leave unset. When unset the default is the value of the parallelism parameter. 5 Specifies the template for the pod the controller creates. 6 Optional: Specifies the maximum number of replicas when scaling up. The default is 100 . 7 Optional: Specifies the interval in seconds to check each trigger on. The default is 30 . 8 Optional: Specifies the number of successful finished jobs should be kept. The default is 100 . 9 Optional: Specifies how many failed jobs should be kept. The default is 100 . 10 Optional: Specifies the name of the container in the target resource, from which the custom autoscaler gets environment variables holding secrets and so forth. The default is .spec.template.spec.containers[0] . 11 Optional: Specifies whether existing jobs are terminated whenever a scaled job is being updated: default : The autoscaler terminates an existing job if its associated scaled job is updated. The autoscaler recreates the job with the latest specs. gradual : The autoscaler does not terminate an existing job if its associated scaled job is updated. The autoscaler creates new jobs with the latest specs. 12 Optional: Specifies a scaling strategy: default , custom , or accurate . The default is default . For more information, see the link in the "Additional resources" section that follows. 13 Specifies the trigger to use as the basis for scaling, as described in the "Understanding the custom metrics autoscaler triggers" section. 14 Optional: Specifies a trigger authentication or a cluster trigger authentication. For more information, see Understanding the custom metrics autoscaler trigger authentication in the Additional resources section. Enter TriggerAuthentication to use a trigger authentication. This is the default. Enter ClusterTriggerAuthentication to use a cluster trigger authentication. 
Create the custom metrics autoscaler by running the following command: USD oc create -f <filename>.yaml Verification View the command output to verify that the custom metrics autoscaler was created: USD oc get scaledjob <scaled_job_name> Example output NAME MAX TRIGGERS AUTHENTICATION READY ACTIVE AGE scaledjob 100 prometheus prom-triggerauthentication True True 8s Note the following fields in the output: TRIGGERS : Indicates the trigger, or scaler, that is being used. AUTHENTICATION : Indicates the name of any trigger authentication being used. READY : Indicates whether the scaled object is ready to start scaling: If True , the scaled object is ready. If False , the scaled object is not ready because of a problem in one or more of the objects you created. ACTIVE : Indicates whether scaling is taking place: If True , scaling is taking place. If False , scaling is not taking place because there are no metrics or there is a problem in one or more of the objects you created. 3.10.3. Additional resources Understanding custom metrics autoscaler trigger authentications 3.11. Removing the Custom Metrics Autoscaler Operator You can remove the custom metrics autoscaler from your OpenShift Container Platform cluster. After removing the Custom Metrics Autoscaler Operator, remove other components associated with the Operator to avoid potential issues. Note Delete the KedaController custom resource (CR) first. If you do not delete the KedaController CR, OpenShift Container Platform can hang when you delete the openshift-keda project. If you delete the Custom Metrics Autoscaler Operator before deleting the CR, you are not able to delete the CR. 3.11.1. Uninstalling the Custom Metrics Autoscaler Operator Use the following procedure to remove the custom metrics autoscaler from your OpenShift Container Platform cluster. Prerequisites The Custom Metrics Autoscaler Operator must be installed. Procedure In the OpenShift Container Platform web console, click Operators Installed Operators . Switch to the openshift-keda project. Remove the KedaController custom resource. Find the CustomMetricsAutoscaler Operator and click the KedaController tab. Find the custom resource, and then click Delete KedaController . Click Uninstall . Remove the Custom Metrics Autoscaler Operator: Click Operators Installed Operators . Find the CustomMetricsAutoscaler Operator and click the Options menu and select Uninstall Operator . Click Uninstall . Optional: Use the OpenShift CLI to remove the custom metrics autoscaler components: Delete the custom metrics autoscaler CRDs: clustertriggerauthentications.keda.sh kedacontrollers.keda.sh scaledjobs.keda.sh scaledobjects.keda.sh triggerauthentications.keda.sh USD oc delete crd clustertriggerauthentications.keda.sh kedacontrollers.keda.sh scaledjobs.keda.sh scaledobjects.keda.sh triggerauthentications.keda.sh Deleting the CRDs removes the associated roles, cluster roles, and role bindings. However, there might be a few cluster roles that must be manually deleted. List any custom metrics autoscaler cluster roles: USD oc get clusterrole | grep keda.sh Delete the listed custom metrics autoscaler cluster roles. For example: USD oc delete clusterrole.keda.sh-v1alpha1-admin List any custom metrics autoscaler cluster role bindings: USD oc get clusterrolebinding | grep keda.sh Delete the listed custom metrics autoscaler cluster role bindings. 
For example: USD oc delete clusterrolebinding.keda.sh-v1alpha1-admin Delete the custom metrics autoscaler project: USD oc delete project openshift-keda Delete the Cluster Metric Autoscaler Operator: USD oc delete operator/openshift-custom-metrics-autoscaler-operator.openshift-keda
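As a final check, you can confirm that no Custom Metrics Autoscaler components remain after the uninstall. The following is a minimal sketch rather than part of the official procedure; it assumes the default openshift-keda project and the keda.sh API group used throughout this chapter:
# List any remaining keda.sh CRDs, cluster roles, or cluster role bindings.
oc get crd -o name | grep keda.sh || echo "no keda.sh CRDs remain"
oc get clusterrole,clusterrolebinding -o name | grep keda.sh || echo "no keda.sh cluster roles or bindings remain"
# Confirm that the openshift-keda project has been removed.
oc get project openshift-keda 2>/dev/null || echo "project openshift-keda deleted"
If any of these commands still return keda.sh objects, delete them as described in the steps above.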
|
[
"oc delete crd scaledobjects.keda.k8s.io",
"oc delete crd triggerauthentications.keda.k8s.io",
"oc create configmap -n openshift-keda thanos-cert --from-file=ca-cert.pem",
"oc get all -n openshift-keda",
"NAME READY STATUS RESTARTS AGE pod/custom-metrics-autoscaler-operator-5fd8d9ffd8-xt4xp 1/1 Running 0 18m NAME READY UP-TO-DATE AVAILABLE AGE deployment.apps/custom-metrics-autoscaler-operator 1/1 1 1 18m NAME DESIRED CURRENT READY AGE replicaset.apps/custom-metrics-autoscaler-operator-5fd8d9ffd8 1 1 1 18m",
"kind: KedaController apiVersion: keda.sh/v1alpha1 metadata: name: keda namespace: openshift-keda spec: watchNamespace: '' 1 operator: logLevel: info 2 logEncoder: console 3 caConfigMaps: 4 - thanos-cert - kafka-cert metricsServer: logLevel: '0' 5 auditConfig: 6 logFormat: \"json\" logOutputVolumeClaim: \"persistentVolumeClaimName\" policy: rules: - level: Metadata omitStages: [\"RequestReceived\"] omitManagedFields: false lifetime: maxAge: \"2\" maxBackup: \"1\" maxSize: \"50\" serviceAccount: {}",
"apiVersion: keda.sh/v1alpha1 kind: ScaledObject metadata: name: prom-scaledobject namespace: my-namespace spec: triggers: - type: prometheus 1 metadata: serverAddress: https://thanos-querier.openshift-monitoring.svc.cluster.local:9092 2 namespace: kedatest 3 metricName: http_requests_total 4 threshold: '5' 5 query: sum(rate(http_requests_total{job=\"test-app\"}[1m])) 6 authModes: basic 7 cortexOrgID: my-org 8 ignoreNullValues: \"false\" 9 unsafeSsl: \"false\" 10",
"oc project my-project",
"oc create serviceaccount <service_account>",
"oc describe serviceaccount <service_account>",
"Name: thanos Namespace: my-project Labels: <none> Annotations: <none> Image pull secrets: thanos-dockercfg-nnwgj Mountable secrets: thanos-dockercfg-nnwgj Tokens: thanos-token-9g4n5 1 Events: <none>",
"apiVersion: keda.sh/v1alpha1 kind: TriggerAuthentication metadata: name: keda-trigger-auth-prometheus spec: secretTargetRef: 1 - parameter: bearerToken 2 name: thanos-token-9g4n5 3 key: token 4 - parameter: ca name: thanos-token-9g4n5 key: ca.crt",
"oc create -f <file-name>.yaml",
"apiVersion: rbac.authorization.k8s.io/v1 kind: Role metadata: name: thanos-metrics-reader rules: - apiGroups: - \"\" resources: - pods verbs: - get - apiGroups: - metrics.k8s.io resources: - pods - nodes verbs: - get - list - watch",
"oc create -f <file-name>.yaml",
"apiVersion: rbac.authorization.k8s.io/v1 kind: RoleBinding metadata: name: thanos-metrics-reader 1 namespace: my-project 2 roleRef: apiGroup: rbac.authorization.k8s.io kind: Role name: thanos-metrics-reader subjects: - kind: ServiceAccount name: thanos 3 namespace: my-project 4",
"oc create -f <file-name>.yaml",
"apiVersion: keda.sh/v1alpha1 kind: ScaledObject metadata: name: cpu-scaledobject namespace: my-namespace spec: triggers: - type: cpu 1 metricType: Utilization 2 metadata: value: '60' 3 minReplicaCount: 1 4",
"apiVersion: keda.sh/v1alpha1 kind: ScaledObject metadata: name: memory-scaledobject namespace: my-namespace spec: triggers: - type: memory 1 metricType: Utilization 2 metadata: value: '60' 3 containerName: api 4",
"apiVersion: keda.sh/v1alpha1 kind: ScaledObject metadata: name: kafka-scaledobject namespace: my-namespace spec: triggers: - type: kafka 1 metadata: topic: my-topic 2 bootstrapServers: my-cluster-kafka-bootstrap.openshift-operators.svc:9092 3 consumerGroup: my-group 4 lagThreshold: '10' 5 activationLagThreshold: '5' 6 offsetResetPolicy: latest 7 allowIdleConsumers: true 8 scaleToZeroOnInvalidOffset: false 9 excludePersistentLag: false 10 version: '1.0.0' 11 partitionLimitation: '1,2,10-20,31' 12 tls: enable 13",
"apiVersion: keda.sh/v1alpha1 kind: ScaledObject metadata: name: cron-scaledobject namespace: default spec: scaleTargetRef: name: my-deployment minReplicaCount: 0 1 maxReplicaCount: 100 2 cooldownPeriod: 300 triggers: - type: cron 3 metadata: timezone: Asia/Kolkata 4 start: \"0 6 * * *\" 5 end: \"30 18 * * *\" 6 desiredReplicas: \"100\" 7",
"apiVersion: v1 kind: Secret metadata: name: my-basic-secret namespace: default data: username: \"dXNlcm5hbWU=\" 1 password: \"cGFzc3dvcmQ=\"",
"kind: TriggerAuthentication apiVersion: keda.sh/v1alpha1 metadata: name: secret-triggerauthentication namespace: my-namespace 1 spec: secretTargetRef: 2 - parameter: username 3 name: my-basic-secret 4 key: username 5 - parameter: password name: my-basic-secret key: password",
"kind: ClusterTriggerAuthentication apiVersion: keda.sh/v1alpha1 metadata: 1 name: secret-cluster-triggerauthentication spec: secretTargetRef: 2 - parameter: username 3 name: my-basic-secret 4 key: username 5 - parameter: password name: my-basic-secret key: password",
"apiVersion: v1 kind: Secret metadata: name: my-secret namespace: my-namespace data: ca-cert.pem: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0... 1 client-cert.pem: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0... 2 client-key.pem: LS0tLS1CRUdJTiBQUklWQVRFIEtFWS0t",
"kind: TriggerAuthentication apiVersion: keda.sh/v1alpha1 metadata: name: secret-triggerauthentication namespace: my-namespace 1 spec: secretTargetRef: 2 - parameter: key 3 name: my-secret 4 key: client-key.pem 5 - parameter: ca 6 name: my-secret 7 key: ca-cert.pem 8",
"apiVersion: v1 kind: Secret metadata: name: my-secret namespace: my-namespace data: bearerToken: \"eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXV\" 1",
"kind: TriggerAuthentication apiVersion: keda.sh/v1alpha1 metadata: name: token-triggerauthentication namespace: my-namespace 1 spec: secretTargetRef: 2 - parameter: bearerToken 3 name: my-secret 4 key: bearerToken 5",
"kind: TriggerAuthentication apiVersion: keda.sh/v1alpha1 metadata: name: env-var-triggerauthentication namespace: my-namespace 1 spec: env: 2 - parameter: access_key 3 name: ACCESS_KEY 4 containerName: my-container 5",
"kind: TriggerAuthentication apiVersion: keda.sh/v1alpha1 metadata: name: pod-id-triggerauthentication namespace: my-namespace 1 spec: podIdentity: 2 provider: aws-eks 3",
"apiVersion: v1 kind: Secret metadata: name: my-secret data: user-name: <base64_USER_NAME> password: <base64_USER_PASSWORD>",
"kind: TriggerAuthentication apiVersion: keda.sh/v1alpha1 metadata: name: prom-triggerauthentication namespace: my-namespace spec: secretTargetRef: - parameter: user-name name: my-secret key: USER_NAME - parameter: password name: my-secret key: USER_PASSWORD",
"oc create -f <filename>.yaml",
"apiVersion: keda.sh/v1alpha1 kind: ScaledObject metadata: name: scaledobject namespace: my-namespace spec: scaleTargetRef: name: example-deployment maxReplicaCount: 100 minReplicaCount: 0 pollingInterval: 30 triggers: - type: prometheus metadata: serverAddress: https://thanos-querier.openshift-monitoring.svc.cluster.local:9092 namespace: kedatest # replace <NAMESPACE> metricName: http_requests_total threshold: '5' query: sum(rate(http_requests_total{job=\"test-app\"}[1m])) authModes: \"basic\" authenticationRef: name: prom-triggerauthentication 1 kind: TriggerAuthentication 2",
"apiVersion: keda.sh/v1alpha1 kind: ScaledObject metadata: name: scaledobject namespace: my-namespace spec: scaleTargetRef: name: example-deployment maxReplicaCount: 100 minReplicaCount: 0 pollingInterval: 30 triggers: - type: prometheus metadata: serverAddress: https://thanos-querier.openshift-monitoring.svc.cluster.local:9092 namespace: kedatest # replace <NAMESPACE> metricName: http_requests_total threshold: '5' query: sum(rate(http_requests_total{job=\"test-app\"}[1m])) authModes: \"basic\" authenticationRef: name: prom-cluster-triggerauthentication 1 kind: ClusterTriggerAuthentication 2",
"oc apply -f <filename>",
"apiVersion: keda.sh/v1alpha1 kind: ScaledObject metadata: annotations: autoscaling.keda.sh/paused-replicas: \"4\"",
"oc edit ScaledObject scaledobject",
"apiVersion: keda.sh/v1alpha1 kind: ScaledObject metadata: annotations: autoscaling.keda.sh/paused-replicas: \"4\" 1 creationTimestamp: \"2023-02-08T14:41:01Z\" generation: 1 name: scaledobject namespace: my-project resourceVersion: '65729' uid: f5aec682-acdf-4232-a783-58b5b82f5dd0",
"apiVersion: keda.sh/v1alpha1 kind: ScaledObject metadata: annotations: autoscaling.keda.sh/paused-replicas: \"4\"",
"oc edit ScaledObject scaledobject",
"apiVersion: keda.sh/v1alpha1 kind: ScaledObject metadata: annotations: autoscaling.keda.sh/paused-replicas: \"4\" 1 creationTimestamp: \"2023-02-08T14:41:01Z\" generation: 1 name: scaledobject namespace: my-project resourceVersion: '65729' uid: f5aec682-acdf-4232-a783-58b5b82f5dd0",
"kind: KedaController apiVersion: keda.sh/v1alpha1 metadata: name: keda namespace: openshift-keda spec: metricsServer: auditConfig: logFormat: \"json\" 1 logOutputVolumeClaim: \"pvc-audit-log\" 2 policy: rules: 3 - level: Metadata omitStages: \"RequestReceived\" 4 omitManagedFields: false 5 lifetime: 6 maxAge: \"2\" maxBackup: \"1\" maxSize: \"50\"",
"get pod -n openshift-keda",
"NAME READY STATUS RESTARTS AGE custom-metrics-autoscaler-operator-5cb44cd75d-9v4lv 1/1 Running 0 8m20s keda-metrics-apiserver-65c7cc44fd-rrl4r 1/1 Running 0 2m55s keda-operator-776cbb6768-zpj5b 1/1 Running 0 2m55s",
"oc logs keda-metrics-apiserver-<hash>|grep -i metadata 1",
"oc logs keda-metrics-apiserver-65c7cc44fd-rrl4r|grep -i metadata",
"{\"kind\":\"Event\",\"apiVersion\":\"audit.k8s.io/v1\",\"level\":\"Metadata\",\"auditID\":\"4c81d41b-3dab-4675-90ce-20b87ce24013\",\"stage\":\"ResponseComplete\",\"requestURI\":\"/healthz\",\"verb\":\"get\",\"user\":{\"username\":\"system:anonymous\",\"groups\":[\"system:unauthenticated\"]},\"sourceIPs\":[\"10.131.0.1\"],\"userAgent\":\"kube-probe/1.26\",\"responseStatus\":{\"metadata\":{},\"code\":200},\"requestReceivedTimestamp\":\"2023-02-16T13:00:03.554567Z\",\"stageTimestamp\":\"2023-02-16T13:00:03.555032Z\",\"annotations\":{\"authorization.k8s.io/decision\":\"allow\",\"authorization.k8s.io/reason\":\"\"}}",
"oc rsh pod/keda-metrics-apiserver-<hash> -n openshift-keda",
"oc rsh pod/keda-metrics-apiserver-65c7cc44fd-rrl4r -n openshift-keda",
"sh-4.4USD cd /var/audit-policy/",
"sh-4.4USD ls",
"log-2023.02.17-14:50 policy.yaml",
"sh-4.4USD cat <log_name>/<pvc_name>|grep -i <log_level> 1",
"sh-4.4USD cat log-2023.02.17-14:50/pvc-audit-log|grep -i Request",
"{\"kind\":\"Event\",\"apiVersion\":\"audit.k8s.io/v1\",\"level\":\"Request\",\"auditID\":\"63e7f68c-04ec-4f4d-8749-bf1656572a41\",\"stage\":\"ResponseComplete\",\"requestURI\":\"/openapi/v2\",\"verb\":\"get\",\"user\":{\"username\":\"system:aggregator\",\"groups\":[\"system:authenticated\"]},\"sourceIPs\":[\"10.128.0.1\"],\"responseStatus\":{\"metadata\":{},\"code\":304},\"requestReceivedTimestamp\":\"2023-02-17T13:12:55.035478Z\",\"stageTimestamp\":\"2023-02-17T13:12:55.038346Z\",\"annotations\":{\"authorization.k8s.io/decision\":\"allow\",\"authorization.k8s.io/reason\":\"RBAC: allowed by ClusterRoleBinding \\\"system:discovery\\\" of ClusterRole \\\"system:discovery\\\" to Group \\\"system:authenticated\\\"\"}}",
"oc adm must-gather --image=\"USD(oc get packagemanifests openshift-custom-metrics-autoscaler-operator -n openshift-marketplace -o jsonpath='{.status.channels[?(@.name==\"stable\")].currentCSVDesc.annotations.containerImage}')\"",
"oc import-image is/must-gather -n openshift",
"oc adm must-gather --image=\"USD(oc get packagemanifests openshift-custom-metrics-autoscaler-operator -n openshift-marketplace -o jsonpath='{.status.channels[?(@.name==\"stable\")].currentCSVDesc.annotations.containerImage}')\"",
"IMAGE=\"USD(oc get packagemanifests openshift-custom-metrics-autoscaler-operator -n openshift-marketplace -o jsonpath='{.status.channels[?(@.name==\"stable\")].currentCSVDesc.annotations.containerImage}')\"",
"oc adm must-gather --image-stream=openshift/must-gather --image=USD{IMAGE}",
"└── openshift-keda ├── apps │ ├── daemonsets.yaml │ ├── deployments.yaml │ ├── replicasets.yaml │ └── statefulsets.yaml ├── apps.openshift.io │ └── deploymentconfigs.yaml ├── autoscaling │ └── horizontalpodautoscalers.yaml ├── batch │ ├── cronjobs.yaml │ └── jobs.yaml ├── build.openshift.io │ ├── buildconfigs.yaml │ └── builds.yaml ├── core │ ├── configmaps.yaml │ ├── endpoints.yaml │ ├── events.yaml │ ├── persistentvolumeclaims.yaml │ ├── pods.yaml │ ├── replicationcontrollers.yaml │ ├── secrets.yaml │ └── services.yaml ├── discovery.k8s.io │ └── endpointslices.yaml ├── image.openshift.io │ └── imagestreams.yaml ├── k8s.ovn.org │ ├── egressfirewalls.yaml │ └── egressqoses.yaml ├── keda.sh │ ├── kedacontrollers │ │ └── keda.yaml │ ├── scaledobjects │ │ └── example-scaledobject.yaml │ └── triggerauthentications │ └── example-triggerauthentication.yaml ├── monitoring.coreos.com │ └── servicemonitors.yaml ├── networking.k8s.io │ └── networkpolicies.yaml ├── openshift-keda.yaml ├── pods │ ├── custom-metrics-autoscaler-operator-58bd9f458-ptgwx │ │ ├── custom-metrics-autoscaler-operator │ │ │ └── custom-metrics-autoscaler-operator │ │ │ └── logs │ │ │ ├── current.log │ │ │ ├── previous.insecure.log │ │ │ └── previous.log │ │ └── custom-metrics-autoscaler-operator-58bd9f458-ptgwx.yaml │ ├── custom-metrics-autoscaler-operator-58bd9f458-thbsh │ │ └── custom-metrics-autoscaler-operator │ │ └── custom-metrics-autoscaler-operator │ │ └── logs │ ├── keda-metrics-apiserver-65c7cc44fd-6wq4g │ │ ├── keda-metrics-apiserver │ │ │ └── keda-metrics-apiserver │ │ │ └── logs │ │ │ ├── current.log │ │ │ ├── previous.insecure.log │ │ │ └── previous.log │ │ └── keda-metrics-apiserver-65c7cc44fd-6wq4g.yaml │ └── keda-operator-776cbb6768-fb6m5 │ ├── keda-operator │ │ └── keda-operator │ │ └── logs │ │ ├── current.log │ │ ├── previous.insecure.log │ │ └── previous.log │ └── keda-operator-776cbb6768-fb6m5.yaml ├── policy │ └── poddisruptionbudgets.yaml └── route.openshift.io └── routes.yaml",
"tar cvaf must-gather.tar.gz must-gather.local.5421342344627712289/ 1",
"oc describe PodMetrics openshift-kube-scheduler-ip-10-0-135-131.ec2.internal",
"Name: openshift-kube-scheduler-ip-10-0-135-131.ec2.internal Namespace: openshift-kube-scheduler Labels: <none> Annotations: <none> API Version: metrics.k8s.io/v1beta1 Containers: Name: wait-for-host-port Usage: Memory: 0 Name: scheduler Usage: Cpu: 8m Memory: 45440Ki Kind: PodMetrics Metadata: Creation Timestamp: 2019-05-23T18:47:56Z Self Link: /apis/metrics.k8s.io/v1beta1/namespaces/openshift-kube-scheduler/pods/openshift-kube-scheduler-ip-10-0-135-131.ec2.internal Timestamp: 2019-05-23T18:47:56Z Window: 1m0s Events: <none>",
"apiVersion: v1 kind: Pod spec: containers: - name: app image: images.my-company.example/app:v4 resources: limits: memory: \"128Mi\" cpu: \"500m\"",
"apiVersion: keda.sh/v1alpha1 kind: ScaledObject metadata: annotations: autoscaling.keda.sh/paused-replicas: \"0\" 1 name: scaledobject 2 namespace: my-namespace spec: scaleTargetRef: apiVersion: apps/v1 3 name: example-deployment 4 kind: Deployment 5 envSourceContainerName: .spec.template.spec.containers[0] 6 cooldownPeriod: 200 7 maxReplicaCount: 100 8 minReplicaCount: 0 9 metricsServer: 10 auditConfig: logFormat: \"json\" logOutputVolumeClaim: \"persistentVolumeClaimName\" policy: rules: - level: Metadata omitStages: \"RequestReceived\" omitManagedFields: false lifetime: maxAge: \"2\" maxBackup: \"1\" maxSize: \"50\" fallback: 11 failureThreshold: 3 replicas: 6 pollingInterval: 30 12 advanced: restoreToOriginalReplicaCount: false 13 horizontalPodAutoscalerConfig: name: keda-hpa-scale-down 14 behavior: 15 scaleDown: stabilizationWindowSeconds: 300 policies: - type: Percent value: 100 periodSeconds: 15 triggers: - type: prometheus 16 metadata: serverAddress: https://thanos-querier.openshift-monitoring.svc.cluster.local:9092 namespace: kedatest metricName: http_requests_total threshold: '5' query: sum(rate(http_requests_total{job=\"test-app\"}[1m])) authModes: basic authenticationRef: 17 name: prom-triggerauthentication kind: TriggerAuthentication",
"oc create -f <filename>.yaml",
"oc get scaledobject <scaled_object_name>",
"NAME SCALETARGETKIND SCALETARGETNAME MIN MAX TRIGGERS AUTHENTICATION READY ACTIVE FALLBACK AGE scaledobject apps/v1.Deployment example-deployment 0 50 prometheus prom-triggerauthentication True True True 17s",
"kind: ScaledJob apiVersion: keda.sh/v1alpha1 metadata: name: scaledjob namespace: my-namespace spec: failedJobsHistoryLimit: 5 jobTargetRef: activeDeadlineSeconds: 600 1 backoffLimit: 6 2 parallelism: 1 3 completions: 1 4 template: 5 metadata: name: pi spec: containers: - name: pi image: perl command: [\"perl\", \"-Mbignum=bpi\", \"-wle\", \"print bpi(2000)\"] maxReplicaCount: 100 6 pollingInterval: 30 7 successfulJobsHistoryLimit: 5 8 failedJobsHistoryLimit: 5 9 envSourceContainerName: 10 rolloutStrategy: gradual 11 scalingStrategy: 12 strategy: \"custom\" customScalingQueueLengthDeduction: 1 customScalingRunningJobPercentage: \"0.5\" pendingPodConditions: - \"Ready\" - \"PodScheduled\" - \"AnyOtherCustomPodCondition\" multipleScalersCalculation : \"max\" triggers: - type: prometheus 13 metadata: serverAddress: https://thanos-querier.openshift-monitoring.svc.cluster.local:9092 namespace: kedatest metricName: http_requests_total threshold: '5' query: sum(rate(http_requests_total{job=\"test-app\"}[1m])) authModes: \"bearer\" authenticationRef: 14 name: prom-cluster-triggerauthentication",
"oc create -f <filename>.yaml",
"oc get scaledjob <scaled_job_name>",
"NAME MAX TRIGGERS AUTHENTICATION READY ACTIVE AGE scaledjob 100 prometheus prom-triggerauthentication True True 8s",
"oc delete crd clustertriggerauthentications.keda.sh kedacontrollers.keda.sh scaledjobs.keda.sh scaledobjects.keda.sh triggerauthentications.keda.sh",
"oc get clusterrole | grep keda.sh",
"oc delete clusterrole.keda.sh-v1alpha1-admin",
"oc get clusterrolebinding | grep keda.sh",
"oc delete clusterrolebinding.keda.sh-v1alpha1-admin",
"oc delete project openshift-keda",
"oc delete operator/openshift-custom-metrics-autoscaler-operator.openshift-keda"
] |
https://docs.redhat.com/en/documentation/openshift_container_platform/4.13/html/nodes/automatically-scaling-pods-with-the-custom-metrics-autoscaler-operator
|
Chapter 4. Using SSL to protect connections to Red Hat Quay
|
Chapter 4. Using SSL to protect connections to Red Hat Quay 4.1. Using SSL/TLS To configure Red Hat Quay with a self-signed certificate, you must create a Certificate Authority (CA) and a primary key file named ssl.cert and ssl.key . 4.2. Creating a Certificate Authority Use the following procedure to set up your own CA and use it to issue a server certificate for your domain. This allows you to secure communications with SSL/TLS using your own certificates. Procedure Generate the root CA key by entering the following command: USD openssl genrsa -out rootCA.key 2048 Generate the root CA certificate by entering the following command: USD openssl req -x509 -new -nodes -key rootCA.key -sha256 -days 1024 -out rootCA.pem Enter the information that will be incorporated into your certificate request, including the server hostname, for example: Country Name (2 letter code) [XX]:IE State or Province Name (full name) []:GALWAY Locality Name (eg, city) [Default City]:GALWAY Organization Name (eg, company) [Default Company Ltd]:QUAY Organizational Unit Name (eg, section) []:DOCS Common Name (eg, your name or your server's hostname) []:quay-server.example.com Generate the server key by entering the following command: USD openssl genrsa -out ssl.key 2048 Generate a signing request by entering the following command: USD openssl req -new -key ssl.key -out ssl.csr Enter the information that will be incorporated into your certificate request, including the server hostname, for example: Country Name (2 letter code) [XX]:IE State or Province Name (full name) []:GALWAY Locality Name (eg, city) [Default City]:GALWAY Organization Name (eg, company) [Default Company Ltd]:QUAY Organizational Unit Name (eg, section) []:DOCS Common Name (eg, your name or your server's hostname) []:quay-server.example.com Email Address []: Create a configuration file openssl.cnf , specifying the server hostname, for example: Example openssl.cnf file [req] req_extensions = v3_req distinguished_name = req_distinguished_name [req_distinguished_name] [ v3_req ] basicConstraints = CA:FALSE keyUsage = nonRepudiation, digitalSignature, keyEncipherment subjectAltName = @alt_names [alt_names] DNS.1 = <quay-server.example.com> IP.1 = 192.168.1.112 Use the configuration file to generate the certificate ssl.cert : USD openssl x509 -req -in ssl.csr -CA rootCA.pem -CAkey rootCA.key -CAcreateserial -out ssl.cert -days 356 -extensions v3_req -extfile openssl.cnf Confirm your created certificates and files by entering the following command: USD ls /path/to/certificates Example output rootCA.key ssl-bundle.cert ssl.key custom-ssl-config-bundle-secret.yaml rootCA.pem ssl.cert openssl.cnf rootCA.srl ssl.csr 4.3. Configuring custom SSL/TLS certificates by using the command line interface SSL/TLS must be configured by using the command-line interface (CLI) and updating your config.yaml file manually. Prerequisites You have created a certificate authority and signed the certificate. Procedure Copy the certificate file and primary key file to your configuration directory, ensuring they are named ssl.cert and ssl.key respectively: cp ~/ssl.cert ~/ssl.key /path/to/configuration_directory Navigate to the configuration directory by entering the following command: USD cd /path/to/configuration_directory Edit the config.yaml file and specify that you want Red Hat Quay to handle SSL/TLS: Example config.yaml file # ... SERVER_HOSTNAME: <quay-server.example.com> ... PREFERRED_URL_SCHEME: https # ... 
Optional: Append the contents of the rootCA.pem file to the end of the ssl.cert file by entering the following command: USD cat rootCA.pem >> ssl.cert Stop the Quay container by entering the following command: USD sudo podman stop <quay_container_name> Restart the registry by entering the following command: 4.4. Configuring SSL/TLS using the Red Hat Quay UI Use the following procedure to configure SSL/TLS using the Red Hat Quay UI. To configure SSL/TLS using the command line interface, see "Configuring SSL/TLS using the command line interface". Prerequisites You have created a certificate authority and signed a certificate. Procedure Start the Quay container in configuration mode: In the Server Configuration section, select Red Hat Quay handles TLS for SSL/TLS. Upload the certificate file and private key file created earlier, ensuring that the Server Hostname matches the value used when the certificates were created. Validate and download the updated configuration. Stop the Quay container and then restart the registry by entering the following command: 4.5. Testing the SSL/TLS configuration using the CLI Your SSL/TLS configuration can be tested by using the command-line interface (CLI). Use the following procedure to test your SSL/TLS configuration. Use the following procedure to test your SSL/TLS configuration using the CLI. Procedure Enter the following command to attempt to log in to the Red Hat Quay registry with SSL/TLS enabled: USD sudo podman login quay-server.example.com Example output Error: error authenticating creds for "quay-server.example.com": error pinging docker registry quay-server.example.com: Get "https://quay-server.example.com/v2/": x509: certificate signed by unknown authority Because Podman does not trust self-signed certificates, you must use the --tls-verify=false option: USD sudo podman login --tls-verify=false quay-server.example.com Example output Login Succeeded! In a subsequent section, you will configure Podman to trust the root Certificate Authority. 4.6. Testing the SSL/TLS configuration using a browser Use the following procedure to test your SSL/TLS configuration using a browser. Procedure Navigate to your Red Hat Quay registry endpoint, for example, https://quay-server.example.com . If configured correctly, the browser warns of the potential risk: Proceed to the log in screen. The browser notifies you that the connection is not secure. For example: In the following section, you will configure Podman to trust the root Certificate Authority. 4.7. Configuring Podman to trust the Certificate Authority Podman uses two paths to locate the Certificate Authority (CA) file: /etc/containers/certs.d/ and /etc/docker/certs.d/ . Use the following procedure to configure Podman to trust the CA. Procedure Copy the root CA file to one of /etc/containers/certs.d/ or /etc/docker/certs.d/ . Use the exact path determined by the server hostname, and name the file ca.crt : USD sudo cp rootCA.pem /etc/containers/certs.d/quay-server.example.com/ca.crt Verify that you no longer need to use the --tls-verify=false option when logging in to your Red Hat Quay registry: USD sudo podman login quay-server.example.com Example output Login Succeeded! 4.8. Configuring the system to trust the certificate authority Use the following procedure to configure your system to trust the certificate authority. 
Procedure Enter the following command to copy the rootCA.pem file to the consolidated system-wide trust store: USD sudo cp rootCA.pem /etc/pki/ca-trust/source/anchors/ Enter the following command to update the system-wide trust store configuration: USD sudo update-ca-trust extract Optional. You can use the trust list command to ensure that the Quay server has been configured: USD trust list | grep quay label: quay-server.example.com Now, when you browse to the registry at https://quay-server.example.com , the lock icon shows that the connection is secure: To remove the rootCA.pem file from system-wide trust, delete the file and update the configuration: USD sudo rm /etc/pki/ca-trust/source/anchors/rootCA.pem USD sudo update-ca-trust extract USD trust list | grep quay More information can be found in the RHEL 9 documentation in the chapter Using shared system certificates .
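Outside the documented procedure, you can also verify the certificate chain directly with openssl. This is a quick sanity check that reuses the example hostname and file names from this chapter; substitute your own values:
# Confirm that ssl.cert was signed by the local Certificate Authority.
openssl verify -CAfile rootCA.pem ssl.cert
# Inspect the certificate that the running registry presents, including
# its subject, issuer, and validity dates.
openssl s_client -connect quay-server.example.com:443 -servername quay-server.example.com < /dev/null 2>/dev/null | openssl x509 -noout -subject -issuer -dates
The first command should print ssl.cert: OK, and the second should show the Common Name and validity period that were entered when the certificate was created.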
|
[
"openssl genrsa -out rootCA.key 2048",
"openssl req -x509 -new -nodes -key rootCA.key -sha256 -days 1024 -out rootCA.pem",
"Country Name (2 letter code) [XX]:IE State or Province Name (full name) []:GALWAY Locality Name (eg, city) [Default City]:GALWAY Organization Name (eg, company) [Default Company Ltd]:QUAY Organizational Unit Name (eg, section) []:DOCS Common Name (eg, your name or your server's hostname) []:quay-server.example.com",
"openssl genrsa -out ssl.key 2048",
"openssl req -new -key ssl.key -out ssl.csr",
"Country Name (2 letter code) [XX]:IE State or Province Name (full name) []:GALWAY Locality Name (eg, city) [Default City]:GALWAY Organization Name (eg, company) [Default Company Ltd]:QUAY Organizational Unit Name (eg, section) []:DOCS Common Name (eg, your name or your server's hostname) []:quay-server.example.com Email Address []:",
"[req] req_extensions = v3_req distinguished_name = req_distinguished_name [req_distinguished_name] [ v3_req ] basicConstraints = CA:FALSE keyUsage = nonRepudiation, digitalSignature, keyEncipherment subjectAltName = @alt_names [alt_names] DNS.1 = <quay-server.example.com> IP.1 = 192.168.1.112",
"openssl x509 -req -in ssl.csr -CA rootCA.pem -CAkey rootCA.key -CAcreateserial -out ssl.cert -days 356 -extensions v3_req -extfile openssl.cnf",
"ls /path/to/certificates",
"rootCA.key ssl-bundle.cert ssl.key custom-ssl-config-bundle-secret.yaml rootCA.pem ssl.cert openssl.cnf rootCA.srl ssl.csr",
"cp ~/ssl.cert ~/ssl.key /path/to/configuration_directory",
"cd /path/to/configuration_directory",
"SERVER_HOSTNAME: <quay-server.example.com> PREFERRED_URL_SCHEME: https",
"cat rootCA.pem >> ssl.cert",
"sudo podman stop <quay_container_name>",
"sudo podman run -d --rm -p 80:8080 -p 443:8443 --name=quay -v USDQUAY/config:/conf/stack:Z -v USDQUAY/storage:/datastorage:Z registry.redhat.io/quay/quay-rhel8:v3.13.3",
"sudo podman run --rm -it --name quay_config -p 80:8080 -p 443:8443 registry.redhat.io/quay/quay-rhel8:v3.13.3 config secret",
"sudo podman rm -f quay sudo podman run -d --rm -p 80:8080 -p 443:8443 --name=quay -v USDQUAY/config:/conf/stack:Z -v USDQUAY/storage:/datastorage:Z registry.redhat.io/quay/quay-rhel8:v3.13.3",
"sudo podman login quay-server.example.com",
"Error: error authenticating creds for \"quay-server.example.com\": error pinging docker registry quay-server.example.com: Get \"https://quay-server.example.com/v2/\": x509: certificate signed by unknown authority",
"sudo podman login --tls-verify=false quay-server.example.com",
"Login Succeeded!",
"sudo cp rootCA.pem /etc/containers/certs.d/quay-server.example.com/ca.crt",
"sudo podman login quay-server.example.com",
"Login Succeeded!",
"sudo cp rootCA.pem /etc/pki/ca-trust/source/anchors/",
"sudo update-ca-trust extract",
"trust list | grep quay label: quay-server.example.com",
"sudo rm /etc/pki/ca-trust/source/anchors/rootCA.pem",
"sudo update-ca-trust extract",
"trust list | grep quay"
] |
https://docs.redhat.com/en/documentation/red_hat_quay/3.13/html/manage_red_hat_quay/using-ssl-to-protect-quay
|
5.5. Hot Plugging vCPUs
|
5.5. Hot Plugging vCPUs You can hot plug vCPUs. Hot plugging means enabling or disabling devices while a virtual machine is running. Important Hot unplugging a vCPU is only supported if the vCPU was previously hot plugged. A virtual machine's vCPUs cannot be hot unplugged to fewer vCPUs than it was originally created with. The following prerequisites apply: The virtual machine's Operating System must be explicitly set in the New Virtual Machine or Edit Virtual Machine window. The virtual machine's operating system must support CPU hot plug. See the table below for support details. Windows virtual machines must have the guest agents installed. See Installing the Guest Agents and Drivers on Windows . Hot Plugging vCPUs Click Compute Virtual Machines and select a running virtual machine. Click Edit . Click the System tab. Change the value of Virtual Sockets as required. Click OK .
Table 5.1. Operating System Support Matrix for vCPU Hot Plug
Operating System Version Architecture Hot Plug Supported Hot Unplug Supported
Red Hat Enterprise Linux Atomic Host 7 x86 Yes Yes
Red Hat Enterprise Linux 6.3+ x86 Yes Yes
Red Hat Enterprise Linux 7.0+ x86 Yes Yes
Red Hat Enterprise Linux 7.3+ PPC64 Yes Yes
Red Hat Enterprise Linux 8.0+ x86 Yes Yes
Microsoft Windows Server 2012 R2 All x64 Yes No
Microsoft Windows Server 2016 Standard, Datacenter x64 Yes No
Microsoft Windows Server 2019 Standard, Datacenter x64 Yes No
Microsoft Windows 8.x All x86 Yes No
Microsoft Windows 8.x All x64 Yes No
Microsoft Windows 10 All x86 Yes No
Microsoft Windows 10 All x64 Yes No
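After hot plugging, you can confirm the change from inside a Linux guest. The commands below are a supplementary sketch using standard RHEL tools; they are not part of the Administration Portal procedure above:
# Show the number of vCPUs and the list of online CPUs that the guest sees.
lscpu | grep -E '^CPU\(s\)|On-line CPU\(s\) list'
nproc
# If a newly added vCPU appears offline, it can usually be brought online
# manually; cpu2 is only an example CPU number.
echo 1 | sudo tee /sys/devices/system/cpu/cpu2/online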
| null |
https://docs.redhat.com/en/documentation/red_hat_virtualization/4.4/html/virtual_machine_management_guide/CPU_hot_plug
|
2.6. Group Policy Object Access Control
|
2.6. Group Policy Object Access Control Group Policy is a Microsoft Windows feature that enables administrators to centrally manage policies for users and computers in Active Directory (AD) environments. A group policy object (GPO) is a collection of policy settings that are stored on a domain controller (DC) and can be applied to policy targets, such as computers and users. GPO policy settings related to Windows logon rights are commonly used to manage computer-based access control in AD environments. 2.6.1. How SSSD Works with GPO Access Control When you configure SSSD to apply GPO access control, SSSD retrieves GPOs applicable to host systems and AD users. Based on the retrieved GPO configuration, SSSD determines if a user is allowed to log in to a particular host. This enables the administrator to define login policies honored by both Linux and Windows clients centrally on the AD domain controller. Important Security filtering is a feature that enables you to further limit the scope of GPO access control to specific users, groups, or hosts by listing them in the security filter. However, SSSD only supports users and groups in the security filter. SSSD ignores host entries in the security filter. To ensure that SSSD applies the GPO access control to a specific system, create a new OU in the AD domain, move the system to the OU, and then link the GPO to this OU. 2.6.2. GPO Settings Supported by SSSD
Table 2.2. GPO access control options retrieved by SSSD
GPO option [a] Corresponding sssd.conf option [b]
Allow log on locally Deny log on locally ad_gpo_map_interactive
Allow log on through Remote Desktop Services Deny log on through Remote Desktop Services ad_gpo_map_remote_interactive
Access this computer from the network Deny access to this computer from the network ad_gpo_map_network
Allow log on as a batch job Deny log on as a batch job ad_gpo_map_batch
Allow log on as a service Deny log on as a service ad_gpo_map_service
[a] As named in the Group Policy Management Editor on Windows.
[b] See the sssd-ad (5) man page for details about these options and for lists of pluggable authentication module (PAM) services to which the GPO options are mapped by default.
2.6.3. Configuring GPO-based Access Control for SSSD GPO-based access control can be configured in the /etc/sssd/sssd.conf file. The ad_gpo_access_control option specifies the mode in which the GPO-based access control runs. It can be set to the following values: ad_gpo_access_control = permissive The permissive value specifies that GPO-based access control is evaluated but not enforced; a syslog message is recorded every time access would be denied. This is the default setting. ad_gpo_access_control = enforcing The enforcing value specifies that GPO-based access control is evaluated and enforced. ad_gpo_access_control = disabled The disabled value specifies that GPO-based access control is neither evaluated nor enforced. Important Before starting to use the GPO-based access control and setting ad_gpo_access_control to enforcing mode, it is recommended to ensure that ad_gpo_access_control is set to permissive mode and examine the logs. By reviewing the syslog messages, you can test and adjust the current GPO settings as necessary before finally setting the enforcing mode. The following parameters related to the GPO-based access control can also be specified in the sssd.conf file: The ad_gpo_map_* options and the ad_gpo_default_right option configure which PAM services are mapped to specific Windows logon rights.
To add a PAM service to the default list of PAM services mapped to a specific GPO setting, or to remove the service from the list, use the ad_gpo_map_* options. For example, to remove the su service from the list of PAM services mapped to interactive login (GPO settings Allow log on locally and Deny log on locally): The ad_gpo_cache_timeout option specifies the interval during which subsequent access control requests can reuse the files stored in the cache, instead of retrieving them from the DC anew. For a detailed list of available GPO parameters as well as their descriptions and default values, see the sssd-ad (5) man page. 2.6.4. Additional Resources For more details on configuring SSSD to work with GPOs, see Configure SSSD to respect Active Directory SSH or Console/GUI GPOs in Red Hat Knowledgebase.
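Putting these options together, the relevant part of /etc/sssd/sssd.conf might look like the following. This is a minimal sketch for an assumed AD domain named example.com; the option names are the ones described above and the values are illustrative:
[domain/example.com]
id_provider = ad
access_provider = ad
# Evaluate GPO-based access control but only log would-be denials while
# testing; change to "enforcing" after reviewing the syslog messages.
ad_gpo_access_control = permissive
# Remove the su service from the interactive logon mapping, as in the
# example above.
ad_gpo_map_interactive = -su
# Reuse cached GPO files for this many seconds before contacting the
# domain controller again; see sssd-ad(5) for the default value.
ad_gpo_cache_timeout = 600
After editing the file, restart SSSD, for example with systemctl restart sssd, for the changes to take effect.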
|
[
"ad_gpo_map_interactive = -su"
] |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/windows_integration_guide/sssd-gpo
|
4.2.4. I/O Monitoring (By Device)
|
4.2.4. I/O Monitoring (By Device) This section describes how to monitor I/O activity on a specific device. traceio2.stp traceio2.stp takes 1 argument: the whole device number. To get this number, use stat -c "0x%D" directory , where directory is located in the device you wish to monitor. The usrdev2kerndev() function converts the whole device number into the format understood by the kernel. The output produced by usrdev2kerndev() is used in conjunction with the MKDEV() , MINOR() , and MAJOR() functions to determine the major and minor numbers of a specific device. The output of traceio2.stp includes the name and ID of any process performing a read/write, the function it is performing (that is vfs_read or vfs_write ), and the kernel device number. The following example is an excerpt from the full output of stap traceio2.stp 0x805 , where 0x805 is the whole device number of /home . /home resides in /dev/sda5 , which is the device we wish to monitor. Example 4.8. traceio2.stp Sample Output
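For reference, a typical invocation looks like the following. The device number shown matches the /home example in this section and will differ on your system:
# Determine the whole device number of the file system that contains /home.
stat -c "0x%D" /home
# Example result: 0x805
# Pass that value to the script to monitor the device.
stap traceio2.stp 0x805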
|
[
"#! /usr/bin/env stap global device_of_interest probe begin { /* The following is not the most efficient way to do this. One could directly put the result of usrdev2kerndev() into device_of_interest. However, want to test out the other device functions */ dev = usrdev2kerndev(USD1) device_of_interest = MKDEV(MAJOR(dev), MINOR(dev)) } probe vfs.write, vfs.read { if (dev == device_of_interest) printf (\"%s(%d) %s 0x%x\\n\", execname(), pid(), probefunc(), dev) }",
"[...] synergyc(3722) vfs_read 0x800005 synergyc(3722) vfs_read 0x800005 cupsd(2889) vfs_write 0x800005 cupsd(2889) vfs_write 0x800005 cupsd(2889) vfs_write 0x800005 [...]"
] |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/systemtap_beginners_guide/traceio2sect
|
Chapter 28. Troubleshooting director errors
|
Chapter 28. Troubleshooting director errors Errors can occur at certain stages of the director processes. This section contains some information about diagnosing common problems. 28.1. Troubleshooting node registration Issues with node registration usually occur due to issues with incorrect node details. In these situations, validate the template file containing your node details and correct the imported node details. Procedure Source the stackrc file: Run the node import command with the --validate-only option. This option validates your node template without performing an import: To fix incorrect details with imported nodes, run the openstack baremetal commands to update node details. The following example shows how to change networking details: Identify the assigned port UUID for the imported node: Update the MAC address: Configure a new IPMI address on the node: 28.2. Troubleshooting hardware introspection The Bare Metal Provisioning inspector service, ironic-inspector , times out after a default one-hour period if the inspection RAM disk does not respond. The timeout might indicate a bug in the inspection RAM disk, but usually the timeout occurs due to an environment misconfiguration. You can diagnose and resolve common environment misconfiguration issues to ensure the introspection process runs to completion. Procedure Source the stackrc undercloud credentials file: Ensure that your nodes are in a manageable state. The introspection does not inspect nodes in an available state, which is meant for deployment. If you want to inspect nodes that are in an available state, change the node status to manageable state before introspection: To configure temporary access to the introspection RAM disk during introspection debugging, use the sshkey parameter to append your public SSH key to the kernel configuration in the /httpboot/inspector.ipxe file: Run the introspection on the node: Use the --provide option to change the node state to available after the introspection completes. Identify the IP address of the node from the dnsmasq logs: If an error occurs, access the node using the root user and temporary access details: Access the node during introspection to run diagnostic commands and troubleshoot the introspection failure. To stop the introspection process, run the following command: You can also wait until the process times out. Note Red Hat OpenStack Platform director retries introspection three times after the initial abort. Run the openstack baremetal introspection abort command at each attempt to abort the introspection completely. 28.3. Troubleshooting overcloud creation and deployment The initial creation of the overcloud occurs with the OpenStack Orchestration (heat) service. If an overcloud deployment fails, use the OpenStack clients and service log files to diagnose the failed deployment. Procedure Source the stackrc file: Run the deployment failures command: Run the following command to display the details of the failure: Replace <OVERCLOUD_NAME> with the name of your overcloud. Run the following command to identify the stacks that failed: 28.4. Troubleshooting node provisioning The OpenStack Orchestration (heat) service controls the provisioning process. If node provisioning fails, use the OpenStack clients and service log files to diagnose the issues. Procedure Source the stackrc file: Check the bare metal service to see all registered nodes and their current status: All nodes available for provisioning should have the following states set: Maintenance set to False . 
Provision State set to available before provisioning. If a node does not have Maintenance set to False or Provision State set to available , then use the following table to identify the problem and the solution: Problem Cause Solution Maintenance sets itself to True automatically. The director cannot access the power management for the nodes. Check the credentials for node power management. Provision State is set to available but nodes do not provision. The problem occurred before bare metal deployment started. Check the node details including the profile and flavor mapping. Check that the node hardware details are within the requirements for the flavor. Provision State is set to wait call-back for a node. The node provisioning process has not yet finished for this node. Wait until this status changes. Otherwise, connect to the virtual console of the node and check the output. Provision State is active and Power State is power on but the nodes do not respond. The node provisioning has finished successfully and there is a problem during the post-deployment configuration step. Diagnose the node configuration process. Connect to the virtual console of the node and check the output. Provision State is error or deploy failed . Node provisioning has failed. View the bare metal node details with the openstack baremetal node show command and check the last_error field, which contains error description. Additional resources Bare-metal node provisioning states 28.5. Troubleshooting IP address conflicts during provisioning Introspection and deployment tasks fail if the destination hosts are allocated an IP address that is already in use. To prevent these failures, you can perform a port scan of the Provisioning network to determine whether the discovery IP range and host IP range are free. Procedure Install nmap : Use nmap to scan the IP address range for active addresses. This example scans the 192.168.24.0/24 range, replace this with the IP subnet of the Provisioning network (using CIDR bitmask notation): Review the output of the nmap scan. For example, you should see the IP address of the undercloud, and any other hosts that are present on the subnet: If any of the active IP addresses conflict with the IP ranges in undercloud.conf, you must either change the IP address ranges or release the IP addresses before you introspect or deploy the overcloud nodes. 28.6. Troubleshooting "No Valid Host Found" errors Sometimes the /var/log/nova/nova-conductor.log contains the following error: This error occurs when the Compute Scheduler cannot find a bare metal node that is suitable for booting the new instance. This usually means that there is a mismatch between resources that the Compute service expects to find and resources that the Bare Metal service advertised to Compute. To check that there is a mismatch error, complete the following steps: Procedure Source the stackrc file: Check that the introspection succeeded on the node. If the introspection fails, check that each node contains the required ironic node properties: Check that the properties JSON field has valid values for keys cpus , cpu_arch , memory_mb and local_gb . Ensure that the Compute flavor that is mapped to the node does not exceed the node properties for the required number of nodes: Run the openstack baremetal node list command to ensure that there are sufficient nodes in the available state. Nodes in manageable state usually signify a failed introspection. 
Run the openstack baremetal node list command and ensure that the nodes are not in maintenance mode. If a node changes to maintenance mode automatically, the likely cause is an issue with incorrect power management credentials. Check the power management credentials and then remove maintenance mode: If you are using automatic profile tagging, check that you have enough nodes that correspond to each flavor and profile. Run the openstack baremetal node show command on a node and check the capabilities key in the properties field. For example, a node tagged for the Compute role contains the profile:compute value. You must wait for node information to propagate from Bare Metal to Compute after introspection. However, if you performed some steps manually, there might be a short period of time when nodes are not available to the Compute service (nova). Use the following command to check the total resources in your system: 28.7. Troubleshooting container configuration Red Hat OpenStack Platform director uses podman to manage containers and puppet to create container configuration. This procedure shows how to diagnose a container when errors occur. Accessing the host Source the stackrc file: Get the IP address of the node with the container failure. Log in to the node: Identifying failed containers View all containers: Identify the failed container. The failed container usually exits with a non-zero status. Checking container logs Each container retains standard output from its main process. Use this output as a log to help determine what actually occurs during a container run. For example, to view the log for the keystone container, run the following command: In most cases, this log contains information about the cause of a container failure. The host also retains the stdout log for the failed service. You can find the stdout logs in /var/log/containers/stdouts/ . For example, to view the log for a failed keystone container, run the following command: Inspecting containers In some situations, you might need to verify information about a container. For example, use the following command to view keystone container data: This command returns a JSON object containing low-level configuration data. You can pipe the output to the jq command to parse specific data. For example, to view the container mounts for the keystone container, run the following command: You can also use the --format option to parse data to a single line, which is useful for running commands against sets of container data. For example, to recreate the options used to run the keystone container, use the following inspect command with the --format option: Note The --format option uses Go syntax to create queries. Use these options in conjunction with the podman run command to recreate the container for troubleshooting purposes: Running commands in a container In some cases, you might need to obtain information from within a container through a specific Bash command. In this situation, use the following podman command to execute commands within a running container. For example, run the podman exec command to run a command inside the keystone container: Note The -ti options run the command through an interactive pseudoterminal. Replace <COMMAND> with the command you want to run. For example, each container has a health check script to verify the service connection. 
You can run the health check script for keystone with the following command: To access the container shell, run podman exec using /bin/bash as the command you want to run inside the container: Viewing a container filesystem To view the file system for the failed container, run the podman mount command. For example, to view the file system for a failed keystone container, run the following command: This provides a mounted location to view the filesystem contents: This is useful for viewing the Puppet reports within the container. You can find these reports in the var/lib/puppet/ directory within the container mount. Exporting a container When a container fails, you might need to investigate the full contents of the file. In this case, you can export the full file system of a container as a tar archive. For example, to export the keystone container file system, run the following command: This command creates the keystone.tar archive, which you can extract and explore. 28.8. Troubleshooting Compute node failures Compute nodes use the Compute service to perform hypervisor-based operations. This means the main diagnosis for Compute nodes revolves around this service. Procedure Source the stackrc file: Get the IP address of the Compute node that contains the failure: Log in to the node: Change to the root user: View the status of the container: The primary log file for Compute nodes is /var/log/containers/nova/nova-compute.log . If issues occur with Compute node communication, use this file to begin the diagnosis. If you perform maintenance on the Compute node, migrate the existing instances from the host to an operational Compute node, then disable the node. 28.9. Creating an sosreport If you need to contact Red Hat for support with Red Hat OpenStack Platform, you might need to generate an sosreport . For more information about creating an sosreport , see: "How to collect all required logs for Red Hat Support to investigate an OpenStack issue" 28.10. Log locations Use the following logs to gather information about the undercloud and overcloud when you troubleshoot issues. Table 28.1. Logs on both the undercloud and overcloud nodes Information Log location Containerized service logs /var/log/containers/ Standard output from containerized services /var/log/containers/stdouts Ansible configuration logs ~/ansible.log Table 28.2. Additional logs on the undercloud node Information Log location Command history for openstack overcloud deploy /home/stack/.tripleo/history Undercloud installation log /home/stack/install-undercloud.log Table 28.3. Additional logs on the overcloud nodes Information Log location Cloud-Init Log /var/log/cloud-init.log High availability log /var/log/pacemaker.log
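The container-troubleshooting steps above can be chained into a single diagnostic pass. The following sketch is not part of the original procedure; it simply strings together the podman commands referenced in this chapter, using keystone as the example container and a placeholder node address (both illustrative):

```bash
# Run on the overcloud node that hosts the failing container
# (log in first, for example with: ssh tripleo-admin@<node_ip_address>).

CONTAINER=keystone   # illustrative; substitute the name of the failed container

# 1. List containers and look for a non-zero exit status
sudo podman ps --all --filter "name=${CONTAINER}"

# 2. Read the container log and the persisted stdout log
sudo podman logs "${CONTAINER}" | tail -n 50
sudo tail -n 50 "/var/log/containers/stdouts/${CONTAINER}.log"

# 3. Inspect low-level configuration (mounts shown here as an example)
sudo podman inspect "${CONTAINER}" | jq '.[0].Mounts'

# 4. Run the built-in health check inside the container
sudo podman exec -ti "${CONTAINER}" /openstack/healthcheck
```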
|
[
"source ~/stackrc",
"(undercloud) USD openstack overcloud node import --validate-only ~/nodes.json Waiting for messages on queue 'tripleo' with no timeout. Successfully validated environment file",
"source ~/stackrc (undercloud) USD openstack baremetal port list --node [NODE UUID]",
"(undercloud) USD openstack baremetal port set --address=[NEW MAC] [PORT UUID]",
"(undercloud) USD openstack baremetal node set --driver-info ipmi_address=[NEW IPMI ADDRESS] [NODE UUID]",
"source ~/stackrc",
"(undercloud)USD openstack baremetal node manage <node_uuid>",
"kernel http://192.2.0.1:8088/agent.kernel ipa-inspection-callback-url=http://192.168.0.1:5050/v1/continue ipa-inspection-collectors=default,extra-hardware,logs systemd.journald.forward_to_console=yes BOOTIF=USD{mac} ipa-debug=1 ipa-inspection-benchmarks=cpu,mem,disk selinux=0 sshkey=\"<public_ssh_key>\"",
"(undercloud)USD openstack overcloud node introspect <node_uuid> --provide",
"(undercloud)USD sudo tail -f /var/log/containers/ironic-inspector/dnsmasq.log",
"ssh [email protected]",
"(undercloud)USD openstack baremetal introspection abort <node_uuid>",
"source ~/stackrc",
"openstack overcloud failures",
"(undercloud) USD openstack stack failures list <OVERCLOUD_NAME> --long",
"(undercloud) USD openstack stack list --nested --property status=FAILED",
"source ~/stackrc",
"(undercloud) USD openstack baremetal node list +----------+------+---------------+-------------+-----------------+-------------+ | UUID | Name | Instance UUID | Power State | Provision State | Maintenance | +----------+------+---------------+-------------+-----------------+-------------+ | f1e261...| None | None | power off | available | False | | f0b8c1...| None | None | power off | available | False | +----------+------+---------------+-------------+-----------------+-------------+",
"sudo dnf install nmap",
"sudo nmap -sn 192.168.24.0/24",
"sudo nmap -sn 192.168.24.0/24 Starting Nmap 6.40 ( http://nmap.org ) at 2015-10-02 15:14 EDT Nmap scan report for 192.168.24.1 Host is up (0.00057s latency). Nmap scan report for 192.168.24.2 Host is up (0.00048s latency). Nmap scan report for 192.168.24.3 Host is up (0.00045s latency). Nmap scan report for 192.168.24.5 Host is up (0.00040s latency). Nmap scan report for 192.168.24.9 Host is up (0.00019s latency). Nmap done: 256 IP addresses (5 hosts up) scanned in 2.45 seconds",
"NoValidHost: No valid host was found. There are not enough hosts available.",
"source ~/stackrc",
"(undercloud) USD openstack baremetal node show [NODE UUID]",
"(undercloud) USD openstack flavor show [FLAVOR NAME]",
"(undercloud) USD openstack baremetal node maintenance unset [NODE UUID]",
"(undercloud) USD openstack hypervisor stats show",
"source ~/stackrc",
"(undercloud) USD metalsmith list",
"(undercloud) USD ssh [email protected]",
"sudo podman ps --all",
"sudo podman logs keystone",
"cat /var/log/containers/stdouts/keystone.log",
"sudo podman inspect keystone",
"sudo podman inspect keystone | jq .[0].Mounts",
"sudo podman inspect --format='{{range .Config.Env}} -e \"{{.}}\" {{end}} {{range .Mounts}} -v {{.Source}}:{{.Destination}}:{{ join .Options \",\" }}{{end}} -ti {{.Config.Image}}' keystone",
"OPTIONS=USD( sudo podman inspect --format='{{range .Config.Env}} -e \"{{.}}\" {{end}} {{range .Mounts}} -v {{.Source}}:{{.Destination}}{{if .Mode}}:{{.Mode}}{{end}}{{end}} -ti {{.Config.Image}}' keystone ) sudo podman run --rm USDOPTIONS /bin/bash",
"sudo podman exec -ti keystone <COMMAND>",
"sudo podman exec -ti keystone /openstack/healthcheck",
"sudo podman exec -ti keystone /bin/bash",
"sudo podman mount keystone",
"/var/lib/containers/storage/overlay/78946a109085aeb8b3a350fc20bd8049a08918d74f573396d7358270e711c610/merged",
"sudo podman export keystone -o keystone.tar",
"source ~/stackrc",
"(undercloud) USD openstack server list",
"(undercloud) USD ssh [email protected]",
"sudo -i",
"sudo podman ps -f name=nova_compute"
] |
https://docs.redhat.com/en/documentation/red_hat_openstack_platform/17.0/html/director_installation_and_usage/assembly_troubleshooting-director-errors
|
Chapter 5. KubeletConfig [machineconfiguration.openshift.io/v1]
|
Chapter 5. KubeletConfig [machineconfiguration.openshift.io/v1] Description KubeletConfig describes a customized Kubelet configuration. Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). Type object Required spec 5.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata spec object KubeletConfigSpec defines the desired state of KubeletConfig status object KubeletConfigStatus defines the observed state of a KubeletConfig 5.1.1. .spec Description KubeletConfigSpec defines the desired state of KubeletConfig Type object Property Type Description autoSizingReserved boolean kubeletConfig `` kubeletConfig fields are defined in kubernetes upstream. Please refer to the types defined in the version/commit used by OpenShift of the upstream kubernetes. It's important to note that, since the fields of the kubelet configuration are directly fetched from upstream the validation of those values is handled directly by the kubelet. Please refer to the upstream version of the relevant kubernetes for the valid values of these fields. Invalid values of the kubelet configuration fields may render cluster nodes unusable. logLevel integer machineConfigPoolSelector object MachineConfigPoolSelector selects which pools the KubeletConfig shoud apply to. A nil selector will result in no pools being selected. tlsSecurityProfile object If unset, the default is based on the apiservers.config.openshift.io/cluster resource. Note that only Old and Intermediate profiles are currently supported, and the maximum available minTLSVersion is VersionTLS12. 5.1.2. .spec.machineConfigPoolSelector Description MachineConfigPoolSelector selects which pools the KubeletConfig shoud apply to. A nil selector will result in no pools being selected. Type object Property Type Description matchExpressions array matchExpressions is a list of label selector requirements. The requirements are ANDed. matchExpressions[] object A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. matchLabels object (string) matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed. 5.1.3. .spec.machineConfigPoolSelector.matchExpressions Description matchExpressions is a list of label selector requirements. The requirements are ANDed. Type array 5.1.4. .spec.machineConfigPoolSelector.matchExpressions[] Description A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. 
Type object Required key operator Property Type Description key string key is the label key that the selector applies to. operator string operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist. values array (string) values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch. 5.1.5. .spec.tlsSecurityProfile Description If unset, the default is based on the apiservers.config.openshift.io/cluster resource. Note that only Old and Intermediate profiles are currently supported, and the maximum available minTLSVersion is VersionTLS12. Type object Property Type Description custom `` custom is a user-defined TLS security profile. Be extremely careful using a custom profile as invalid configurations can be catastrophic. An example custom profile looks like this: ciphers: - ECDHE-ECDSA-CHACHA20-POLY1305 - ECDHE-RSA-CHACHA20-POLY1305 - ECDHE-RSA-AES128-GCM-SHA256 - ECDHE-ECDSA-AES128-GCM-SHA256 minTLSVersion: VersionTLS11 intermediate `` intermediate is a TLS security profile based on: https://wiki.mozilla.org/Security/Server_Side_TLS#Intermediate_compatibility_.28recommended.29 and looks like this (yaml): ciphers: - TLS_AES_128_GCM_SHA256 - TLS_AES_256_GCM_SHA384 - TLS_CHACHA20_POLY1305_SHA256 - ECDHE-ECDSA-AES128-GCM-SHA256 - ECDHE-RSA-AES128-GCM-SHA256 - ECDHE-ECDSA-AES256-GCM-SHA384 - ECDHE-RSA-AES256-GCM-SHA384 - ECDHE-ECDSA-CHACHA20-POLY1305 - ECDHE-RSA-CHACHA20-POLY1305 - DHE-RSA-AES128-GCM-SHA256 - DHE-RSA-AES256-GCM-SHA384 minTLSVersion: VersionTLS12 modern `` modern is a TLS security profile based on: https://wiki.mozilla.org/Security/Server_Side_TLS#Modern_compatibility and looks like this (yaml): ciphers: - TLS_AES_128_GCM_SHA256 - TLS_AES_256_GCM_SHA384 - TLS_CHACHA20_POLY1305_SHA256 minTLSVersion: VersionTLS13 old `` old is a TLS security profile based on: https://wiki.mozilla.org/Security/Server_Side_TLS#Old_backward_compatibility and looks like this (yaml): ciphers: - TLS_AES_128_GCM_SHA256 - TLS_AES_256_GCM_SHA384 - TLS_CHACHA20_POLY1305_SHA256 - ECDHE-ECDSA-AES128-GCM-SHA256 - ECDHE-RSA-AES128-GCM-SHA256 - ECDHE-ECDSA-AES256-GCM-SHA384 - ECDHE-RSA-AES256-GCM-SHA384 - ECDHE-ECDSA-CHACHA20-POLY1305 - ECDHE-RSA-CHACHA20-POLY1305 - DHE-RSA-AES128-GCM-SHA256 - DHE-RSA-AES256-GCM-SHA384 - DHE-RSA-CHACHA20-POLY1305 - ECDHE-ECDSA-AES128-SHA256 - ECDHE-RSA-AES128-SHA256 - ECDHE-ECDSA-AES128-SHA - ECDHE-RSA-AES128-SHA - ECDHE-ECDSA-AES256-SHA384 - ECDHE-RSA-AES256-SHA384 - ECDHE-ECDSA-AES256-SHA - ECDHE-RSA-AES256-SHA - DHE-RSA-AES128-SHA256 - DHE-RSA-AES256-SHA256 - AES128-GCM-SHA256 - AES256-GCM-SHA384 - AES128-SHA256 - AES256-SHA256 - AES128-SHA - AES256-SHA - DES-CBC3-SHA minTLSVersion: VersionTLS10 type string type is one of Old, Intermediate, Modern or Custom. Custom provides the ability to specify individual TLS security profile parameters. Old, Intermediate and Modern are TLS security profiles based on: https://wiki.mozilla.org/Security/Server_Side_TLS#Recommended_configurations The profiles are intent based, so they may change over time as new ciphers are developed and existing ciphers are found to be insecure. Depending on precisely which ciphers are available to a process, the list may be reduced. Note that the Modern profile is currently not supported because it is not yet well adopted by common software libraries. 5.1.6. 
.status Description KubeletConfigStatus defines the observed state of a KubeletConfig Type object Property Type Description conditions array conditions represents the latest available observations of current state. conditions[] object KubeletConfigCondition defines the state of the KubeletConfig observedGeneration integer observedGeneration represents the generation observed by the controller. 5.1.7. .status.conditions Description conditions represents the latest available observations of current state. Type array 5.1.8. .status.conditions[] Description KubeletConfigCondition defines the state of the KubeletConfig Type object Property Type Description lastTransitionTime `` lastTransitionTime is the time of the last update to the current status object. message string message provides additional information about the current condition. This is only to be consumed by humans. reason string reason is the reason for the condition's last transition. Reasons are PascalCase status string status of the condition, one of True, False, Unknown. type string type specifies the state of the operator's reconciliation functionality. 5.2. API endpoints The following API endpoints are available: /apis/machineconfiguration.openshift.io/v1/kubeletconfigs DELETE : delete collection of KubeletConfig GET : list objects of kind KubeletConfig POST : create a KubeletConfig /apis/machineconfiguration.openshift.io/v1/kubeletconfigs/{name} DELETE : delete a KubeletConfig GET : read the specified KubeletConfig PATCH : partially update the specified KubeletConfig PUT : replace the specified KubeletConfig /apis/machineconfiguration.openshift.io/v1/kubeletconfigs/{name}/status GET : read status of the specified KubeletConfig PATCH : partially update status of the specified KubeletConfig PUT : replace status of the specified KubeletConfig 5.2.1. /apis/machineconfiguration.openshift.io/v1/kubeletconfigs HTTP method DELETE Description delete collection of KubeletConfig Table 5.1. HTTP responses HTTP code Reponse body 200 - OK Status schema 401 - Unauthorized Empty HTTP method GET Description list objects of kind KubeletConfig Table 5.2. HTTP responses HTTP code Reponse body 200 - OK KubeletConfigList schema 401 - Unauthorized Empty HTTP method POST Description create a KubeletConfig Table 5.3. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. 
The error returned from the server will contain all unknown and duplicate fields encountered. Table 5.4. Body parameters Parameter Type Description body KubeletConfig schema Table 5.5. HTTP responses HTTP code Reponse body 200 - OK KubeletConfig schema 201 - Created KubeletConfig schema 202 - Accepted KubeletConfig schema 401 - Unauthorized Empty 5.2.2. /apis/machineconfiguration.openshift.io/v1/kubeletconfigs/{name} Table 5.6. Global path parameters Parameter Type Description name string name of the KubeletConfig HTTP method DELETE Description delete a KubeletConfig Table 5.7. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed Table 5.8. HTTP responses HTTP code Reponse body 200 - OK Status schema 202 - Accepted Status schema 401 - Unauthorized Empty HTTP method GET Description read the specified KubeletConfig Table 5.9. HTTP responses HTTP code Reponse body 200 - OK KubeletConfig schema 401 - Unauthorized Empty HTTP method PATCH Description partially update the specified KubeletConfig Table 5.10. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 5.11. HTTP responses HTTP code Reponse body 200 - OK KubeletConfig schema 401 - Unauthorized Empty HTTP method PUT Description replace the specified KubeletConfig Table 5.12. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. 
- Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 5.13. Body parameters Parameter Type Description body KubeletConfig schema Table 5.14. HTTP responses HTTP code Reponse body 200 - OK KubeletConfig schema 201 - Created KubeletConfig schema 401 - Unauthorized Empty 5.2.3. /apis/machineconfiguration.openshift.io/v1/kubeletconfigs/{name}/status Table 5.15. Global path parameters Parameter Type Description name string name of the KubeletConfig HTTP method GET Description read status of the specified KubeletConfig Table 5.16. HTTP responses HTTP code Reponse body 200 - OK KubeletConfig schema 401 - Unauthorized Empty HTTP method PATCH Description partially update status of the specified KubeletConfig Table 5.17. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 5.18. HTTP responses HTTP code Reponse body 200 - OK KubeletConfig schema 401 - Unauthorized Empty HTTP method PUT Description replace status of the specified KubeletConfig Table 5.19. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. 
- Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 5.20. Body parameters Parameter Type Description body KubeletConfig schema Table 5.21. HTTP responses HTTP code Reponse body 200 - OK KubeletConfig schema 201 - Created KubeletConfig schema 401 - Unauthorized Empty
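The schema above maps onto a short manifest. The following is a minimal sketch, not taken from this reference: the resource name, the pool label, and the maxPods value are illustrative assumptions, and it assumes the target MachineConfigPool already carries the matching label.

```bash
# Label the pool first (illustrative label), for example:
#   oc label machineconfigpool worker custom-kubelet=example
cat <<'EOF' | oc apply -f -
apiVersion: machineconfiguration.openshift.io/v1
kind: KubeletConfig
metadata:
  name: example-kubelet-config   # illustrative name
spec:
  machineConfigPoolSelector:
    matchLabels:
      custom-kubelet: example    # must match a label on the MachineConfigPool
  kubeletConfig:
    maxPods: 250                 # illustrative upstream kubelet field
EOF
```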
| null |
https://docs.redhat.com/en/documentation/openshift_container_platform/4.16/html/machine_apis/kubeletconfig-machineconfiguration-openshift-io-v1
|
17.10. Creating a Virtual Network
|
17.10. Creating a Virtual Network To create a virtual network on your system using the Virtual Machine Manager (virt-manager): Open the Virtual Networks tab from within the Connection Details menu. Click the Add Network button, identified by a plus sign (+) icon. For more information, see Section 17.9, "Managing a Virtual Network" . Figure 17.11. Virtual network configuration This will open the Create a new virtual network window. Click Forward to continue. Figure 17.12. Naming your new virtual network Enter an appropriate name for your virtual network and click Forward . Figure 17.13. Choosing an IPv4 address space Check the Enable IPv4 network address space definition check box. Enter an IPv4 address space for your virtual network in the Network field. Check the Enable DHCPv4 check box. Define the DHCP range for your virtual network by specifying a Start and End range of IP addresses. Figure 17.14. Choosing an IPv4 address space Click Forward to continue. If you want to enable IPv6, check the Enable IPv6 network address space definition . Figure 17.15. Enabling IPv6 Additional fields appear in the Create a new virtual network window. Figure 17.16. Configuring IPv6 Enter an IPv6 address in the Network field. If you want to enable DHCPv6, check the Enable DHCPv6 check box. Additional fields appear in the Create a new virtual network window. Figure 17.17. Configuring DHCPv6 (Optional) Edit the start and end of the DHCPv6 range. If you want to enable static route definitions, check the Enable Static Route Definition check box. Additional fields appear in the Create a new virtual network window. Figure 17.18. Defining static routes Enter a network address and the gateway that will be used for the route to the network in the appropriate fields. Click Forward . Select how the virtual network should connect to the physical network. Figure 17.19. Connecting to the physical network If you want the virtual network to be isolated, ensure that the Isolated virtual network radio button is selected. If you want the virtual network to connect to a physical network, select Forwarding to physical network , and choose whether the Destination should be Any physical device or a specific physical device. Also select whether the Mode should be NAT or Routed . If you want to enable IPv6 routing within the virtual network, check the Enable IPv6 internal routing/networking check box. Enter a DNS domain name for the virtual network. Click Finish to create the virtual network. The new virtual network is now available in the Virtual Networks tab of the Connection Details window.
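The same kind of network can also be defined from the command line. The following sketch is not part of the virt-manager procedure above; it shows an equivalent libvirt network definition, with illustrative names and addresses, applied with virsh:

```bash
# NAT-forwarded network with an IPv4 subnet and DHCP range
# (name and addresses are illustrative).
cat > example-net.xml <<'EOF'
<network>
  <name>example-net</name>
  <forward mode='nat'/>
  <ip address='192.168.100.1' netmask='255.255.255.0'>
    <dhcp>
      <range start='192.168.100.128' end='192.168.100.254'/>
    </dhcp>
  </ip>
</network>
EOF
virsh net-define example-net.xml
virsh net-start example-net
virsh net-autostart example-net
```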
| null |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/virtualization_deployment_and_administration_guide/sect-virtual_networking-creating_a_virtual_network
|