title | content | commands | url |
---|---|---|---|
Chapter 5. Managing images | Chapter 5. Managing images 5.1. Managing images overview With Red Hat OpenShift Service on AWS you can interact with images and set up image streams, depending on where the registries of the images are located, any authentication requirements around those registries, and how you want your builds and deployments to behave. 5.1.1. Images overview An image stream comprises any number of container images identified by tags. It presents a single virtual view of related images, similar to a container image repository. By watching an image stream, builds and deployments can receive notifications when new images are added or modified and react by performing a build or deployment, respectively. 5.2. Tagging images The following sections provide an overview and instructions for using image tags in the context of container images for working with Red Hat OpenShift Service on AWS image streams and their tags. 5.2.1. Image tags An image tag is a label applied to a container image in a repository that distinguishes a specific image from other images in an image stream. Typically, the tag represents a version number of some sort. For example, here :v3.11.59-2 is the tag: registry.access.redhat.com/openshift3/jenkins-2-rhel7:v3.11.59-2 You can add additional tags to an image. For example, an image might be assigned the tags :v3.11.59-2 and :latest . Red Hat OpenShift Service on AWS provides the oc tag command, which is similar to the docker tag command, but operates on image streams instead of directly on images. 5.2.2. Image tag conventions Images evolve over time and their tags reflect this. Generally, an image tag always points to the latest image built. If there is too much information embedded in a tag name, like v2.0.1-may-2019 , the tag points to just one revision of an image and is never updated. Using default image pruning options, such an image is never removed. If the tag is named v2.0 , image revisions are more likely. This results in longer tag history and, therefore, the image pruner is more likely to remove old and unused images. Although tag naming convention is up to you, here are a few examples in the format <image_name>:<image_tag> : Table 5.1. Image tag naming conventions Description Example Revision myimage:v2.0.1 Architecture myimage:v2.0-x86_64 Base image myimage:v1.2-centos7 Latest (potentially unstable) myimage:latest Latest stable myimage:stable If you require dates in tag names, periodically inspect old and unsupported images and istags and remove them. Otherwise, you can experience increasing resource usage caused by retaining old images. 5.2.3. Adding tags to image streams An image stream in Red Hat OpenShift Service on AWS comprises zero or more container images identified by tags. There are different types of tags available. The default behavior uses a permanent tag, which points to a specific image in time. If the permanent tag is in use and the source changes, the tag does not change for the destination. A tracking tag means the destination tag's metadata is updated during the import of the source tag. Procedure You can add tags to an image stream using the oc tag command: USD oc tag <source> <destination> For example, to configure the ruby image stream static-2.0 tag to always refer to the current image for the ruby image stream 2.0 tag: USD oc tag ruby:2.0 ruby:static-2.0 This creates a new image stream tag named static-2.0 in the ruby image stream. 
The new tag directly references the image id that the ruby:2.0 image stream tag pointed to at the time oc tag was run, and the image it points to never changes. To ensure the destination tag is updated when the source tag changes, use the --alias=true flag: USD oc tag --alias=true <source> <destination> Note Use a tracking tag for creating permanent aliases, for example, latest or stable . The tag only works correctly within a single image stream. Trying to create a cross-image stream alias produces an error. You can also add the --scheduled=true flag to have the destination tag be refreshed, or re-imported, periodically. The period is configured globally at the system level. The --reference flag creates an image stream tag that is not imported. The tag points to the source location, permanently. If you want to instruct Red Hat OpenShift Service on AWS to always fetch the tagged image from the integrated registry, use --reference-policy=local . The registry uses the pull-through feature to serve the image to the client. By default, the image blobs are mirrored locally by the registry. As a result, they can be pulled more quickly the time they are needed. The flag also allows for pulling from insecure registries without a need to supply --insecure-registry to the container runtime as long as the image stream has an insecure annotation or the tag has an insecure import policy. 5.2.4. Removing tags from image streams You can remove tags from an image stream. Procedure To remove a tag completely from an image stream run: USD oc delete istag/ruby:latest or: USD oc tag -d ruby:latest 5.2.5. Referencing images in imagestreams You can use tags to reference images in image streams using the following reference types. Table 5.2. Imagestream reference types Reference type Description ImageStreamTag An ImageStreamTag is used to reference or retrieve an image for a given image stream and tag. ImageStreamImage An ImageStreamImage is used to reference or retrieve an image for a given image stream and image sha ID. DockerImage A DockerImage is used to reference or retrieve an image for a given external registry. It uses standard Docker pull specification for its name. When viewing example image stream definitions you may notice they contain definitions of ImageStreamTag and references to DockerImage , but nothing related to ImageStreamImage . This is because the ImageStreamImage objects are automatically created in Red Hat OpenShift Service on AWS when you import or tag an image into the image stream. You should never have to explicitly define an ImageStreamImage object in any image stream definition that you use to create image streams. Procedure To reference an image for a given image stream and tag, use ImageStreamTag : To reference an image for a given image stream and image sha ID, use ImageStreamImage : The <id> is an immutable identifier for a specific image, also called a digest. To reference or retrieve an image for a given external registry, use DockerImage : Note When no tag is specified, it is assumed the latest tag is used. You can also reference a third-party registry: Or an image with a digest: 5.3. Image pull policy Each container in a pod has a container image. After you have created an image and pushed it to a registry, you can then refer to it in the pod. 5.3.1. Image pull policy overview When Red Hat OpenShift Service on AWS creates containers, it uses the container imagePullPolicy to determine if the image should be pulled prior to starting the container. 
There are three possible values for imagePullPolicy : Table 5.3. imagePullPolicy values Value Description Always Always pull the image. IfNotPresent Only pull the image if it does not already exist on the node. Never Never pull the image. If a container imagePullPolicy parameter is not specified, Red Hat OpenShift Service on AWS sets it based on the image tag: If the tag is latest , Red Hat OpenShift Service on AWS defaults imagePullPolicy to Always . Otherwise, Red Hat OpenShift Service on AWS defaults imagePullPolicy to IfNotPresent . 5.4. Using image pull secrets If you are using the OpenShift image registry and are pulling from image streams located in the same project, then your pod service account should already have the correct permissions and no additional action should be required. However, for other scenarios, such as referencing images across Red Hat OpenShift Service on AWS projects or from secured registries, additional configuration steps are required. You can obtain the image pull secret from Red Hat OpenShift Cluster Manager . This pull secret is called pullSecret . You use this pull secret to authenticate with the services that are provided by the included authorities, Quay.io and registry.redhat.io , which serve the container images for Red Hat OpenShift Service on AWS components. 5.4.1. Allowing pods to reference images across projects When using the OpenShift image registry, to allow pods in project-a to reference images in project-b , a service account in project-a must be bound to the system:image-puller role in project-b . Note When you create a pod service account or a namespace, wait until the service account is provisioned with a docker pull secret; if you create a pod before its service account is fully provisioned, the pod fails to access the OpenShift image registry. Procedure To allow pods in project-a to reference images in project-b , bind a service account in project-a to the system:image-puller role in project-b : USD oc policy add-role-to-user \ system:image-puller system:serviceaccount:project-a:default \ --namespace=project-b After adding that role, the pods in project-a that reference the default service account are able to pull images from project-b . To allow access for any service account in project-a , use the group: USD oc policy add-role-to-group \ system:image-puller system:serviceaccounts:project-a \ --namespace=project-b 5.4.2. Allowing pods to reference images from other secured registries To pull a secured container from other private or secured registries, you must create a pull secret from your container client credentials, such as Docker or Podman, and add it to your service account. Both Docker and Podman use a configuration file to store authentication details to log in to secured or insecure registry: Docker : By default, Docker uses USDHOME/.docker/config.json . Podman : By default, Podman uses USDHOME/.config/containers/auth.json . These files store your authentication information if you have previously logged in to a secured or insecure registry. Note Both Docker and Podman credential files and the associated pull secret can contain multiple references to the same registry if they have unique paths, for example, quay.io and quay.io/<example_repository> . However, neither Docker nor Podman support multiple entries for the exact same registry path. 
Example config.json file { "auths":{ "cloud.openshift.com":{ "auth":"b3Blb=", "email":"[email protected]" }, "quay.io":{ "auth":"b3Blb=", "email":"[email protected]" }, "quay.io/repository-main":{ "auth":"b3Blb=", "email":"[email protected]" } } } Example pull secret apiVersion: v1 data: .dockerconfigjson: ewogICAiYXV0aHMiOnsKICAgICAgIm0iOnsKICAgICAgIsKICAgICAgICAgImF1dGgiOiJiM0JsYj0iLAogICAgICAgICAiZW1haWwiOiJ5b3VAZXhhbXBsZS5jb20iCiAgICAgIH0KICAgfQp9Cg== kind: Secret metadata: creationTimestamp: "2021-09-09T19:10:11Z" name: pull-secret namespace: default resourceVersion: "37676" uid: e2851531-01bc-48ba-878c-de96cfe31020 type: Opaque 5.4.2.1. Creating a pull secret Procedure Create a secret from an existing authentication file: For Docker clients using .docker/config.json , enter the following command: USD oc create secret generic <pull_secret_name> \ --from-file=.dockerconfigjson=<path/to/.docker/config.json> \ --type=kubernetes.io/dockerconfigjson For Podman clients using .config/containers/auth.json , enter the following command: USD oc create secret generic <pull_secret_name> \ --from-file=<path/to/.config/containers/auth.json> \ --type=kubernetes.io/podmanconfigjson If you do not already have a Docker credentials file for the secured registry, you can create a secret by running the following command: USD oc create secret docker-registry <pull_secret_name> \ --docker-server=<registry_server> \ --docker-username=<user_name> \ --docker-password=<password> \ --docker-email=<email> 5.4.2.2. Using a pull secret in a workload You can use a pull secret to allow workloads to pull images from a private registry with one of the following methods: By linking the secret to a ServiceAccount , which automatically applies the secret to all pods using that service account. By defining imagePullSecrets directly in workload configurations, which is useful for environments like GitOps or ArgoCD. Procedure You can use a secret for pulling images for pods by adding the secret to your service account. Note that the name of the service account should match the name of the service account that pod uses. The default service account is default . Enter the following command to link the pull secret to a ServiceAccount : USD oc secrets link default <pull_secret_name> --for=pull To verify the change, enter the following command: USD oc get serviceaccount default -o yaml Example output apiVersion: v1 imagePullSecrets: - name: default-dockercfg-123456 - name: <pull_secret_name> kind: ServiceAccount metadata: annotations: openshift.io/internal-registry-pull-secret-ref: <internal_registry_pull_secret> creationTimestamp: "2025-03-03T20:07:52Z" name: default namespace: default resourceVersion: "13914" uid: 9f62dd88-110d-4879-9e27-1ffe269poe3 secrets: - name: <pull_secret_name> Instead of linking the secret to a service account, you can alternatively reference it directly in your pod or workload definition. This is useful for GitOps workflows such as ArgoCD. For example: Example pod specification apiVersion: v1 kind: Pod metadata: name: <secure_pod_name> spec: containers: - name: <container_name> image: quay.io/my-private-image imagePullSecrets: - name: <pull_secret_name> Example ArgoCD workflow apiVersion: argoproj.io/v1alpha1 kind: Workflow metadata: generateName: <example_workflow> spec: entrypoint: <main_task> imagePullSecrets: - name: <pull_secret_name> 5.4.2.3. Pulling from private registries with delegated authentication A private registry can delegate authentication to a separate service. 
In these cases, image pull secrets must be defined for both the authentication and registry endpoints. Procedure Create a secret for the delegated authentication server: USD oc create secret docker-registry \ --docker-server=sso.redhat.com \ [email protected] \ --docker-password=******** \ --docker-email=unused \ redhat-connect-sso secret/redhat-connect-sso Create a secret for the private registry: USD oc create secret docker-registry \ --docker-server=privateregistry.example.com \ [email protected] \ --docker-password=******** \ --docker-email=unused \ private-registry secret/private-registry | [
"registry.access.redhat.com/openshift3/jenkins-2-rhel7:v3.11.59-2",
"oc tag <source> <destination>",
"oc tag ruby:2.0 ruby:static-2.0",
"oc tag --alias=true <source> <destination>",
"oc delete istag/ruby:latest",
"oc tag -d ruby:latest",
"<image_stream_name>:<tag>",
"<image_stream_name>@<id>",
"openshift/ruby-20-centos7:2.0",
"registry.redhat.io/rhel7:latest",
"centos/ruby-22-centos7@sha256:3a335d7d8a452970c5b4054ad7118ff134b3a6b50a2bb6d0c07c746e8986b28e",
"oc policy add-role-to-user system:image-puller system:serviceaccount:project-a:default --namespace=project-b",
"oc policy add-role-to-group system:image-puller system:serviceaccounts:project-a --namespace=project-b",
"{ \"auths\":{ \"cloud.openshift.com\":{ \"auth\":\"b3Blb=\", \"email\":\"[email protected]\" }, \"quay.io\":{ \"auth\":\"b3Blb=\", \"email\":\"[email protected]\" }, \"quay.io/repository-main\":{ \"auth\":\"b3Blb=\", \"email\":\"[email protected]\" } } }",
"apiVersion: v1 data: .dockerconfigjson: ewogICAiYXV0aHMiOnsKICAgICAgIm0iOnsKICAgICAgIsKICAgICAgICAgImF1dGgiOiJiM0JsYj0iLAogICAgICAgICAiZW1haWwiOiJ5b3VAZXhhbXBsZS5jb20iCiAgICAgIH0KICAgfQp9Cg== kind: Secret metadata: creationTimestamp: \"2021-09-09T19:10:11Z\" name: pull-secret namespace: default resourceVersion: \"37676\" uid: e2851531-01bc-48ba-878c-de96cfe31020 type: Opaque",
"oc create secret generic <pull_secret_name> --from-file=.dockerconfigjson=<path/to/.docker/config.json> --type=kubernetes.io/dockerconfigjson",
"oc create secret generic <pull_secret_name> --from-file=<path/to/.config/containers/auth.json> --type=kubernetes.io/podmanconfigjson",
"oc create secret docker-registry <pull_secret_name> --docker-server=<registry_server> --docker-username=<user_name> --docker-password=<password> --docker-email=<email>",
"oc secrets link default <pull_secret_name> --for=pull",
"oc get serviceaccount default -o yaml",
"apiVersion: v1 imagePullSecrets: - name: default-dockercfg-123456 - name: <pull_secret_name> kind: ServiceAccount metadata: annotations: openshift.io/internal-registry-pull-secret-ref: <internal_registry_pull_secret> creationTimestamp: \"2025-03-03T20:07:52Z\" name: default namespace: default resourceVersion: \"13914\" uid: 9f62dd88-110d-4879-9e27-1ffe269poe3 secrets: - name: <pull_secret_name>",
"apiVersion: v1 kind: Pod metadata: name: <secure_pod_name> spec: containers: - name: <container_name> image: quay.io/my-private-image imagePullSecrets: - name: <pull_secret_name>",
"apiVersion: argoproj.io/v1alpha1 kind: Workflow metadata: generateName: <example_workflow> spec: entrypoint: <main_task> imagePullSecrets: - name: <pull_secret_name>",
"oc create secret docker-registry --docker-server=sso.redhat.com [email protected] --docker-password=******** --docker-email=unused redhat-connect-sso secret/redhat-connect-sso",
"oc create secret docker-registry --docker-server=privateregistry.example.com [email protected] --docker-password=******** --docker-email=unused private-registry secret/private-registry"
] | https://docs.redhat.com/en/documentation/red_hat_openshift_service_on_aws/4/html/images/managing-images |
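As a declarative complement to the oc tag flags covered in this chapter, the same import behavior can be expressed directly in an image stream definition. The sketch below is illustrative only and is not taken from the product documentation: the ruby stream name and the 2.0 and static-2.0 tags are reused from the examples above, while the source repository registry.example.com/ruby is a hypothetical placeholder.

apiVersion: image.openshift.io/v1
kind: ImageStream
metadata:
  name: ruby
spec:
  tags:
  - name: "2.0"
    from:
      kind: DockerImage
      name: registry.example.com/ruby:2.0   # hypothetical external source repository
    importPolicy:
      scheduled: true                       # periodic re-import, as with oc tag --scheduled=true
    referencePolicy:
      type: Local                           # serve through the integrated registry, as with --reference-policy=local
  - name: static-2.0
    from:
      kind: ImageStreamTag
      name: "2.0"                           # tracking tag within the same stream, as with oc tag --alias=true

Applying a definition like this with oc apply -f is broadly equivalent to running the individual oc tag commands shown earlier.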
Chapter 4. ConsoleLink [console.openshift.io/v1] | Chapter 4. ConsoleLink [console.openshift.io/v1] Description ConsoleLink is an extension for customizing OpenShift web console links. Compatibility level 2: Stable within a major release for a minimum of 9 months or 3 minor releases (whichever is longer). Type object Required spec 4.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata spec object ConsoleLinkSpec is the desired console link configuration. 4.1.1. .spec Description ConsoleLinkSpec is the desired console link configuration. Type object Required href location text Property Type Description applicationMenu object applicationMenu holds information about section and icon used for the link in the application menu, and it is applicable only when location is set to ApplicationMenu. href string href is the absolute secure URL for the link (must use https) location string location determines which location in the console the link will be appended to (ApplicationMenu, HelpMenu, UserMenu, NamespaceDashboard). namespaceDashboard object namespaceDashboard holds information about namespaces in which the dashboard link should appear, and it is applicable only when location is set to NamespaceDashboard. If not specified, the link will appear in all namespaces. text string text is the display text for the link 4.1.2. .spec.applicationMenu Description applicationMenu holds information about section and icon used for the link in the application menu, and it is applicable only when location is set to ApplicationMenu. Type object Required section Property Type Description imageURL string imageUrl is the URL for the icon used in front of the link in the application menu. The URL must be an HTTPS URL or a Data URI. The image should be square and will be shown at 24x24 pixels. section string section is the section of the application menu in which the link should appear. This can be any text that will appear as a subheading in the application menu dropdown. A new section will be created if the text does not match text of an existing section. 4.1.3. .spec.namespaceDashboard Description namespaceDashboard holds information about namespaces in which the dashboard link should appear, and it is applicable only when location is set to NamespaceDashboard. If not specified, the link will appear in all namespaces. Type object Property Type Description namespaceSelector object namespaceSelector is used to select the Namespaces that should contain dashboard link by label. If the namespace labels match, dashboard link will be shown for the namespaces. namespaces array (string) namespaces is an array of namespace names in which the dashboard link should appear. 4.1.4. 
.spec.namespaceDashboard.namespaceSelector Description namespaceSelector is used to select the Namespaces that should contain dashboard link by label. If the namespace labels match, dashboard link will be shown for the namespaces. Type object Property Type Description matchExpressions array matchExpressions is a list of label selector requirements. The requirements are ANDed. matchExpressions[] object A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. matchLabels object (string) matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed. 4.1.5. .spec.namespaceDashboard.namespaceSelector.matchExpressions Description matchExpressions is a list of label selector requirements. The requirements are ANDed. Type array 4.1.6. .spec.namespaceDashboard.namespaceSelector.matchExpressions[] Description A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. Type object Required key operator Property Type Description key string key is the label key that the selector applies to. operator string operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist. values array (string) values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch. 4.2. API endpoints The following API endpoints are available: /apis/console.openshift.io/v1/consolelinks DELETE : delete collection of ConsoleLink GET : list objects of kind ConsoleLink POST : create a ConsoleLink /apis/console.openshift.io/v1/consolelinks/{name} DELETE : delete a ConsoleLink GET : read the specified ConsoleLink PATCH : partially update the specified ConsoleLink PUT : replace the specified ConsoleLink /apis/console.openshift.io/v1/consolelinks/{name}/status GET : read status of the specified ConsoleLink PATCH : partially update status of the specified ConsoleLink PUT : replace status of the specified ConsoleLink 4.2.1. /apis/console.openshift.io/v1/consolelinks HTTP method DELETE Description delete collection of ConsoleLink Table 4.1. HTTP responses HTTP code Reponse body 200 - OK Status schema 401 - Unauthorized Empty HTTP method GET Description list objects of kind ConsoleLink Table 4.2. HTTP responses HTTP code Reponse body 200 - OK ConsoleLinkList schema 401 - Unauthorized Empty HTTP method POST Description create a ConsoleLink Table 4.3. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. 
- Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 4.4. Body parameters Parameter Type Description body ConsoleLink schema Table 4.5. HTTP responses HTTP code Reponse body 200 - OK ConsoleLink schema 201 - Created ConsoleLink schema 202 - Accepted ConsoleLink schema 401 - Unauthorized Empty 4.2.2. /apis/console.openshift.io/v1/consolelinks/{name} Table 4.6. Global path parameters Parameter Type Description name string name of the ConsoleLink HTTP method DELETE Description delete a ConsoleLink Table 4.7. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed Table 4.8. HTTP responses HTTP code Reponse body 200 - OK Status schema 202 - Accepted Status schema 401 - Unauthorized Empty HTTP method GET Description read the specified ConsoleLink Table 4.9. HTTP responses HTTP code Reponse body 200 - OK ConsoleLink schema 401 - Unauthorized Empty HTTP method PATCH Description partially update the specified ConsoleLink Table 4.10. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 4.11. HTTP responses HTTP code Reponse body 200 - OK ConsoleLink schema 401 - Unauthorized Empty HTTP method PUT Description replace the specified ConsoleLink Table 4.12. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. 
Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 4.13. Body parameters Parameter Type Description body ConsoleLink schema Table 4.14. HTTP responses HTTP code Reponse body 200 - OK ConsoleLink schema 201 - Created ConsoleLink schema 401 - Unauthorized Empty 4.2.3. /apis/console.openshift.io/v1/consolelinks/{name}/status Table 4.15. Global path parameters Parameter Type Description name string name of the ConsoleLink HTTP method GET Description read status of the specified ConsoleLink Table 4.16. HTTP responses HTTP code Reponse body 200 - OK ConsoleLink schema 401 - Unauthorized Empty HTTP method PATCH Description partially update status of the specified ConsoleLink Table 4.17. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 4.18. HTTP responses HTTP code Reponse body 200 - OK ConsoleLink schema 401 - Unauthorized Empty HTTP method PUT Description replace status of the specified ConsoleLink Table 4.19. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. 
Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 4.20. Body parameters Parameter Type Description body ConsoleLink schema Table 4.21. HTTP responses HTTP code Reponse body 200 - OK ConsoleLink schema 201 - Created ConsoleLink schema 401 - Unauthorized Empty | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.17/html/console_apis/consolelink-console-openshift-io-v1 |
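The field reference above can be easier to digest as a concrete object. The following ConsoleLink is a minimal sketch assembled only from the properties documented in this chapter; the metadata name, link text, URL, section name, and icon URL are placeholder values rather than anything defined by the API.

apiVersion: console.openshift.io/v1
kind: ConsoleLink
metadata:
  name: example-doc-link                    # placeholder name
spec:
  text: Example Documentation               # required: display text for the link
  href: https://example.com/docs            # required: must be an https URL
  location: ApplicationMenu                 # ApplicationMenu, HelpMenu, UserMenu, or NamespaceDashboard
  applicationMenu:                          # only applicable when location is ApplicationMenu
    section: Example Section                # required within applicationMenu
    imageURL: https://example.com/icon.png  # https URL or Data URI, rendered at 24x24 pixels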
Chapter 1. OpenShift image registry overview | Chapter 1. OpenShift image registry overview OpenShift Dedicated can build images from your source code, deploy them, and manage their lifecycle. It provides an internal, integrated container image registry that can be deployed in your OpenShift Dedicated environment to locally manage images. This overview contains reference information and links for registries commonly used with OpenShift Dedicated, with a focus on the OpenShift image registry. 1.1. Glossary of common terms for OpenShift image registry This glossary defines the common terms that are used in the registry content. container Lightweight and executable images that consist of software and all its dependencies. Because containers virtualize the operating system, you can run containers in a data center, a public or private cloud, or your local host. image repository An image repository is a collection of related container images and tags identifying images. mirror registry The mirror registry is a registry that holds the mirror of OpenShift Dedicated images. namespace A namespace isolates groups of resources within a single cluster. pod The pod is the smallest logical unit in Kubernetes. A pod is comprised of one or more containers to run in a worker node. private registry A registry is a server that implements the container image registry API. A private registry is a registry that requires authentication to allow users access its contents. public registry A registry is a server that implements the container image registry API. A public registry is a registry that serves its contently publicly. Quay.io A public Red Hat Quay Container Registry instance provided and maintained by Red Hat, which serves most of the container images and Operators to OpenShift Dedicated clusters. OpenShift image registry OpenShift image registry is the registry provided by OpenShift Dedicated to manage images. registry authentication To push and pull images to and from private image repositories, the registry needs to authenticate its users with credentials. route Exposes a service to allow for network access to pods from users and applications outside the OpenShift Dedicated instance. scale down To decrease the number of replicas. scale up To increase the number of replicas. service A service exposes a running application on a set of pods. 1.2. Integrated OpenShift image registry OpenShift Dedicated provides a built-in container image registry that runs as a standard workload on the cluster. The registry is configured and managed by an infrastructure Operator. It provides an out-of-the-box solution for users to manage the images that run their workloads, and runs on top of the existing cluster infrastructure. This registry can be scaled up or down like any other cluster workload and does not require specific infrastructure provisioning. In addition, it is integrated into the cluster user authentication and authorization system, which means that access to create and retrieve images is controlled by defining user permissions on the image resources. The registry is typically used as a publication target for images built on the cluster, as well as being a source of images for workloads running on the cluster. When a new image is pushed to the registry, the cluster is notified of the new image and other components can react to and consume the updated image. Image data is stored in two locations. The actual image data is stored in a configurable storage location, such as cloud storage or a filesystem volume. 
The image metadata, which is exposed by the standard cluster APIs and is used to perform access control, is stored as standard API resources, specifically images and image streams. Additional resources Image Registry Operator in OpenShift Dedicated 1.3. Third-party registries OpenShift Dedicated can create containers using images from third-party registries, but it is unlikely that these registries offer the same image notification support as the integrated OpenShift image registry. In this situation, OpenShift Dedicated will fetch tags from the remote registry upon image stream creation. To refresh the fetched tags, run oc import-image <stream> . When new images are detected, the previously described build and deployment reactions occur. 1.3.1. Authentication OpenShift Dedicated can communicate with registries to access private image repositories using credentials supplied by the user. This allows OpenShift Dedicated to push and pull images to and from private repositories. 1.3.1.1. Registry authentication with Podman Some container image registries require access authorization. Podman is an open source tool for managing containers and container images and interacting with image registries. You can use Podman to authenticate your credentials, pull the registry image, and store local images in a local file system. The following is a generic example of authenticating the registry with Podman. Procedure Use the Red Hat Ecosystem Catalog to search for specific container images from the Red Hat Repository and select the required image. Click Get this image to find the command for your container image. Log in by running the following command and entering your username and password to authenticate: USD podman login registry.redhat.io Username:<your_registry_account_username> Password:<your_registry_account_password> Download the image and save it locally by running the following command: USD podman pull registry.redhat.io/<repository_name> 1.4. Red Hat Quay registries If you need an enterprise-quality container image registry, Red Hat Quay is available both as a hosted service and as software you can install in your own data center or cloud environment. Advanced features in Red Hat Quay include geo-replication, image scanning, and the ability to roll back images. Visit the Quay.io site to set up your own hosted Quay registry account. After that, follow the Quay Tutorial to log in to the Quay registry and start managing your images. You can access your Red Hat Quay registry from OpenShift Dedicated like any remote container image registry. Additional resources Red Hat Quay product documentation 1.5. Authentication enabled Red Hat registry All container images available through the Container images section of the Red Hat Ecosystem Catalog are hosted on an image registry, registry.redhat.io . The registry, registry.redhat.io , requires authentication for access to images and hosted content on OpenShift Dedicated. Following the move to the new registry, the existing registry will be available for a period of time. Note OpenShift Dedicated pulls images from registry.redhat.io , so you must configure your cluster to use it. The new registry uses standard OAuth mechanisms for authentication, with the following methods: Authentication token. Tokens, which are generated by administrators, are service accounts that give systems the ability to authenticate against the container image registry. 
Service accounts are not affected by changes in user accounts, so the token authentication method is reliable and resilient. This is the only supported authentication option for production clusters. Web username and password. This is the standard set of credentials you use to log in to resources such as access.redhat.com . While it is possible to use this authentication method with OpenShift Dedicated, it is not supported for production deployments. Restrict this authentication method to stand-alone projects outside OpenShift Dedicated. You can use podman login with your credentials, either username and password or authentication token, to access content on the new registry. All image streams point to the new registry, which uses the installation pull secret to authenticate. You must place your credentials in either of the following places: openshift namespace . Your credentials must exist in the openshift namespace so that the image streams in the openshift namespace can import. Your host . Your credentials must exist on your host because Kubernetes uses the credentials from your host when it goes to pull images. Additional resources Registry service accounts | [
"podman login registry.redhat.io Username:<your_registry_account_username> Password:<your_registry_account_password>",
"podman pull registry.redhat.io/<repository_name>"
] | https://docs.redhat.com/en/documentation/openshift_dedicated/4/html/registry/registry-overview-1 |
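When the registry.redhat.io credentials described above need to live in the cluster rather than on a host, they are typically stored as a docker-config secret. The following is only a sketch: the secret name is arbitrary, the .dockerconfigjson value is a placeholder rather than real credentials, and it assumes the openshift namespace requirement quoted above for image stream imports.

apiVersion: v1
kind: Secret
metadata:
  name: redhat-registry-credentials    # arbitrary placeholder name
  namespace: openshift                 # credentials must exist here for openshift image streams to import
type: kubernetes.io/dockerconfigjson
data:
  .dockerconfigjson: <base64_encoded_registry_auth>   # placeholder, generated from a podman or docker auth file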
29.3. Protecting Keytabs | 29.3. Protecting Keytabs To protect Kerberos keytabs from other users with access to the server, restrict access to the keytab to only the keytab owner. It is recommended to protect the keytabs right after they are retrieved. For example, to protect the Apache keytab at /etc/httpd/conf/ipa.keytab : Set the owner of the file to apache . Set the permissions for the file to 0600 . This grants read and write permissions to the owner only. | [
"chown apache /etc/httpd/conf/ipa.keytab",
"chmod 0600 /etc/httpd/conf/ipa.keytab"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/linux_domain_identity_authentication_and_policy_guide/kerberos-protecting-keytabs |
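If the server is managed with a configuration management tool such as Ansible instead of ad hoc commands, the same ownership and permission settings can be enforced declaratively. This is only a sketch of that idea and assumes the ansible.builtin.file module; the path, owner, and mode are taken from the procedure above.

- name: Restrict the Apache keytab to its owner
  ansible.builtin.file:
    path: /etc/httpd/conf/ipa.keytab
    owner: apache
    mode: "0600"                       # read and write for the owner only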
Product Guide | Product Guide Red Hat OpenStack Platform 16.2 Overview of Red Hat OpenStack Platform OpenStack Documentation Team [email protected] | null | https://docs.redhat.com/en/documentation/red_hat_openstack_platform/16.2/html/product_guide/index |
Chapter 4. Host Grouping Concepts | Chapter 4. Host Grouping Concepts Apart from the physical topology of Capsule Servers, Red Hat Satellite provides several logical units for grouping hosts. Hosts that are members of those groups inherit the group configuration. For example, the simple parameters that define the provisioning environment can be applied at the following levels: The main logical groups in Red Hat Satellite are: Organizations - the highest level logical groups for hosts. Organizations provide a strong separation of content and configuration. Each organization requires a separate Red Hat Subscription Manifest, and can be thought of as a separate virtual instance of a Satellite Server. Avoid the use of organizations if a lower level host grouping is applicable. Locations - a grouping of hosts that should match the physical location. Locations can be used to map the network infrastructure to prevent incorrect host placement or configuration. For example, you cannot assign a subnet, domain, or compute resources directly to a Capsule Server, only to a location. Host groups - the main carriers of host definitions including assigned Puppet classes, Content View, or operating system. It is recommended to configure the majority of settings at the host group level instead of defining hosts directly. Configuring a new host then largely becomes a matter of adding it to the right host group. As host groups can be nested, you can create a structure that best fits your requirements (see Section 4.1, "Host Group Structures" ). Host collections - a host registered to Satellite Server for the purpose of subscription and content management is called content host . Content hosts can be organized into host collections, which enables performing bulk actions such as package management or errata installation. Locations and host groups can be nested, organizations and host collections are flat. 4.1. Host Group Structures The fact that host groups can be nested to inherit parameters from each other allows for designing host group hierarchies that fit particular workflows. A well planned host group structure can help to simplify the maintenance of host settings. This section outlines four approaches to organizing host groups. Figure 4.1. Host Group Structuring Examples Flat Structure The advantage of a flat structure is limited complexity, as inheritance is avoided. In a deployment with few host types, this scenario is the best option. However, without inheritance there is a risk of high duplication of settings between host groups. Life Cycle Environment Based Structure In this hierarchy, the first host group level is reserved for parameters specific to a life cycle environment. The second level contains operating system related definitions, and the third level contains application specific settings. Such structure is useful in scenarios where responsibilities are divided among life cycle environments (for example, a dedicated owner for the Development , QA , and Production life cycle stages). Application Based Structure This hierarchy is based on roles of hosts in a specific application. For example, it enables defining network settings for groups of back-end and front-end servers. The selected characteristics of hosts are segregated, which supports Puppet-focused management of complex configurations. However, the content views can only be assigned to host groups at the bottom level of this hierarchy. 
Location Based Structure In this hierarchy, the distribution of locations is aligned with the host group structure. In a scenario where the location (Capsule Server) topology determines many other attributes, this approach is the best option. On the other hand, this structure complicates sharing parameters across locations, therefore in complex environments with a large number of applications, the number of host group changes required for each configuration change increases significantly. | [
"Global > Organization > Location > Domain > Host group > Host"
] | https://docs.redhat.com/en/documentation/red_hat_satellite/6.11/html/satellite_overview_concepts_and_deployment_considerations/chap-architecture_guide-host_grouping_concepts |
Installing on any platform | Installing on any platform OpenShift Container Platform 4.12 Installing OpenShift Container Platform on any platform Red Hat OpenShift Documentation Team | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.12/html/installing_on_any_platform/index |
Chapter 2. Deploy OpenShift Data Foundation using local storage devices | Chapter 2. Deploy OpenShift Data Foundation using local storage devices You can deploy OpenShift Data Foundation on any platform including virtualized and cloud environments where OpenShift Container Platform is already installed. Also, it is possible to deploy only the Multicloud Object Gateway (MCG) component with OpenShift Data Foundation. For more information, see Deploy standalone Multicloud Object Gateway . Perform the following steps to deploy OpenShift Data Foundation: Install the Local Storage Operator . Install the Red Hat OpenShift Data Foundation Operator . Create an OpenShift Data Foundation cluster on any platform . 2.1. Installing Local Storage Operator Install the Local Storage Operator from the Operator Hub before creating Red Hat OpenShift Data Foundation clusters on local storage devices. Procedure Log in to the OpenShift Web Console. Click Operators OperatorHub . Type local storage in the Filter by keyword box to find the Local Storage Operator from the list of operators, and click on it. Set the following options on the Install Operator page: Update channel as stable . Installation mode as A specific namespace on the cluster . Installed Namespace as Operator recommended namespace openshift-local-storage . Update approval as Automatic . Click Install . Verification steps Verify that the Local Storage Operator shows a green tick indicating successful installation. 2.2. Installing Red Hat OpenShift Data Foundation Operator You can install Red Hat OpenShift Data Foundation Operator using the Red Hat OpenShift Container Platform Operator Hub. Prerequisites Access to an OpenShift Container Platform cluster using an account with cluster-admin and operator installation permissions. You must have at least three worker or infrastructure nodes in the Red Hat OpenShift Container Platform cluster. For additional resource requirements, see the Planning your deployment guide. Important When you need to override the cluster-wide default node selector for OpenShift Data Foundation, you can use the following command to specify a blank node selector for the openshift-storage namespace (create openshift-storage namespace in this case): Taint a node as infra to ensure only Red Hat OpenShift Data Foundation resources are scheduled on that node. This helps you save on subscription costs. For more information, see the How to use dedicated worker nodes for Red Hat OpenShift Data Foundation section in the Managing and Allocating Storage Resources guide. Procedure Log in to the OpenShift Web Console. Click Operators OperatorHub . Scroll or type OpenShift Data Foundation into the Filter by keyword box to find the OpenShift Data Foundation Operator. Click Install . Set the following options on the Install Operator page: Update Channel as stable-4.16 . Installation Mode as A specific namespace on the cluster . Installed Namespace as Operator recommended namespace openshift-storage . If Namespace openshift-storage does not exist, it is created during the operator installation. Select Approval Strategy as Automatic or Manual . If you select Automatic updates, then the Operator Lifecycle Manager (OLM) automatically upgrades the running instance of your Operator without any intervention. If you select Manual updates, then the OLM creates an update request. As a cluster administrator, you must then manually approve that update request to update the Operator to a newer version. 
Ensure that the Enable option is selected for the Console plugin . Click Install . Verification steps After the operator is successfully installed, a pop-up with a message, Web console update is available appears on the user interface. Click Refresh web console from this pop-up for the console changes to reflect. In the Web Console: Navigate to Installed Operators and verify that the OpenShift Data Foundation Operator shows a green tick indicating successful installation. Navigate to Storage and verify if the Data Foundation dashboard is available. 2.3. Creating OpenShift Data Foundation cluster on any platform Prerequisites Ensure that all the requirements in the Requirements for installing OpenShift Data Foundation using local storage devices section are met. If you want to use multus networking, you must create network attachment definitions (NADs) before deployment which is later attached to the cluster. For more information, see Multi network plug-in (Multus) support and Creating network attachment definitions . Procedure In the OpenShift Web Console, click Operators Installed Operators to view all the installed operators. Ensure that the Project selected is openshift-storage . Click on the OpenShift Data Foundation operator, and then click Create StorageSystem . In the Backing storage page, perform the following: Select Full Deployment for the Deployment type option. Select the Create a new StorageClass using the local storage devices option. Optional: Select Use external PostgreSQL checkbox to use an external PostgreSQL [Technology preview] . This provides high availability solution for Multicloud Object Gateway where the PostgreSQL pod is a single point of failure. Provide the following connection details: Username Password Server name and Port Database name Select Enable TLS/SSL checkbox to enable encryption for the Postgres server. Click . Important You are prompted to install the Local Storage Operator if it is not already installed. Click Install , and follow the procedure as described in Installing Local Storage Operator . In the Create local volume set page, provide the following information: Enter a name for the LocalVolumeSet and the StorageClass . The local volume set name appears as the default value for the storage class name. You can change the name. Select one of the following: Disks on all nodes Uses the available disks that match the selected filters on all the nodes. Disks on selected nodes Uses the available disks that match the selected filters only on the selected nodes. Important The flexible scaling feature is enabled only when the storage cluster that you created with three or more nodes are spread across fewer than the minimum requirement of three availability zones. For information about flexible scaling, see knowledgebase article on Scaling OpenShift Data Foundation cluster using YAML when flexible scaling is enabled . Flexible scaling features get enabled at the time of deployment and can not be enabled or disabled later on. If the nodes selected do not match the OpenShift Data Foundation cluster requirement of an aggregated 30 CPUs and 72 GiB of RAM, a minimal cluster is deployed if at least 24 CPUs and 72 GiB of RAM is available. For minimum starting node requirements, see the Resource requirements section in the Planning guide. From the available list of Disk Type , select SSD/NVMe . Expand the Advanced section and set the following options: Volume Mode Block is selected as the default value. 
Device Type Select one or more device types from the dropdown list. Disk Size Set a minimum size of 100GB for the device and maximum available size of the device that needs to be included. Maximum Disks Limit This indicates the maximum number of Persistent Volumes (PVs) that you can create on a node. If this field is left empty, then PVs are created for all the available disks on the matching nodes. Click . A pop-up to confirm the creation of LocalVolumeSet is displayed. Click Yes to continue. In the Capacity and nodes page, configure the following: Available raw capacity is populated with the capacity value based on all the attached disks associated with the storage class. This takes some time to show up. The Selected nodes list shows the nodes based on the storage class. In the Configure performance section, select one of the following performance profiles: Lean Use this in a resource constrained environment with minimum resources that are lower than the recommended. This profile minimizes resource consumption by allocating fewer CPUs and less memory. Balanced (default) Use this when recommended resources are available. This profile provides a balance between resource consumption and performance for diverse workloads. Performance Use this in an environment with sufficient resources to get the best performance. This profile is tailored for high performance by allocating ample memory and CPUs to ensure optimal execution of demanding workloads. Note You have the option to configure the performance profile even after the deployment using the Configure performance option from the options menu of the StorageSystems tab. Important Before selecting a resource profile, make sure to check the current availability of resources within the cluster. Opting for a higher resource profile in a cluster with insufficient resources might lead to installation failures. For more information about resource requirements, see Resource requirement for performance profiles . Optional: Select the Taint nodes checkbox to dedicate the selected nodes for OpenShift Data Foundation. Click . Optional: In the Security and network page, configure the following based on your requirement: To enable encryption, select Enable data encryption for block and file storage . Select one or both of the following Encryption level : Cluster-wide encryption Encrypts the entire cluster (block and file). StorageClass encryption Creates encrypted persistent volume (block only) using encryption enabled storage class. Optional: Select the Connect to an external key management service checkbox. This is optional for cluster-wide encryption. From the Key Management Service Provider drop-down list, either select Vault or Thales CipherTrust Manager (using KMIP) . If you selected Vault , go to the step. If you selected Thales CipherTrust Manager (using KMIP) , go to step iii. Select an Authentication Method . Using Token authentication method Enter a unique Connection Name , host Address of the Vault server ('https://<hostname or ip>'), Port number and Token . Expand Advanced Settings to enter additional settings and certificate details based on your Vault configuration: Enter the Key Value secret path in Backend Path that is dedicated and unique to OpenShift Data Foundation. Optional: Enter TLS Server Name and Vault Enterprise Namespace . Upload the respective PEM encoded certificate file to provide the CA Certificate , Client Certificate and Client Private Key . Click Save and skip to step iv. 
Using Kubernetes authentication method Enter a unique Vault Connection Name , host Address of the Vault server ('https://<hostname or ip>'), Port number and Role name. Expand Advanced Settings to enter additional settings and certificate details based on your Vault configuration: Enter the Key Value secret path in Backend Path that is dedicated and unique to OpenShift Data Foundation. Optional: Enter TLS Server Name and Authentication Path if applicable. Upload the respective PEM encoded certificate file to provide the CA Certificate , Client Certificate and Client Private Key . Click Save and skip to step iv. To use Thales CipherTrust Manager (using KMIP) as the KMS provider, follow the steps below: Enter a unique Connection Name for the Key Management service within the project. In the Address and Port sections, enter the IP of Thales CipherTrust Manager and the port where the KMIP interface is enabled. For example: Address : 123.34.3.2 Port : 5696 Upload the Client Certificate , CA certificate , and Client Private Key . If StorageClass encryption is enabled, enter the Unique Identifier to be used for encryption and decryption generated above. The TLS Server field is optional and used when there is no DNS entry for the KMIP endpoint. For example, kmip_all_<port>.ciphertrustmanager.local . Select a Network . Select one of the following: Default (SDN) If you are using a single network. Custom (Multus) If you are using multiple network interfaces. Select a Public Network Interface from the dropdown. Select a Cluster Network Interface from the dropdown. Note If you are using only one additional network interface, select the single NetworkAttachementDefinition , that is, ocs-public-cluster for the Public Network Interface and leave the Cluster Network Interface blank. Click . In the Review and create page, review the configuration details. To modify any configuration settings, click Back to go back to the configuration page. Click Create StorageSystem . Note When your deployment has five or more nodes, racks, or rooms, and when there are five or more number of failure domains present in the deployment, you can configure Ceph monitor counts based on the number of racks or zones. An alert is displayed in the notification panel or Alert Center of the OpenShift Web Console to indicate the option to increase the number of Ceph monitor counts. You can use the Configure option in the alert to configure the Ceph monitor counts. For more information, see Resolving low Ceph monitor count alert . Verification steps To verify the final Status of the installed storage cluster: In the OpenShift Web Console, navigate to Installed Operators OpenShift Data Foundation Storage System Click ocs-storagecluster-storagesystem Resources . Verify that the Status of the StorageCluster is Ready and has a green tick mark to it. To verify if the flexible scaling is enabled on your storage cluster, perform the following steps (for arbiter mode, flexible scaling is disabled): In the OpenShift Web Console, navigate to Installed Operators OpenShift Data Foundation Storage System Click ocs-storagecluster-storagesystem Resources ocs-storagecluster . In the YAML tab, search for the keys flexibleScaling in the spec section and failureDomain in the status section. If flexible scaling is true and failureDomain is set to host, flexible scaling feature is enabled: To verify that all the components for OpenShift Data Foundation are successfully installed, see Verifying your OpenShift Data Foundation installation . 
To verify the multi networking (Multus), see Verifying the Multus networking . Additional resources To expand the capacity of the initial cluster, see the Scaling Storage guide and follow the instructions in the "Scaling storage of bare metal OpenShift Data Foundation cluster" section. 2.4. Verifying OpenShift Data Foundation deployment To verify that OpenShift Data Foundation is deployed correctly: Verify the state of the pods . Verify that the OpenShift Data Foundation cluster is healthy . Verify that the Multicloud Object Gateway is healthy . Verify that the OpenShift Data Foundation specific storage classes exist . Verify the Multus networking . 2.4.1. Verifying the state of the pods Procedure Click Workloads Pods from the OpenShift Web Console. Select openshift-storage from the Project drop-down list. Note If the Show default projects option is disabled, use the toggle button to list all the default projects. For more information on the expected number of pods for each component and how it varies depending on the number of nodes, see the following table: Set filter for Running and Completed pods to verify that the following pods are in Running and Completed state: Component Corresponding pods OpenShift Data Foundation Operator ocs-operator-* (1 pod on any storage node) ocs-metrics-exporter-* (1 pod on any storage node) odf-operator-controller-manager-* (1 pod on any storage node) odf-console-* (1 pod on any storage node) csi-addons-controller-manager-* (1 pod on any storage node) Rook-ceph Operator rook-ceph-operator-* (1 pod on any storage node) Multicloud Object Gateway noobaa-operator-* (1 pod on any storage node) noobaa-core-* (1 pod on any storage node) noobaa-db-pg-* (1 pod on any storage node) noobaa-endpoint-* (1 pod on any storage node) MON rook-ceph-mon-* (3 pods distributed across storage nodes) MGR rook-ceph-mgr-* (1 pod on any storage node) MDS rook-ceph-mds-ocs-storagecluster-cephfilesystem-* (2 pods distributed across storage nodes) RGW rook-ceph-rgw-ocs-storagecluster-cephobjectstore-* (1 pod on any storage node) CSI cephfs csi-cephfsplugin-* (1 pod on each storage node) csi-cephfsplugin-provisioner-* (2 pods distributed across storage nodes) rbd csi-rbdplugin-* (1 pod on each storage node) csi-rbdplugin-provisioner-* (2 pods distributed across storage nodes) rook-ceph-crashcollector rook-ceph-crashcollector-* (1 pod on each storage node) OSD rook-ceph-osd-* (1 pod for each device) 2.4.2. Verifying the OpenShift Data Foundation cluster is healthy Procedure In the OpenShift Web Console, click Storage Data Foundation . In the Status card of the Overview tab, click Storage System and then click the storage system link from the pop up that appears. In the Status card of the Block and File tab, verify that the Storage Cluster has a green tick. In the Details card, verify that the cluster information is displayed. For more information on the health of the OpenShift Data Foundation cluster using the Block and File dashboard, see Monitoring OpenShift Data Foundation . 2.4.3. Verifying the Multicloud Object Gateway is healthy Procedure In the OpenShift Web Console, click Storage Data Foundation . In the Status card of the Overview tab, click Storage System and then click the storage system link from the pop up that appears. In the Status card of the Object tab, verify that both Object Service and Data Resiliency have a green tick. In the Details card, verify that the MCG information is displayed. 
For more information on the health of the OpenShift Data Foundation cluster using the object service dashboard, see Monitoring OpenShift Data Foundation . Important The Multicloud Object Gateway only has a single copy of the database (NooBaa DB). This means that if the NooBaa DB PVC gets corrupted and cannot be recovered, it can result in total data loss of application data residing on the Multicloud Object Gateway. Because of this, Red Hat recommends taking a backup of the NooBaa DB PVC regularly. If NooBaa DB fails and cannot be recovered, then you can revert to the latest backed-up version. For instructions on backing up your NooBaa DB, follow the steps in this knowledgebase article . 2.4.4. Verifying that the specific storage classes exist Procedure Click Storage Storage Classes from the left pane of the OpenShift Web Console. Verify that the following storage classes are created with the OpenShift Data Foundation cluster creation: ocs-storagecluster-ceph-rbd ocs-storagecluster-cephfs openshift-storage.noobaa.io ocs-storagecluster-ceph-rgw 2.4.5. Verifying the Multus networking To determine if Multus is working in your cluster, verify the Multus networking. Procedure Based on your Network configuration choices, the OpenShift Data Foundation operator does one of the following: If only a single NetworkAttachmentDefinition (for example, ocs-public-cluster ) was selected for the Public Network Interface, then the traffic between the application pods and the OpenShift Data Foundation cluster happens on this network. Additionally, the cluster is self-configured to also use this network for the replication and rebalancing traffic between OSDs. If both NetworkAttachmentDefinitions (for example, ocs-public and ocs-cluster ) were selected for the Public Network Interface and the Cluster Network Interface respectively during the Storage Cluster installation, then client storage traffic is on the public network, while the replication and rebalancing traffic between OSDs is on the cluster network. To verify that the network configuration is correct, complete the following: In the OpenShift console, navigate to Installed Operators OpenShift Data Foundation Storage System ocs-storagecluster-storagesystem Resources ocs-storagecluster . In the YAML tab, search for network in the spec section and ensure the configuration is correct for your network interface choices. This example is for separating the client storage traffic from the storage replication traffic. Sample output: To verify that the network configuration is correct using the command line interface, run the following commands: Sample output: Confirm that the OSD pods are using the correct network In the openshift-storage namespace, use one of the OSD pods to verify that the pod has connectivity to the correct networks. This example is for separating the client storage traffic from the storage replication traffic. Note Only the OSD pods will connect to both Multus public and cluster networks if both are created. All other OCS pods will connect to the Multus public network. Sample output: To confirm that the OSD pods are using the correct network using the command line interface, run the following command (requires the jq utility): Sample output:
"oc annotate namespace openshift-storage openshift.io/node-selector=",
"spec: flexibleScaling: true [...] status: failureDomain: host",
"[..] spec: [..] network: ipFamily: IPv4 provider: multus selectors: cluster: openshift-storage/ocs-cluster public: openshift-storage/ocs-public [..]",
"oc get storagecluster ocs-storagecluster -n openshift-storage -o=jsonpath='{.spec.network}{\"\\n\"}'",
"{\"ipFamily\":\"IPv4\",\"provider\":\"multus\",\"selectors\":{\"cluster\":\"openshift-storage/ocs-cluster\",\"public\":\"openshift-storage/ocs-public\"}}",
"oc get -n openshift-storage USD(oc get pods -n openshift-storage -o name -l app=rook-ceph-osd | grep 'osd-0') -o=jsonpath='{.metadata.annotations.k8s\\.v1\\.cni\\.cncf\\.io/network-status}{\"\\n\"}'",
"[{ \"name\": \"openshift-sdn\", \"interface\": \"eth0\", \"ips\": [ \"10.129.2.30\" ], \"default\": true, \"dns\": {} },{ \"name\": \"openshift-storage/ocs-cluster\", \"interface\": \"net1\", \"ips\": [ \"192.168.2.1\" ], \"mac\": \"e2:04:c6:81:52:f1\", \"dns\": {} },{ \"name\": \"openshift-storage/ocs-public\", \"interface\": \"net2\", \"ips\": [ \"192.168.1.1\" ], \"mac\": \"ee:a0:b6:a4:07:94\", \"dns\": {} }]",
"oc get -n openshift-storage USD(oc get pods -n openshift-storage -o name -l app=rook-ceph-osd | grep 'osd-0') -o=jsonpath='{.metadata.annotations.k8s\\.v1\\.cni\\.cncf\\.io/network-status}{\"\\n\"}' | jq -r '.[].name'",
"openshift-sdn openshift-storage/ocs-cluster openshift-storage/ocs-public"
] | https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.16/html/deploying_openshift_data_foundation_on_any_platform/deploy-using-local-storage-devices-bm |
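As a command-line complement to the console checks in the verification steps above, the following is a minimal sketch for confirming that the four expected storage classes exist. It assumes the default class names created for an ocs-storagecluster deployment; adjust the list if your cluster uses different names.

for sc in ocs-storagecluster-ceph-rbd ocs-storagecluster-cephfs openshift-storage.noobaa.io ocs-storagecluster-ceph-rgw; do
  # Print OK if the storage class exists, MISSING otherwise.
  oc get storageclass "$sc" >/dev/null 2>&1 && echo "OK:      $sc" || echo "MISSING: $sc"
done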
Chapter 3. Deployment of the Ceph File System As a storage administrator, you can deploy Ceph File Systems (CephFS) in a storage environment and have clients mount those Ceph File Systems to meet the storage needs. The deployment workflow consists of three steps: Create a Ceph File System on a Ceph Monitor node. Create a Ceph client user with the appropriate capabilities, and make the client key available on the node where the Ceph File System will be mounted. Mount CephFS on a dedicated node, using either a kernel client or a File System in User Space (FUSE) client. 3.1. Prerequisites A running and healthy Red Hat Ceph Storage cluster. Installation and configuration of the Ceph Metadata Server daemon ( ceph-mds ). 3.2. Layout, quota, snapshot, and network restrictions These user capabilities can help you restrict access to a Ceph File System (CephFS) based on the needed requirements. Important All user capability flags, except rw , must be specified in alphabetical order. Layouts and Quotas When using layouts or quotas, clients require the p flag, in addition to rw capabilities. Setting the p flag restricts all the attributes being set by special extended attributes, those with a ceph. prefix. Also, this restricts other means of setting these fields, such as openc operations with layouts. Example In this example, client.0 can modify layouts and quotas on the file system cephfs_a , but client.1 cannot. Snapshots When creating or deleting snapshots, clients require the s flag, in addition to rw capabilities. When the capability string also contains the p flag, the s flag must appear after it. Example In this example, client.0 can create or delete snapshots in the temp directory of file system cephfs_a . Network Restricting clients connecting from a particular network. Example The optional network and prefix length is in CIDR notation, for example, 10.3.0.0/16 . Additional Resources See the Creating client users for a Ceph File System section in the Red Hat Ceph Storage File System Guide for details on setting the Ceph user capabilities. 3.3. Creating a Ceph File System You can create a Ceph File System (CephFS) on a Ceph Monitor node. Important By default, you can create only one CephFS per Ceph Storage cluster. Prerequisites A running and healthy Red Hat Ceph Storage cluster. Installation and configuration of the Ceph Metadata Server daemon ( ceph-mds ). Root-level access to a Ceph monitor node. Procedure Create two pools, one for storing data and one for storing metadata: Syntax Example Typically, the metadata pool can start with a conservative number of Placement Groups (PGs) as it will generally have far fewer objects than the data pool. It is possible to increase the number of PGs if needed. Recommended metadata pool sizes range from 64 PGs to 512 PGs. Size the data pool in proportion to the number and sizes of files you expect in the file system. Important For the metadata pool, consider using: A higher replication level because any data loss to this pool can make the whole file system inaccessible. Storage with lower latency such as Solid-State Drive (SSD) disks because this directly affects the observed latency of file system operations on clients. Create the CephFS: Syntax Example Verify that one or more MDSs enter the active state, based on your configuration. Syntax Example Additional Resources See the Enabling the Red Hat Ceph Storage Repositories section in Red Hat Ceph Storage Installation Guide for more details. 
See the Pools chapter in the Red Hat Ceph Storage Storage Strategies Guide for more details. See the The Ceph File System section in the Red Hat Ceph Storage File System Guide for more details on the Ceph File System limitations. See the Red Hat Ceph Storage Installation Guide for details on installing Red Hat Ceph Storage. See the Installing Metadata Servers in the Red Hat Ceph Storage Installation Guide for details. 3.4. Creating Ceph File Systems with erasure coding (Technology Preview) By default, Ceph uses replicated pools for data pools. You can also add an additional erasure-coded data pool, if needed. Ceph File Systems (CephFS) backed by erasure-coded pools use less overall storage compared to Ceph File Systems backed by replicated pools. While erasure-coded pools use less overall storage, they also use more memory and processor resources than replicated pools. Important The Ceph File System using erasure-coded pools is a Technology Preview feature. Technology Preview features are not supported with Red Hat production service level agreements (SLAs), might not be functionally complete, and Red Hat does not recommend to use them for production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. See the support scope for Red Hat Technology Preview features for more details. Important For production environments, Red Hat recommends using a replicated pool as the default data pool. Prerequisites A running Red Hat Ceph Storage cluster. A running CephFS environment. Pools using BlueStore OSDs. User-level access to a Ceph Monitor node. Procedure Create a replicated metadata pool for CephFS metadata: Syntax Example This example creates a pool named cephfs-metadata with 64 placement groups. Create a default replicated data pool for CephFS: Syntax Example This example creates a replicated pool named cephfs-data with 64 placement groups. Create an erasure-coded data pool for CephFS: Syntax Example This example creates an erasure-coded pool named cephfs-data-ec with 64 placement groups. Enable overwrites on the erasure-coded pool: Syntax Example This example enables overwrites on an erasure-coded pool named cephfs-data-ec . Add the erasure-coded data pool to the CephFS Metadata Server (MDS): Syntax Example Optionally, verify the data pool was added: Create the CephFS: Syntax Example Important Using an erasure-coded pool for the default data pool is not recommended. Create the CephFS using erasure coding: Syntax Example Verify that one or more Ceph FS Metadata Servers (MDS) enters the active state: Syntax Example To add a new erasure-coded data pool to an existing file system. Create an erasure-coded data pool for CephFS: Syntax Example Enable overwrites on the erasure-coded pool: Syntax Example Add the erasure-coded data pool to the CephFS Metadata Server (MDS): Syntax Example Create the CephFS using erasure coding: Syntax Example Additional Resources See the The Ceph File System Metadata Server chapter in the Red Hat Ceph Storage File System Guide for more information on the CephFS MDS. See the Installing Metadata Servers section of the Red Hat Ceph Storage Installation Guide for details on installing CephFS. See the Erasure-Coded Pools section in the Red Hat Ceph Storage Storage Strategies Guide for more information. See the Erasure Coding with Overwrites section in the Red Hat Ceph Storage Storage Strategies Guide for more information. 3.5. 
Creating client users for a Ceph File System Red Hat Ceph Storage uses cephx for authentication, which is enabled by default. To use cephx with the Ceph File System, create a user with the correct authorization capabilities on a Ceph Monitor node and make its key available on the node where the Ceph File System will be mounted. Prerequisites A running Red Hat Ceph Storage cluster. Installation and configuration of the Ceph Metadata Server daemon (ceph-mds). Root-level access to a Ceph monitor node. Root-level access to a Ceph client node. Procedure On a Ceph Monitor node, create a client user: Syntax To restrict the client to only writing in the temp directory of filesystem cephfs_a : Example To completely restrict the client to the temp directory, remove the root ( / ) directory: Example Note Supplying all or asterisk as the file system name grants access to every file system. Typically, it is necessary to quote the asterisk to protect it from the shell. Verify the created key: Syntax Example Copy the keyring to the client. On the Ceph Monitor node, export the keyring to a file: Syntax Example Copy the client keyring from the Ceph Monitor node to the /etc/ceph/ directory on the client node: Syntax Replace_MONITOR_NODE_NAME_with the Ceph Monitor node name or IP. Example Set the appropriate permissions for the keyring file: Syntax Example Additional Resources See the User Management chapter in the Red Hat Ceph Storage Administration Guide for more details. 3.6. Mounting the Ceph File System as a kernel client You can mount the Ceph File System (CephFS) as a kernel client, either manually or automatically on system boot. Important Clients running on other Linux distributions, aside from Red Hat Enterprise Linux, are permitted but not supported. If issues are found in the CephFS Metadata Server or other parts of the storage cluster when using these clients, Red Hat will address them. If the cause is found to be on the client side, then the issue will have to be addressed by the kernel vendor of the Linux distribution. Prerequisites Root-level access to a Linux-based client node. User-level access to a Ceph Monitor node. An existing Ceph File System. Procedure Configure the client node to use the Ceph storage cluster. Enable the Red Hat Ceph Storage 4 Tools repository: Red Hat Enterprise Linux 7 Red Hat Enterprise Linux 8 Install the ceph-common package: Red Hat Enterprise Linux 7 Red Hat Enterprise Linux 8 Copy the Ceph client keyring from the Ceph Monitor node to the client node: Syntax Replace MONITOR_NODE_NAME with the Ceph Monitor host name or IP address. Example Copy the Ceph configuration file from a Ceph Monitor node to the client node: Syntax Replace MONITOR_NODE_NAME with the Ceph Monitor host name or IP address. Example Set the appropriate permissions for the configuration file: Create a mount directory on the client node: Syntax Example Mount the Ceph File System. To specify multiple Ceph Monitor addresses, separate them with commas in the mount command, specify the mount point, and set the client name: Note As of Red Hat Ceph Storage 4.1, mount.ceph can read keyring files directly. As such, a secret file is no longer necessary. Just specify the client ID with name= CLIENT_ID , and mount.ceph will find the right keyring file. Syntax Example Note You can configure a DNS server so that a single host name resolves to multiple IP addresses. Then you can use that single host name with the mount command, instead of supplying a comma-separated list. 
Note You can also replace the Monitor host names with the string :/ and mount.ceph will read the Ceph configuration file to determine which Monitors to connect to. Verify that the file system is successfully mounted: Syntax Example Additional Resources See the mount(8) manual page. See the Ceph user management chapter in the Red Hat Ceph Storage Administration Guide for more details on creating a Ceph user. See the Creating a Ceph File System section of the Red Hat Ceph Storage File System Guide for details. 3.7. Mounting the Ceph File System as a FUSE client You can mount the Ceph File System (CephFS) as a File System in User Space (FUSE) client, either manually or automatically on system boot. Prerequisites Root-level access to a Linux-based client node. User-level access to a Ceph Monitor node. An existing Ceph File System. Procedure Configure the client node to use the Ceph storage cluster. Enable the Red Hat Ceph Storage 4 Tools repository: Red Hat Enterprise Linux 7 Red Hat Enterprise Linux 8 Install the ceph-fuse package: Red Hat Enterprise Linux 7 Red Hat Enterprise Linux 8 Copy the Ceph client keyring from the Ceph Monitor node to the client node: Syntax Replace MONITOR_NODE_NAME with the Ceph Monitor host name or IP address. Example Copy the Ceph configuration file from a Ceph Monitor node to the client node: Syntax Replace MONITOR_NODE_NAME with the Ceph Monitor host name or IP address. Example Set the appropriate permissions for the configuration file: Choose either automatically or manually mounting. Manually Mounting On the client node, create a directory for the mount point: Syntax Example Note If you used the path option with MDS capabilities, then the mount point must be within what is specified by path . Use the ceph-fuse utility to mount the Ceph File System. Syntax Example Note If you do not use the default name and location of the user keyring, that is /etc/ceph/ceph.client. CLIENT_ID .keyring , then use the --keyring option to specify the path to the user keyring, for example: Example Note Use the -r option to instruct the client to treat that path as its root: Syntax Example Verify that the file system is successfully mounted: Syntax Example Automatically Mounting On the client node, create a directory for the mount point: Syntax Example Note If you used the path option with MDS capabilities, then the mount point must be within what is specified by path . Edit the /etc/fstab file as follows: Syntax The first column sets the Ceph Monitor host names and the port number. The second column sets the mount point The third column sets the file system type, in this case, fuse.ceph , for CephFS. The fourth column sets the various options, such as, the user name and the secret file using the name and secretfile options, respectively. You can also set specific volumes, sub-volume groups, and sub-volumes using the ceph.client_mountpoint option. Set the _netdev option to ensure that the file system is mounted after the networking subsystem starts to prevent hanging and networking issues. If you do not need access time information, then setting the noatime option can increase performance. Set the fifth and sixth columns to zero. Example The Ceph File System will be mounted on the system boot. Additional Resources The ceph-fuse(8) manual page. See the Ceph user management chapter in the Red Hat Ceph Storage Administration Guide for more details on creating a Ceph user. See the Creating a Ceph File System section of the Red Hat Ceph Storage File System Guide for details. 3.8. 
Additional Resources See the Section 3.3, "Creating a Ceph File System" for details. See the Section 3.5, "Creating client users for a Ceph File System" for details. See the Section 3.6, "Mounting the Ceph File System as a kernel client" for details. See the Section 3.7, "Mounting the Ceph File System as a FUSE client" for details. See the Red Hat Ceph Storage Installation Guide for details on installing the CephFS Metadata Server. See the Chapter 2, The Ceph File System Metadata Server for details on configuring the CephFS Metadata Server daemon. | [
"client.0 key: AQAz7EVWygILFRAAdIcuJ10opU/JKyfFmxhuaw== caps: [mds] allow rwp caps: [mon] allow r caps: [osd] allow rw tag cephfs data=cephfs_a client.1 key: AQAz7EVWygILFRAAdIcuJ11opU/JKyfFmxhuaw== caps: [mds] allow rw caps: [mon] allow r caps: [osd] allow rw tag cephfs data=cephfs_a",
"client.0 key: AQAz7EVWygILFRAAdIcuJ10opU/JKyfFmxhuaw== caps: [mds] allow rw, allow rws path=/temp caps: [mon] allow r caps: [osd] allow rw tag cephfs data=cephfs_a",
"client.0 key: AQAz7EVWygILFRAAdIcuJ10opU/JKyfFmxhuaw== caps: [mds] allow r network 10.0.0.0/8, allow rw path=/bar network 10.0.0.0/8 caps: [mon] allow r network 10.0.0.0/8 caps: [osd] allow rw tag cephfs data=cephfs_a network 10.0.0.0/8",
"ceph osd pool create NAME _PG_NUM",
"ceph osd pool create cephfs_data 64 ceph osd pool create cephfs_metadata 64",
"ceph fs new NAME METADATA_POOL DATA_POOL",
"ceph fs new cephfs cephfs_metadata cephfs_data",
"ceph fs status NAME",
"ceph fs status cephfs cephfs - 0 clients ====== +------+--------+-------+---------------+-------+-------+ | Rank | State | MDS | Activity | dns | inos | +------+--------+-------+---------------+-------+-------+ | 0 | active | node1 | Reqs: 0 /s | 10 | 12 | +------+--------+-------+---------------+-------+-------+ +-----------------+----------+-------+-------+ | Pool | type | used | avail | +-----------------+----------+-------+-------+ | cephfs_metadata | metadata | 4638 | 26.7G | | cephfs_data | data | 0 | 26.7G | +-----------------+----------+-------+-------+ +-------------+ | Standby MDS | +-------------+ | node3 | | node2 | +-------------+----",
"ceph osd pool create METADATA_POOL PG_NUM",
"ceph osd pool create cephfs-metadata 64",
"ceph osd pool create DATA_POOL PG_NUM",
"ceph osd pool create cephfs-data 64",
"ceph osd pool create DATA_POOL PG_NUM erasure",
"ceph osd pool create cephfs-data-ec 64 erasure",
"ceph osd pool set DATA_POOL allow_ec_overwrites true",
"ceph osd pool set cephfs-data-ec allow_ec_overwrites true",
"ceph fs add_data_pool cephfs-ec DATA_POOL",
"ceph fs add_data_pool cephfs-ec cephfs-data-ec",
"ceph fs ls",
"ceph fs new cephfs METADATA_POOL DATA_POOL",
"ceph fs new cephfs cephfs-metadata cephfs-data",
"ceph fs new cephfs-ec METADATA_POOL DATA_POOL",
"ceph fs new cephfs-ec cephfs-metadata cephfs-data-ec",
"ceph fs status FS_EC",
"ceph fs status cephfs-ec cephfs-ec - 0 clients ====== +------+--------+-------+---------------+-------+-------+ | Rank | State | MDS | Activity | dns | inos | +------+--------+-------+---------------+-------+-------+ | 0 | active | node1 | Reqs: 0 /s | 10 | 12 | +------+--------+-------+---------------+-------+-------+ +-----------------+----------+-------+-------+ | Pool | type | used | avail | +-----------------+----------+-------+-------+ | cephfs-metadata | metadata | 4638 | 26.7G | | cephfs-data | data | 0 | 26.7G | | cephfs-data-ec | data | 0 | 26.7G | +-----------------+----------+-------+-------+ +-------------+ | Standby MDS | +-------------+ | node3 | | node2 | +-------------+",
"ceph osd pool create DATA_POOL PG_NUM erasure",
"ceph osd pool create cephfs-data-ec1 64 erasure",
"ceph osd pool set DATA_POOL allow_ec_overwrites true",
"ceph osd pool set cephfs-data-ec1 allow_ec_overwrites true",
"ceph fs add_data_pool cephfs-ec DATA_POOL",
"ceph fs add_data_pool cephfs-ec cephfs-data-ec1",
"ceph fs new cephfs-ec METADATA_POOL DATA_POOL",
"ceph fs new cephfs-ec cephfs-metadata cephfs-data-ec1",
"ceph fs authorize FILE_SYSTEM_NAME client. CLIENT_NAME / DIRECTORY CAPABILITY [/ DIRECTORY CAPABILITY ]",
"ceph fs authorize cephfs_a client.1 / r /temp rw client.1 key: AQBSdFhcGZFUDRAAcKhG9Cl2HPiDMMRv4DC43A== caps: [mds] allow r, allow rw path=/temp caps: [mon] allow r caps: [osd] allow rw tag cephfs data=cephfs_a",
"ceph fs authorize cephfs_a client.1 /temp rw",
"ceph auth get client. ID",
"ceph auth get client.1",
"ceph auth get client. ID -o ceph.client. ID .keyring",
"ceph auth get client.1 -o ceph.client.1.keyring exported keyring for client.1",
"scp root@ MONITOR_NODE_NAME :/root/ceph.client.1.keyring /etc/ceph/",
"scp root@mon:/root/ceph.client.1.keyring /etc/ceph/ceph.client.1.keyring",
"chmod 644 KEYRING",
"chmod 644 /etc/ceph/ceph.client.1.keyring",
"subscription-manager repos --enable=rhel-7-server-rhceph-4-tools-rpms",
"subscription-manager repos --enable=rhceph-4-tools-for-rhel-8-x86_64-rpms",
"yum install ceph-common",
"dnf install ceph-common",
"scp root@ MONITOR_NODE_NAME :/etc/ceph/ KEYRING_FILE /etc/ceph/",
"scp [email protected]:/etc/ceph/ceph.client.1.keyring /etc/ceph/",
"scp root@ MONITOR_NODE_NAME :/etc/ceph/ceph.conf /etc/ceph/ceph.conf",
"scp [email protected]:/etc/ceph/ceph.conf /etc/ceph/ceph.conf",
"chmod 644 /etc/ceph/ceph.conf",
"mkdir -p MOUNT_POINT",
"mkdir -p /mnt/cephfs",
"mount -t ceph MONITOR-1_NAME :6789, MONITOR-2_NAME :6789, MONITOR-3_NAME :6789:/ MOUNT_POINT -o name= CLIENT_ID",
"mount -t ceph mon1:6789,mon2:6789,mon3:6789:/ /mnt/cephfs -o name=1",
"stat -f MOUNT_POINT",
"stat -f /mnt/cephfs",
"subscription-manager repos --enable=rhel-7-server-rhceph-4-tools-rpms",
"subscription-manager repos --enable=rhceph-4-tools-for-rhel-8-x86_64-rpms",
"yum install ceph-fuse",
"dnf install ceph-fuse",
"scp root@ MONITOR_NODE_NAME :/etc/ceph/ KEYRING_FILE /etc/ceph/",
"scp [email protected]:/etc/ceph/ceph.client.1.keyring /etc/ceph/",
"scp root@ MONITOR_NODE_NAME :/etc/ceph/ceph.conf /etc/ceph/ceph.conf",
"scp [email protected]:/etc/ceph/ceph.conf /etc/ceph/ceph.conf",
"chmod 644 /etc/ceph/ceph.conf",
"mkdir PATH_TO_MOUNT_POINT",
"mkdir /mnt/mycephfs",
"ceph-fuse -n client. CLIENT_ID MOUNT_POINT",
"ceph-fuse -n client.1 /mnt/mycephfs",
"ceph-fuse -n client.1 --keyring=/etc/ceph/client.1.keyring /mnt/mycephfs",
"ceph-fuse -n client. CLIENT_ID MOUNT_POINT -r PATH",
"ceph-fuse -n client.1 /mnt/cephfs -r /home/cephfs",
"stat -f MOUNT_POINT",
"[user@client ~]USD stat -f /mnt/cephfs",
"mkdir PATH_TO_MOUNT_POINT",
"mkdir /mnt/mycephfs",
"#DEVICE PATH TYPE OPTIONS DUMP FSCK HOST_NAME :_PORT_, MOUNT_POINT fuse.ceph ceph.id= CLIENT_ID , 0 0 HOST_NAME :_PORT_, ceph.client_mountpoint=/ VOL / SUB_VOL_GROUP / SUB_VOL / UID_SUB_VOL , HOST_NAME :_PORT_:/ [ ADDITIONAL_OPTIONS ]",
"#DEVICE PATH TYPE OPTIONS DUMP FSCK mon1:6789, /mnt/cephfs fuse.ceph ceph.id=1, 0 0 mon2:6789, ceph.client_mountpoint=/my_vol/my_sub_vol_group/my_sub_vol/0, mon3:6789:/ _netdev,defaults"
] | https://docs.redhat.com/en/documentation/red_hat_ceph_storage/4/html/file_system_guide/deployment-of-the-ceph-file-system |
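The kernel client section above mentions that CephFS can also be mounted automatically on system boot, but only the FUSE fstab format is shown. As a hedged sketch, an equivalent /etc/fstab entry for the kernel client could look like the following; the monitor names mon1-mon3 and the client ID 1 are the same placeholder values used in the examples above, and name=1 relies on mount.ceph locating /etc/ceph/ceph.client.1.keyring as described in the Note for Red Hat Ceph Storage 4.1 and later.

#DEVICE                           PATH         TYPE  OPTIONS                  DUMP  FSCK
mon1:6789,mon2:6789,mon3:6789:/   /mnt/cephfs  ceph  name=1,_netdev,noatime   0     0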
Chapter 24. FTP Sink | Chapter 24. FTP Sink Send data to an FTP Server. The Kamelet expects the following headers to be set: file / ce-file : as the file name to upload If the header won't be set the exchange ID will be used as file name. 24.1. Configuration Options The following table summarizes the configuration options available for the ftp-sink Kamelet: Property Name Description Type Default Example connectionHost * Connection Host Hostname of the FTP server string connectionPort * Connection Port Port of the FTP server string 21 directoryName * Directory Name The starting directory string password * Password The password to access the FTP server string username * Username The username to access the FTP server string fileExist File Existence How to behave in case of file already existent. There are 4 enums and the value can be one of Override, Append, Fail or Ignore string "Override" passiveMode Passive Mode Sets passive mode connection boolean false Note Fields marked with an asterisk (*) are mandatory. 24.2. Dependencies At runtime, the ftp-sink Kamelet relies upon the presence of the following dependencies: camel:ftp camel:core camel:kamelet 24.3. Usage This section describes how you can use the ftp-sink . 24.3.1. Knative Sink You can use the ftp-sink Kamelet as a Knative sink by binding it to a Knative object. ftp-sink-binding.yaml apiVersion: camel.apache.org/v1alpha1 kind: KameletBinding metadata: name: ftp-sink-binding spec: source: ref: kind: Channel apiVersion: messaging.knative.dev/v1 name: mychannel sink: ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: ftp-sink properties: connectionHost: "The Connection Host" directoryName: "The Directory Name" password: "The Password" username: "The Username" 24.3.1.1. Prerequisite Make sure you have "Red Hat Integration - Camel K" installed into the OpenShift cluster you're connected to. 24.3.1.2. Procedure for using the cluster CLI Save the ftp-sink-binding.yaml file to your local drive, and then edit it as needed for your configuration. Run the sink by using the following command: oc apply -f ftp-sink-binding.yaml 24.3.1.3. Procedure for using the Kamel CLI Configure and run the sink by using the following command: kamel bind channel:mychannel ftp-sink -p "sink.connectionHost=The Connection Host" -p "sink.directoryName=The Directory Name" -p "sink.password=The Password" -p "sink.username=The Username" This command creates the KameletBinding in the current namespace on the cluster. 24.3.2. Kafka Sink You can use the ftp-sink Kamelet as a Kafka sink by binding it to a Kafka topic. ftp-sink-binding.yaml apiVersion: camel.apache.org/v1alpha1 kind: KameletBinding metadata: name: ftp-sink-binding spec: source: ref: kind: KafkaTopic apiVersion: kafka.strimzi.io/v1beta1 name: my-topic sink: ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: ftp-sink properties: connectionHost: "The Connection Host" directoryName: "The Directory Name" password: "The Password" username: "The Username" 24.3.2.1. Prerequisites Ensure that you've installed the AMQ Streams operator in your OpenShift cluster and created a topic named my-topic in the current namespace. Make also sure you have "Red Hat Integration - Camel K" installed into the OpenShift cluster you're connected to. 24.3.2.2. Procedure for using the cluster CLI Save the ftp-sink-binding.yaml file to your local drive, and then edit it as needed for your configuration. Run the sink by using the following command: oc apply -f ftp-sink-binding.yaml 24.3.2.3. 
Procedure for using the Kamel CLI Configure and run the sink by using the following command: kamel bind kafka.strimzi.io/v1beta1:KafkaTopic:my-topic ftp-sink -p "sink.connectionHost=The Connection Host" -p "sink.directoryName=The Directory Name" -p "sink.password=The Password" -p "sink.username=The Username" This command creates the KameletBinding in the current namespace on the cluster. 24.4. Kamelet source file https://github.com/openshift-integration/kamelet-catalog/ftp-sink.kamelet.yaml | [
"apiVersion: camel.apache.org/v1alpha1 kind: KameletBinding metadata: name: ftp-sink-binding spec: source: ref: kind: Channel apiVersion: messaging.knative.dev/v1 name: mychannel sink: ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: ftp-sink properties: connectionHost: \"The Connection Host\" directoryName: \"The Directory Name\" password: \"The Password\" username: \"The Username\"",
"apply -f ftp-sink-binding.yaml",
"kamel bind channel:mychannel ftp-sink -p \"sink.connectionHost=The Connection Host\" -p \"sink.directoryName=The Directory Name\" -p \"sink.password=The Password\" -p \"sink.username=The Username\"",
"apiVersion: camel.apache.org/v1alpha1 kind: KameletBinding metadata: name: ftp-sink-binding spec: source: ref: kind: KafkaTopic apiVersion: kafka.strimzi.io/v1beta1 name: my-topic sink: ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: ftp-sink properties: connectionHost: \"The Connection Host\" directoryName: \"The Directory Name\" password: \"The Password\" username: \"The Username\"",
"apply -f ftp-sink-binding.yaml",
"kamel bind kafka.strimzi.io/v1beta1:KafkaTopic:my-topic ftp-sink -p \"sink.connectionHost=The Connection Host\" -p \"sink.directoryName=The Directory Name\" -p \"sink.password=The Password\" -p \"sink.username=The Username\""
] | https://docs.redhat.com/en/documentation/red_hat_build_of_apache_camel_k/1.10.9/html/kamelets_reference/ftp-sink |
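The bindings above set only the mandatory properties. If you also want to control the optional fileExist and passiveMode options from the configuration table, a sketch of the same Kamel CLI command with those properties added is shown below; the values Append and true are examples, not defaults.

kamel bind channel:mychannel ftp-sink \
  -p "sink.connectionHost=The Connection Host" \
  -p "sink.directoryName=The Directory Name" \
  -p "sink.username=The Username" \
  -p "sink.password=The Password" \
  -p "sink.fileExist=Append" \
  -p "sink.passiveMode=true"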
7.6. Overview of Bonding Modes and the Required Settings on the Switch | 7.6. Overview of Bonding Modes and the Required Settings on the Switch The following table describes the required configuration that you must apply to the upstream switch depending on the bonding mode: Table 7.1. Switch Configuration Settings Depending on the Bonding Modes Bonding Mode Configuration on the Switch 0 - balance-rr Requires static Etherchannel enabled (not LACP-negotiated) 1 - active-backup Requires autonomous ports 2 - balance-xor Requires static Etherchannel enabled (not LACP-negotiated) 3 - broadcast Requires static Etherchannel enabled (not LACP-negotiated) 4 - 802.3ad Requires LACP-negotiated Etherchannel enabled 5 - balance-tlb Requires autonomous ports 6 - balance-alb Requires autonomous ports For configuring these settings on your switch, see the switch documentation. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/networking_guide/overview-of-bonding-modes-and-the-required-settings-on-the-switch |
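The table describes only the switch side of the configuration. On the Red Hat Enterprise Linux 7 side, a bond in one of these modes can be created with NetworkManager. The following is a minimal sketch for mode 4 (802.3ad), assuming em1 and em2 as the member interfaces; substitute your own interface names and add IP configuration as needed.

# Create the bond in 802.3ad (LACP) mode; the switch ports must be in an LACP-negotiated Etherchannel.
nmcli con add type bond con-name bond0 ifname bond0 mode 802.3ad
# Enslave the member interfaces (em1 and em2 are placeholders).
nmcli con add type bond-slave con-name bond0-port1 ifname em1 master bond0
nmcli con add type bond-slave con-name bond0-port2 ifname em2 master bond0
# Activate the ports and then the bond.
nmcli con up bond0-port1
nmcli con up bond0-port2
nmcli con up bond0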
Chapter 3. Metro-DR solution for OpenShift Data Foundation | Chapter 3. Metro-DR solution for OpenShift Data Foundation This section of the guide provides details of the Metro Disaster Recovery (Metro-DR) steps and commands necessary to be able to failover an application from one OpenShift Container Platform cluster to another and then failback the same application to the original primary cluster. In this case the OpenShift Container Platform clusters will be created or imported using Red Hat Advanced Cluster Management (RHACM) and have distance limitations between the OpenShift Container Platform clusters of less than 10ms RTT latency. The persistent storage for applications is provided by an external Red Hat Ceph Storage (RHCS) cluster stretched between the two locations with the OpenShift Container Platform instances connected to this storage cluster. An arbiter node with a storage monitor service is required at a third location (different location than where OpenShift Container Platform instances are deployed) to establish quorum for the RHCS cluster in the case of a site outage. This third location can be in the range of ~100ms RTT from the storage cluster connected to the OpenShift Container Platform instances. This is a general overview of the Metro DR steps required to configure and execute OpenShift Disaster Recovery (ODR) capabilities using OpenShift Data Foundation and RHACM across two distinct OpenShift Container Platform clusters separated by distance. In addition to these two clusters called managed clusters, a third OpenShift Container Platform cluster is required that will be the Red Hat Advanced Cluster Management (RHACM) hub cluster. Important You can now easily set up Metropolitan disaster recovery solutions for workloads based on OpenShift virtualization technology using OpenShift Data Foundation. For more information, see the knowledgebase article . 3.1. Components of Metro-DR solution Metro-DR is composed of Red Hat Advanced Cluster Management for Kubernetes, Red Hat Ceph Storage and OpenShift Data Foundation components to provide application and data mobility across OpenShift Container Platform clusters. Red Hat Advanced Cluster Management for Kubernetes Red Hat Advanced Cluster Management (RHACM) provides the ability to manage multiple clusters and application lifecycles. Hence, it serves as a control plane in a multi-cluster environment. RHACM is split into two parts: RHACM Hub: components that run on the multi-cluster control plane. Managed clusters: components that run on the clusters that are managed. For more information about this product, see RHACM documentation and the RHACM "Manage Applications" documentation . Red Hat Ceph Storage Red Hat Ceph Storage is a massively scalable, open, software-defined storage platform that combines the most stable version of the Ceph storage system with a Ceph management platform, deployment utilities, and support services. It significantly lowers the cost of storing enterprise data and helps organizations manage exponential data growth. The software is a robust and modern petabyte-scale storage platform for public or private cloud deployments. For more product information, see Red Hat Ceph Storage . OpenShift Data Foundation OpenShift Data Foundation provides the ability to provision and manage storage for stateful applications in an OpenShift Container Platform cluster. 
It is backed by Ceph as the storage provider, whose lifecycle is managed by Rook in the OpenShift Data Foundation component stack and Ceph-CSI provides the provisioning and management of Persistent Volumes for stateful applications. OpenShift DR OpenShift DR is a disaster recovery orchestrator for stateful applications across a set of peer OpenShift clusters which are deployed and managed using RHACM and provides cloud-native interfaces to orchestrate the life-cycle of an application's state on Persistent Volumes. These include: Protecting an application and its state relationship across OpenShift clusters Failing over an application and its state to a peer cluster Relocate an application and its state to the previously deployed cluster OpenShift DR is split into three components: ODF Multicluster Orchestrator : Installed on the multi-cluster control plane (RHACM Hub), it orchestrates configuration and peering of OpenShift Data Foundation clusters for Metro and Regional DR relationships. OpenShift DR Hub Operator : Automatically installed as part of ODF Multicluster Orchestrator installation on the hub cluster to orchestrate failover or relocation of DR enabled applications. OpenShift DR Cluster Operator : Automatically installed on each managed cluster that is part of a Metro and Regional DR relationship to manage the lifecycle of all PVCs of an application. 3.2. Metro-DR deployment workflow This section provides an overview of the steps required to configure and deploy Metro-DR capabilities using the latest versions of Red Hat OpenShift Data Foundation, Red Hat Ceph Storage (RHCS) and Red Hat Advanced Cluster Management for Kubernetes (RHACM) version 2.10 or later, across two distinct OpenShift Container Platform clusters. In addition to two managed clusters, a third OpenShift Container Platform cluster will be required to deploy the Advanced Cluster Management. To configure your infrastructure, perform the below steps in the order given: Ensure requirements across the Hub, Primary and Secondary Openshift Container Platform clusters that are part of the DR solution are met. See Requirements for enabling Metro-DR . Ensure you meet the requirements for deploying Red Hat Ceph Storage stretch cluster with arbiter. See Requirements for deploying Red Hat Ceph Storage . Deploy and configure Red Hat Ceph Storage stretch mode. For instructions on enabling Ceph cluster on two different data centers using stretched mode functionality, see Deploying Red Hat Ceph Storage . Install OpenShift Data Foundation operator and create a storage system on Primary and Secondary managed clusters. See Installing OpenShift Data Foundation on managed clusters . Install the ODF Multicluster Orchestrator on the Hub cluster. See Installing ODF Multicluster Orchestrator on Hub cluster . Configure SSL access between the Hub, Primary and Secondary clusters. See Configuring SSL access across clusters . Create a DRPolicy resource for use with applications requiring DR protection across the Primary and Secondary clusters. See Creating Disaster Recovery Policy on Hub cluster . Note The Metro-DR solution can only have one DRpolicy. Testing your disaster recovery solution with: Subscription-based application: Create sample applications. See Creating sample application . Test failover and relocate operations using the sample application between managed clusters. See Subscription-based application failover and relocating subscription-based application . ApplicationSet-based application: Create sample applications. 
See Creating ApplicationSet-based applications . Test failover and relocate operations using the sample application between managed clusters. See ApplicationSet-based application failover and relocating ApplicationSet-based application . Discovered applications Ensure all requirements mentioned in Prerequisites is addressed. See Prerequisites for disaster recovery protection of discovered applications Create a sample discovered application. See Creating a sample discovered application Enroll the discovered application. See Enrolling a sample discovered application for disaster recovery protection Test failover and relocate. See Discovered application failover and relocate 3.3. Requirements for enabling Metro-DR The prerequisites to installing a disaster recovery solution supported by Red Hat OpenShift Data Foundation are as follows: You must have the following OpenShift clusters that have network reachability between them: Hub cluster where Red Hat Advanced Cluster Management (RHACM) for Kubernetes operator are installed. Primary managed cluster where OpenShift Data Foundation is running. Secondary managed cluster where OpenShift Data Foundation is running. Note For configuring hub recovery setup, you need a 4th cluster which acts as the passive hub. The primary managed cluster (Site-1) can be co-situated with the active RHACM hub cluster while the passive hub cluster is situated along with the secondary managed cluster (Site-2). Alternatively, the active RHACM hub cluster can be placed in a neutral site (Site-3) that is not impacted by the failures of either of the primary managed cluster at Site-1 or the secondary cluster at Site-2. In this situation, if a passive hub cluster is used it can be placed with the secondary cluster at Site-2. For more information, see Configuring passive hub cluster for hub recovery . Hub recovery is a Technology Preview feature and is subject to Technology Preview support limitations. Ensure that RHACM operator and MultiClusterHub is installed on the Hub cluster. See RHACM installation guide for instructions. After the operator is successfully installed, a popover with a message that the Web console update is available appears on the user interface. Click Refresh web console from this popover for the console changes to reflect. Important Ensure that application traffic routing and redirection are configured appropriately. On the Hub cluster Navigate to All Clusters Infrastructure Clusters . Import or create the Primary managed cluster and the Secondary managed cluster using the RHACM console. Choose the appropriate options for your environment. After the managed clusters are successfully created or imported, you can see the list of clusters that were imported or created on the console. For instructions, see Creating a cluster and Importing a target managed cluster to the hub cluster . Warning The Openshift Container Platform managed clusters and the Red Hat Ceph Storage (RHCS) nodes have distance limitations. The network latency between the sites must be below 10 milliseconds round-trip time (RTT). 3.4. Requirements for deploying Red Hat Ceph Storage stretch cluster with arbiter Red Hat Ceph Storage is an open-source enterprise platform that provides unified software-defined storage on standard, economical servers and disks. With block, object, and file storage combined into one platform, Red Hat Ceph Storage efficiently and automatically manages all your data, so you can focus on the applications and workloads that use it. 
This section provides a basic overview of the Red Hat Ceph Storage deployment. For more complex deployment, refer to the official documentation guide for Red Hat Ceph Storage 7 . Note Only Flash media is supported since it runs with min_size=1 when degraded. Use stretch mode only with all-flash OSDs. Using all-flash OSDs minimizes the time needed to recover once connectivity is restored, thus minimizing the potential for data loss. Important Erasure coded pools cannot be used with stretch mode. 3.4.1. Hardware requirements For information on minimum hardware requirements for deploying Red Hat Ceph Storage, see Minimum hardware recommendations for containerized Ceph . Table 3.1. Physical server locations and Ceph component layout for Red Hat Ceph Storage cluster deployment: Node name Datacenter Ceph components ceph1 DC1 OSD+MON+MGR ceph2 DC1 OSD+MON ceph3 DC1 OSD+MDS+RGW ceph4 DC2 OSD+MON+MGR ceph5 DC2 OSD+MON ceph6 DC2 OSD+MDS+RGW ceph7 DC3 MON 3.4.2. Software requirements Use the latest software version of Red Hat Ceph Storage 7 . For more information on the supported Operating System versions for Red Hat Ceph Storage, see knowledgebase article on Red Hat Ceph Storage: Supported configurations . 3.4.3. Network configuration requirements The recommended Red Hat Ceph Storage configuration is as follows: You must have two separate networks, one public network and one private network. You must have three different datacenters that support VLANS and subnets for Cephs private and public network for all datacenters. Note You can use different subnets for each of the datacenters. The latencies between the two datacenters running the Red Hat Ceph Storage Object Storage Devices (OSDs) cannot exceed 10 ms RTT. For the arbiter datacenter, this was tested with values as high up to 100 ms RTT to the other two OSD datacenters. Here is an example of a basic network configuration that we have used in this guide: DC1: Ceph public/private network: 10.0.40.0/24 DC2: Ceph public/private network: 10.0.40.0/24 DC3: Ceph public/private network: 10.0.40.0/24 For more information on the required network environment, see Ceph network configuration . 3.5. Deploying Red Hat Ceph Storage 3.5.1. Node pre-deployment steps Before installing the Red Hat Ceph Storage Ceph cluster, perform the following steps to fulfill all the requirements needed. Register all the nodes to the Red Hat Network or Red Hat Satellite and subscribe to a valid pool: subscription-manager register subscription-manager subscribe --pool=8a8XXXXXX9e0 Enable access for all the nodes in the Ceph cluster for the following repositories: rhel9-for-x86_64-baseos-rpms rhel9-for-x86_64-appstream-rpms subscription-manager repos --disable="*" --enable="rhel9-for-x86_64-baseos-rpms" --enable="rhel9-for-x86_64-appstream-rpms" Update the operating system RPMs to the latest version and reboot if needed: dnf update -y reboot Select a node from the cluster to be your bootstrap node. ceph1 is our bootstrap node in this example going forward. Only on the bootstrap node ceph1 , enable the ansible-2.9-for-rhel-9-x86_64-rpms and rhceph-6-tools-for-rhel-9-x86_64-rpms repositories: subscription-manager repos --enable="ansible-2.9-for-rhel-9-x86_64-rpms" --enable="rhceph-6-tools-for-rhel-9-x86_64-rpms" Configure the hostname using the bare/short hostname in all the hosts. hostnamectl set-hostname <short_name> Verify the hostname configuration for deploying Red Hat Ceph Storage with cephadm. 
USD hostname Example output: Modify /etc/hosts file and add the fqdn entry to the 127.0.0.1 IP by setting the DOMAIN variable with our DNS domain name. Check the long hostname with the fqdn using the hostname -f option. USD hostname -f Example output: Note To know more about why these changes are required, see Fully Qualified Domain Names vs Bare Host Names . Run the following steps on the bootstrap node. In our example, the bootstrap node is ceph1 . Install the cephadm-ansible RPM package: USD sudo dnf install -y cephadm-ansible Important To run the ansible playbooks, you must have ssh passwordless access to all the nodes that are configured to the Red Hat Ceph Storage cluster. Ensure that the configured user (for example, deployment-user ) has root privileges to invoke the sudo command without needing a password. To use a custom key, configure the selected user (for example, deployment-user ) ssh config file to specify the id/key that will be used for connecting to the nodes via ssh: cat <<EOF > ~/.ssh/config Host ceph* User deployment-user IdentityFile ~/.ssh/ceph.pem EOF Build the ansible inventory cat <<EOF > /usr/share/cephadm-ansible/inventory ceph1 ceph2 ceph3 ceph4 ceph5 ceph6 ceph7 [admin] ceph1 ceph4 EOF Note Here, the Hosts ( Ceph1 and Ceph4 ) belonging to two different data centers are configured as part of the [admin] group on the inventory file and are tagged as _admin by cephadm . Each of these admin nodes receive the admin ceph keyring during the bootstrap process so that when one data center is down, we can check using the other available admin node. Verify that ansible can access all nodes using the ping module before running the pre-flight playbook. USD ansible -i /usr/share/cephadm-ansible/inventory -m ping all -b Example output: Navigate to the /usr/share/cephadm-ansible directory. Run ansible-playbook with relative file paths. USD ansible-playbook -i /usr/share/cephadm-ansible/inventory /usr/share/cephadm-ansible/cephadm-preflight.yml --extra-vars "ceph_origin=rhcs" The preflight playbook Ansible playbook configures the RHCS dnf repository and prepares the storage cluster for bootstrapping. It also installs podman, lvm2, chronyd, and cephadm. The default location for cephadm-ansible and cephadm-preflight.yml is /usr/share/cephadm-ansible . For additional information, see Running the preflight playbook 3.5.2. Cluster bootstrapping and service deployment with cephadm utility The cephadm utility installs and starts a single Ceph Monitor daemon and a Ceph Manager daemon for a new Red Hat Ceph Storage cluster on the local node where the cephadm bootstrap command is run. In this guide we are going to bootstrap the cluster and deploy all the needed Red Hat Ceph Storage services in one step using a cluster specification yaml file. If you find issues during the deployment, it may be easier to troubleshoot the errors by dividing the deployment into two steps: Bootstrap Service deployment Note For additional information on the bootstrapping process, see Bootstrapping a new storage cluster . Procedure Create json file to authenticate against the container registry using a json file as follows: USD cat <<EOF > /root/registry.json { "url":"registry.redhat.io", "username":"User", "password":"Pass" } EOF Create a cluster-spec.yaml that adds the nodes to the Red Hat Ceph Storage cluster and also sets specific labels for where the services should run following table 3.1. 
cat <<EOF > /root/cluster-spec.yaml service_type: host addr: 10.0.40.78 ## <XXX.XXX.XXX.XXX> hostname: ceph1 ## <ceph-hostname-1> location: root: default datacenter: DC1 labels: - osd - mon - mgr --- service_type: host addr: 10.0.40.35 hostname: ceph2 location: datacenter: DC1 labels: - osd - mon --- service_type: host addr: 10.0.40.24 hostname: ceph3 location: datacenter: DC1 labels: - osd - mds - rgw --- service_type: host addr: 10.0.40.185 hostname: ceph4 location: root: default datacenter: DC2 labels: - osd - mon - mgr --- service_type: host addr: 10.0.40.88 hostname: ceph5 location: datacenter: DC2 labels: - osd - mon --- service_type: host addr: 10.0.40.66 hostname: ceph6 location: datacenter: DC2 labels: - osd - mds - rgw --- service_type: host addr: 10.0.40.221 hostname: ceph7 labels: - mon --- service_type: mon placement: label: "mon" --- service_type: mds service_id: cephfs placement: label: "mds" --- service_type: mgr service_name: mgr placement: label: "mgr" --- service_type: osd service_id: all-available-devices service_name: osd.all-available-devices placement: label: "osd" spec: data_devices: all: true --- service_type: rgw service_id: objectgw service_name: rgw.objectgw placement: count: 2 label: "rgw" spec: rgw_frontend_port: 8080 EOF Retrieve the IP for the NIC with the Red Hat Ceph Storage public network configured from the bootstrap node. After substituting 10.0.40.0 with the subnet that you have defined in your ceph public network, execute the following command. USD ip a | grep 10.0.40 Example output: Run the cephadm bootstrap command as the root user on the node that will be the initial Monitor node in the cluster. The IP_ADDRESS option is the node's IP address that you are using to run the cephadm bootstrap command. Note If you have configured a different user instead of root for passwordless SSH access, then use the --ssh-user= flag with the cepadm bootstrap command. If you are using non default/id_rsa ssh key names, then use --ssh-private-key and --ssh-public-key options with cephadm command. USD cephadm bootstrap --ssh-user=deployment-user --mon-ip 10.0.40.78 --apply-spec /root/cluster-spec.yaml --registry-json /root/registry.json Important If the local node uses fully-qualified domain names (FQDN), then add the --allow-fqdn-hostname option to cephadm bootstrap on the command line. Once the bootstrap finishes, you will see the following output from the cephadm bootstrap command: You can access the Ceph CLI with: sudo /usr/sbin/cephadm shell --fsid dd77f050-9afe-11ec-a56c-029f8148ea14 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring Consider enabling telemetry to help improve Ceph: ceph telemetry on For more information see: https://docs.ceph.com/docs/pacific/mgr/telemetry/ Verify the status of Red Hat Ceph Storage cluster deployment using the Ceph CLI client from ceph1: USD ceph -s Example output: Note It may take several minutes for all the services to start. It is normal to get a global recovery event while you do not have any OSDs configured. You can use ceph orch ps and ceph orch ls to further check the status of the services. Verify if all the nodes are part of the cephadm cluster. USD ceph orch host ls Example output: Note You can run Ceph commands directly from the host because ceph1 was configured in the cephadm-ansible inventory as part of the [admin] group. The Ceph admin keys were copied to the host during the cephadm bootstrap process. Check the current placement of the Ceph monitor services on the datacenters. 
USD ceph orch ps | grep mon | awk '{print USD1 " " USD2}'
Example output:
Check the current placement of the Ceph manager services on the datacenters.
Example output:
Check the ceph osd crush map layout to ensure that each host has one OSD configured and its status is UP. Also, double-check that each node is under the right datacenter bucket as specified in Table 3.1.
USD ceph osd tree
Example output:
Create and enable a new RBD block pool.
Note The number 32 at the end of the command is the number of PGs assigned to this pool. The number of PGs can vary depending on several factors, such as the number of OSDs in the cluster and the expected percentage used of the pool. You can use the following calculator to determine the number of PGs needed: Ceph Placement Groups (PGs) per Pool Calculator.
Verify that the RBD pool has been created.
Example output:
Verify that the MDS services are active and that one service is located on each datacenter.
Example output:
Create the CephFS volume.
USD ceph fs volume create cephfs
Note The ceph fs volume create command also creates the needed data and metadata CephFS pools. For more information, see Configuring and Mounting Ceph File Systems.
Check the Ceph status to verify how the MDS daemons have been deployed. Ensure that the state is active, where ceph6 is the primary MDS for this filesystem and ceph3 is the secondary MDS.
USD ceph fs status
Example output:
Verify that the RGW services are active.
USD ceph orch ps | grep rgw
Example output:
3.5.3. Configuring Red Hat Ceph Storage stretch mode
Once the Red Hat Ceph Storage cluster is fully deployed using cephadm, use the following procedure to configure the stretch cluster mode. The new stretch mode is designed to handle the 2-site case.
Procedure
Check the current election strategy being used by the monitors with the ceph mon dump command. By default in a Ceph cluster, the election strategy is set to classic.
ceph mon dump | grep election_strategy
Example output:
Change the monitor election strategy to connectivity.
ceph mon set election_strategy connectivity
Run the ceph mon dump command again to verify the election_strategy value.
USD ceph mon dump | grep election_strategy
Example output:
To know more about the different election strategies, see Configuring monitor election strategy.
Set the location for all the Ceph monitors:
ceph mon set_location ceph1 datacenter=DC1
ceph mon set_location ceph2 datacenter=DC1
ceph mon set_location ceph4 datacenter=DC2
ceph mon set_location ceph5 datacenter=DC2
ceph mon set_location ceph7 datacenter=DC3
Verify that each monitor has its appropriate location.
USD ceph mon dump
Example output:
To create a CRUSH rule that makes use of this OSD topology, install the ceph-base RPM package so that the crushtool command is available:
USD dnf -y install ceph-base
To know more about CRUSH rulesets, see Ceph CRUSH ruleset.
Get the compiled CRUSH map from the cluster:
USD ceph osd getcrushmap > /etc/ceph/crushmap.bin
Decompile the CRUSH map and convert it to a text file in order to be able to edit it:
USD crushtool -d /etc/ceph/crushmap.bin -o /etc/ceph/crushmap.txt
Add the following rule to the end of the CRUSH map by editing the text file /etc/ceph/crushmap.txt.
USD vim /etc/ceph/crushmap.txt
This example is applicable for active applications in both OpenShift Container Platform clusters.
Note The rule id has to be unique. In this example, there is only one other CRUSH rule, with id 0, hence id 1 is used here. If your deployment has more rules created, then use the next free id.
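Based on the field descriptions that follow, a minimal sketch of the stretch_rule entry could look like this (the rule name, id, and bucket types match this example topology; the min_size and max_size fields assume the classic rule syntax, so adjust the sketch to your own decompiled CRUSH map):
rule stretch_rule {
        id 1
        type replicated
        min_size 1
        max_size 10
        step take default
        step choose firstn 0 type datacenter
        step chooseleaf firstn 2 type host
        step emit
}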
The CRUSH rule declared contains the following information:
Rule name
Description: A unique name for identifying the rule.
Value: stretch_rule
id
Description: A unique whole number for identifying the rule.
Value: 1
type
Description: Describes whether the rule is for a replicated or an erasure-coded storage pool.
Value: replicated
min_size
Description: If a pool makes fewer replicas than this number, CRUSH will not select this rule.
Value: 1
max_size
Description: If a pool makes more replicas than this number, CRUSH will not select this rule.
Value: 10
step take default
Description: Takes the root bucket called default, and begins iterating down the tree.
step choose firstn 0 type datacenter
Description: Selects the datacenter bucket, and goes into its subtrees.
step chooseleaf firstn 2 type host
Description: Selects the number of buckets of the given type. In this case, it is two different hosts located in the datacenter it entered at the previous level.
step emit
Description: Outputs the current value and empties the stack. Typically used at the end of a rule, but may also be used to pick from different trees in the same rule.
Compile the new CRUSH map from the file /etc/ceph/crushmap.txt and convert it to a binary file called /etc/ceph/crushmap2.bin:
USD crushtool -c /etc/ceph/crushmap.txt -o /etc/ceph/crushmap2.bin
Inject the new CRUSH map back into the cluster:
USD ceph osd setcrushmap -i /etc/ceph/crushmap2.bin
Example output:
Note The number 17 in the output is a counter; it increases (18, 19, and so on) depending on the changes you make to the CRUSH map.
Verify that the stretch rule you created is now available for use.
ceph osd crush rule ls
Example output:
Enable the stretch cluster mode.
USD ceph mon enable_stretch_mode ceph7 stretch_rule datacenter
In this example, ceph7 is the arbiter node, stretch_rule is the CRUSH rule created in the previous step, and datacenter is the dividing bucket.
Verify that all the pools in the Ceph cluster are using the stretch_rule CRUSH rule:
USD for pool in USD(rados lspools);do echo -n "Pool: USD{pool}; ";ceph osd pool get USD{pool} crush_rule;done
Example output:
This indicates that a working Red Hat Ceph Storage stretched cluster with arbiter mode is now available.
3.6. Installing OpenShift Data Foundation on managed clusters
To configure storage replication between the two OpenShift Container Platform clusters, the OpenShift Data Foundation operator must first be installed on each managed cluster.
Prerequisites
Ensure that you have met the hardware requirements for OpenShift Data Foundation external deployments. For a detailed description of the hardware requirements, see External mode requirements.
Procedure
Install and configure the latest OpenShift Data Foundation cluster on each of the managed clusters. After installing the operator, create a StorageSystem using the option Full deployment type and Connect with external storage platform where your Backing storage type is Red Hat Ceph Storage. For detailed instructions, refer to Deploying OpenShift Data Foundation in external mode.
Use the following flags with the ceph-external-cluster-details-exporter.py script. At a minimum, you must use the following three flags:
--rbd-data-pool-name
With the name of the RBD pool that was created during RHCS deployment for OpenShift Container Platform. For example, the pool can be called rbdpool.
--rgw-endpoint
Provide the endpoint in the format <ip_address>:<port>.
This is the IP of the RGW daemon running on the same site as the OpenShift Container Platform cluster that you are configuring.
--run-as-user
With a different client name for each site.
The following flags are optional if default values were used during the RHCS deployment:
--cephfs-filesystem-name
With the name of the CephFS filesystem that was created during RHCS deployment for OpenShift Container Platform. The default filesystem name is cephfs.
--cephfs-data-pool-name
With the name of the CephFS data pool that was created during RHCS deployment for OpenShift Container Platform. The default pool is called cephfs.data.
--cephfs-metadata-pool-name
With the name of the CephFS metadata pool that was created during RHCS deployment for OpenShift Container Platform. The default pool is called cephfs.meta.
Run the following command on the bootstrap node ceph1 to get the IPs for the RGW endpoints in datacenter1 and datacenter2:
Example output:
Example output:
Run the ceph-external-cluster-details-exporter.py with the parameters that are configured for the first OpenShift Container Platform managed cluster cluster1 on the bootstrap node ceph1.
Note Modify the <rgw-endpoint> XXX.XXX.XXX.XXX according to your environment.
Run the ceph-external-cluster-details-exporter.py with the parameters that are configured for the second OpenShift Container Platform managed cluster cluster2 on the bootstrap node ceph1.
Note Modify the <rgw-endpoint> XXX.XXX.XXX.XXX according to your environment.
Save the two files generated on the bootstrap node (ceph1), ocp-cluster1.json and ocp-cluster2.json, to your local machine.
Use the contents of the file ocp-cluster1.json on the OpenShift Container Platform console on cluster1 where external OpenShift Data Foundation is being deployed.
Use the contents of the file ocp-cluster2.json on the OpenShift Container Platform console on cluster2 where external OpenShift Data Foundation is being deployed.
Review the settings and then select Create StorageSystem.
Validate the successful deployment of OpenShift Data Foundation on each managed cluster with the following command:
For the Multicloud Gateway (MCG):
Wait for the status result to be Ready for both queries on the Primary managed cluster and the Secondary managed cluster.
On the OpenShift Web Console, navigate to Installed Operators OpenShift Data Foundation Storage System ocs-external-storagecluster-storagesystem Resources. Verify that the Status of StorageCluster is Ready and has a green tick mark next to it.
Enable read affinity for RBD and CephFS volumes to be served from the nearest datacenter.
On the Primary managed cluster, label all the nodes. Execute the following commands to enable read affinity:
On the Secondary managed cluster, label all the nodes. Execute the following commands to enable read affinity:
3.7. Installing OpenShift Data Foundation Multicluster Orchestrator operator
OpenShift Data Foundation Multicluster Orchestrator is a controller that is installed from OpenShift Container Platform's OperatorHub on the Hub cluster.
Procedure
On the Hub cluster, navigate to OperatorHub and use the keyword filter to search for ODF Multicluster Orchestrator.
Click the ODF Multicluster Orchestrator tile.
Keep all default settings and click Install.
Ensure that the operator resources are installed in the openshift-operators project and are available to all namespaces.
Note The ODF Multicluster Orchestrator also installs the OpenShift DR Hub Operator on the RHACM hub cluster as a dependency.
Verify that the operator Pods are in a Running state.
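One way to verify this from the CLI is to list the pods in the operator namespace (a minimal sketch, assuming the default openshift-operators project used above):
oc get pods -n openshift-operators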
The OpenShift DR Hub operator is also installed at the same time in openshift-operators namespace. Example output: 3.8. Configuring SSL access across clusters Configure network (SSL) access between the primary and secondary clusters so that metadata can be stored on the alternate cluster in a Multicloud Gateway (MCG) object bucket using a secure transport protocol and in the Hub cluster for verifying access to the object buckets. Note If all of your OpenShift clusters are deployed using a signed and valid set of certificates for your environment then this section can be skipped. Procedure Extract the ingress certificate for the Primary managed cluster and save the output to primary.crt . Extract the ingress certificate for the Secondary managed cluster and save the output to secondary.crt . Create a new ConfigMap file to hold the remote cluster's certificate bundle with filename cm-clusters-crt.yaml . Note There could be more or less than three certificates for each cluster as shown in this example file. Also, ensure that the certificate contents are correctly indented after you copy and paste from the primary.crt and secondary.crt files that were created before. Create the ConfigMap on the Primary managed cluster , Secondary managed cluster , and the Hub cluster . Example output: Patch default proxy resource on the Primary managed cluster , Secondary managed cluster , and the Hub cluster . Example output: 3.9. Creating Disaster Recovery Policy on Hub cluster Openshift Disaster Recovery Policy (DRPolicy) resource specifies OpenShift Container Platform clusters participating in the disaster recovery solution and the desired replication interval. DRPolicy is a cluster scoped resource that users can apply to applications that require Disaster Recovery solution. The ODF MultiCluster Orchestrator Operator facilitates the creation of each DRPolicy and the corresponding DRClusters through the Multicluster Web console . Prerequisites Ensure that there is a minimum set of two managed clusters. Procedure On the OpenShift console , navigate to All Clusters Data Services Disaster recovery . On the Overview tab, click Create a disaster recovery policy or you can navigate to Policies tab and click Create DRPolicy . Enter Policy name . Ensure that each DRPolicy has a unique name (for example: ocp4perf1-ocp4perf2 ). Select two clusters from the list of managed clusters to which this new policy will be associated with. Replication policy is automatically set to sync based on the OpenShift clusters selected. Click Create . Verify that the DRPolicy is created successfully. Run this command on the Hub cluster for each of the DRPolicy resources created, where <drpolicy_name> is replaced with your unique name. Example output: When a DRPolicy is created, along with it, two DRCluster resources are also created. It could take up to 10 minutes for all three resources to be validated and for the status to show as Succeeded . Note Editing of SchedulingInterval , ReplicationClassSelector , VolumeSnapshotClassSelector and DRClusters field values are not supported in the DRPolicy. Verify the object bucket access from the Hub cluster to both the Primary managed cluster and the Secondary managed cluster . Get the names of the DRClusters on the Hub cluster. Example output: Check S3 access to each bucket created on each managed cluster. Use the DRCluster validation command, where <drcluster_name> is replaced with your unique name. Note Editing of Region and S3ProfileName field values are non supported in DRClusters. 
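A hedged sketch of such a validation query (the field path is an assumption about the DRCluster status layout; adjust the jsonpath if your resource differs):
oc get drcluster <drcluster_name> -o jsonpath='{.status.conditions[*].reason}{"\n"}'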
Example output: Note Make sure to run commands for both DRClusters on the Hub cluster . Verify that the OpenShift DR Cluster operator installation was successful on the Primary managed cluster and the Secondary managed cluster . Example output: You can also verify that OpenShift DR Cluster Operator is installed successfully on the OperatorHub of each managed cluster. Verify that the secret is propagated correctly on the Primary managed cluster and the Secondary managed cluster. Match the output with the s3SecretRef from the Hub cluster: 3.10. Configure DRClusters for fencing automation This configuration is required for enabling fencing prior to application failover. In order to prevent writes to the persistent volume from the cluster which is hit by a disaster, OpenShift DR instructs Red Hat Ceph Storage (RHCS) to fence the nodes of the cluster from the RHCS external storage. This section guides you on how to add the IPs or the IP Ranges for the nodes of the DRCluster. 3.10.1. Add node IP addresses to DRClusters Find the IP addresses for all of the OpenShift nodes in the managed clusters by running this command in the Primary managed cluster and the Secondary managed cluster . Example output: Once you have the IP addresses then the DRCluster resources can be modified for each managed cluster. Find the DRCluster names on the Hub Cluster. Example output: Edit each DRCluster to add your unique IP addresses after replacing <drcluster_name> with your unique name. Example output: Note There could be more than six IP addresses. Modify this DRCluster configuration also for IP addresses on the Secondary managed clusters in the peer DRCluster resource (e.g., ocp4perf2). 3.10.2. Add fencing annotations to DRClusters Add the following annotations to all the DRCluster resources. These annotations include details needed for the NetworkFence resource created later in these instructions (prior to testing application failover). Note Replace <drcluster_name> with your unique name. Example output: Make sure to add these annotations for both DRCluster resources (for example: ocp4perf1 and ocp4perf2 ). 3.11. Create sample application for testing disaster recovery solution OpenShift Data Foundation disaster recovery (DR) solution supports disaster recovery for Subscription-based and ApplicationSet-based applications that are managed by RHACM. For more details, see Subscriptions and ApplicationSet documentation. The following sections detail how to create an application and apply a DRPolicy to an application. Subscription-based applications OpenShift users that do not have cluster-admin permissions, see the knowledge article on how to assign necessary permissions to an application user for executing disaster recovery actions. ApplicationSet-based applications OpenShift users that do not have cluster-admin permissions cannot create ApplicationSet-based applications. 3.11.1. Subscription-based applications 3.11.1.1. Creating a sample Subscription-based application In order to test failover from the Primary managed cluster to the Secondary managed cluster and relocate , we need a sample application. Prerequisites When creating an application for general consumption, ensure that the application is deployed to ONLY one cluster. Use the sample application called busybox as an example. Ensure all external routes of the application are configured using either Global Traffic Manager (GTM) or Global Server Load Balancing (GLSB) service for traffic redirection when the application fails over or is relocated. 
As a best practice, group Red Hat Advanced Cluster Management (RHACM) subscriptions that belong together, refer to a single Placement Rule to DR protect them as a group. Further create them as a single application for a logical grouping of the subscriptions for future DR actions like failover and relocate. Note If unrelated subscriptions refer to the same Placement Rule for placement actions, they are also DR protected as the DR workflow controls all subscriptions that references the Placement Rule. Procedure On the Hub cluster, navigate to Applications and click Create application . Select type as Subscription . Enter your application Name (for example, busybox ) and Namespace (for example, busybox-sample ). In the Repository location for resources section, select Repository type Git . Enter the Git repository URL for the sample application, the github Branch and Path where the resources busybox Pod and PVC will be created. Use the sample application repository as https://github.com/red-hat-storage/ocm-ramen-samples where the Branch is release-4.17 and Path is busybox-odr-metro . Scroll down in the form until you see Deploy application resources on clusters with all specified labels . Select the global Cluster sets or the one that includes the correct managed clusters for your environment. Add a label <name> with its value set to the managed cluster name. Click Create which is at the top right hand corner. On the follow-on screen go to the Topology tab. You should see that there are all Green checkmarks on the application topology. Note To get more information, click on any of the topology elements and a window will appear on the right of the topology view. Validating the sample application deployment. Now that the busybox application has been deployed to your preferred Cluster, the deployment can be validated. Log in to your managed cluster where busybox was deployed by RHACM. Example output: 3.11.1.2. Apply Data policy to sample application Prerequisites Ensure that both managed clusters referenced in the Data policy are reachable. If not, the application will not be protected for disaster recovery until both clusters are online. Procedure On the Hub cluster, navigate to All Clusters Applications . Click the Actions menu at the end of application to view the list of available actions. Click Manage data policy Assign data policy . Select Policy and click . Select an Application resource and then use PVC label selector to select PVC label for the selected application resource. Note You can select more than one PVC label for the selected application resources. You can also use the Add application resource option to add multiple resources. After adding all the application resources, click . Review the Policy configuration details and click Assign . The newly assigned Data policy is displayed on the Manage data policy modal list view. Verify that you can view the assigned policy details on the Applications page. On the Applications page, navigate to the Data policy column and click the policy link to expand the view. Verify that you can see the number of policies assigned along with failover and relocate status. Click View more details to view the status of ongoing activities with the policy in use with the application. After you apply DRPolicy to the applications, confirm whether the ClusterDataProtected is set to True in the drpc yaml output. 3.11.2. ApplicationSet-based applications 3.11.2.1. 
Creating ApplicationSet-based applications Prerequisite Ensure that the Red Hat OpenShift GitOps operator is installed on all three clusters: Hub cluster , Primary managed cluster and Secondary managed cluster . For instructions, see Installing Red Hat OpenShift GitOps Operator in web console . On the Hub cluster, ensure that both Primary and Secondary managed clusters are registered to GitOps. For registration instructions, see Registering managed clusters to GitOps . Then check if the Placement used by GitOpsCluster resource to register both managed clusters, has the tolerations to deal with cluster unavailability. You can verify if the following tolerations are added to the Placement using the command oc get placement <placement-name> -n openshift-gitops -o yaml . In case the tolerations are not added, see Configuring application placement tolerations for Red Hat Advanced Cluster Management and OpenShift GitOps . Ensure that you have created the ClusterRoleBinding yaml on both the Primary and Secondary managed clusters. For instruction, see the Prerequisites chapter in RHACM documentation . Procedure On the Hub cluster, navigate to All Clusters Applications and click Create application . Choose the application type as Argo CD ApplicationSet - Pull model . In the General step, enter your Application set name . Select Argo server openshift-gitops and Requeue time as 180 seconds. Click . In the Repository location for resources section, select Repository type Git . Enter the Git repository URL for the sample application, the github Branch and Path where the resources busybox Pod and PVC will be created. Use the sample application repository as https://github.com/red-hat-storage/ocm-ramen-samples Select Revision as release-4.17 Choose Path as busybox-odr-metro . Enter Remote namespace value. (example, busybox-sample) and click . Choose the Sync policy settings as per your requirement or go with the default selections, and then click . You can choose one or more options. In Label expressions, add a label <name> with its value set to the managed cluster name. Click . Review the setting details and click Submit . 3.11.2.2. Apply Data policy to sample ApplicationSet-based application Prerequisites Ensure that both managed clusters referenced in the Data policy are reachable. If not, the application will not be protected for disaster recovery until both clusters are online. Procedure On the Hub cluster, navigate to All Clusters Applications . Click the Actions menu at the end of application to view the list of available actions. Click Manage data policy Assign data policy . Select Policy and click . Select an Application resource and then use PVC label selector to select PVC label for the selected application resource. Note You can select more than one PVC label for the selected application resources. After adding all the application resources, click . Review the Policy configuration details and click Assign . The newly assigned Data policy is displayed on the Manage data policy modal list view. Verify that you can view the assigned policy details on the Applications page. On the Applications page, navigate to the Data policy column and click the policy link to expand the view. Verify that you can see the number of policies assigned along with failover and relocate status. After you apply DRPolicy to the applications, confirm whether the ClusterDataProtected is set to True in the drpc yaml output. 3.11.3. 
Deleting sample application This section provides instructions for deleting the sample application busybox using the RHACM console. Important When deleting a DR protected application, access to both clusters that belong to the DRPolicy is required. This is to ensure that all protected API resources and resources in the respective S3 stores are cleaned up as part of removing the DR protection. If access to one of the clusters is not healthy, deleting the DRPlacementControl resource for the application, on the hub, would remain in the Deleting state. Prerequisites These instructions to delete the sample application should not be executed until the failover and relocate testing is completed and the application is ready to be removed from RHACM and the managed clusters. Procedure On the RHACM console, navigate to Applications . Search for the sample application to be deleted (for example, busybox ). Click the Action Menu (...) to the application you want to delete. Click Delete application . When the Delete application is selected a new screen will appear asking if the application related resources should also be deleted. Select Remove application related resources checkbox to delete the Subscription and PlacementRule. Click Delete . This will delete the busybox application on the Primary managed cluster (or whatever cluster the application was running on). In addition to the resources deleted using the RHACM console, delete the DRPlacementControl if it is not auto-deleted after deleting the busybox application. Log in to the OpenShift Web console for the Hub cluster and navigate to Installed Operators for the project busybox-sample . For ApplicationSet applications, select the project as openshift-gitops . Click OpenShift DR Hub Operator and then click the DRPlacementControl tab. Click the Action Menu (...) to the busybox application DRPlacementControl that you want to delete. Click Delete DRPlacementControl . Click Delete . Note This process can be used to delete any application with a DRPlacementControl resource. 3.12. Subscription-based application failover between managed clusters Perform a failover when a managed cluster becomes unavailable, due to any reason. This failover method is application-based. Prerequisites If your setup has active and passive RHACM hub clusters, see Hub recovery using Red Hat Advanced Cluster Management . When the primary cluster is in a state other than Ready , check the actual status of the cluster as it might take some time to update. Navigate to the RHACM console Infrastructure Clusters Cluster list tab. Check the status of both the managed clusters individually before performing failover operation. However, failover operation can still be performed when the cluster you are failing over to is in a Ready state. Procedure Enable fencing on the Hub cluster . Open CLI terminal and edit the DRCluster resource , where <drcluster_name> is your unique name. Caution Once the managed cluster is fenced, all communication from applications to the OpenShift Data Foundation external storage cluster will fail and some Pods will be in an unhealthy state (for example: CreateContainerError , CrashLoopBackOff ) on the cluster that is now fenced. Example output: Verify the fencing status on the Hub cluster for the Primary managed cluster , replacing <drcluster_name> is your unique identifier. Example output: Login to your Ceph cluster and verify that the IPs that belong to the OpenShift Container Platform cluster nodes are now in the blocklist. 
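For example, you can list the current blocklist entries from a Ceph admin node with the standard blocklist query (shown here as a sketch; run it inside the cephadm shell if the ceph CLI is not installed directly on the host):
ceph osd blocklist ls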
Example output On the Hub cluster, navigate to Applications . Click the Actions menu at the end of application row to view the list of available actions. Click Failover application . After the Failover application modal is shown, select policy and target cluster to which the associated application will failover in case of a disaster. Click the Select subscription group dropdown to verify the default selection or modify this setting. By default, the subscription group that replicates for the application resources is selected. Check the status of the Failover readiness . If the status is Ready with a green tick, it indicates that the target cluster is ready for failover to start. Proceed to step 7. If the status is Unknown or Not ready , then wait until the status changes to Ready . Click Initiate . The busybox application is now failing over to the Secondary-managed cluster . Close the modal window and track the status using the Data policy column on the Applications page. Verify that the activity status shows as FailedOver for the application. Navigate to the Applications Overview tab. In the Data policy column, click the policy link for the application you applied the policy to. On the Data policy popover, click the View more details link. 3.13. ApplicationSet-based application failover between managed clusters Perform a failover when a managed cluster becomes unavailable, due to any reason. This failover method is application-based. Prerequisites If your setup has active and passive RHACM hub clusters, see Hub recovery using Red Hat Advanced Cluster Management . When the primary cluster is in a state other than Ready , check the actual status of the cluster as it might take some time to update. Navigate to the RHACM console Infrastructure Clusters Cluster list tab. Check the status of both the managed clusters individually before performing failover operation. However, failover operation can still be performed when the cluster you are failing over to is in a Ready state. Procedure Enable fencing on the Hub cluster . Open CLI terminal and edit the DRCluster resource , where <drcluster_name> is your unique name. Caution Once the managed cluster is fenced, all communication from applications to the OpenShift Data Foundation external storage cluster will fail and some Pods will be in an unhealthy state (for example: CreateContainerError , CrashLoopBackOff ) on the cluster that is now fenced. Example output: Verify the fencing status on the Hub cluster for the Primary managed cluster , replacing <drcluster_name> is your unique identifier. Example output: Login to your Ceph cluster and verify that the IPs that belong to the OpenShift Container Platform cluster nodes are now in the blocklist. Example output On the Hub cluster, navigate to Applications . Click the Actions menu at the end of application row to view the list of available actions. Click Failover application . When the Failover application modal is shown, verify the details presented are correct and check the status of the Failover readiness . If the status is Ready with a green tick, it indicates that the target cluster is ready for failover to start. Click Initiate . The busybox resources are now created on the target cluster. Close the modal window and track the status using the Data policy column on the Applications page. Verify that the activity status shows as FailedOver for the application. Navigate to the Applications Overview tab. 
In the Data policy column, click the policy link for the application you applied the policy to. On the Data policy popover, verify that you can see one or more policy names and the ongoing activities associated with the policy in use with the application. 3.14. Relocating Subscription-based application between managed clusters Relocate an application to its preferred location when all managed clusters are available. Prerequisite If your setup has active and passive RHACM hub clusters, see Hub recovery using Red Hat Advanced Cluster Management . When the primary cluster is in a state other than Ready , check the actual status of the cluster as it might take some time to update. Relocate can only be performed when both primary and preferred clusters are up and running. Navigate to RHACM console Infrastructure Clusters Cluster list tab. Check the status of both the managed clusters individually before performing relocate operation. Verify that applications were cleaned up from the cluster before unfencing it. Procedure Disable fencing on the Hub cluster. Edit the DRCluster resource for this cluster, replacing <drcluster_name> with a unique name. Example output: Gracefully reboot OpenShift Container Platform nodes that were Fenced . A reboot is required to resume the I/O operations after unfencing to avoid any further recovery orchestration failures. Reboot all nodes of the cluster by following the steps in the procedure, Rebooting a node gracefully . Note Make sure that all the nodes are initially cordoned and drained before you reboot and perform uncordon operations on the nodes. After all OpenShift nodes are rebooted and are in a Ready status, verify that all Pods are in a healthy state by running this command on the Primary managed cluster (or whatever cluster has been Unfenced). Example output: The output for this query should be zero Pods before proceeding to the step. Important If there are Pods still in an unhealthy status because of severed storage communication, troubleshoot and resolve before continuing. Because the storage cluster is external to OpenShift, it also has to be properly recovered after a site outage for OpenShift applications to be healthy. Alternatively, you can use the OpenShift Web Console dashboards and Overview tab to assess the health of applications and the external ODF storage cluster. The detailed OpenShift Data Foundation dashboard is found by navigating to Storage Data Foundation . Verify that the Unfenced cluster is in a healthy state. Validate the fencing status in the Hub cluster for the Primary-managed cluster, replacing <drcluster_name> with a unique name. Example output: Login to your Ceph cluster and verify that the IPs that belong to the OpenShift Container Platform cluster nodes are NOT in the blocklist. Ensure that you do not see the IPs added during fencing. On the Hub cluster, navigate to Applications . Click the Actions menu at the end of application row to view the list of available actions. Click Relocate application . When the Relocate application modal is shown, select policy and target cluster to which the associated application will relocate to in case of a disaster. By default, the subscription group that will deploy the application resources is selected. Click the Select subscription group dropdown to verify the default selection or modify this setting. Check the status of the Relocation readiness . If the status is Ready with a green tick, it indicates that the target cluster is ready for relocation to start. Proceed to step 7. 
If the status is Unknown or Not ready , then wait until the status changes to Ready . Click Initiate . The busybox resources are now created on the target cluster. Close the modal window and track the status using the Data policy column on the Applications page. Verify that the activity status shows as Relocated for the application. Navigate to the Applications Overview tab. In the Data policy column, click the policy link for the application you applied the policy to. On the Data policy popover, click the View more details link. 3.15. Relocating an ApplicationSet-based application between managed clusters Relocate an application to its preferred location when all managed clusters are available. Prerequisite If your setup has active and passive RHACM hub clusters, see Hub recovery using Red Hat Advanced Cluster Management . When the primary cluster is in a state other than Ready , check the actual status of the cluster as it might take some time to update. Relocate can only be performed when both primary and preferred clusters are up and running. Navigate to RHACM console Infrastructure Clusters Cluster list tab. Check the status of both the managed clusters individually before performing relocate operation. Verify that applications were cleaned up from the cluster before unfencing it. Procedure Disable fencing on the Hub cluster. Edit the DRCluster resource for this cluster, replacing <drcluster_name> with a unique name. Example output: Gracefully reboot OpenShift Container Platform nodes that were Fenced . A reboot is required to resume the I/O operations after unfencing to avoid any further recovery orchestration failures. Reboot all nodes of the cluster by following the steps in the procedure, Rebooting a node gracefully . Note Make sure that all the nodes are initially cordoned and drained before you reboot and perform uncordon operations on the nodes. After all OpenShift nodes are rebooted and are in a Ready status, verify that all Pods are in a healthy state by running this command on the Primary managed cluster (or whatever cluster has been Unfenced). Example output: The output for this query should be zero Pods before proceeding to the step. Important If there are Pods still in an unhealthy status because of severed storage communication, troubleshoot and resolve before continuing. Because the storage cluster is external to OpenShift, it also has to be properly recovered after a site outage for OpenShift applications to be healthy. Alternatively, you can use the OpenShift Web Console dashboards and Overview tab to assess the health of applications and the external ODF storage cluster. The detailed OpenShift Data Foundation dashboard is found by navigating to Storage Data Foundation . Verify that the Unfenced cluster is in a healthy state. Validate the fencing status in the Hub cluster for the Primary-managed cluster, replacing <drcluster_name> with a unique name. Example output: Login to your Ceph cluster and verify that the IPs that belong to the OpenShift Container Platform cluster nodes are NOT in the blocklist. Ensure that you do not see the IPs added during fencing. On the Hub cluster, navigate to Applications . Click the Actions menu at the end of application row to view the list of available actions. Click Relocate application . When the Relocate application modal is shown, select policy and target cluster to which the associated application will relocate to in case of a disaster. Click Initiate . The busybox resources are now created on the target cluster. 
Close the modal window and track the status using the Data policy column on the Applications page. Verify that the activity status shows as Relocated for the application. Navigate to the Applications Overview tab. In the Data policy column, click the policy link for the application you applied the policy to. On the Data policy popover, verify that you can see one or more policy names and the relocation status associated with the policy in use with the application. 3.16. Disaster recovery protection for discovered applications Red Hat OpenShift Data Foundation now provides disaster recovery (DR) protection and support for workloads that are deployed in one of the managed clusters directly without using Red Hat Advanced Cluster Management (RHACM). These workloads are called discovered applications. The workloads that are deployed using RHACM are now called managed applications. When a workload is deployed directly on one of the managed clusters without using RHACM, then those workloads are called discovered applications. Though these workload details can be seen on the RHACM console, the application lifecycle (create, delete, edit) is not managed by RHACM. 3.16.1. Prerequisites for disaster recovery protection of discovered applications This section provides instructions to guide you through the prerequisites for protecting discovered applications. This includes tasks such as assigning a data policy and initiating DR actions such as failover and relocate. Ensure that all the DR configurations have been installed on the Primary managed cluster and the Secondary managed cluster. Install the OADP 1.4 operator. Note Any version before OADP 1.4 will not work for protecting discovered applications. On the Primary and Secondary managed cluster , navigate to OperatorHub and use the keyword filter to search for OADP . Click the OADP tile. Keep all default settings and click Install . Ensure that the operator resources are installed in the openshift-adp project. Note If OADP 1.4 is installed after DR configuration has been completed then the ramen-dr-cluster-operator pods on the Primary managed cluster and the Secondary managed cluster in namespace openshift-dr-system must be restarted (deleted and recreated). [Optional] Add CACertificates to ramen-hub-operator-config ConfigMap . Configure network (SSL) access between the primary and secondary clusters so that metadata can be stored on the alternate cluster in a Multicloud Gateway (MCG) object bucket using a secure transport protocol and in the Hub cluster for verifying access to the object buckets. Note If all of your OpenShift clusters are deployed using a signed and valid set of certificates for your environment then this section can be skipped. If you are using self-signed certificates, then you have already created a ConfigMap named user-ca-bundle in the openshift-config namespace and added this ConfigMap to the default Proxy cluster resource. Find the encoded value for the CACertificates. Add this base64 encoded value to the configmap ramen-hub-operator-config on the Hub cluster. Example below shows where to add CACertificates. Verify that there are DR secrets created in the OADP operator default namespace openshift-adp on the Primary managed cluster and the Secondary managed cluster . The DR secrets that were created when the first DRPolicy was created, will be similar to the secrets below. The DR secret name is preceded with the letter v . Note There will be one DR created secret for each managed cluster in the openshift-adp namespace. 
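A quick way to list these secrets on each managed cluster (a sketch; the grep filter assumes the v prefix naming convention described above):
oc get secrets -n openshift-adp | grep "^v"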
Verify if the Data Protection Application (DPA) is already installed on each managed cluster in the OADP namespace openshift-adp . If not already created then follow the step to create this resource. Create the DPA by copying the following YAML definition content to dpa.yaml . Create the DPA resource. Verify that the OADP resources are created and are in Running state. 3.16.2. Creating a sample discovered application In order to test failover from the Primary managed cluster to the Secondary managed cluster and relocate for discovered applications, you need a sample application that is installed without using the RHACM create application capability. Procedure Log in to the Primary managed cluster and clone the sample application repository. Verify that you are on the main branch. The correct directory should be used when creating the sample application based on your scenario, metro or regional. Note Only applications using CephRBD or block volumes are supported for discovered applications. Create a project named busybox-discovered on both the Primary and Secondary managed clusters . Create the busybox application on the Primary managed cluster . This sample application example is for Metro-DR using a block (Ceph RBD) volume. Note OpenShift Data Foundation Disaster Recovery solution now extends protection to discovered applications that span across multiple namespaces. Verify that busybox is running in the correct project on the Primary managed cluster . 3.16.3. Enrolling a sample discovered application for disaster recovery protection This section guides you on how to apply an existing DR Policy to a discovered application from the Protected applications tab. Prerequisites Ensure that Disaster Recovery has been configured and that at least one DR Policy has been created. Procedure On RHACM console, navigate to Disaster recovery Protected applications tab. Click Enroll application to start configuring existing applications for DR protection. Select ACM discovered applications . In the Namespace page, choose the DR cluster which is the name of the Primary managed cluster where busybox is installed. Select namespace where the application is installed. For example, busybox-discovered . Note If you have workload spread across multiple namespaces then you can select all of those namespaces to DR protect. Choose a unique Name , for example busybox-rbd , for the discovered application and click . In the Configuration page, select either Resource label or Recipe . Resource label is used to protect your resources where you can set which resources will be included in the kubernetes-object backup and what volume's persistent data will be replicated. If you selected Resource label , provide label expressions and PVC label selector. Choose the label appname=busybox for both the kubernetes-objects and for the PVC(s) . If you selected Recipe , then from the Recipe list select the name of the recipe. Important The recipe resource must be created in the application namespace on both managed clusters before enrolling an application for disaster recovery. Click . In the Replication page, select an existing DR Policy and the kubernetes-objects backup interval . Note It is recommended to choose the same duration for the PVC data replication and kubernetes-object backup interval (i.e., 5 minutes). Click . Review the configuration and click Save . Use the Back button to go back to the screen to correct any issues. 
Verify that the Application volumes (PVCs) and the Kubernetes-objects backup have a Healthy status before proceeding to DR Failover and Relocate testing. You can view the status of your Discovered applications on the Protected applications tab. To see the status of the DRPC, run the following command on the Hub cluster: The discovered applications store resources such as DRPlacementControl (DRPC) and Placement on the Hub cluster in a new namespace called openshift-dr-ops . The DRPC name can be identified by the unique Name configured in prior steps (i.e., busybox-rbd ). To see the status of the VolumeReplicationGroup (VRG) for discovered applications, run the following command on the managed cluster where the busybox application was manually installed. The VRG resource is stored in the namespace openshift-dr-ops after a DR Policy is assigned to the discovered application. The VRG name can be identified by the unique Name configured in prior steps (i.e., busybox-rbd ). 3.16.4. Discovered application failover and relocate A protected Discovered application can Failover or Relocate to its peer cluster similar to managed applications . However, there are some additional steps for discovered applications since RHACM does not manage the lifecycle of the application as it does for Managed applications. This section guides you through the Failover and Relocate process for a protected discovered application. Important Never initiate a Failover or Relocate of an application when one or both resource types are in a Warning or Critical status. 3.16.4.1. Failover disaster recovery protected discovered application This section guides you on how to failover a discovered application which is disaster recovery protected. Prerequisites Ensure that the application namespace is created in both managed clusters (for example, busybox-discovered ). Procedure Enable fencing on the Hub cluster . Open CLI terminal and edit the DRCluster resource , where <drcluster_name> is your unique name. Caution Once the managed cluster is fenced, all communication from applications to the OpenShift Data Foundation external storage cluster will fail and some Pods will be in an unhealthy state (for example: CreateContainerError , CrashLoopBackOff ) on the cluster that is now fenced. Example output: Verify the fencing status on the Hub cluster for the Primary managed cluster , replacing <drcluster_name> is your unique identifier. Example output: Login to your Ceph cluster and verify that the IPs that belong to the OpenShift Container Platform cluster nodes are now in the blocklist. Example output In the RHACM console, navigate to Disaster Recovery Protected applications tab. At the end of the application row, click on the Actions menu and choose to initiate Failover . In the Failover application modal window, review the status of the application and the target cluster. Click Initiate . Wait for the Failover process to complete. Verify that the busybox application is running on the Secondary managed cluster . Check the progression status of Failover until the result is WaitOnUserToCleanup . The DRPC name can be identified by the unique Name configured in prior steps (for example, busybox-rbd ). Remove the busybox application from the Primary managed cluster to complete the Failover process. Navigate to the Protected applications tab. You will see a message to remove the application. Navigate to the cloned repository for busybox and run the following commands on the Primary managed cluster where you failed over from. 
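A hedged sketch of that cleanup, assuming the sample application was created with oc apply -k from the cloned repository and that odr-metro-rbd is the directory and busybox-discovered the namespace used earlier (adjust both to your deployment):
cd ~/ocm-ramen-samples
oc delete -k odr-metro-rbd -n busybox-discovered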
Use the same directory that was used to create the application (for example, odr-metro-rbd ). After deleting the application, navigate to the Protected applications tab and verify that the busybox resources are both in Healthy status. 3.16.4.2. Relocate disaster recovery protected discovered application This section guides you on how to relocate a discovered application which is disaster recovery protected. Procedure Disable fencing on the Hub cluster. Edit the DRCluster resource for this cluster, replacing <drcluster_name> with a unique name. Example output: Gracefully reboot OpenShift Container Platform nodes that were Fenced . A reboot is required to resume the I/O operations after unfencing to avoid any further recovery orchestration failures. Reboot all nodes of the cluster by following the steps in the procedure, Rebooting a node gracefully . Note Make sure that all the nodes are initially cordoned and drained before you reboot and perform uncordon operations on the nodes. After all OpenShift nodes are rebooted and are in a Ready status, verify that all Pods are in a healthy state by running this command on the Primary managed cluster (or whatever cluster has been Unfenced). Example output: The output for this query should be zero Pods before proceeding to the step. Important If there are Pods still in an unhealthy status because of severed storage communication, troubleshoot and resolve before continuing. Because the storage cluster is external to OpenShift, it also has to be properly recovered after a site outage for OpenShift applications to be healthy. Alternatively, you can use the OpenShift Web Console dashboards and Overview tab to assess the health of applications and the external ODF storage cluster. The detailed OpenShift Data Foundation dashboard is found by navigating to Storage Data Foundation . Verify that the Unfenced cluster is in a healthy state. Validate the fencing status in the Hub cluster for the Primary-managed cluster, replacing <drcluster_name> with a unique name. Example output: Login to your Ceph cluster and verify that the IPs that belong to the OpenShift Container Platform cluster nodes are NOT in the blocklist. Ensure that you do not see the IPs added during fencing. In the RHACM console, navigate to Disaster Recovery Protected applications tab. At the end of the application row, click on the Actions menu and choose to initiate Relocate . In the Relocate application modal window, review the status of the application and the target cluster. Click Initiate . Check the progression status of Relocate until the result is WaitOnUserToCleanup . The DRPC name can be identified by the unique Name configured in prior steps (for example, busybox-rbd ). Remove the busybox application from the Secondary managed cluster before Relocate to the Primary managed cluster is completed. Navigate to the cloned repository for busybox and run the following commands on the Secondary managed cluster where you relocated from. Use the same directory that was used to create the application (for example, odr-metro-rbd ). After deleting the application, navigate to the Protected applications tab and verify that the busybox resources are both in Healthy status. Verify that the busybox application is running on the Primary managed cluster . 3.16.5. Disable disaster recovery for protected applications This section guides you to disable disaster recovery resources when you want to delete the protected applications or when the application no longer needs to be protected. 
Procedure
Log in to the Hub cluster.
List the DRPlacementControl (DRPC) resources. Each DRPC resource was created when the application was assigned a DR policy.
Find the DRPC that has a name that includes the unique identifier that you chose when assigning a DR policy (for example, busybox-rbd) and delete the DRPC.
List the Placement resources. Each Placement resource was created when the application was assigned a DR policy.
Find the Placement that has a name that includes the unique identifier that you chose when assigning a DR policy (for example, busybox-rbd-placement-1) and delete the Placement.
3.17. Recovering to a replacement cluster with Metro-DR
When there is a failure with the primary cluster, you have the options to either repair it, wait for the recovery of the existing cluster, or replace the cluster entirely if it is irredeemable. This solution guides you through replacing a failed primary cluster with a new cluster and enables failback (relocate) to this new cluster.
In these instructions, we assume that an RHACM managed cluster must be replaced after the applications have been installed and protected. For the purposes of this section, the failed RHACM managed cluster is the replacement cluster, the cluster that is not replaced is the surviving cluster, and the new cluster is the recovery cluster.
Note Replacement cluster recovery for Discovered applications is currently not supported. Only Managed applications are supported.
Prerequisite
Ensure that the Metro-DR environment has been configured with applications installed using Red Hat Advanced Cluster Management (RHACM).
Ensure that the applications are assigned a Data policy which protects them against cluster failure.
Procedure
Perform the following steps on the Hub cluster:
Fence the replacement cluster by using the CLI terminal to edit the DRCluster resource, where <drcluster_name> is the replacement cluster name.
Using the RHACM console, navigate to Applications and fail over all protected applications from the failed cluster to the surviving cluster.
Verify and ensure that all protected applications are now running on the surviving cluster.
Note The PROGRESSION state for each application DRPlacementControl will show as Cleaning Up. This is expected if the replacement cluster is offline or down.
Unfence the replacement cluster. Using the CLI terminal, edit the DRCluster resource, where <drcluster_name> is the replacement cluster name.
Delete the DRCluster for the replacement cluster.
Note Use --wait=false since the DRCluster will not be deleted until a later step.
Disable disaster recovery on the Hub cluster for each protected application on the surviving cluster. For each application, edit the Placement and ensure that the surviving cluster is selected.
Note For Subscription-based applications, the associated Placement can be found in the same namespace on the hub cluster as on the managed clusters. For ApplicationSet-based applications, the associated Placement can be found in the openshift-gitops namespace on the hub cluster.
Verify that the s3Profile is removed for the replacement cluster by running the following command on the surviving cluster for each protected application's VolumeReplicationGroup.
After the protected application Placement resources are all configured to use the surviving cluster and the replacement cluster s3Profile(s) are removed from the protected applications, all DRPlacementControl resources must be deleted from the Hub cluster.
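A minimal sketch of deleting one such resource on the Hub cluster (the resource name and namespace are placeholders; use the note below to locate the correct namespace for your application type):
oc delete drpc <drpc_name> -n <application_namespace>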
Note For Subscription-based applications, the associated DRPlacementControl can be found on the hub cluster in the same namespace as the application uses on the managed clusters. For ApplicationSets-based applications, the associated DRPlacementControl can be found in the openshift-gitops namespace on the hub cluster. Verify that all DRPlacementControl resources are deleted before proceeding to the next step. This command is a query across all namespaces. There should be no resources found. The last step is to edit each application's Placement and remove the annotation cluster.open-cluster-management.io/experimental-scheduling-disable: "true" . Repeat the process detailed in the previous step and its sub-steps for every protected application on the surviving cluster. Disabling DR for protected applications is now complete. On the Hub cluster, run the following script to remove all disaster recovery configurations from the surviving cluster and the hub cluster . Note This script uses the command oc delete project openshift-operators to remove the Disaster Recovery (DR) operators in this namespace on the hub cluster. If there are other non-DR operators in this namespace, you must install them again from OperatorHub. After the namespace openshift-operators is automatically created again, add the monitoring label back for collecting the disaster recovery metrics. On the surviving cluster, ensure that the object bucket created during the DR installation is deleted. Delete the object bucket if it was not removed by the script. The name of the object bucket used for DR starts with odrbucket . On the RHACM console, navigate to the Infrastructure → Clusters view . Detach the replacement cluster. Create a new OpenShift cluster (recovery cluster) and import the new cluster into the RHACM console. For instructions, see Creating a cluster and Importing a target managed cluster to the hub cluster . Install the OpenShift Data Foundation operator on the recovery cluster and connect it to the same external Ceph storage system as the surviving cluster. For detailed instructions, refer to Deploying OpenShift Data Foundation in external mode . Note Ensure that the OpenShift Data Foundation version is 4.15 (or greater) and that the same version of OpenShift Data Foundation is installed on the surviving cluster. On the hub cluster, install the ODF Multicluster Orchestrator operator from OperatorHub. For instructions, see the chapter on Installing OpenShift Data Foundation Multicluster Orchestrator operator . Using the RHACM console, navigate to Data Services → Data policies . Select Create DRPolicy and name your policy. Select the recovery cluster and the surviving cluster . Create the policy. For instructions, see the chapter on Creating Disaster Recovery Policy on Hub cluster . Proceed to the next step only after the status of DRPolicy changes to Validated . Apply the DRPolicy to the applications on the surviving cluster that were originally protected before the replacement cluster failed. Relocate the newly protected applications on the surviving cluster back to the new recovery (primary) cluster. Using the RHACM console, navigate to the Applications menu to perform the relocation. 3.18. Hub recovery using Red Hat Advanced Cluster Management [Technology preview] When your setup has active and passive Red Hat Advanced Cluster Management for Kubernetes (RHACM) hub clusters and the active hub goes down, you can use the passive hub to fail over or relocate the disaster recovery protected workloads.
Important Hub recovery for Metro-DR is a Technology Preview feature and is subject to Technology Preview support limitations. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information, see Technology Preview Features Support Scope . 3.18.1. Configuring passive hub cluster To perform hub recovery in case the active hub is down or unreachable, follow the procedure in this section to configure the passive hub cluster and then fail over or relocate the disaster recovery protected workloads. Procedure Ensure that the RHACM operator and MultiClusterHub are installed on the passive hub cluster. See the RHACM installation guide for instructions. After the operator is successfully installed, a popover with a message that the Web console update is available appears on the user interface. Click Refresh web console from this popover for the console changes to take effect. Before hub recovery, configure backup and restore. See the Backup and restore topics of the RHACM Business continuity guide. Install the multicluster orchestrator (MCO) operator along with the Red Hat OpenShift GitOps operator on the passive RHACM hub prior to the restore. For instructions, see Installing OpenShift Data Foundation Multicluster Orchestrator operator . Ensure that .spec.cleanupBeforeRestore is set to None for the Restore.cluster.open-cluster-management.io resource. For details, see the Restoring passive resources while checking for backups chapter of the RHACM documentation. If SSL access across clusters was configured manually during setup, then re-configure SSL access across clusters. For instructions, see the Configuring SSL access across clusters chapter. On the passive hub, add a label to the openshift-operators namespace to enable basic monitoring of the VolumeSynchronizationDelay alert using this command. For alert details, see the Disaster recovery alerts chapter. 3.18.2. Switching to passive hub cluster Use this procedure when the active hub is down or unreachable. Procedure During the restore procedure, to avoid eviction of resources when ManifestWorks are not regenerated correctly, you can enlarge the AppliedManifestWork eviction grace period. On the passive hub cluster, check for an existing global KlusterletConfig . If a global KlusterletConfig exists, edit it and set the appliedManifestWorkEvictionGracePeriod parameter to a larger value, for example, 24 hours or more. If a global KlusterletConfig does not exist, create the KlusterletConfig using the following YAML: The configuration will be propagated to all the managed clusters automatically. Restore the backups on the passive hub cluster. For information, see Restoring a hub cluster from backup. Important Recovering a failed hub to its passive instance will only restore applications and their DR protected state to the last scheduled backup. Any application that was DR protected after the last scheduled backup would need to be protected again on the new hub. Verify that the restore is complete. Verify that the Primary and Secondary managed clusters are successfully imported into the RHACM console and that they are accessible. If any of the managed clusters are down or unreachable, then they will not be successfully imported. Wait until DRPolicy validation succeeds.
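The managed cluster import mentioned above can also be spot-checked from the CLI on the new hub; a minimal sketch (the exact columns vary by RHACM version):
oc get managedclusters
Both the Primary and Secondary managed clusters should be listed with JOINED and AVAILABLE reporting True before you continue.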
Verify that the DRPolicy is created successfully. Run this command on the Hub cluster for each of the DRPolicy resources created, where <drpolicy_name> is replaced with a unique name. Example output: Refresh the RHACM console to make the DR monitoring dashboard tab accessible if it was enabled on the Active hub cluster. Once all components are recovered, edit the global KlusterletConfig on the new hub and remove the parameter appliedManifestWorkEvictionGracePeriod and its value. If only the active hub cluster is down, restore the hub by performing hub recovery, and restoring the backups on the passive hub. If the managed clusters are still accessible, no further action is required. If the primary managed cluster is down, along with the active hub cluster, you need to fail over the workloads from the primary managed cluster to the secondary managed cluster. For failover instructions, based on your workload type, see Subscription-based applications or ApplicationSet-based applications . Verify that the failover is successful. If the Primary managed cluster is also down, then the PROGRESSION status for the workload would be in Cleaning Up phase until the down Primary managed cluster is back online and successfully imported into the RHACM console. On the passive hub cluster, run the following command to check the PROGRESSION status. | [
"subscription-manager register subscription-manager subscribe --pool=8a8XXXXXX9e0",
"subscription-manager repos --disable=\"*\" --enable=\"rhel9-for-x86_64-baseos-rpms\" --enable=\"rhel9-for-x86_64-appstream-rpms\"",
"dnf update -y reboot",
"subscription-manager repos --enable=\"ansible-2.9-for-rhel-9-x86_64-rpms\" --enable=\"rhceph-6-tools-for-rhel-9-x86_64-rpms\"",
"hostnamectl set-hostname <short_name>",
"hostname",
"ceph1",
"DOMAIN=\"example.domain.com\" cat <<EOF >/etc/hosts 127.0.0.1 USD(hostname).USD{DOMAIN} USD(hostname) localhost localhost.localdomain localhost4 localhost4.localdomain4 ::1 USD(hostname).USD{DOMAIN} USD(hostname) localhost6 localhost6.localdomain6 EOF",
"hostname -f",
"ceph1.example.domain.com",
"sudo dnf install -y cephadm-ansible",
"cat <<EOF > ~/.ssh/config Host ceph* User deployment-user IdentityFile ~/.ssh/ceph.pem EOF",
"cat <<EOF > /usr/share/cephadm-ansible/inventory ceph1 ceph2 ceph3 ceph4 ceph5 ceph6 ceph7 [admin] ceph1 ceph4 EOF",
"ansible -i /usr/share/cephadm-ansible/inventory -m ping all -b",
"ceph6 | SUCCESS => { \"ansible_facts\": { \"discovered_interpreter_python\": \"/usr/libexec/platform-python\" }, \"changed\": false, \"ping\": \"pong\" } ceph4 | SUCCESS => { \"ansible_facts\": { \"discovered_interpreter_python\": \"/usr/libexec/platform-python\" }, \"changed\": false, \"ping\": \"pong\" } ceph3 | SUCCESS => { \"ansible_facts\": { \"discovered_interpreter_python\": \"/usr/libexec/platform-python\" }, \"changed\": false, \"ping\": \"pong\" } ceph2 | SUCCESS => { \"ansible_facts\": { \"discovered_interpreter_python\": \"/usr/libexec/platform-python\" }, \"changed\": false, \"ping\": \"pong\" } ceph5 | SUCCESS => { \"ansible_facts\": { \"discovered_interpreter_python\": \"/usr/libexec/platform-python\" }, \"changed\": false, \"ping\": \"pong\" } ceph1 | SUCCESS => { \"ansible_facts\": { \"discovered_interpreter_python\": \"/usr/libexec/platform-python\" }, \"changed\": false, \"ping\": \"pong\" } ceph7 | SUCCESS => { \"ansible_facts\": { \"discovered_interpreter_python\": \"/usr/libexec/platform-python\" }, \"changed\": false, \"ping\": \"pong\" }",
"ansible-playbook -i /usr/share/cephadm-ansible/inventory /usr/share/cephadm-ansible/cephadm-preflight.yml --extra-vars \"ceph_origin=rhcs\"",
"cat <<EOF > /root/registry.json { \"url\":\"registry.redhat.io\", \"username\":\"User\", \"password\":\"Pass\" } EOF",
"cat <<EOF > /root/cluster-spec.yaml service_type: host addr: 10.0.40.78 ## <XXX.XXX.XXX.XXX> hostname: ceph1 ## <ceph-hostname-1> location: root: default datacenter: DC1 labels: - osd - mon - mgr --- service_type: host addr: 10.0.40.35 hostname: ceph2 location: datacenter: DC1 labels: - osd - mon --- service_type: host addr: 10.0.40.24 hostname: ceph3 location: datacenter: DC1 labels: - osd - mds - rgw --- service_type: host addr: 10.0.40.185 hostname: ceph4 location: root: default datacenter: DC2 labels: - osd - mon - mgr --- service_type: host addr: 10.0.40.88 hostname: ceph5 location: datacenter: DC2 labels: - osd - mon --- service_type: host addr: 10.0.40.66 hostname: ceph6 location: datacenter: DC2 labels: - osd - mds - rgw --- service_type: host addr: 10.0.40.221 hostname: ceph7 labels: - mon --- service_type: mon placement: label: \"mon\" --- service_type: mds service_id: cephfs placement: label: \"mds\" --- service_type: mgr service_name: mgr placement: label: \"mgr\" --- service_type: osd service_id: all-available-devices service_name: osd.all-available-devices placement: label: \"osd\" spec: data_devices: all: true --- service_type: rgw service_id: objectgw service_name: rgw.objectgw placement: count: 2 label: \"rgw\" spec: rgw_frontend_port: 8080 EOF",
"ip a | grep 10.0.40",
"10.0.40.78",
"cephadm bootstrap --ssh-user=deployment-user --mon-ip 10.0.40.78 --apply-spec /root/cluster-spec.yaml --registry-json /root/registry.json",
"You can access the Ceph CLI with: sudo /usr/sbin/cephadm shell --fsid dd77f050-9afe-11ec-a56c-029f8148ea14 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring Consider enabling telemetry to help improve Ceph: ceph telemetry on For more information see: https://docs.ceph.com/docs/pacific/mgr/telemetry/",
"ceph -s",
"cluster: id: 3a801754-e01f-11ec-b7ab-005056838602 health: HEALTH_OK services: mon: 5 daemons, quorum ceph1,ceph2,ceph4,ceph5,ceph7 (age 4m) mgr: ceph1.khuuot(active, since 5m), standbys: ceph4.zotfsp osd: 12 osds: 12 up (since 3m), 12 in (since 4m) rgw: 2 daemons active (2 hosts, 1 zones) data: pools: 5 pools, 107 pgs objects: 191 objects, 5.3 KiB usage: 105 MiB used, 600 GiB / 600 GiB avail 105 active+clean",
"ceph orch host ls",
"HOST ADDR LABELS STATUS ceph1 10.0.40.78 _admin osd mon mgr ceph2 10.0.40.35 osd mon ceph3 10.0.40.24 osd mds rgw ceph4 10.0.40.185 osd mon mgr ceph5 10.0.40.88 osd mon ceph6 10.0.40.66 osd mds rgw ceph7 10.0.40.221 mon",
"ceph orch ps | grep mon | awk '{print USD1 \" \" USD2}'",
"mon.ceph1 ceph1 mon.ceph2 ceph2 mon.ceph4 ceph4 mon.ceph5 ceph5 mon.ceph7 ceph7",
"ceph orch ps | grep mgr | awk '{print USD1 \" \" USD2}'",
"mgr.ceph2.ycgwyz ceph2 mgr.ceph5.kremtt ceph5",
"ceph osd tree",
"ID CLASS WEIGHT TYPE NAME STATUS REWEIGHT PRI-AFF -1 0.87900 root default -16 0.43950 datacenter DC1 -11 0.14650 host ceph1 2 ssd 0.14650 osd.2 up 1.00000 1.00000 -3 0.14650 host ceph2 3 ssd 0.14650 osd.3 up 1.00000 1.00000 -13 0.14650 host ceph3 4 ssd 0.14650 osd.4 up 1.00000 1.00000 -17 0.43950 datacenter DC2 -5 0.14650 host ceph4 0 ssd 0.14650 osd.0 up 1.00000 1.00000 -9 0.14650 host ceph5 1 ssd 0.14650 osd.1 up 1.00000 1.00000 -7 0.14650 host ceph6 5 ssd 0.14650 osd.5 up 1.00000 1.00000",
"ceph osd pool create 32 32 ceph osd pool application enable rbdpool rbd",
"ceph osd lspools | grep rbdpool",
"3 rbdpool",
"ceph orch ps | grep mds",
"mds.cephfs.ceph3.cjpbqo ceph3 running (17m) 117s ago 17m 16.1M - 16.2.9 mds.cephfs.ceph6.lqmgqt ceph6 running (17m) 117s ago 17m 16.1M - 16.2.9",
"ceph fs volume create cephfs",
"ceph fs status",
"cephfs - 0 clients ====== RANK STATE MDS ACTIVITY DNS INOS DIRS CAPS 0 active cephfs.ceph6.ggjywj Reqs: 0 /s 10 13 12 0 POOL TYPE USED AVAIL cephfs.cephfs.meta metadata 96.0k 284G cephfs.cephfs.data data 0 284G STANDBY MDS cephfs.ceph3.ogcqkl",
"ceph orch ps | grep rgw",
"rgw.objectgw.ceph3.kkmxgb ceph3 *:8080 running (7m) 3m ago 7m 52.7M - 16.2.9 rgw.objectgw.ceph6.xmnpah ceph6 *:8080 running (7m) 3m ago 7m 53.3M - 16.2.9",
"ceph mon dump | grep election_strategy",
"dumped monmap epoch 9 election_strategy: 1",
"ceph mon set election_strategy connectivity",
"ceph mon dump | grep election_strategy",
"dumped monmap epoch 10 election_strategy: 3",
"ceph mon set_location ceph1 datacenter=DC1 ceph mon set_location ceph2 datacenter=DC1 ceph mon set_location ceph4 datacenter=DC2 ceph mon set_location ceph5 datacenter=DC2 ceph mon set_location ceph7 datacenter=DC3",
"ceph mon dump",
"epoch 17 fsid dd77f050-9afe-11ec-a56c-029f8148ea14 last_changed 2022-03-04T07:17:26.913330+0000 created 2022-03-03T14:33:22.957190+0000 min_mon_release 16 (pacific) election_strategy: 3 0: [v2:10.0.143.78:3300/0,v1:10.0.143.78:6789/0] mon.ceph1; crush_location {datacenter=DC1} 1: [v2:10.0.155.185:3300/0,v1:10.0.155.185:6789/0] mon.ceph4; crush_location {datacenter=DC2} 2: [v2:10.0.139.88:3300/0,v1:10.0.139.88:6789/0] mon.ceph5; crush_location {datacenter=DC2} 3: [v2:10.0.150.221:3300/0,v1:10.0.150.221:6789/0] mon.ceph7; crush_location {datacenter=DC3} 4: [v2:10.0.155.35:3300/0,v1:10.0.155.35:6789/0] mon.ceph2; crush_location {datacenter=DC1}",
"dnf -y install ceph-base",
"ceph osd getcrushmap > /etc/ceph/crushmap.bin",
"crushtool -d /etc/ceph/crushmap.bin -o /etc/ceph/crushmap.txt",
"vim /etc/ceph/crushmap.txt",
"rule stretch_rule { id 1 type replicated min_size 1 max_size 10 step take default step choose firstn 0 type datacenter step chooseleaf firstn 2 type host step emit } end crush map",
"crushtool -c /etc/ceph/crushmap.txt -o /etc/ceph/crushmap2.bin",
"ceph osd setcrushmap -i /etc/ceph/crushmap2.bin",
"17",
"ceph osd crush rule ls",
"replicated_rule stretch_rule",
"ceph mon enable_stretch_mode ceph7 stretch_rule datacenter",
"for pool in USD(rados lspools);do echo -n \"Pool: USD{pool}; \";ceph osd pool get USD{pool} crush_rule;done",
"Pool: device_health_metrics; crush_rule: stretch_rule Pool: cephfs.cephfs.meta; crush_rule: stretch_rule Pool: cephfs.cephfs.data; crush_rule: stretch_rule Pool: .rgw.root; crush_rule: stretch_rule Pool: default.rgw.log; crush_rule: stretch_rule Pool: default.rgw.control; crush_rule: stretch_rule Pool: default.rgw.meta; crush_rule: stretch_rule Pool: rbdpool; crush_rule: stretch_rule",
"ceph orch ps | grep rgw.objectgw",
"rgw.objectgw.ceph3.mecpzm ceph3 *:8080 running (5d) 31s ago 7w 204M - 16.2.7-112.el8cp rgw.objectgw.ceph6.mecpzm ceph6 *:8080 running (5d) 31s ago 7w 204M - 16.2.7-112.el8cp",
"host ceph3.example.com host ceph6.example.com",
"ceph3.example.com has address 10.0.40.24 ceph6.example.com has address 10.0.40.66",
"python3 ceph-external-cluster-details-exporter.py --rbd-data-pool-name rbdpool --cephfs-filesystem-name cephfs --cephfs-data-pool-name cephfs.cephfs.data --cephfs-metadata-pool-name cephfs.cephfs.meta --<rgw-endpoint> XXX.XXX.XXX.XXX:8080 --run-as-user client.odf.cluster1 > ocp-cluster1.json",
"python3 ceph-external-cluster-details-exporter.py --rbd-data-pool-name rbdpool --cephfs-filesystem-name cephfs --cephfs-data-pool-name cephfs.cephfs.data --cephfs-metadata-pool-name cephfs.cephfs.meta --rgw-endpoint XXX.XXX.XXX.XXX:8080 --run-as-user client.odf.cluster2 > ocp-cluster2.json",
"oc get storagecluster -n openshift-storage ocs-external-storagecluster -o jsonpath='{.status.phase}{\"\\n\"}'",
"oc get noobaa -n openshift-storage noobaa -o jsonpath='{.status.phase}{\"\\n\"}'",
"oc label nodes --all metro-dr.openshift-storage.topology.io/datacenter=DC1",
"oc patch storageclusters.ocs.openshift.io -n openshift-storage ocs-external-storagecluster -p '{\"spec\":{\"csi\":{\"readAffinity\":{\"enabled\":true,\"crushLocationLabels\":[\"metro-dr.openshift-storage.topology.io/datacenter\"]}}}}' --type=merge",
"oc delete po -n openshift-storage -l 'app in (csi-cephfsplugin,csi-rbdplugin)'",
"oc label nodes --all metro-dr.openshift-storage.topology.io/datacenter=DC2",
"oc patch storageclusters.ocs.openshift.io -n openshift-storage ocs-external-storagecluster -p '{\"spec\":{\"csi\":{\"readAffinity\":{\"enabled\":true,\"crushLocationLabels\":[\"metro-dr.openshift-storage.topology.io/datacenter\"]}}}}' --type=merge",
"oc delete po -n openshift-storage -l 'app in (csi-cephfsplugin,csi-rbdplugin)'",
"oc get pods -n openshift-operators",
"NAME READY STATUS RESTARTS AGE odf-multicluster-console-6845b795b9-blxrn 1/1 Running 0 4d20h odfmo-controller-manager-f9d9dfb59-jbrsd 1/1 Running 0 4d20h ramen-hub-operator-6fb887f885-fss4w 2/2 Running 0 4d20h",
"oc get cm default-ingress-cert -n openshift-config-managed -o jsonpath=\"{['data']['ca-bundle\\.crt']}\" > primary.crt",
"oc get cm default-ingress-cert -n openshift-config-managed -o jsonpath=\"{['data']['ca-bundle\\.crt']}\" > secondary.crt",
"apiVersion: v1 data: ca-bundle.crt: | -----BEGIN CERTIFICATE----- <copy contents of cert1 from primary.crt here> -----END CERTIFICATE----- -----BEGIN CERTIFICATE----- <copy contents of cert2 from primary.crt here> -----END CERTIFICATE----- -----BEGIN CERTIFICATE----- <copy contents of cert3 primary.crt here> -----END CERTIFICATE---- -----BEGIN CERTIFICATE----- <copy contents of cert1 from secondary.crt here> -----END CERTIFICATE----- -----BEGIN CERTIFICATE----- <copy contents of cert2 from secondary.crt here> -----END CERTIFICATE----- -----BEGIN CERTIFICATE----- <copy contents of cert3 from secondary.crt here> -----END CERTIFICATE----- kind: ConfigMap metadata: name: user-ca-bundle namespace: openshift-config",
"oc create -f cm-clusters-crt.yaml",
"configmap/user-ca-bundle created",
"oc patch proxy cluster --type=merge --patch='{\"spec\":{\"trustedCA\":{\"name\":\"user-ca-bundle\"}}}'",
"proxy.config.openshift.io/cluster patched",
"oc get drpolicy <drpolicy_name> -o jsonpath='{.status.conditions[].reason}{\"\\n\"}'",
"Succeeded",
"oc get drclusters",
"NAME AGE ocp4perf1 4m42s ocp4perf2 4m42s",
"oc get drcluster <drcluster_name> -o jsonpath='{.status.conditions[2].reason}{\"\\n\"}'",
"Succeeded",
"oc get csv,pod -n openshift-dr-system",
"NAME DISPLAY VERSION REPLACES PHASE clusterserviceversion.operators.coreos.com/odr-cluster-operator.v4.15.0 Openshift DR Cluster Operator 4.15.0 Succeeded clusterserviceversion.operators.coreos.com/volsync-product.v0.8.0 VolSync 0.8.0 Succeeded NAME READY STATUS RESTARTS AGE pod/ramen-dr-cluster-operator-6467cf5d4c-cc8kz 2/2 Running 0 3d12h",
"get secrets -n openshift-dr-system | grep Opaque",
"get cm -n openshift-operators ramen-hub-operator-config -oyaml",
"oc get nodes -o jsonpath='{range .items[*]}{.status.addresses[?(@.type==\"ExternalIP\")].address}{\"\\n\"}{end}'",
"10.70.56.118 10.70.56.193 10.70.56.154 10.70.56.242 10.70.56.136 10.70.56.99",
"oc get drcluster",
"NAME AGE ocp4perf1 5m35s ocp4perf2 5m35s",
"oc edit drcluster <drcluster_name>",
"apiVersion: ramendr.openshift.io/v1alpha1 kind: DRCluster metadata: [...] spec: s3ProfileName: s3profile-<drcluster_name>-ocs-external-storagecluster ## Add this section cidrs: - <IP_Address1>/32 - <IP_Address2>/32 - <IP_Address3>/32 - <IP_Address4>/32 - <IP_Address5>/32 - <IP_Address6>/32 [...]",
"drcluster.ramendr.openshift.io/ocp4perf1 edited",
"oc edit drcluster <drcluster_name>",
"apiVersion: ramendr.openshift.io/v1alpha1 kind: DRCluster metadata: ## Add this section annotations: drcluster.ramendr.openshift.io/storage-clusterid: openshift-storage drcluster.ramendr.openshift.io/storage-driver: openshift-storage.rbd.csi.ceph.com drcluster.ramendr.openshift.io/storage-secret-name: rook-csi-rbd-provisioner drcluster.ramendr.openshift.io/storage-secret-namespace: openshift-storage [...]",
"drcluster.ramendr.openshift.io/ocp4perf1 edited",
"oc get pods,pvc -n busybox-sample",
"NAME READY STATUS RESTARTS AGE pod/busybox-67bf494b9-zl5tr 1/1 Running 0 77s NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE persistentvolumeclaim/busybox-pvc Bound pvc-c732e5fe-daaf-4c4d-99dd-462e04c18412 5Gi RWO ocs-storagecluster-ceph-rbd 77s",
"tolerations: - key: cluster.open-cluster-management.io/unreachable operator: Exists - key: cluster.open-cluster-management.io/unavailable operator: Exists",
"oc edit drcluster <drcluster_name>",
"apiVersion: ramendr.openshift.io/v1alpha1 kind: DRCluster metadata: [...] spec: ## Add this line clusterFence: Fenced cidrs: [...] [...]",
"drcluster.ramendr.openshift.io/ocp4perf1 edited",
"oc get drcluster.ramendr.openshift.io <drcluster_name> -o jsonpath='{.status.phase}{\"\\n\"}'",
"Fenced",
"ceph osd blocklist ls",
"cidr:10.1.161.1:0/32 2028-10-30T22:30:03.585634+0000 cidr:10.1.161.14:0/32 2028-10-30T22:30:02.483561+0000 cidr:10.1.161.51:0/32 2028-10-30T22:30:01.272267+0000 cidr:10.1.161.63:0/32 2028-10-30T22:30:05.099655+0000 cidr:10.1.161.129:0/32 2028-10-30T22:29:58.335390+0000 cidr:10.1.161.130:0/32 2028-10-30T22:29:59.861518+0000",
"oc edit drcluster <drcluster_name>",
"apiVersion: ramendr.openshift.io/v1alpha1 kind: DRCluster metadata: [...] spec: ## Add this line clusterFence: Fenced cidrs: [...] [...]",
"drcluster.ramendr.openshift.io/ocp4perf1 edited",
"oc get drcluster.ramendr.openshift.io <drcluster_name> -o jsonpath='{.status.phase}{\"\\n\"}'",
"Fenced",
"ceph osd blocklist ls",
"cidr:10.1.161.1:0/32 2028-10-30T22:30:03.585634+0000 cidr:10.1.161.14:0/32 2028-10-30T22:30:02.483561+0000 cidr:10.1.161.51:0/32 2028-10-30T22:30:01.272267+0000 cidr:10.1.161.63:0/32 2028-10-30T22:30:05.099655+0000 cidr:10.1.161.129:0/32 2028-10-30T22:29:58.335390+0000 cidr:10.1.161.130:0/32 2028-10-30T22:29:59.861518+0000",
"oc edit drcluster <drcluster_name>",
"apiVersion: ramendr.openshift.io/v1alpha1 kind: DRCluster metadata: [...] spec: cidrs: [...] ## Modify this line clusterFence: Unfenced [...] [...]",
"drcluster.ramendr.openshift.io/ocp4perf1 edited",
"get pods -A | egrep -v 'Running|Completed'",
"NAMESPACE NAME READY STATUS RESTARTS AGE",
"oc get drcluster.ramendr.openshift.io <drcluster_name> -o jsonpath='{.status.phase}{\"\\n\"}'",
"Unfenced",
"ceph osd blocklist ls",
"oc edit drcluster <drcluster_name>",
"apiVersion: ramendr.openshift.io/v1alpha1 kind: DRCluster metadata: [...] spec: cidrs: [...] ## Modify this line clusterFence: Unfenced [...] [...]",
"drcluster.ramendr.openshift.io/ocp4perf1 edited",
"get pods -A | egrep -v 'Running|Completed'",
"NAMESPACE NAME READY STATUS RESTARTS AGE",
"oc get drcluster.ramendr.openshift.io <drcluster_name> -o jsonpath='{.status.phase}{\"\\n\"}'",
"Unfenced",
"ceph osd blocklist ls",
"oc get configmap user-ca-bundle -n openshift-config -o jsonpath=\"{['data']['ca-bundle\\.crt']}\" |base64 -w 0",
"oc edit configmap ramen-hub-operator-config -n openshift-operators",
"[...] ramenOpsNamespace: openshift-dr-ops s3StoreProfiles: - s3Bucket: odrbucket-36bceb61c09c s3CompatibleEndpoint: https://s3-openshift-storage.apps.hyper3.vmw.ibmfusion.eu s3ProfileName: s3profile-hyper3-ocs-storagecluster s3Region: noobaa s3SecretRef: name: 60f2ea6069e168346d5ad0e0b5faa59bb74946f caCertificates: {input base64 encoded value here} - s3Bucket: odrbucket-36bceb61c09c s3CompatibleEndpoint: https://s3-openshift-storage.apps.hyper4.vmw.ibmfusion.eu s3ProfileName: s3profile-hyper4-ocs-storagecluster s3Region: noobaa s3SecretRef: name: cc237eba032ad5c422fb939684eb633822d7900 caCertificates: {input base64 encoded value here}",
"oc get secrets -n openshift-adp NAME TYPE DATA AGE v60f2ea6069e168346d5ad0e0b5faa59bb74946f Opaque 1 3d20h vcc237eba032ad5c422fb939684eb633822d7900 Opaque 1 3d20h [...]",
"apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: labels: app.kubernetes.io/component: velero name: velero namespace: openshift-adp spec: backupImages: false configuration: nodeAgent: enable: false uploaderType: restic velero: defaultPlugins: - openshift - aws noDefaultBackupLocation: true",
"oc create -f dpa.yaml -n openshift-adp",
"dataprotectionapplication.oadp.openshift.io/velero created",
"oc get pods,dpa -n openshift-adp NAME READY STATUS RESTARTS AGE pod/openshift-adp-controller-manager-7b64b74fcd-msjbs 1/1 Running 0 5m30s pod/velero-694b5b8f5c-b4kwg 1/1 Running 0 3m31s NAME AGE dataprotectionapplication.oadp.openshift.io/velero 3m31s",
"git clone https://github.com/red-hat-storage/ocm-ramen-samples.git",
"cd ~/ocm-ramen-samples git branch * main",
"ls workloads/deployment | egrep -v 'cephfs|k8s|base' odr-metro-rbd odr-regional-rbd",
"oc new-project busybox-discovered",
"oc apply -k workloads/deployment/odr-metro-rbd -n busybox-discovered persistentvolumeclaim/busybox-pvc created deployment.apps/busybox created",
"oc get pods,pvc,deployment -n busybox-discovered",
"NAME READY STATUS RESTARTS AGE pod/busybox-796fccbb95-qmxjf 1/1 Running 0 18s NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS VOLUMEATTRIBUTESCLASS AGE persistentvolumeclaim/busybox-pvc Bound pvc-b20e4129-902d-47c7-b962-040ad64130c4 1Gi RWO ocs-storagecluster-ceph-rbd <unset> 18s NAME READY UP-TO-DATE AVAILABLE AGE deployment.apps/busybox 1/1 1 1 18",
"oc get drpc {drpc_name} -o wide -n openshift-dr-ops",
"oc get vrg {vrg_name} -n openshift-dr-ops",
"oc edit drcluster <drcluster_name>",
"apiVersion: ramendr.openshift.io/v1alpha1 kind: DRCluster metadata: [...] spec: ## Add this line clusterFence: Fenced cidrs: [...] [...]",
"drcluster.ramendr.openshift.io/ocp4perf1 edited",
"oc get drcluster.ramendr.openshift.io <drcluster_name> -o jsonpath='{.status.phase}{\"\\n\"}'",
"Fenced",
"ceph osd blocklist ls",
"cidr:10.1.161.1:0/32 2028-10-30T22:30:03.585634+0000 cidr:10.1.161.14:0/32 2028-10-30T22:30:02.483561+0000 cidr:10.1.161.51:0/32 2028-10-30T22:30:01.272267+0000 cidr:10.1.161.63:0/32 2028-10-30T22:30:05.099655+0000 cidr:10.1.161.129:0/32 2028-10-30T22:29:58.335390+0000 cidr:10.1.161.130:0/32 2028-10-30T22:29:59.861518+0000",
"oc get pods,pvc -n busybox-discovered NAME READY STATUS RESTARTS AGE pod/busybox-796fccbb95-qmxjf 1/1 Running 0 2m46s NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS VOLUMEATTRIBUTESCLASS AGE persistentvolumeclaim/busybox-pvc Bound pvc-b20e4129-902d-47c7-b962-040ad64130c4 1Gi RWO ocs-storagecluster-ceph-rbd <unset> 2m57s",
"oc get drpc {drpc_name} -n openshift-dr-ops -o jsonpath='{.status.progression}{\"\\n\"}' WaitOnUserToCleanUp",
"cd ~/ocm-ramen-samples/ git branch * main oc delete -k workloads/deployment/odr-metro-rbd -n busybox-discovered persistentvolumeclaim \"busybox-pvc\" deleted deployment.apps \"busybox\" deleted",
"oc edit drcluster <drcluster_name>",
"apiVersion: ramendr.openshift.io/v1alpha1 kind: DRCluster metadata: [...] spec: cidrs: [...] ## Modify this line clusterFence: Unfenced [...] [...]",
"drcluster.ramendr.openshift.io/ocp4perf1 edited",
"get pods -A | egrep -v 'Running|Completed'",
"NAMESPACE NAME READY STATUS RESTARTS AGE",
"oc get drcluster.ramendr.openshift.io <drcluster_name> -o jsonpath='{.status.phase}{\"\\n\"}'",
"Unfenced",
"ceph osd blocklist ls",
"oc get drpc {drpc_name} -n openshift-dr-ops -o jsonpath='{.status.progression}{\"\\n\"}' WaitOnUserToCleanUp",
"cd ~/ocm-ramen-samples/ git branch * main oc delete -k workloads/deployment/odr-metro-rbd -n busybox-discovered persistentvolumeclaim \"busybox-pvc\" deleted deployment.apps \"busybox\" deleted",
"oc get pods,pvc -n busybox-discovered NAME READY STATUS RESTARTS AGE pod/busybox-796fccbb95-qmxjf 1/1 Running 0 2m46s NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS VOLUMEATTRIBUTESCLASS AGE persistentvolumeclaim/busybox-pvc Bound pvc-b20e4129-902d-47c7-b962-040ad64130c4 1Gi RWO ocs-storagecluster-ceph-rbd <unset> 2m57s",
"oc get drpc -n openshift-dr-ops",
"oc delete {drpc_name} -n openshift-dr-ops",
"oc get placements -n openshift-dr-ops",
"oc delete placements {placement_name} -n openshift-dr-ops",
"edit drcluster <drcluster_name>",
"apiVersion: ramendr.openshift.io/v1alpha1 kind: DRCluster metadata: [...] spec: ## Add or modify this line clusterFence: Fenced cidrs: [...] [...]",
"oc edit drcluster <drcluster_name>",
"apiVersion: ramendr.openshift.io/v1alpha1 kind: DRCluster metadata: [...] spec: ## Modify this line clusterFence: Unfenced cidrs: [...] [...]",
"oc delete drcluster <drcluster_name> --wait=false",
"oc edit placement <placement_name> -n <namespace>",
"apiVersion: cluster.open-cluster-management.io/v1beta1 kind: Placement metadata: annotations: cluster.open-cluster-management.io/experimental-scheduling-disable: \"true\" [...] spec: clusterSets: - submariner predicates: - requiredClusterSelector: claimSelector: {} labelSelector: matchExpressions: - key: name operator: In values: - cluster1 <-- Modify to be surviving cluster name [...]",
"oc get vrg -n <application_namespace> -o jsonpath='{.items[0].spec.s3Profiles}' | jq",
"oc delete drpc <drpc_name> -n <namespace>",
"oc get drpc -A",
"oc edit placement <placement_name> -n <namespace>",
"apiVersion: cluster.open-cluster-management.io/v1beta1 kind: Placement metadata: annotations: ## Remove this annotation cluster.open-cluster-management.io/experimental-scheduling-disable: \"true\" [...]",
"#!/bin/bash secrets=USD(oc get secrets -n openshift-operators | grep Opaque | cut -d\" \" -f1) echo USDsecrets for secret in USDsecrets do oc patch -n openshift-operators secret/USDsecret -p '{\"metadata\":{\"finalizers\":null}}' --type=merge done mirrorpeers=USD(oc get mirrorpeer -o name) echo USDmirrorpeers for mp in USDmirrorpeers do oc patch USDmp -p '{\"metadata\":{\"finalizers\":null}}' --type=merge oc delete USDmp done drpolicies=USD(oc get drpolicy -o name) echo USDdrpolicies for drp in USDdrpolicies do oc patch USDdrp -p '{\"metadata\":{\"finalizers\":null}}' --type=merge oc delete USDdrp done drclusters=USD(oc get drcluster -o name) echo USDdrclusters for drp in USDdrclusters do oc patch USDdrp -p '{\"metadata\":{\"finalizers\":null}}' --type=merge oc delete USDdrp done delete project openshift-operators managedclusters=USD(oc get managedclusters -o name | cut -d\"/\" -f2) echo USDmanagedclusters for mc in USDmanagedclusters do secrets=USD(oc get secrets -n USDmc | grep multicluster.odf.openshift.io/secret-type | cut -d\" \" -f1) echo USDsecrets for secret in USDsecrets do set -x oc patch -n USDmc secret/USDsecret -p '{\"metadata\":{\"finalizers\":null}}' --type=merge oc delete -n USDmc secret/USDsecret done done delete clusterrolebinding spoke-clusterrole-bindings",
"oc label namespace openshift-operators openshift.io/cluster-monitoring='true'",
"oc get obc -n openshift-storage",
"oc label namespace openshift-operators openshift.io/cluster-monitoring='true'",
"apiVersion: config.open-cluster-management.io/v1alpha1 kind: KlusterletConfig metadata: name: global spec: appliedManifestWorkEvictionGracePeriod: \"24h\"",
"oc -n <restore-namespace> wait restore <restore-name> --for=jsonpath='{.status.phase}'=Finished --timeout=120s",
"oc get drpolicy <drpolicy_name> -o jsonpath='{.status.conditions[].reason}{\"\\n\"}'",
"Succeeded",
"oc get drpc -o wide -A"
] | https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.18/html/configuring_openshift_data_foundation_disaster_recovery_for_openshift_workloads/metro-dr-solution |
Chapter 6. Using config maps with applications | Chapter 6. Using config maps with applications Config maps allow you to decouple configuration artifacts from image content to keep containerized applications portable. The following sections define config maps and how to create and use them. For information on creating config maps, see Creating and using config maps . 6.1. Understanding config maps Many applications require configuration using some combination of configuration files, command line arguments, and environment variables. In OpenShift Container Platform, these configuration artifacts are decoupled from image content to keep containerized applications portable. The ConfigMap object provides mechanisms to inject containers with configuration data while keeping containers agnostic of OpenShift Container Platform. A config map can be used to store fine-grained information like individual properties or coarse-grained information like entire configuration files or JSON blobs. The ConfigMap API object holds key-value pairs of configuration data that can be consumed in pods or used to store configuration data for system components such as controllers. For example: ConfigMap Object Definition kind: ConfigMap apiVersion: v1 metadata: creationTimestamp: 2016-02-18T19:14:38Z name: example-config namespace: default data: 1 example.property.1: hello example.property.2: world example.property.file: |- property.1=value-1 property.2=value-2 property.3=value-3 binaryData: bar: L3Jvb3QvMTAw 2 1 Contains the configuration data. 2 Points to a file that contains non-UTF8 data, for example, a binary Java keystore file. Enter the file data in Base 64. Note You can use the binaryData field when you create a config map from a binary file, such as an image. Configuration data can be consumed in pods in a variety of ways. A config map can be used to: Populate environment variable values in containers Set command-line arguments in a container Populate configuration files in a volume Users and system components can store configuration data in a config map. A config map is similar to a secret, but designed to more conveniently support working with strings that do not contain sensitive information. Config map restrictions A config map must be created before its contents can be consumed in pods. Controllers can be written to tolerate missing configuration data. Consult individual components configured by using config maps on a case-by-case basis. ConfigMap objects reside in a project. They can only be referenced by pods in the same project. The Kubelet only supports the use of a config map for pods it gets from the API server. This includes any pods created by using the CLI, or indirectly from a replication controller. It does not include pods created by using the OpenShift Container Platform node's --manifest-url flag, its --config flag, or its REST API because these are not common ways to create pods. 6.2. Use cases: Consuming config maps in pods The following sections describe some uses cases when consuming ConfigMap objects in pods. 6.2.1. Populating environment variables in containers by using config maps Config maps can be used to populate individual environment variables in containers or to populate environment variables in containers from all keys that form valid environment variable names. 
As an example, consider the following config map: ConfigMap with two environment variables apiVersion: v1 kind: ConfigMap metadata: name: special-config 1 namespace: default 2 data: special.how: very 3 special.type: charm 4 1 Name of the config map. 2 The project in which the config map resides. Config maps can only be referenced by pods in the same project. 3 4 Environment variables to inject. ConfigMap with one environment variable apiVersion: v1 kind: ConfigMap metadata: name: env-config 1 namespace: default data: log_level: INFO 2 1 Name of the config map. 2 Environment variable to inject. Procedure You can consume the keys of this ConfigMap in a pod using configMapKeyRef sections. Sample Pod specification configured to inject specific environment variables apiVersion: v1 kind: Pod metadata: name: dapi-test-pod spec: containers: - name: test-container image: gcr.io/google_containers/busybox command: [ "/bin/sh", "-c", "env" ] env: 1 - name: SPECIAL_LEVEL_KEY 2 valueFrom: configMapKeyRef: name: special-config 3 key: special.how 4 - name: SPECIAL_TYPE_KEY valueFrom: configMapKeyRef: name: special-config 5 key: special.type 6 optional: true 7 envFrom: 8 - configMapRef: name: env-config 9 restartPolicy: Never 1 Stanza to pull the specified environment variables from a ConfigMap . 2 Name of a Pod environment variable that you are injecting a key's value into. 3 5 Name of the ConfigMap to pull specific environment variables from. 4 6 Environment variable to pull from the ConfigMap . 7 Makes the environment variable optional. As optional, the Pod will be started even if the specified ConfigMap and keys do not exist. 8 Stanza to pull all environment variables from a ConfigMap . 9 Name of the ConfigMap to pull all environment variables from. When this Pod is run, the Pod logs will include the following output: Note SPECIAL_TYPE_KEY=charm is not listed in the example output because optional: true is set. 6.2.2. Setting command-line arguments for container commands with config maps A config map can also be used to set the value of the commands or arguments in a container. This is accomplished by using the Kubernetes substitution syntax USD(VAR_NAME) . Consider the following config map: apiVersion: v1 kind: ConfigMap metadata: name: special-config namespace: default data: special.how: very special.type: charm Procedure To inject values into a command in a container, you must consume the keys you want to use as environment variables, as in the consuming ConfigMaps in environment variables use case. Then you can refer to them in a container's command using the USD(VAR_NAME) syntax. Sample Pod specification configured to inject specific environment variables apiVersion: v1 kind: Pod metadata: name: dapi-test-pod spec: containers: - name: test-container image: gcr.io/google_containers/busybox command: [ "/bin/sh", "-c", "echo USD(SPECIAL_LEVEL_KEY) USD(SPECIAL_TYPE_KEY)" ] 1 env: - name: SPECIAL_LEVEL_KEY valueFrom: configMapKeyRef: name: special-config key: special.how - name: SPECIAL_TYPE_KEY valueFrom: configMapKeyRef: name: special-config key: special.type restartPolicy: Never 1 Inject the values into a command in a container using the keys you want to use as environment variables. When this pod is run, the output from the echo command run in the test-container container is as follows: 6.2.3. Injecting content into a volume by using config maps You can inject content into a volume by using config maps. 
Example ConfigMap custom resource (CR) apiVersion: v1 kind: ConfigMap metadata: name: special-config namespace: default data: special.how: very special.type: charm Procedure You have a couple different options for injecting content into a volume by using config maps. The most basic way to inject content into a volume by using a config map is to populate the volume with files where the key is the file name and the content of the file is the value of the key: apiVersion: v1 kind: Pod metadata: name: dapi-test-pod spec: containers: - name: test-container image: gcr.io/google_containers/busybox command: [ "/bin/sh", "cat", "/etc/config/special.how" ] volumeMounts: - name: config-volume mountPath: /etc/config volumes: - name: config-volume configMap: name: special-config 1 restartPolicy: Never 1 File containing key. When this pod is run, the output of the cat command will be: You can also control the paths within the volume where config map keys are projected: apiVersion: v1 kind: Pod metadata: name: dapi-test-pod spec: containers: - name: test-container image: gcr.io/google_containers/busybox command: [ "/bin/sh", "cat", "/etc/config/path/to/special-key" ] volumeMounts: - name: config-volume mountPath: /etc/config volumes: - name: config-volume configMap: name: special-config items: - key: special.how path: path/to/special-key 1 restartPolicy: Never 1 Path to config map key. When this pod is run, the output of the cat command will be: | [
"kind: ConfigMap apiVersion: v1 metadata: creationTimestamp: 2016-02-18T19:14:38Z name: example-config namespace: default data: 1 example.property.1: hello example.property.2: world example.property.file: |- property.1=value-1 property.2=value-2 property.3=value-3 binaryData: bar: L3Jvb3QvMTAw 2",
"apiVersion: v1 kind: ConfigMap metadata: name: special-config 1 namespace: default 2 data: special.how: very 3 special.type: charm 4",
"apiVersion: v1 kind: ConfigMap metadata: name: env-config 1 namespace: default data: log_level: INFO 2",
"apiVersion: v1 kind: Pod metadata: name: dapi-test-pod spec: containers: - name: test-container image: gcr.io/google_containers/busybox command: [ \"/bin/sh\", \"-c\", \"env\" ] env: 1 - name: SPECIAL_LEVEL_KEY 2 valueFrom: configMapKeyRef: name: special-config 3 key: special.how 4 - name: SPECIAL_TYPE_KEY valueFrom: configMapKeyRef: name: special-config 5 key: special.type 6 optional: true 7 envFrom: 8 - configMapRef: name: env-config 9 restartPolicy: Never",
"SPECIAL_LEVEL_KEY=very log_level=INFO",
"apiVersion: v1 kind: ConfigMap metadata: name: special-config namespace: default data: special.how: very special.type: charm",
"apiVersion: v1 kind: Pod metadata: name: dapi-test-pod spec: containers: - name: test-container image: gcr.io/google_containers/busybox command: [ \"/bin/sh\", \"-c\", \"echo USD(SPECIAL_LEVEL_KEY) USD(SPECIAL_TYPE_KEY)\" ] 1 env: - name: SPECIAL_LEVEL_KEY valueFrom: configMapKeyRef: name: special-config key: special.how - name: SPECIAL_TYPE_KEY valueFrom: configMapKeyRef: name: special-config key: special.type restartPolicy: Never",
"very charm",
"apiVersion: v1 kind: ConfigMap metadata: name: special-config namespace: default data: special.how: very special.type: charm",
"apiVersion: v1 kind: Pod metadata: name: dapi-test-pod spec: containers: - name: test-container image: gcr.io/google_containers/busybox command: [ \"/bin/sh\", \"cat\", \"/etc/config/special.how\" ] volumeMounts: - name: config-volume mountPath: /etc/config volumes: - name: config-volume configMap: name: special-config 1 restartPolicy: Never",
"very",
"apiVersion: v1 kind: Pod metadata: name: dapi-test-pod spec: containers: - name: test-container image: gcr.io/google_containers/busybox command: [ \"/bin/sh\", \"cat\", \"/etc/config/path/to/special-key\" ] volumeMounts: - name: config-volume mountPath: /etc/config volumes: - name: config-volume configMap: name: special-config items: - key: special.how path: path/to/special-key 1 restartPolicy: Never",
"very"
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.7/html/applications/config-maps |
Chapter 6. Additional security privileges granted for kubevirt-controller and virt-launcher | Chapter 6. Additional security privileges granted for kubevirt-controller and virt-launcher The kubevirt-controller and virt-launcher pods are granted some SELinux policies and Security Context Constraints privileges beyond those granted to typical pods. These privileges enable virtual machines to use OpenShift Virtualization features. 6.1. Extended SELinux policies for virt-launcher pods The container_t SELinux policy for virt-launcher pods is extended with the following rules: allow process self (tun_socket (relabelfrom relabelto attach_queue)) allow process sysfs_t (file (write)) allow process hugetlbfs_t (dir (add_name create write remove_name rmdir setattr)) allow process hugetlbfs_t (file (create unlink)) These rules enable the following virtualization features: Relabel and attach queues to its own TUN sockets, which is required to support network multi-queue. Multi-queue enables network performance to scale as the number of available vCPUs increases. Allows virt-launcher pods to write information to sysfs ( /sys ) files, which is required to enable Single Root I/O Virtualization (SR-IOV). Read/write hugetlbfs entries, which is required to support huge pages. Huge pages are a method of managing large amounts of memory by increasing the memory page size. 6.2. Additional OpenShift Container Platform security context constraints and Linux capabilities for the kubevirt-controller service account Security context constraints (SCCs) control permissions for pods. These permissions include actions that a pod, a collection of containers, can perform and what resources it can access. You can use SCCs to define a set of conditions that a pod must run with to be accepted into the system. The kubevirt-controller is a cluster controller that creates the virt-launcher pods for virtual machines in the cluster. These virt-launcher pods are granted permissions by the kubevirt-controller service account. 6.2.1. Additional SCCs granted to the kubevirt-controller service account The kubevirt-controller service account is granted additional SCCs and Linux capabilities so that it can create virt-launcher pods with the appropriate permissions. These extended permissions allow virtual machines to take advantage of OpenShift Virtualization features that are beyond the scope of typical pods. The kubevirt-controller service account is granted the following SCCs: scc.AllowHostDirVolumePlugin = true This allows virtual machines to use the hostpath volume plugin. scc.AllowPrivilegedContainer = false This ensures the virt-launcher pod is not run as a privileged container. scc.AllowedCapabilities = []corev1.Capability{"NET_ADMIN", "NET_RAW", "SYS_NICE"} This provides the following additional Linux capabilities: NET_ADMIN , NET_RAW , and SYS_NICE . 6.2.2. Viewing the SCC and RBAC definitions for the kubevirt-controller You can view the SecurityContextConstraints definition for the kubevirt-controller by using the oc tool: $ oc get scc kubevirt-controller -o yaml You can view the RBAC definition for the kubevirt-controller clusterrole by using the oc tool: $ oc get clusterrole kubevirt-controller -o yaml 6.3. Additional resources The Red Hat Enterprise Linux Virtualization Tuning and Optimization Guide has more information on network multi-queue and huge pages . The capabilities man page has more information on Linux capabilities. The sysfs(5) man page has more information on sysfs.
The OpenShift Container Platform Authentication guide has more information on Security Context Constraints . | [
"oc get scc kubevirt-controller -o yaml",
"oc get clusterrole kubevirt-controller -o yaml"
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.9/html/virtualization/virt-additional-security-privileges-controller-and-launcher |
Chapter 4. Pools | Chapter 4. Pools Ceph clients store data in pools. When you create pools, you are creating an I/O interface for clients to store data. From the perspective of a Ceph client (that is, block device, gateway, and the rest), interacting with the Ceph storage cluster is remarkably simple: create a cluster handle and connect to the cluster; then, create an I/O context for reading and writing objects and their extended attributes. Create a Cluster Handle and Connect to the Cluster To connect to the Ceph storage cluster, the Ceph client needs the cluster name (usually ceph by default) and an initial monitor address. Ceph clients usually retrieve these parameters using the default path for the Ceph configuration file and then read it from the file, but a user can also specify the parameters on the command line. The Ceph client also provides a user name and secret key (authentication is on by default). Then, the client contacts the Ceph monitor cluster and retrieves a recent copy of the cluster map, including its monitors, OSDs and pools. Create a Pool I/O Context To read and write data, the Ceph client creates an I/O context to a specific pool in the Ceph storage cluster. If the specified user has permissions for the pool, the Ceph client can read from and write to the specified pool. Ceph's architecture enables the storage cluster to provide this remarkably simple interface to Ceph clients so that clients might select one of the sophisticated storage strategies you define simply by specifying a pool name and creating an I/O context. Storage strategies are invisible to the Ceph client in all but capacity and performance. Similarly, the complexities of Ceph clients (mapping objects into a block device representation, providing an S3/Swift RESTful service) are invisible to the Ceph storage cluster. A pool provides you with: Resilience : You can set how many OSDs are allowed to fail without losing data. For replicated pools, it is the desired number of copies/replicas of an object. A typical configuration stores an object and one additional copy (that is, size = 2 ), but you can determine the number of copies/replicas. For erasure coded pools, it is the number of coding chunks (that is, m=2 in the erasure code profile ). Placement Groups : You can set the number of placement groups for the pool. A typical configuration uses approximately 50-100 placement groups per OSD to provide optimal balancing without using up too many computing resources. When setting up multiple pools, be careful to ensure you set a reasonable number of placement groups for both the pool and the cluster as a whole. CRUSH Rules : When you store data in a pool, a CRUSH rule mapped to the pool enables CRUSH to identify the rule for the placement of each object and its replicas (or chunks for erasure coded pools) in your cluster. You can create a custom CRUSH rule for your pool. Snapshots : When you create snapshots with ceph osd pool mksnap , you effectively take a snapshot of a particular pool. Quotas : When you set quotas on a pool with ceph osd pool set-quota , you can limit the maximum number of objects or the maximum number of bytes stored in the specified pool. 4.1. Pools and Storage Strategies To manage pools, you can list, create, and remove pools. You can also view the utilization statistics for each pool. 4.2. List Pools To list your cluster's pools, execute: 4.3.
Create a Pool Before creating pools, see the Pool, PG and CRUSH Configuration Reference chapter in the Configuration Guide for Red Hat Ceph Storage 4. Note In Red Hat Ceph Storage 3 and later releases, system administrators must expressly enable a pool to receive I/O operations from Ceph clients. See Enable Application for details. Failure to enable a pool will result in a HEALTH_WARN status. It is better to adjust the default value for the number of placement groups in the Ceph configuration file, as the default value might not suit your needs. For example: To create a replicated pool, execute: To create an erasure-coded pool, execute: Where: pool-name Description The name of the pool. It must be unique. Type String Required Yes. If not specified, it is set to the value listed in the Ceph configuration file or to the default value. Default ceph pg_num Description The total number of placement groups for the pool. See the Placement Groups section and the Ceph Placement Groups (PGs) per Pool Calculator for details on calculating a suitable number. The default value 8 is not suitable for most systems. Type Integer Required Yes Default 8 pgp_num Description The total number of placement groups for placement purposes. This value must be equal to the total number of placement groups, except for placement group splitting scenarios. Type Integer Required Yes. If not specified, it is set to the value listed in the Ceph configuration file or to the default value. Default 8 replicated or erasure Description The pool type, which can be either replicated to recover from lost OSDs by keeping multiple copies of the objects or erasure to get a kind of generalized RAID5 capability. The replicated pools require more raw storage but implement all Ceph operations. The erasure-coded pools require less raw storage but only implement a subset of the available operations. Type String Required No Default replicated crush-rule-name Description The name of the crush rule for the pool. The rule MUST exist. For replicated pools, the name is the rule specified by the osd_pool_default_crush_rule configuration setting. For erasure-coded pools, the name is erasure-code if you specify the default erasure code profile or {pool-name} otherwise. Ceph creates this rule with the specified name implicitly if the rule doesn't already exist. Type String Required No Default Uses erasure-code for an erasure-coded pool. For replicated pools, it uses the value of the osd_pool_default_crush_rule variable from the Ceph configuration. expected-num-objects Description The expected number of objects for the pool. By setting this value together with a negative filestore_merge_threshold variable, Ceph splits the placement groups at pool creation time to avoid the latency impact of performing runtime directory splitting. Type Integer Required No Default 0 , no splitting at the pool creation time erasure-code-profile Description For erasure-coded pools only. Use the erasure code profile. It must be an existing profile as defined by the osd erasure-code-profile set variable in the Ceph configuration file. For further information, see the Erasure Code Profiles section. Type String Required No When you create a pool, set the number of placement groups to a reasonable value (for example, to 100 ). Consider the total number of placement groups per OSD too. Placement groups are computationally expensive, so performance will degrade when you have many pools with many placement groups, for example, 50 pools with 100 placement groups each.
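As a concrete illustration of the create syntax described above, the following sketch shows both pool types; the pool names, placement group counts, and profile name are placeholders rather than recommendations:
ceph osd pool create mypool 128 128 replicated
ceph osd pool create ecpool 128 128 erasure myprofile
ceph osd pool application enable mypool rbd
The last command reflects the note above: in RHCS 3 and later, a pool must have an application enabled before clients can write to it. Keep the placement group guidance above in mind when choosing these values.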
The point of diminishing returns depends upon the power of the OSD host. See the Placement Groups section and Ceph Placement Groups (PGs) per Pool Calculator for details on calculating an appropriate number of placement groups for your pool. 4.4. Set Pool Quotas You can set pool quotas for the maximum number of bytes or the maximum number of objects per pool or for both. For example: To remove a quota, set its value to 0 . Note In-flight write operations might overrun pool quotas for a short time until Ceph propagates the pool usage across the cluster. This is normal behavior. Enforcing pool quotas on in-flight write operations would impose significant performance penalties. 4.5. Delete a Pool To delete a pool, execute: Important To protect data, in RHCS 3 and later releases, administrators cannot delete pools by default. Set the mon_allow_pool_delete configuration option before deleting pools. Important If the pool is used by Ceph Object Gateway, restart the RGW process after deleting the pool. If a pool has its own rule, consider removing it after deleting the pool. If a pool has users strictly for its own use, consider deleting those users after deleting the pool. 4.6. Rename a Pool To rename a pool, execute: If you rename a pool and you have per-pool capabilities for an authenticated user, you must update the user's capabilities (that is, caps) with the new pool name. 4.7. Show Pool Statistics To show a pool's utilization statistics, execute: 4.8. Set Pool Values To set a value to a pool, execute the following command: The Pool Values section lists all key-values pairs that you can set. 4.9. Get Pool Values To get a value from a pool, execute the following command: The Pool Values section lists all key-values pairs that you can get. 4.10. Enable Application RHCS 3 and later releases provide additional protection for pools to prevent unauthorized types of clients from writing data to the pool. This means that system administrators must expressly enable pools to receive I/O operations from Ceph Block Device, Ceph Object Gateway, Ceph Filesystem or for a custom application. To enable a client application to conduct I/O operations on a pool, execute the following: Where <app> is: cephfs for the Ceph Filesystem. rbd for the Ceph Block Device rgw for the Ceph Object Gateway Note Specify a different <app> value for a custom application. Important A pool that is not enabled will generate a HEALTH_WARN status. In that scenario, the output for ceph health detail -f json-pretty will output the following: NOTE Initialize pools for the Ceph Block Device with rbd pool init <pool-name> . 4.11. Disable Application To disable a client application from conducting I/O operations on a pool, execute the following: Where <app> is: cephfs for the Ceph Filesystem. rbd for the Ceph Block Device rgw for the Ceph Object Gateway Note Specify a different <app> value for a custom application. 4.12. Set Application Metadata RHCS 3 and later releases provide functionality to set key-value pairs describing attributes of the client application. To set client application metadata on a pool, execute the following: Where <app> is: cephfs for the Ceph Filesystem. rbd for the Ceph Block Device rgw for the Ceph Object Gateway Note Specify a different <app> value for a custom application. 4.13. Remove Application Metadata To remove client application metadata on a pool, execute the following: Where <app> is: cephfs for the Ceph Filesystem. 
rbd for the Ceph Block Device rgw for the Ceph Object Gateway Note Specify a different <app> value for a custom application. 4.14. Set the Number of Object Replicas To set the number of object replicas on a replicated pool, execute the following command: Important The <num-replicas> parameter includes the object itself. If you want to include the object and two copies of the object for a total of three instances of the object, specify 3 . For example: You can execute this command for each pool. Note An object might accept I/O operations in degraded mode with fewer replicas than specified by the pool size setting. To set a minimum number of required replicas for I/O, use the min_size setting. For example: This ensures that no object in the data pool will receive I/O with fewer replicas than specified by the min_size setting. 4.15. Get the Number of Object Replicas To get the number of object replicas, execute the following command: Ceph will list the pools, with the replicated size attribute highlighted. By default, Ceph creates two replicas of an object, that is a total of three copies, or a size of 3 . 4.16. Pool Values The following list contains key-values pairs that you can set or get. For further information, see the Set Pool Values and Get Pool Values sections. size Description Specifies the number of replicas for objects in the pool. See the Set the Number of Object Replicas section for further details. Applicable for the replicated pools only. Type Integer min_size Description Specifies the minimum number of replicas required for I/O. See the Set the Number of Object Replicas section for further details. Applicable for the replicated pools only. Type Integer crash_replay_interval Description Specifies the number of seconds to allow clients to replay acknowledged, but uncommitted requests. Type Integer pg-num Description The total number of placement groups for the pool. See the Pool, PG and CRUSH Configuration Reference section in the Red Hat Ceph Storage 4 Configuration Guide for details on calculating a suitable number. The default value 8 is not suitable for most systems. Type Integer Required Yes. Default 8 pgp-num Description The total number of placement groups for placement purposes. This should be equal to the total number of placement groups , except for placement group splitting scenarios. Type Integer Required Yes. Picks up default or Ceph configuration value if not specified. Default 8 Valid Range Equal to or less than what specified by the pg_num variable. crush_rule Description The rule to use for mapping object placement in the cluster. Type String hashpspool Description Enable or disable the HASHPSPOOL flag on a given pool. With this option enabled, pool hashing and placement group mapping are changed to improve the way pools and placement groups overlap. Type Integer Valid Range 1 enables the flag, 0 disables the flag. Important Do not enable this option on production pools of a cluster with a large amount of OSDs and data. All placement groups in the pool would have to be remapped causing too much data movement. fast_read Description On a pool that uses erasure coding, if this flag is enabled, the read request issues subsequent reads to all shards, and wait until it receives enough shards to decode to serve the client. In the case of the jerasure and isa erasure plug-ins, once the first K replies return, client's request is served immediately using the data decoded from these replies. This helps to allocate some resources for better performance. 
Currently this flag is only supported for erasure coding pools. Type Boolean Defaults 0 allow_ec_overwrites Description Whether writes to an erasure coded pool can update part of an object, so the Ceph Filesystem and Ceph Block Device can use it. Type Boolean Version RHCS 3 and later. compression_algorithm Description Sets inline compression algorithm to use with the BlueStore storage backend. This setting overrides the bluestore_compression_algorithm configuration setting. Type String Valid Settings lz4 , snappy , zlib , zstd compression_mode Description Sets the policy for the inline compression algorithm for the BlueStore storage backend. This setting overrides the bluestore_compression_mode configuration setting. Type String Valid Settings none , passive , aggressive , force compression_min_blob_size Description BlueStore will not compress chunks smaller than this size. This setting overrides the bluestore_compression_min_blob_size configuration setting. Type Unsigned Integer compression_max_blob_size Description BlueStore will break chunks larger than this size into smaller blobs of compression_max_blob_size before compressing the data. Type Unsigned Integer nodelete Description Set or unset the NODELETE flag on a given pool. Type Integer Valid Range 1 sets flag. 0 unsets flag. nopgchange Description Set or unset the NOPGCHANGE flag on a given pool. Type Integer Valid Range 1 sets the flag. 0 unsets the flag. nosizechange Description Set or unset the NOSIZECHANGE flag on a given pool. Type Integer Valid Range 1 sets the flag. 0 unsets the flag. write_fadvise_dontneed Description Set or unset the WRITE_FADVISE_DONTNEED flag on a given pool. Type Integer Valid Range 1 sets the flag. 0 unsets the flag. noscrub Description Set or unset the NOSCRUB flag on a given pool. Type Integer Valid Range 1 sets the flag. 0 unsets the flag. nodeep-scrub Description Set or unset the NODEEP_SCRUB flag on a given pool. Type Integer Valid Range 1 sets the flag. 0 unsets the flag. scrub_min_interval Description The minimum interval in seconds for pool scrubbing when load is low. If it is 0 , Ceph uses the osd_scrub_min_interval configuration setting. Type Double Default 0 scrub_max_interval Description The maximum interval in seconds for pool scrubbing irrespective of cluster load. If it is 0 , Ceph uses the osd_scrub_max_interval configuration setting. Type Double Default 0 deep_scrub_interval Description The interval in seconds for pool 'deep' scrubbing. If it is 0 , Ceph uses the osd_deep_scrub_interval configuration setting. Type Double Default 0 | [
"ceph osd lspools",
"osd pool default pg num = 100 osd pool default pgp num = 100",
"ceph osd pool create <pool-name> <pg-num> <pgp-num> [replicated] [crush-rule-name] [expected-num-objects]",
"ceph osd pool create <pool-name> <pg-num> <pgp-num> erasure [erasure-code-profile] [crush-rule-name] [expected-num-objects]",
"ceph osd pool set-quota <pool-name> [max_objects <obj-count>] [max_bytes <bytes>]",
"ceph osd pool set-quota data max_objects 10000",
"ceph osd pool delete <pool-name> [<pool-name> --yes-i-really-really-mean-it]",
"ceph osd pool rename <current-pool-name> <new-pool-name>",
"rados df",
"ceph osd pool set <pool-name> <key> <value>",
"ceph osd pool get <pool-name> <key>",
"ceph osd pool application enable <poolname> <app> {--yes-i-really-mean-it}",
"{ \"checks\": { \"POOL_APP_NOT_ENABLED\": { \"severity\": \"HEALTH_WARN\", \"summary\": { \"message\": \"application not enabled on 1 pool(s)\" }, \"detail\": [ { \"message\": \"application not enabled on pool '<pool-name>'\" }, { \"message\": \"use 'ceph osd pool application enable <pool-name> <app-name>', where <app-name> is 'cephfs', 'rbd', 'rgw', or freeform for custom applications.\" } ] } }, \"status\": \"HEALTH_WARN\", \"overall_status\": \"HEALTH_WARN\", \"detail\": [ \"'ceph health' JSON format has changed in luminous. If you see this your monitoring system is scraping the wrong fields. Disable this with 'mon health preluminous compat warning = false'\" ] }",
"ceph osd pool application disable <poolname> <app> {--yes-i-really-mean-it}",
"ceph osd pool application set <poolname> <app> <key> <value>",
"ceph osd pool application set <poolname> <app> <key>",
"ceph osd pool set <poolname> size <num-replicas>",
"ceph osd pool set data size 3",
"ceph osd pool set data min_size 2",
"ceph osd dump | grep 'replicated size'"
] | https://docs.redhat.com/en/documentation/red_hat_ceph_storage/4/html/storage_strategies_guide/pools-1 |
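As a quick orientation for the preceding chapter, the following sketch strings the steps together: create a replicated pool, expressly enable an application on it so the cluster does not report HEALTH_WARN, and set the replica counts. The pool name testpool and the placement-group count of 128 are illustrative assumptions only; calculate a suitable pg_num for your own cluster as described in the Placement Groups section.

# Create a replicated pool with example pg_num and pgp_num values
ceph osd pool create testpool 128 128 replicated
# Enable the pool for the Ceph Block Device and initialize it
ceph osd pool application enable testpool rbd
rbd pool init testpool
# Keep three copies of each object and require at least two copies for I/O
ceph osd pool set testpool size 3
ceph osd pool set testpool min_size 2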
Chapter 6. Additional configuration for identity and authentication providers | Chapter 6. Additional configuration for identity and authentication providers The System Security Services Daemon (SSSD) is a system service to access remote directories and authentication mechanisms. The main configuration file for SSSD is /etc/sssd/sssd.conf . The following chapters outline how you can configure SSSD services and domains by modifying the /etc/sssd/sssd.conf file to: Adjust how SSSD interprets and prints full user names to enable offline authentication. Configure DNS Service Discovery, simple Access Provider Rules, and SSSD to apply an LDAP Access Filter. 6.1. Adjusting how SSSD interprets full user names SSSD parses full user name strings into the user name and domain components. By default, SSSD interprets full user names in the format user_name@domain_name based on the following regular expression in Python syntax: Note For Identity Management and Active Directory providers, the default user name format is user_name@domain_name or NetBIOS_name\user_name . You can adjust how SSSD interprets full user names by adding the re_expression option to the /etc/sssd/sssd.conf file and defining a custom regular expression. To define the regular expression globally, add the regular expression to the [sssd] section of the sssd.conf file as shown in the Defining regular expressions globally example. To define the regular expression for a particular domain, add the regular expression to the corresponding domain section (for example, [domain/LDAP] ) of the sssd.conf file as shown in the Defining regular expressions a particular domain example. Prerequisites root access Procedure Open the /etc/sssd/sssd.conf file. Use the re_expression option to define a custom regular expression. Example 6.1. Defining regular expressions globally To define the regular expressions globally for all domains, add re_expression to the [sssd] section of the sssd.conf file. You can use the following global expression to define the username in the format of domain\\username or domain@username : Example 6.2. Defining regular expressions a particular domain To define the regular expressions individually for a particular domain, add re_expression to the corresponding domain section of the sssd.conf file. You can use the following global expression to define the username in the format of domain\\username or domain@username for the LDAP domain: For more details, see the descriptions for re_expression in the SPECIAL SECTIONS and DOMAIN SECTIONS parts of the sssd.conf(5) man page on your system. 6.2. Adjusting how SSSD prints full user names If the use_fully_qualified_names option is enabled in the /etc/sssd/sssd.conf file, SSSD prints full user names in the format name@domain based on the following expansion by default: Note If use_fully_qualified_names is not set or is explicitly set to false for trusted domains, it only prints the user name without the domain component. You can adjust the format in which SSSD prints full user names by adding the full_name_format option to the /etc/sssd/sssd.conf file and defining a custom expansion. Prerequisites root access Procedure As root , open the /etc/sssd/sssd.conf file. To define the expansion globally for all domains, add full_name_format to the [sssd] section of sssd.conf . In this case the user name is displayed as [email protected] . To define the user name printing format for a particular domain, add full_name_format to the corresponding domain section of sssd.conf . 
To configure the expansion for the Active Directory (AD) domain using %2$s\%1$s : In this case the user name is displayed as ad.domain\user . To configure the expansion for the Active Directory (AD) domain using %3$s\%1$s : In this case the user name is displayed as AD\user if the flat domain name of the Active Directory domain is set to AD . For more details, see the descriptions for full_name_format in the SPECIAL SECTIONS and DOMAIN SECTIONS parts of the sssd.conf(5) man page on your system. Note SSSD can strip the domain component of the name in some name configurations, which can cause authentication errors. If you set full_name_format to a non-standard value, you will get a warning prompting you to change it to a standard format. 6.3. Enabling offline authentication SSSD does not cache user credentials by default. When processing authentication requests, SSSD always contacts the identity provider. If the provider is unavailable, user authentication fails. To ensure that users can authenticate even when the identity provider is unavailable, you can enable credential caching by setting cache_credentials to true in the /etc/sssd/sssd.conf file. Cached credentials refer to passwords and the first authentication factor if two-factor authentication is used. Note that for passkey and smart card authentication, you do not need to set cache_credentials to true or set any additional configuration; they are expected to work offline as long as a successful online authentication is recorded in the cache. Important SSSD never caches passwords in plain text. It stores only a hash of the password. While credentials are stored as a salted SHA-512 hash, this potentially poses a security risk in case an attacker manages to access the cache file and break a password using a brute force attack. Accessing a cache file requires privileged access, which is the default on RHEL. Prerequisites root access Procedure Open the /etc/sssd/sssd.conf file. In a domain section, add the cache_credentials = true setting: Optional, but recommended: Configure a time limit for how long SSSD allows offline authentication if the identity provider is unavailable: Configure the PAM service to work with SSSD. See Configuring user authentication using authselect for more details. Use the offline_credentials_expiration option to specify the time limit. Note that the limit is set in days. For example, to specify that users are able to authenticate offline for 3 days since the last successful login, use: Additional resources sssd.conf(5) man page on your system 6.4. Configuring DNS Service Discovery DNS service discovery enables applications to check the SRV records in a given domain for certain services of a certain type, and then returns any servers that match the required type. If the identity or authentication server is not explicitly defined in the /etc/sssd/sssd.conf file, SSSD can discover the server dynamically using DNS service discovery. For example, if sssd.conf includes the id_provider = ldap setting, but the ldap_uri option does not specify any host name or IP address, SSSD uses DNS service discovery to discover the server dynamically. Note SSSD cannot dynamically discover backup servers, only the primary server. Prerequisites root access Procedure Open the /etc/sssd/sssd.conf file. Set the primary server value to _srv_ .
For an LDAP provider, the primary server is set using the ldap_uri option: Enable service discovery in the password change provider by setting a service type: Optional: By default, the service discovery uses the domain portion of the system host name as the domain name. To use a different DNS domain, specify the domain name by using the dns_discovery_domain option. Optional: By default, the service discovery scans for the LDAP service type. To use a different service type, specify the type by using the ldap_dns_service_name option. Optional: By default, SSSD attempts to look up an IPv4 address. If the attempt fails, SSSD attempts to look up an IPv6 address. To customize this behavior, use the lookup_family_order option. For every service with which you want to use service discovery, add a DNS record to the DNS server: Additional resources RFC 2782 on DNS service discovery sssd.conf(5) man page on your system 6.5. Configuring simple Access Provider Rules The simple access provider allows or denies access based on a list of user names or groups. It enables you to restrict access to specific machines. For example, you can use the simple access provider to restrict access to a specific user or group. Other users or groups will not be allowed to log in even if they authenticate successfully against the configured authentication provider. Prerequisites root access Procedure Open the /etc/sssd/sssd.conf file. Set the access_provider option to simple : Define the access control rules for users. To allow access to users, use the simple_allow_users option. To deny access to users, use the simple_deny_users option. Important If you deny access to specific users, you automatically allow access to everyone else. Allowing access to specific users is considered safer than denying. Define the access control rules for groups. Choose one of the following: To allow access to groups, use the simple_allow_groups option. To deny access to groups, use the simple_deny_groups option. Important If you deny access to specific groups, you automatically allow access to everyone else. Allowing access to specific groups is considered safer than denying. Example 6.3. Allowing access to specific users and groups The following example allows access to user1, user2, and members of group1, while denying access to all other users: Important Keeping the deny list empty can lead to allowing access to everyone. Note If you are adding a trusted AD user to the simple_allow_users list, ensure that you use the fully qualified domain name (FQDN) format, for example, [email protected]. As short names in different domains can be the same, this prevents issues with the access control configuration. Additional resources sssd-simple man page on your system 6.6. Configuring SSSD to Apply an LDAP Access Filter When the access_provider option is set in /etc/sssd/sssd.conf , SSSD uses the specified access provider to evaluate which users are granted access to the system. If the access provider you are using is an extension of the LDAP provider type, you can also specify an LDAP access control filter that a user must match to be allowed access to the system. For example, when using the Active Directory (AD) server as the access provider, you can restrict access to the Linux system only to specified AD users. All other users that do not match the specified filter have access denied. Note The access filter is applied on the LDAP user entry only. Therefore, using this type of access control on nested groups might not work. 
To apply access control on nested groups, see Configuring simple Access Provider Rules . Important When using offline caching, SSSD checks if the user's most recent online login attempt was successful. Users who logged in successfully during the most recent online login will still be able to log in offline, even if they do not match the access filter. Prerequisites root access Procedure Open the /etc/sssd/sssd.conf file. In the [domain] section, specify the access control filter. For an LDAP provider, use the ldap_access_filter option. For an AD provider, use the ad_access_filter option. Additionally, you must disable the GPO-based access control by setting the ad_gpo_access_control option to disabled . Example 6.4. Allowing access to specific AD users For example, to allow access only to AD users who belong to the admins user group and have a unixHomeDirectory attribute set, use: SSSD can also check results by the authorizedService or host attribute in an entry. In fact, all of these options, namely the LDAP filter, authorizedService , and host , can be evaluated, depending on the user entry and the configuration. The ldap_access_order parameter lists all access control methods to use, in the order in which they should be evaluated. Additional resources sssd-ldap(5) and sssd-ad(5) man pages on your system | [
"(?P<name>[^@]+)@?(?P<domain>[^@]*USD)",
"[sssd] [... file truncated ...] re_expression = (?P<domain>[^\\\\]*?)\\\\?(?P<name>[^\\\\]+USD)",
"[domain/LDAP] [... file truncated ...] re_expression = (?P<domain>[^\\\\]*?)\\\\?(?P<name>[^\\\\]+USD)",
"%1USDs@%2USDs",
"[sssd] [... file truncated ...] full_name_format = %1USDs@%2USDs",
"[domain/ad.domain] [... file truncated ...] full_name_format = %2USDs\\%1USDs",
"[domain/ad.domain] [... file truncated ...] full_name_format = %3USDs\\%1USDs",
"[domain/ your-domain-name ] cache_credentials = true",
"[pam] offline_credentials_expiration = 3",
"[domain/ your-domain-name ] id_provider = ldap ldap_uri = _srv_",
"[domain/ your-domain-name ] id_provider = ldap ldap_uri = _srv_ chpass_provider = ldap ldap_chpass_dns_service_name = ldap",
"_service._protocol._domain TTL priority weight port host_name",
"[domain/ your-domain-name ] access_provider = simple",
"[domain/ your-domain-name ] access_provider = simple simple_allow_users = user1, user2 simple_allow_groups = group1",
"[domain/ your-AD-domain-name ] access provider = ad [... file truncated ...] ad_access_filter = (&(memberOf=cn=admins,ou=groups,dc=example,dc=com)(unixHomeDirectory=*)) ad_gpo_access_control = disabled",
"[domain/example.com] access_provider = ldap ldap_access_filter = memberOf=cn=allowedusers,ou=Groups,dc=example,dc=com ldap_access_order = filter, host, authorized_service"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/9/html/configuring_authentication_and_authorization_in_rhel/assembly_additional-configuration-for-identity-and-authentication-providers_configuring-authentication-and-authorization-in-rhel |
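As a consolidated illustration of the options covered in the preceding chapter, the following /etc/sssd/sssd.conf sketch combines DNS service discovery, offline authentication, and the simple access provider in one hypothetical LDAP domain called example.com. The domain name, user, and group values are placeholders rather than recommendations, and the file is a minimal outline, not a complete production configuration.

[sssd]
domains = example.com
services = nss, pam

[pam]
# Allow cached credentials to be used offline for up to 3 days
offline_credentials_expiration = 3

[domain/example.com]
id_provider = ldap
# Discover the LDAP server dynamically through DNS SRV records
ldap_uri = _srv_
chpass_provider = ldap
ldap_chpass_dns_service_name = ldap
# Store a salted hash of the password so users can authenticate offline
cache_credentials = true
# Restrict logins to the listed user and group
access_provider = simple
simple_allow_users = user1
simple_allow_groups = group1

After editing the file, restart the sssd service, for example with systemctl restart sssd, so the changes take effect.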
Serverless | Serverless OpenShift Container Platform 4.18 OpenShift Serverless installation, usage, and release notes Red Hat OpenShift Documentation Team | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.18/html/serverless/index |
Chapter 14. Configuring a remote logging solution | Chapter 14. Configuring a remote logging solution To ensure that logs from various machines in your environment are recorded centrally on a logging server, you can configure the Rsyslog application to record logs that fit specific criteria from the client system to the server. 14.1. The Rsyslog logging service The Rsyslog application, in combination with the systemd-journald service, provides local and remote logging support in Red Hat Enterprise Linux. The rsyslogd daemon continuously reads syslog messages received by the systemd-journald service from the Journal. rsyslogd then filters and processes these syslog events and records them to rsyslog log files or forwards them to other services according to its configuration. The rsyslogd daemon also provides extended filtering, encryption protected relaying of messages, input and output modules, and support for transportation using the TCP and UDP protocols. In /etc/rsyslog.conf , which is the main configuration file for rsyslog , you can specify the rules according to which rsyslogd handles the messages. Generally, you can classify messages by their source and topic (facility) and urgency (priority), and then assign an action that should be performed when a message fits these criteria. In /etc/rsyslog.conf , you can also see a list of log files maintained by rsyslogd . Most log files are located in the /var/log/ directory. Some applications, such as httpd and samba , store their log files in a subdirectory within /var/log/ . Additional resources rsyslogd(8) and rsyslog.conf(5) man pages on your system Documentation installed with the rsyslog-doc package in the /usr/share/doc/rsyslog/html/index.html file 14.2. Installing Rsyslog documentation The Rsyslog application has extensive online documentation that is available at https://www.rsyslog.com/doc/ , but you can also install the rsyslog-doc documentation package locally. Prerequisites You have activated the AppStream repository on your system. You are authorized to install new packages using sudo . Procedure Install the rsyslog-doc package: Verification Open the /usr/share/doc/rsyslog/html/index.html file in a browser of your choice, for example: 14.3. Configuring a server for remote logging over TCP The Rsyslog application enables you to both run a logging server and configure individual systems to send their log files to the logging server. To use remote logging through TCP, configure both the server and the client. The server collects and analyzes the logs sent by one or more client systems. With the Rsyslog application, you can maintain a centralized logging system where log messages are forwarded to a server over the network. To avoid message loss when the server is not available, you can configure an action queue for the forwarding action. This way, messages that failed to be sent are stored locally until the server is reachable again. Note that such queues cannot be configured for connections using the UDP protocol. The omfwd plug-in provides forwarding over UDP or TCP. The default protocol is UDP. Because the plug-in is built in, it does not have to be loaded. By default, rsyslog uses TCP on port 514 . Prerequisites Rsyslog is installed on the server system. You are logged in as root on the server. The policycoreutils-python-utils package is installed for the optional step using the semanage command. The firewalld service is running. 
Procedure Optional: To use a different port for rsyslog traffic, add the syslogd_port_t SELinux type to port. For example, enable port 30514 : Optional: To use a different port for rsyslog traffic, configure firewalld to allow incoming rsyslog traffic on that port. For example, allow TCP traffic on port 30514 : Create a new file in the /etc/rsyslog.d/ directory named, for example, remotelog.conf , and insert the following content: # Define templates before the rules that use them # Per-Host templates for remote systems template(name="TmplAuthpriv" type="list") { constant(value="/var/log/remote/auth/") property(name="hostname") constant(value="/") property(name="programname" SecurePath="replace") constant(value=".log") } template(name="TmplMsg" type="list") { constant(value="/var/log/remote/msg/") property(name="hostname") constant(value="/") property(name="programname" SecurePath="replace") constant(value=".log") } # Provides TCP syslog reception module(load="imtcp") # Adding this ruleset to process remote messages ruleset(name="remote1"){ authpriv.* action(type="omfile" DynaFile="TmplAuthpriv") *.info;mail.none;authpriv.none;cron.none action(type="omfile" DynaFile="TmplMsg") } input(type="imtcp" port="30514" ruleset="remote1") Save the changes to the /etc/rsyslog.d/remotelog.conf file. Test the syntax of the /etc/rsyslog.conf file: Make sure the rsyslog service is running and enabled on the logging server: Restart the rsyslog service. Optional: If rsyslog is not enabled, ensure the rsyslog service starts automatically after reboot: Your log server is now configured to receive and store log files from the other systems in your environment. Additional resources rsyslogd(8) , rsyslog.conf(5) , semanage(8) , and firewall-cmd(1) man pages on your system Documentation installed with the rsyslog-doc package in the /usr/share/doc/rsyslog/html/index.html file 14.4. Configuring remote logging to a server over TCP You can configure a system for forwarding log messages to a server over the TCP protocol. The omfwd plug-in provides forwarding over UDP or TCP. The default protocol is UDP. Because the plug-in is built in, you do not have to load it. Prerequisites The rsyslog package is installed on the client systems that should report to the server. You have configured the server for remote logging. The specified port is permitted in SELinux and open in firewall. The system contains the policycoreutils-python-utils package, which provides the semanage command for adding a non-standard port to the SELinux configuration. Procedure Create a new file in the /etc/rsyslog.d/ directory named, for example, 10-remotelog.conf , and insert the following content: Where: The queue.type="linkedlist" setting enables a LinkedList in-memory queue, The queue.filename setting defines a disk storage. The backup files are created with the example_fwd prefix in the working directory specified by the preceding global workDirectory directive. The action.resumeRetryCount -1 setting prevents rsyslog from dropping messages when retrying to connect if server is not responding, The queue.saveOnShutdown="on" setting saves in-memory data if rsyslog shuts down. The last line forwards all received messages to the logging server. Port specification is optional. With this configuration, rsyslog sends messages to the server but keeps messages in memory if the remote server is not reachable. A file on disk is created only if rsyslog runs out of the configured memory queue space or needs to shut down, which benefits the system performance. 
Note Rsyslog processes configuration files /etc/rsyslog.d/ in the lexical order. Restart the rsyslog service. Verification To verify that the client system sends messages to the server, follow these steps: On the client system, send a test message: On the server system, view the /var/log/messages log, for example: Where hostname is the host name of the client system. Note that the log contains the user name of the user that entered the logger command, in this case root . Additional resources rsyslogd(8) and rsyslog.conf(5) man pages on your system Documentation installed with the rsyslog-doc package in the /usr/share/doc/rsyslog/html/index.html file 14.5. Configuring TLS-encrypted remote logging By default, Rsyslog sends remote-logging communication in the plain text format. If your scenario requires to secure this communication channel, you can encrypt it using TLS. To use encrypted transport through TLS, configure both the server and the client. The server collects and analyzes the logs sent by one or more client systems. You can use either the ossl network stream driver (OpenSSL) or the gtls stream driver (GnuTLS). Note If you have a separate system with higher security, for example, a system that is not connected to any network or has stricter authorizations, use the separate system as the certifying authority (CA). You can customize your connection settings with stream drivers on the server side on the global , module , and input levels, and on the client side on the global and action levels. The more specific configuration overrides the more general configuration. This means, for example, that you can use ossl in global settings for most connections and gtls on the input and action settings only for specific connections. Prerequisites You have root access to both the client and server systems. The following packages are installed on the server and the client systems: The rsyslog package. For the ossl network stream driver, the rsyslog-openssl package. For the gtls network stream driver, the rsyslog-gnutls package. For generating certificates by using the certtool command, the gnutls-utils package. On your logging server, the following certificates are in the /etc/pki/ca-trust/source/anchors/ directory and your system configuration is updated by using the update-ca-trust command: ca-cert.pem - a CA certificate that can verify keys and certificates on logging servers and clients. server-cert.pem - a public key of the logging server. server-key.pem - a private key of the logging server. On your logging clients, the following certificates are in the /etc/pki/ca-trust/source/anchors/ directory and your system configuration is updated by using update-ca-trust : ca-cert.pem - a CA certificate that can verify keys and certificates on logging servers and clients. client-cert.pem - a public key of a client. client-key.pem - a private key of a client. If the server runs RHEL 9.2 or later and FIPS mode is enabled, clients must either support the Extended Master Secret (EMS) extension or use TLS 1.3. TLS 1.2 connections without EMS fail. For more information, see the Red Hat Knowledgebase solution TLS extension "Extended Master Secret" enforced . Procedure Configure the server for receiving encrypted logs from your client systems: Create a new file in the /etc/rsyslog.d/ directory named, for example, securelogser.conf . 
To encrypt the communication, the configuration file must contain paths to certificate files on your server, a selected authentication method, and a stream driver that supports TLS encryption. Add the following lines to the /etc/rsyslog.d/securelogser.conf file: # Set certificate files global( DefaultNetstreamDriverCAFile="/etc/pki/ca-trust/source/anchors/ca-cert.pem" DefaultNetstreamDriverCertFile="/etc/pki/ca-trust/source/anchors/server-cert.pem" DefaultNetstreamDriverKeyFile="/etc/pki/ca-trust/source/anchors/server-key.pem" ) # TCP listener module( load="imtcp" PermittedPeer=["client1.example.com", "client2.example.com"] StreamDriver.AuthMode="x509/name" StreamDriver.Mode="1" StreamDriver.Name="ossl" ) # Start up listener at port 514 input( type="imtcp" port="514" ) Note If you prefer the GnuTLS driver, use the StreamDriver.Name="gtls" configuration option. See the documentation installed with the rsyslog-doc package for more information about less strict authentication modes than x509/name . Optional: From Rsyslog version 8.2310, which is provided in RHEL 9.4, you can customize the connection configuration. To do so, replace the input section with the following: Replace <driver> with ossl or gtls depending on the driver you want to use. Replace <ca1> with the CA certificate, <server1-cert> with the certificate, and <server1-key> with the key of the customized connection. Save the changes to the /etc/rsyslog.d/securelogser.conf file. Verify the syntax of the /etc/rsyslog.conf file and any files in the /etc/rsyslog.d/ directory: Make sure the rsyslog service is running and enabled on the logging server: Restart the rsyslog service: Optional: If Rsyslog is not enabled, ensure the rsyslog service starts automatically after reboot: Configure clients for sending encrypted logs to the server: On a client system, create a new file in the /etc/rsyslog.d/ directory named, for example, securelogcli.conf . Add the following lines to the /etc/rsyslog.d/securelogcli.conf file: # Set certificate files global( DefaultNetstreamDriverCAFile="/etc/pki/ca-trust/source/anchors/ca-cert.pem" DefaultNetstreamDriverCertFile="/etc/pki/ca-trust/source/anchors/client-cert.pem" DefaultNetstreamDriverKeyFile="/etc/pki/ca-trust/source/anchors/client-key.pem" ) # Set up the action for all messages *.* action( type="omfwd" StreamDriver="ossl" StreamDriverMode="1" StreamDriverPermittedPeers="server.example.com" StreamDriverAuthMode="x509/name" target="server.example.com" port="514" protocol="tcp" ) Note If you prefer the GnuTLS driver, use the StreamDriver.Name="gtls" configuration option. Optional: From Rsyslog version 8.2310, which is provided in RHEL 9.4, you can customize the connection configuration. To do so, replace the action section with the following: Replace <driver> with ossl or gtls depending on the driver you want to use. Replace <ca1> with the CA certificate, <client1-cert> with the certificate, and <client1-key> with the key of the customized connection. Save the changes to the /etc/rsyslog.d/securelogcli.conf file. 
Verify the syntax of the /etc/rsyslog.conf file and other files in the /etc/rsyslog.d/ directory: Make sure the rsyslog service is running and enabled on the logging server: Restart the rsyslog service: Optional: If Rsyslog is not enabled, ensure the rsyslog service starts automatically after reboot: Verification To verify that the client system sends messages to the server, follow these steps: On the client system, send a test message: On the server system, view the /var/log/messages log, for example: Where <hostname> is the hostname of the client system. Note that the log contains the user name of the user that entered the logger command, in this case root . Additional resources certtool(1) , openssl(1) , update-ca-trust(8) , rsyslogd(8) , and rsyslog.conf(5) man pages on your system Documentation installed with the rsyslog-doc package at /usr/share/doc/rsyslog/html/index.html . Using the logging system role with TLS . 14.6. Configuring a server for receiving remote logging information over UDP The Rsyslog application enables you to configure a system to receive logging information from remote systems. To use remote logging through UDP, configure both the server and the client. The receiving server collects and analyzes the logs sent by one or more client systems. By default, rsyslog uses UDP on port 514 to receive log information from remote systems. Follow this procedure to configure a server for collecting and analyzing logs sent by one or more client systems over the UDP protocol. Prerequisites Rsyslog is installed on the server system. You are logged in as root on the server. The policycoreutils-python-utils package is installed for the optional step using the semanage command. The firewalld service is running. Procedure Optional: To use a different port for rsyslog traffic than the default port 514 : Add the syslogd_port_t SELinux type to the SELinux policy configuration, replacing portno with the port number you want rsyslog to use: Configure firewalld to allow incoming rsyslog traffic, replacing portno with the port number and zone with the zone you want rsyslog to use: Reload the firewall rules: Create a new .conf file in the /etc/rsyslog.d/ directory, for example, remotelogserv.conf , and insert the following content: # Define templates before the rules that use them # Per-Host templates for remote systems template(name="TmplAuthpriv" type="list") { constant(value="/var/log/remote/auth/") property(name="hostname") constant(value="/") property(name="programname" SecurePath="replace") constant(value=".log") } template(name="TmplMsg" type="list") { constant(value="/var/log/remote/msg/") property(name="hostname") constant(value="/") property(name="programname" SecurePath="replace") constant(value=".log") } # Provides UDP syslog reception module(load="imudp") # This ruleset processes remote messages ruleset(name="remote1"){ authpriv.* action(type="omfile" DynaFile="TmplAuthpriv") *.info;mail.none;authpriv.none;cron.none action(type="omfile" DynaFile="TmplMsg") } input(type="imudp" port="514" ruleset="remote1") Where 514 is the port number rsyslog uses by default. You can specify a different port instead. Verify the syntax of the /etc/rsyslog.conf file and all .conf files in the /etc/rsyslog.d/ directory: Restart the rsyslog service. 
Optional: If rsyslog is not enabled, ensure the rsyslog service starts automatically after reboot: Additional resources rsyslogd(8) , rsyslog.conf(5) , semanage(8) , and firewall-cmd(1) man pages on your system Documentation installed with the rsyslog-doc package in the /usr/share/doc/rsyslog/html/index.html file 14.7. Configuring remote logging to a server over UDP You can configure a system for forwarding log messages to a server over the UDP protocol. The omfwd plug-in provides forwarding over UDP or TCP. The default protocol is UDP. Because the plug-in is built in, you do not have to load it. Prerequisites The rsyslog package is installed on the client systems that should report to the server. You have configured the server for remote logging as described in Configuring a server for receiving remote logging information over UDP . Procedure Create a new .conf file in the /etc/rsyslog.d/ directory, for example, 10-remotelogcli.conf , and insert the following content: Where: The queue.type="linkedlist" setting enables a LinkedList in-memory queue. The queue.filename setting defines a disk storage. The backup files are created with the example_fwd prefix in the working directory specified by the preceding global workDirectory directive. The action.resumeRetryCount -1 setting prevents rsyslog from dropping messages when retrying to connect if the server is not responding. The enabled queue.saveOnShutdown="on" setting saves in-memory data if rsyslog shuts down. The portno value is the port number you want rsyslog to use. The default value is 514 . The last line forwards all received messages to the logging server, port specification is optional. With this configuration, rsyslog sends messages to the server but keeps messages in memory if the remote server is not reachable. A file on disk is created only if rsyslog runs out of the configured memory queue space or needs to shut down, which benefits the system performance. Note Rsyslog processes configuration files /etc/rsyslog.d/ in the lexical order. Restart the rsyslog service. Optional: If rsyslog is not enabled, ensure the rsyslog service starts automatically after reboot: Verification To verify that the client system sends messages to the server, follow these steps: On the client system, send a test message: On the server system, view the /var/log/remote/msg/ hostname /root.log log, for example: Where hostname is the host name of the client system. Note that the log contains the user name of the user that entered the logger command, in this case root . Additional resources rsyslogd(8) and rsyslog.conf(5) man pages on your system Documentation installed with the rsyslog-doc package at /usr/share/doc/rsyslog/html/index.html 14.8. Load balancing helper in Rsyslog When used in a cluster, you can improve Rsyslog load balancing by modifying the RebindInterval setting. RebindInterval specifies an interval at which the current connection is broken and is re-established. This setting applies to TCP, UDP, and RELP traffic. The load balancers perceive it as a new connection and forward the messages to another physical target system. RebindInterval is helpful in scenarios when a target system has changed its IP address. The Rsyslog application caches the IP address when the connection is established, therefore, the messages are sent to the same server. If the IP address changes, the UDP packets are lost until the Rsyslog service restarts. Re-establishing the connection ensures the IP is resolved by DNS again. 
Example usage of RebindInterval for TCP, UDP, and RELP traffic 14.9. Configuring reliable remote logging With the Reliable Event Logging Protocol (RELP), you can send and receive syslog messages over TCP with a much reduced risk of message loss. RELP provides reliable delivery of event messages, which makes it useful in environments where message loss is not acceptable. To use RELP, configure the imrelp input module, which runs on the server and receives the logs, and the omrelp output module, which runs on the client and sends logs to the logging server. Prerequisites You have installed the rsyslog , librelp , and rsyslog-relp packages on the server and the client systems. The specified port is permitted in SELinux and open in the firewall. Procedure Configure the client system for reliable remote logging: On the client system, create a new .conf file in the /etc/rsyslog.d/ directory named, for example, relpclient.conf , and insert the following content: Where: target_IP is the IP address of the logging server. target_port is the port of the logging server. Save the changes to the /etc/rsyslog.d/relpclient.conf file. Restart the rsyslog service. Optional: If rsyslog is not enabled, ensure the rsyslog service starts automatically after reboot: Configure the server system for reliable remote logging: On the server system, create a new .conf file in the /etc/rsyslog.d/ directory named, for example, relpserv.conf , and insert the following content: Where: log_path specifies the path for storing messages. target_port is the port of the logging server. Use the same value as in the client configuration file. Save the changes to the /etc/rsyslog.d/relpserv.conf file. Restart the rsyslog service. Optional: If rsyslog is not enabled, ensure the rsyslog service starts automatically after reboot: Verification To verify that the client system sends messages to the server, follow these steps: On the client system, send a test message: On the server system, view the log at the specified log_path , for example: Where hostname is the host name of the client system. Note that the log contains the user name of the user that entered the logger command, in this case root . Additional resources rsyslogd(8) and rsyslog.conf(5) man pages on your system Documentation installed with the rsyslog-doc package in the /usr/share/doc/rsyslog/html/index.html file 14.10. Supported Rsyslog modules To expand the functionality of the Rsyslog application, you can use specific modules. Modules provide additional inputs (Input Modules), outputs (Output Modules), and other functionalities. A module can also provide additional configuration directives that become available after you load the module. You can list the input and output modules installed on your system by entering the following command: You can view the list of all available rsyslog modules in the /usr/share/doc/rsyslog/html/configuration/modules/idx_output.html file after you install the rsyslog-doc package. 14.11. Configuring the netconsole service to log kernel messages to a remote host When logging to disk or using a serial console is not possible, you can use the netconsole kernel module and the same-named service to log kernel messages over a network to a remote rsyslog service. Prerequisites A system log service, such as rsyslog is installed on the remote host. The remote system log service is configured to receive incoming log entries from this host. 
Procedure Install the netconsole-service package: Edit the /etc/sysconfig/netconsole file and set the SYSLOGADDR parameter to the IP address of the remote host: Enable and start the netconsole service: Verification Display the /var/log/messages file on the remote system log server. 14.12. Additional resources Documentation installed with the rsyslog-doc package in the /usr/share/doc/rsyslog/html/index.html file rsyslog.conf(5) and rsyslogd(8) man pages on your system Configuring system logging without journald or with minimized journald usage Knowledgebase article Negative effects of the RHEL default logging setup on performance and their mitigations Knowledgebase article The Using the Logging system role chapter | [
"dnf install rsyslog-doc",
"firefox /usr/share/doc/rsyslog/html/index.html &",
"semanage port -a -t syslogd_port_t -p tcp 30514",
"firewall-cmd --zone= <zone-name> --permanent --add-port=30514/tcp success firewall-cmd --reload",
"Define templates before the rules that use them Per-Host templates for remote systems template(name=\"TmplAuthpriv\" type=\"list\") { constant(value=\"/var/log/remote/auth/\") property(name=\"hostname\") constant(value=\"/\") property(name=\"programname\" SecurePath=\"replace\") constant(value=\".log\") } template(name=\"TmplMsg\" type=\"list\") { constant(value=\"/var/log/remote/msg/\") property(name=\"hostname\") constant(value=\"/\") property(name=\"programname\" SecurePath=\"replace\") constant(value=\".log\") } Provides TCP syslog reception module(load=\"imtcp\") Adding this ruleset to process remote messages ruleset(name=\"remote1\"){ authpriv.* action(type=\"omfile\" DynaFile=\"TmplAuthpriv\") *.info;mail.none;authpriv.none;cron.none action(type=\"omfile\" DynaFile=\"TmplMsg\") } input(type=\"imtcp\" port=\"30514\" ruleset=\"remote1\")",
"rsyslogd -N 1 rsyslogd: version 8.1911.0-2.el8, config validation run rsyslogd: End of config validation run. Bye.",
"systemctl status rsyslog",
"systemctl restart rsyslog",
"systemctl enable rsyslog",
"*.* action(type=\"omfwd\" queue.type=\"linkedlist\" queue.filename=\"example_fwd\" action.resumeRetryCount=\"-1\" queue.saveOnShutdown=\"on\" target=\"example.com\" port=\"30514\" protocol=\"tcp\" )",
"systemctl restart rsyslog",
"logger test",
"cat /var/log/remote/msg/ hostname /root.log Feb 25 03:53:17 hostname root[6064]: test",
"Set certificate files global( DefaultNetstreamDriverCAFile=\"/etc/pki/ca-trust/source/anchors/ca-cert.pem\" DefaultNetstreamDriverCertFile=\"/etc/pki/ca-trust/source/anchors/server-cert.pem\" DefaultNetstreamDriverKeyFile=\"/etc/pki/ca-trust/source/anchors/server-key.pem\" ) TCP listener module( load=\"imtcp\" PermittedPeer=[\"client1.example.com\", \"client2.example.com\"] StreamDriver.AuthMode=\"x509/name\" StreamDriver.Mode=\"1\" StreamDriver.Name=\"ossl\" ) Start up listener at port 514 input( type=\"imtcp\" port=\"514\" )",
"input( type=\"imtcp\" Port=\"50515\" StreamDriver.Name=\" <driver> \" streamdriver.CAFile=\"/etc/rsyslog.d/ <ca1> .pem\" streamdriver.CertFile=\"/etc/rsyslog.d/ <server1-cert> .pem\" streamdriver.KeyFile=\"/etc/rsyslog.d/ <server1-key> .pem\" )",
"rsyslogd -N 1 rsyslogd: version 8.1911.0-2.el8, config validation run (level 1) rsyslogd: End of config validation run. Bye.",
"systemctl status rsyslog",
"systemctl restart rsyslog",
"systemctl enable rsyslog",
"Set certificate files global( DefaultNetstreamDriverCAFile=\"/etc/pki/ca-trust/source/anchors/ca-cert.pem\" DefaultNetstreamDriverCertFile=\"/etc/pki/ca-trust/source/anchors/client-cert.pem\" DefaultNetstreamDriverKeyFile=\"/etc/pki/ca-trust/source/anchors/client-key.pem\" ) Set up the action for all messages *.* action( type=\"omfwd\" StreamDriver=\"ossl\" StreamDriverMode=\"1\" StreamDriverPermittedPeers=\"server.example.com\" StreamDriverAuthMode=\"x509/name\" target=\"server.example.com\" port=\"514\" protocol=\"tcp\" )",
"local1.* action( type=\"omfwd\" StreamDriver=\"<driver>\" StreamDriverMode=\"1\" StreamDriverAuthMode=\"x509/certvalid\" streamDriver.CAFile=\"/etc/rsyslog.d/<ca1>.pem\" streamDriver.CertFile=\"/etc/rsyslog.d/<client1-cert>.pem\" streamDriver.KeyFile=\"/etc/rsyslog.d/<client1-key>.pem\" target=\"server.example.com\" port=\"514\" protocol=\"tcp\" )",
"rsyslogd -N 1 rsyslogd: version 8.1911.0-2.el8, config validation run (level 1) rsyslogd: End of config validation run. Bye.",
"systemctl status rsyslog",
"systemctl restart rsyslog",
"systemctl enable rsyslog",
"logger test",
"cat /var/log/remote/msg/ <hostname> /root.log Feb 25 03:53:17 <hostname> root[6064]: test",
"semanage port -a -t syslogd_port_t -p udp portno",
"firewall-cmd --zone= zone --permanent --add-port= portno /udp success firewall-cmd --reload",
"firewall-cmd --reload",
"Define templates before the rules that use them Per-Host templates for remote systems template(name=\"TmplAuthpriv\" type=\"list\") { constant(value=\"/var/log/remote/auth/\") property(name=\"hostname\") constant(value=\"/\") property(name=\"programname\" SecurePath=\"replace\") constant(value=\".log\") } template(name=\"TmplMsg\" type=\"list\") { constant(value=\"/var/log/remote/msg/\") property(name=\"hostname\") constant(value=\"/\") property(name=\"programname\" SecurePath=\"replace\") constant(value=\".log\") } Provides UDP syslog reception module(load=\"imudp\") This ruleset processes remote messages ruleset(name=\"remote1\"){ authpriv.* action(type=\"omfile\" DynaFile=\"TmplAuthpriv\") *.info;mail.none;authpriv.none;cron.none action(type=\"omfile\" DynaFile=\"TmplMsg\") } input(type=\"imudp\" port=\"514\" ruleset=\"remote1\")",
"rsyslogd -N 1 rsyslogd: version 8.1911.0-2.el8, config validation run",
"systemctl restart rsyslog",
"systemctl enable rsyslog",
"*.* action(type=\"omfwd\" queue.type=\"linkedlist\" queue.filename=\" example_fwd \" action.resumeRetryCount=\"-1\" queue.saveOnShutdown=\"on\" target=\" example.com \" port=\" portno \" protocol=\"udp\" )",
"systemctl restart rsyslog",
"systemctl enable rsyslog",
"logger test",
"cat /var/log/remote/msg/ hostname /root.log Feb 25 03:53:17 hostname root[6064]: test",
"action(type=\"omfwd\" protocol=\"tcp\" RebindInterval=\"250\" target=\" example.com \" port=\"514\" ...) action(type=\"omfwd\" protocol=\"udp\" RebindInterval=\"250\" target=\" example.com \" port=\"514\" ...) action(type=\"omrelp\" RebindInterval=\"250\" target=\" example.com \" port=\"6514\" ...)",
"module(load=\"omrelp\") *.* action(type=\"omrelp\" target=\"_target_IP_\" port=\"_target_port_\")",
"systemctl restart rsyslog",
"systemctl enable rsyslog",
"ruleset(name=\"relp\"){ *.* action(type=\"omfile\" file=\"_log_path_\") } module(load=\"imrelp\") input(type=\"imrelp\" port=\"_target_port_\" ruleset=\"relp\")",
"systemctl restart rsyslog",
"systemctl enable rsyslog",
"logger test",
"cat /var/log/remote/msg/hostname/root.log Feb 25 03:53:17 hostname root[6064]: test",
"ls /usr/lib64/rsyslog/{i,o}m *",
"dnf install netconsole-service",
"SYSLOGADDR= 192.0.2.1",
"systemctl enable --now netconsole"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/9/html/security_hardening/assembly_configuring-a-remote-logging-solution_security-hardening |
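Section 14.1 describes how rsyslogd classifies messages by facility and priority, but this extract does not show a rule in that classic selector form. The following /etc/rsyslog.d/ snippet is a sketch only; the file name 50-example.conf and the destination paths are arbitrary choices, not defaults you must use.

# /etc/rsyslog.d/50-example.conf
# Authentication and authorization messages go to a dedicated file
authpriv.*                                /var/log/secure
# Informational and higher messages, excluding mail, authpriv, and cron
*.info;mail.none;authpriv.none;cron.none  /var/log/messages
# Emergency messages are sent to all logged-in users
*.emerg                                   :omusrmsg:*

As with the other examples in this chapter, validate the syntax with rsyslogd -N 1 and restart the rsyslog service after saving the file.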
20.21. Creating a Guest Virtual Machine from a Configuration File | 20.21. Creating a Guest Virtual Machine from a Configuration File Guest virtual machines can be created from XML configuration files. You can copy existing XML from previously created guest virtual machines or use the virsh dumpxml command. Example 20.49. How to create a guest virtual machine from an XML file The following example creates a new virtual machine from the existing guest1.xml configuration file. You need to have this file before beginning. You can retrieve the file using the virsh dumpxml command. See Example 20.48, "How to retrieve the XML file for a guest virtual machine" for instructions. # virsh create guest1.xml | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/virtualization_deployment_and_administration_guide/sect-domain_commands-creating_a_guest_virtual_machine_from_a_configuration_file |
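A minimal end-to-end sketch of the procedure above, assuming an existing guest named guest1: dump its configuration, optionally edit the copy, and then start a machine from the file. Note that virsh create starts a transient guest; if you want the configuration to persist across host reboots, use virsh define instead.

# virsh dumpxml guest1 > guest1.xml
# virsh create guest1.xml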
Chapter 1. Autoscale APIs | Chapter 1. Autoscale APIs 1.1. ClusterAutoscaler [autoscaling.openshift.io/v1] Description ClusterAutoscaler is the Schema for the clusterautoscalers API Type object 1.2. MachineAutoscaler [autoscaling.openshift.io/v1beta1] Description MachineAutoscaler is the Schema for the machineautoscalers API Type object 1.3. HorizontalPodAutoscaler [autoscaling/v2] Description HorizontalPodAutoscaler is the configuration for a horizontal pod autoscaler, which automatically manages the replica count of any resource implementing the scale subresource based on the metrics specified. Type object 1.4. Scale [autoscaling/v1] Description Scale represents a scaling request for a resource. Type object | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.16/html/autoscale_apis/autoscale-apis |
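The schema descriptions above do not include usage examples. As a brief, hedged illustration, a HorizontalPodAutoscaler can be created imperatively with the oc client; the deployment name example-app and the scaling thresholds below are placeholder values.

oc autoscale deployment/example-app --min=2 --max=10 --cpu-percent=75

This creates a HorizontalPodAutoscaler object targeting the named deployment; the ClusterAutoscaler and MachineAutoscaler resources are typically defined declaratively in YAML manifests instead.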
3.5. Displaying Cluster Status | 3.5. Displaying Cluster Status The following command displays the current status of the cluster and the cluster resources. You can display a subset of information about the current status of the cluster with the following commands. The following command displays the status of the cluster, but not the cluster resources. The following command displays the status of the cluster resources. | [
"pcs status",
"pcs cluster status",
"pcs status resources"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/configuring_the_red_hat_high_availability_add-on_with_pacemaker/s1-clusterstat-haar |
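If you want to follow the cluster status continuously rather than take a single snapshot, you can combine the commands above with the standard watch utility; the two-second interval here is only an example.

watch -n 2 pcs status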
Chapter 1. Interactively selecting a system-wide Red Hat build of OpenJDK version on RHEL | Chapter 1. Interactively selecting a system-wide Red Hat build of OpenJDK version on RHEL If you have multiple versions of Red Hat build of OpenJDK installed on RHEL, you can interactively select the default Red Hat build of OpenJDK version to use system-wide. Note If you do not have root privileges, you can select a Red Hat build of OpenJDK version by configuring the JAVA_HOME environment variable. Prerequisites You must have root privileges on the system. Multiple versions of Red Hat build of OpenJDK were installed using the yum package manager. Procedure View the Red Hat build of OpenJDK versions installed on the system. USD yum list installed "java*" A list of installed Java packages appears. Display the Red Hat build of OpenJDK versions that can be used for a specific java command and select the one to use: The current system-wide Red Hat build of OpenJDK version is marked with an asterisk. The current Red Hat build of OpenJDK version for the specified java command is marked with a plus sign. Press Enter to keep the current selection or enter the Selection number of the Red Hat build of OpenJDK version you want to select followed by the Enter key. The default Red Hat build of OpenJDK version for the system is the selected version. Verify that the chosen binary is selected. Note This procedure configures the java command. Then javac command can be set up in a similar way, but it operates independently. If you have Red Hat build of OpenJDK installed, alternatives provides more possible selections. In particular, the javac master alternative switches many binaries provided by the -devel sub-package. Even if you have Red Hat build of OpenJDK installed, java (and other JRE masters) and javac (and other Red Hat build of OpenJDK masters) still operate separately, so you can have different selections for JRE and JDK. The alternatives --config java command affects the jre and its associated slaves. If you want to change Red Hat build of OpenJDK, use the javac alternatives command. The --config javac utility configures the SDK and related slaves. To see all possible masters, use alternatives --list and check all of the java , javac , jre , and sdk masters. | [
"Installed Packages java-1.8.0-openjdk.x86_64 1:1.8.0.302.b08-0.el8_4 @rhel-8-appstream-rpms java-11-openjdk.x86_64 1:11.0.12.0.7-0.el8_4 @rhel-8-appstream-rpms java-11-openjdk-headless.x86_64 1:11.0.12.0.7-0.el8_4 @rhel-8-appstream-rpms java-17-openjdk.x86_64 1:17.0.0.0.35-4.el8 @rhel-8-appstream-rpms java-17-openjdk-headless.x86_64 1:17.0.0.0.35-4.el8 @rhel-8-appstream-rpms",
"sudo alternatives --config java There are 3 programs which provide 'java'. Selection Command ----------------------------------------------- 1 java-11-openjdk.x86_64 (/usr/lib/jvm/java-11-openjdk-11.0.12.0.7-0.el8_4.x86_64/bin/java) * 2 java-1.8.0-openjdk.x86_64 (/usr/lib/jvm/java-1.8.0-openjdk-1.8.0.302.b08-0.el8_4.x86_64/jre/bin/java) + 3 java-17-openjdk.x86_64 (/usr/lib/jvm/java-17-openjdk-17.0.0.0.35-4.el8.x86_64/bin/java) Enter to keep the current selection[+], or type selection number: 1",
"java -version openjdk version \"17\" 2021-09-14 OpenJDK Runtime Environment 21.9 (build 17+35) OpenJDK 64-Bit Server VM 21.9 (build 17+35, mixed mode, sharing)"
] | https://docs.redhat.com/en/documentation/red_hat_build_of_openjdk/21/html/configuring_red_hat_build_of_openjdk_21_on_rhel/interactively-selecting-systemwide-openjdk-version-on-rhel |
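For users without root privileges, the chapter above points to the JAVA_HOME environment variable as the per-user alternative to alternatives --config java. The following shell sketch is not part of the original procedure; it assumes the java-17-openjdk package shown in the alternatives listing above is installed, and the installation path must be adjusted to match your system.

# Per-user selection without root privileges: point JAVA_HOME at the desired
# JDK and put its bin directory first on PATH (path taken from the
# alternatives listing above; adjust to your installed version).
export JAVA_HOME=/usr/lib/jvm/java-17-openjdk-17.0.0.0.35-4.el8.x86_64
export PATH="$JAVA_HOME/bin:$PATH"

# Verify which binary is now selected for this shell session.
java -version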
Chapter 8. KVM Guest Timing Management | Chapter 8. KVM Guest Timing Management Virtualization involves several challenges for time keeping in guest virtual machines. Interrupts cannot always be delivered simultaneously and instantaneously to all guest virtual machines. This is because interrupts in virtual machines are not true interrupts. Instead, they are injected into the guest virtual machine by the host machine. The host may be running another guest virtual machine, or a different process. Therefore, the precise timing typically required by interrupts may not always be possible. Guest virtual machines without accurate time keeping may experience issues with network applications and processes, as session validity, migration, and other network activities rely on timestamps to remain correct. KVM avoids these issues by providing guest virtual machines with a paravirtualized clock ( kvm-clock ). However, it is still important to test timing before attempting activities that may be affected by time keeping inaccuracies, such as guest migration. Important To avoid the problems described above, the Network Time Protocol (NTP) should be configured on the host and the guest virtual machines. On guests using Red Hat Enterprise Linux 6 and earlier, NTP is implemented by the ntpd service. For more information, see the Red Hat Enterprise 6 Deployment Guide . On systems using Red Hat Enterprise Linux 7, NTP time synchronization service can be provided by ntpd or by the chronyd service. Note that Chrony has some advantages on virtual machines. For more information, see the Configuring NTP Using the chrony Suite and Configuring NTP Using ntpd sections in the Red Hat Enterprise Linux 7 System Administrator's Guide. The mechanics of guest virtual machine time synchronization By default, the guest synchronizes its time with the hypervisor as follows: When the guest system boots, the guest reads the time from the emulated Real Time Clock (RTC). When the NTP protocol is initiated, it automatically synchronizes the guest clock. Afterwards, during normal guest operation, NTP performs clock adjustments in the guest. When a guest is resumed after a pause or a restoration process, a command to synchronize the guest clock to a specified value should be issued by the management software (such as virt-manager ). This synchronization works only if the QEMU guest agent is installed in the guest and supports the feature. The value to which the guest clock synchronizes is usually the host clock value. Constant Time Stamp Counter (TSC) Modern Intel and AMD CPUs provide a constant Time Stamp Counter (TSC). The count frequency of the constant TSC does not vary when the CPU core itself changes frequency, for example to comply with a power-saving policy. A CPU with a constant TSC frequency is necessary in order to use the TSC as a clock source for KVM guests. Your CPU has a constant Time Stamp Counter if the constant_tsc flag is present. To determine if your CPU has the constant_tsc flag enter the following command: If any output is given, your CPU has the constant_tsc bit. If no output is given, follow the instructions below. Configuring Hosts without a Constant Time Stamp Counter Systems without a constant TSC frequency cannot use the TSC as a clock source for virtual machines, and require additional configuration. Power management features interfere with accurate time keeping and must be disabled for guest virtual machines to accurately keep time with KVM. Important These instructions are for AMD revision F CPUs only. 
If the CPU lacks the constant_tsc bit, disable all power management features. Each system has several timers it uses to keep time. The TSC is not stable on the host, which is sometimes caused by cpufreq changes, deep C state, or migration to a host with a faster TSC. Deep C sleep states can stop the TSC. To prevent the kernel from using deep C states, append processor.max_cstate=1 to the kernel boot options. To make this change persistent, edit the value of the GRUB_CMDLINE_LINUX key in the /etc/default/grub file. For example, if you want to enable emergency mode for each boot, edit the entry as follows: Note that you can specify multiple parameters for the GRUB_CMDLINE_LINUX key, similarly to adding the parameters in the GRUB 2 boot menu. To disable cpufreq (only necessary on hosts without the constant_tsc), install kernel-tools and enable the cpupower.service (systemctl enable cpupower.service). If you want to disable this service every time the guest virtual machine boots, edit the /etc/sysconfig/cpupower configuration file and change the CPUPOWER_START_OPTS and CPUPOWER_STOP_OPTS values. Valid limits can be found in the /sys/devices/system/cpu/<cpuid>/cpufreq/scaling_available_governors files. For more information on this package or on power management and governors, see the Red Hat Enterprise Linux 7 Power Management Guide. 8.1. Host-wide Time Synchronization Virtual network devices in KVM guests do not support hardware timestamping, which means it is difficult to synchronize the clocks of guests that use a network protocol like NTP or PTP with better accuracy than tens of microseconds. When a more accurate synchronization of the guests is required, it is recommended to synchronize the clock of the host using NTP or PTP with hardware timestamping, and to synchronize the guests to the host directly. Red Hat Enterprise Linux 7.5 and later provide a virtual PTP hardware clock (PHC), which enables the guests to synchronize to the host with sub-microsecond accuracy. Important Note that for PHC to work properly, both the host and the guest need to be using RHEL 7.5 or later as the operating system (OS). To enable the PHC device, do the following on the guest OS: Set the ptp_kvm module to load after reboot. Add the /dev/ptp0 clock as a reference to the chrony configuration: Restart the chrony daemon: To verify that the host-guest time synchronization has been configured correctly, use the chronyc sources command on a guest. The output should look similar to the following: | [
"cat /proc/cpuinfo | grep constant_tsc",
"GRUB_CMDLINE_LINUX=\"emergency\"",
"echo ptp_kvm > /etc/modules-load.d/ptp_kvm.conf",
"echo \"refclock PHC /dev/ptp0 poll 2\" >> /etc/chrony.conf",
"systemctl restart chronyd",
"chronyc sources 210 Number of sources = 1 MS Name/IP address Stratum Poll Reach LastRx Last sample =============================================================================== #* PHC0 0 2 377 4 -6ns[ -6ns] +/- 726ns"
] | https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/virtualization_deployment_and_administration_guide/chap-kvm_guest_timing_management |
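The configuration above can be spot-checked with a few read-only commands. The following sketch is not part of the original chapter; it assumes a RHEL 7.5 or later host and guest configured as described above, and it only reads state, so it is safe to run repeatedly.

# On the host: confirm the CPU advertises a constant TSC (no output means the
# power-management workarounds described above are required).
grep -o constant_tsc /proc/cpuinfo | uniq

# Inside the guest: confirm the paravirtualized kvm-clock is the active
# clocksource.
cat /sys/devices/system/clocksource/clocksource0/current_clocksource

# Inside the guest: confirm chrony is tracking the virtual PTP hardware clock.
chronyc sources | grep PHC0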
Chapter 2. Installing the Virtualization Packages | Chapter 2. Installing the Virtualization Packages To use virtualization, Red Hat virtualization packages must be installed on your computer. Virtualization packages can be installed when installing Red Hat Enterprise Linux or after installation using the yum command and the Subscription Manager application. The KVM hypervisor uses the default Red Hat Enterprise Linux kernel with the kvm kernel module. 2.1. Installing Virtualization Packages During a Red Hat Enterprise Linux Installation This section provides information about installing virtualization packages while installing Red Hat Enterprise Linux. Note For detailed information about installing Red Hat Enterprise Linux, see the Red Hat Enterprise Linux 7 Installation Guide . Important The Anaconda interface only offers the option to install Red Hat virtualization packages during the installation of Red Hat Enterprise Linux Server. When installing a Red Hat Enterprise Linux Workstation, the Red Hat virtualization packages can only be installed after the workstation installation is complete. See Section 2.2, "Installing Virtualization Packages on an Existing Red Hat Enterprise Linux System" Procedure 2.1. Installing virtualization packages Select software Follow the installation procedure until the Installation Summary screen. Figure 2.1. The Installation Summary screen In the Installation Summary screen, click Software Selection . The Software Selection screen opens. Select the server type and package groups You can install Red Hat Enterprise Linux 7 with only the basic virtualization packages or with packages that allow management of guests through a graphical user interface. Do one of the following: Install a minimal virtualization host Select the Virtualization Host radio button in the Base Environment pane and the Virtualization Platform check box in the Add-Ons for Selected Environment pane. This installs a basic virtualization environment which can be run with virsh or remotely over the network. Figure 2.2. Virtualization Host selected in the Software Selection screen Install a virtualization host with a graphical user interface Select the Server with GUI radio button in the Base Environment pane and the Virtualization Client , Virtualization Hypervisor , and Virtualization Tools check boxes in the Add-Ons for Selected Environment pane. This installs a virtualization environment along with graphical tools for installing and managing guest virtual machines. Figure 2.3. Server with GUI selected in the software selection screen Finalize installation Click Done and continue with the installation. Important You need a valid Red Hat Enterprise Linux subscription to receive updates for the virtualization packages. 2.1.1. Installing KVM Packages with Kickstart Files To use a Kickstart file to install Red Hat Enterprise Linux with the virtualization packages, append the following package groups in the %packages section of your Kickstart file: For more information about installing with Kickstart files, see the Red Hat Enterprise Linux 7 Installation Guide . | [
"@virtualization-hypervisor @virtualization-client @virtualization-platform @virtualization-tools"
] | https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/virtualization_deployment_and_administration_guide/chap-installing_the_virtualization_packages |
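Section 2.2, referenced above but not reproduced here, covers installing the same packages on an already installed system. As a rough sketch only, the group IDs from the kickstart example above can be installed directly with yum (run as root); the validation step with virt-host-validate is an assumption and requires the libvirt client tools to be present.

# Install the virtualization package groups on an existing Red Hat
# Enterprise Linux 7 system, using the same group IDs as the kickstart
# %packages section above.
yum install @virtualization-hypervisor @virtualization-client \
    @virtualization-platform @virtualization-tools

# Start the libvirt daemon and enable it at boot, then check that the host
# is set up for full KVM virtualization.
systemctl start libvirtd
systemctl enable libvirtd
virt-host-validate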
Chapter 13. Managing systemd | Chapter 13. Managing systemd As a system administrator, you can manage critical aspects of your system with systemd . Serving as a system and service manager for Linux operating systems, systemd software suite provides tools and services for controlling, reporting, and system initialization. Key features of systemd include: Parallel start of system services during boot On-demand activation of daemons Dependency-based service control logic The basic object that systemd manages is a systemd unit , a representation of system resources and services. A systemd unit consists of a name, type and a configuration file that defines and manages a particular task. You can use unit files to configure system behavior. See the following examples of various systemd unit types: Service Controls and manages individual system services. Target Represents a group of units that define system states. Device Manages hardware devices and their availability. Mount Handles file system mounting. Timer Schedules tasks to run at specific intervals. 13.1. Systemd unit files locations You can find the unit configuration files in one of the following directories: Table 13.1. systemd unit files locations Directory Description /usr/lib/systemd/system/ systemd unit files distributed with installed RPM packages. /run/systemd/system/ systemd unit files created at run time. This directory takes precedence over the directory with installed service unit files. /etc/systemd/system/ systemd unit files created by using the systemctl enable command as well as unit files added for extending a service. This directory takes precedence over the directory with runtime unit files. The default configuration of systemd is defined during the compilation and you can find the configuration in the /etc/systemd/system.conf file. By editing this file, you can modify the default configuration by overriding values for systemd units globally. For example, to override the default value of the timeout limit, which is set to 90 seconds, use the DefaultTimeoutStartSec parameter to input the required value in seconds. 13.2. Managing system services with systemctl As a system administrator, you can manage system services by using the systemctl utility. You can perform various tasks, such as starting, stopping, restarting running services, enabling and disabling services to start at boot, listing available services, and displaying system services statuses. 13.2.1. Listing system services You can list all currently loaded service units and display the status of all available service units. Procedure Use the systemctl command to perform any of the following tasks: List all currently loaded service units: By default, the systemctl list-units command displays only active units. 
For each service unit file, the command provides an overview of the following parameters: UNIT The full name of the service unit LOAD The load state of the configuration file ACTIVE or SUB The current high-level and low-level unit file activation state DESCRIPTION A short description of the unit's purpose and functionality List all loaded units regardless of their state , by using the following command with the --all or -a command line option: List the status ( enabled or disabled ) of all available service units: For each service unit, this command displays: UNIT FILE The full name of the service unit STATE The information whether the service unit is enabled or disabled to start automatically during boot Additional resources Displaying system service status 13.2.2. Displaying system service status You can inspect any service unit to get detailed information and verify the state of the service, whether it is enabled to start during boot or currently running. You can also view services that are ordered to start after or before a particular service unit. Procedure Display detailed information about a service unit that corresponds to a system service: Replace <name> with the name of the service unit you want to inspect (for example, gdm ). This command displays the following information: The name of the selected service unit followed by a short description One or more fields described in Available service unit information The execution of the service unit: if the unit is executed by the root user The most recent log entries Table 13.2. Available service unit information Field Description Loaded Information whether the service unit has been loaded, the absolute path to the unit file, and a note whether the unit is enabled to start during boot. Active Information whether the service unit is running followed by a time stamp. Main PID The process ID and the name of the corresponding system service. Status Additional information about the corresponding system service. Process Additional information about related processes. CGroup Additional information about related control groups ( cgroups ). Verify that a particular service unit is running: Determine whether a particular service unit is enabled to start during boot: Note Both systemctl is-active and systemctl is-enabled commands return an exit status of 0 if the specified service unit is running or enabled. Check what services systemd orders to start before the specified service unit For example, to view the list of services ordered to start before gdm , enter: Check what services systemd orders to start after the specified service unit: For example, to view the list of services systemd orders to start after gdm , enter: Additional resources Listing system services 13.2.3. Starting and stopping a systemd unit You can start system service in the current session by using the systemctl start command. Prerequisites You have the Root access. Procedure Start a system service in the current session: Replace <systemd_unit> with the name of the service unit you want to start (for example, httpd.service ). Note In systemd , positive and negative dependencies between services exist. Starting a particular service may require starting one or more other services ( positive dependency ) or stopping one or more services ( negative dependency ). When you attempt to start a new service, systemd resolves all dependencies automatically, without explicit notification to the user. 
This means that if you are already running a service, and you attempt to start another service with a negative dependency, the first service is automatically stopped. For example, if you are running the sendmail service, and you attempt to start the postfix service, systemd first automatically stops sendmail, because these two services are conflicting and cannot run on the same port. Additional resources systemctl(1) man page on your system Enabling a system service to start at boot Displaying system service status 13.2.4. Stopping a system service If you want to stop a system service in the current session, use the systemctl stop command. Prerequisites Root access Procedure Stop a system service: Replace <name> with the name of the service unit you want to stop (for example, bluetooth). Additional resources systemctl(1) man page on your system Disabling a system service to start at boot Displaying system service status 13.2.5. Restarting and reloading a system service You can restart a system service in the current session by using the restart command to perform the following actions: Stop the selected service unit in the current session and immediately start it again. Restart a service unit only if the corresponding service is already running. Reload the configuration of a system service without interrupting its execution. Prerequisites You have root access. Procedure Restart a system service: Replace <name> with the name of the service unit you want to restart (for example, httpd). If the selected service unit is not running, this command starts it. Restart a service unit only if the corresponding service is already running: Reload the configuration without interrupting service execution: Note System services that do not support this feature ignore this command. To restart such services, use the reload-or-restart and reload-or-try-restart commands instead. Additional resources systemctl man page on your system Displaying system service status 13.2.6. Enabling a system service to start at boot You can enable a service to start automatically at boot; the change takes effect after the next reboot. Prerequisites You have root access. Procedure Verify whether the unit is masked: If the unit is masked, unmask it first: Enable a service to start at boot time: Replace <systemd_unit> with the name of the service unit you want to enable (for example, httpd). Optionally, pass the --now option to the command to also start the unit right now. Additional resources systemctl(1) man page on your system Displaying system service status Starting a system service 13.2.7. Disabling a system service to start at boot You can prevent a service unit from starting automatically at boot time. If you disable a service, it will not start at boot, but you can start it manually. You can also mask a service, so that it cannot be started manually. Masking is a way of disabling a service that makes the service permanently unusable until it is unmasked again. Prerequisites You have root access. Procedure Disable a service to start at boot: Replace <name> with the name of the service unit you want to disable (for example, bluetooth). Optionally, pass the --now option to also stop the service if it is currently running. Optional: To prevent the unit from being started accidentally by an administrator or as a dependency of other units, mask the service: Additional resources systemctl(1) man page on your system Displaying system service status Stopping a system service 13.3.
Booting into a target system state As a system administrator, you can control the boot process of your system, and define the state you want your system to boot into. This is called a systemd target, and it is a set of systemd units that your system starts to reach a certain level of functionality. While working with systemd targets, you can view the default target, select a target at runtime, change the default boot target, and boot into an emergency or rescue target. 13.3.1. Target unit files Targets in systemd are groups of related units that act as synchronization points during the start of your system. Target unit files, which end with the .target file extension, represent the systemd targets. The purpose of target units is to group together various systemd units through a chain of dependencies. For example, the graphical.target unit, which starts a graphical session, activates the multi-user.target unit. Similarly, the multi-user.target unit starts other essential system services such as NetworkManager (NetworkManager.service) or D-Bus (dbus.service) and activates another target unit named basic.target. You can set the following systemd targets as default or current targets: Table 13.3. Common systemd targets: rescue - unit target that pulls in the base system and spawns a rescue shell; multi-user - unit target for setting up a multi-user system; graphical - unit target for setting up a graphical login screen; emergency - unit target that starts an emergency shell on the main console. Additional resources systemd.special(7) and systemd.target(5) man pages on your system 13.3.2. Changing the default target to boot into The default.target symbolic link refers to the systemd target that the system should boot into. When the system starts, systemd resolves this link and boots into the defined target. You can find the currently selected default target unit in the /etc/systemd/system/default.target file. Each target represents a certain level of functionality and is used for grouping other units. Additionally, target units serve as synchronization points during boot. You can change the default target your system boots into. When you set a default target unit, the current target remains unchanged until you reboot. Prerequisites You have root access. Procedure Determine the current default target unit systemd uses to start the system: List the currently loaded targets: Configure the system to use a different target unit by default: Replace <name> with the name of the target unit you want to use by default. Verify the default target unit: Optional: Switch to the new default target: Alternatively, reboot the system. Additional resources systemctl(1), systemd.special(7), and bootup(7) man pages on your system 13.3.3. Changing the current target On a running system, you can change the target unit in the current boot without a reboot. If you switch to a different target, systemd starts all services and their dependencies that this target requires, and stops all services that the new target does not enable. Manually switching to a different target is only a temporary operation. When you reboot the host, systemd boots again into the default target. Procedure Optional: Display the list of targets you can select: Note You can only isolate targets that have the AllowIsolate=yes option set in the unit files. Change to a different target unit in the current boot: Replace <name> with the name of the target unit you want to use in the current boot. This command starts the target unit named multi-user and all dependent units, and immediately stops all other units.
Additional resources systemctl(1) man page on your system 13.3.4. Booting to rescue mode You can boot to the rescue mode that provides a single-user environment for troubleshooting or repair if the system cannot get to a later target, and the regular booting process fails. In rescue mode, the system attempts to mount all local file systems and start certain important system services, but it does not activate network interfaces. Prerequisites Root access Procedure To enter the rescue mode, change the current target in the current session: Note This command is similar to systemctl isolate rescue.target , but it also sends an informative message to all users that are currently logged into the system. To prevent systemd from sending a message, enter the following command with the --no-wall command-line option: Troubleshooting If your system is not able to enter the rescue mode, you can boot to emergency mode , which provides the most minimal environment possible. In emergency mode, the system mounts the root file system only for reading, does not attempt to mount any other local file systems, does not activate network interfaces, and only starts a few essential services. 13.3.5. Troubleshooting the boot process As a system administrator, you can select a non-default target at boot time to troubleshoot the boot process. Changing the target at boot time affects only a single boot. You can boot to emergency mode , which provides the most minimal environment possible. Procedure Reboot the system, and interrupt the boot loader menu countdown by pressing any key except the Enter key, which would initiate a normal boot. Move the cursor to the kernel entry that you want to start. Press the E key to edit the current entry. Move to the end of the line that starts with linux and press Ctrl+E to jump to the end of the line: To choose an alternate boot target, append the systemd.unit= parameter to the end of the line that starts with linux : Replace <name> with the name of the target unit you want to use. For example, systemd.unit=emergency.target Press Ctrl+X to boot with these settings. 13.4. Shutting down, suspending, and hibernating the system As a system administrator, you can use different power management options to manage power consumption, perform a proper shutdown to ensure that all data is saved, or restart the system to apply changes and updates. 13.4.1. System shutdown To shut down the system, you can either use the systemctl utility directly, or call this utility through the shutdown command. Using the shutdown utility has the following advantages: In RHEL 8, you can schedule a shutdown by using the time argument. This also gives users warning that a system shutdown has been scheduled. 13.4.2. Scheduling a system shutdown As a system administrator, you can schedule a delayed shutdown to give users time to save their work and log off the system. Use the shutdown command to perform the following operations: Shut down the system and power off the machine at a certain time: Where hh:mm is the time in the 24-hour time notation. To prevent new logins, the /run/nologin file is created 5 minutes before system shutdown. When you use the time argument, you can notify users logged in to the system of the planned shutdown by specifying an optional wall message , for example shutdown --poweroff 13:59 "Attention. The system will shut down at 13:59" . Shut down and halt the system after a delay, without powering off the machine: Where +m is the delay time in minutes. 
You can use the now keyword as an alias for +0. Cancel a pending shutdown: Additional resources shutdown(8) manual page Shutting down the system using the systemctl command 13.4.3. Shutting down the system using the systemctl command As a system administrator, you can shut down the system and power off the machine or shut down and halt the system without powering off the machine by using the systemctl command. Prerequisites Root access Procedure Use the systemctl command to perform any of the following tasks: Shut down the system and power off the machine: Shut down and halt the system without powering off the machine: Note By default, running either of these commands causes systemd to send an informative message to all users that are currently logged into the system. To prevent systemd from sending this message, run the selected command with the --no-wall command line option. 13.4.4. Restarting the system When you restart the system, systemd stops all running programs and services, the system shuts down, and then immediately starts again. Prerequisites You have root access. Procedure Restart the system: Note By default, when you use this command, systemd sends an informative message to all users that are currently logged into the system. To prevent systemd from sending this message, run this command with the --no-wall option. 13.4.5. Optimizing power consumption by suspending and hibernating the system As a system administrator, you can manage power consumption, save energy on your systems, and preserve the current state of your system. To do so, apply one of the following modes: Suspend Suspending saves the system state in RAM and, with the exception of the RAM module, powers off most of the devices in the machine. When you turn the machine back on, the system then restores its state from RAM without having to boot again. Because the system state is saved in RAM and not on the hard disk, restoring the system from suspend mode is significantly faster than from hibernation. However, the suspended system state is also vulnerable to power outages. Hibernate Hibernating saves the system state on the hard disk drive and powers off the machine. When you turn the machine back on, the system then restores its state from the saved data without having to boot again. Because the system state is saved on the hard disk and not in RAM, the machine does not have to maintain electrical power to the RAM module. However, as a consequence, restoring the system from hibernation is significantly slower than restoring it from suspend mode. Hybrid sleep This combines elements of both hibernation and suspending. The system first saves the current state on the hard disk drive, and then enters a low-power state similar to suspending, which allows the system to resume more quickly. The benefit of hybrid sleep is that if the system loses power during the sleep state, it can still recover the state from the saved image on the hard disk, similar to hibernation. Suspend-then-hibernate This mode first suspends the system, which results in saving the current system state to RAM and putting the system in a low-power mode. The system hibernates if it remains suspended for a specific period of time that you can define in the HibernateDelaySec parameter. Hibernation saves the system state to the hard disk drive and shuts down the system completely. The suspend-then-hibernate mode provides the benefit of conserving battery power while you are still able to quickly resume work.
Additionally, this mode ensures that your data is saved in case of a power failure. Prerequisites Root access Procedure Choose the appropriate method for power saving: Suspend the system: Hibernate the system: Hibernate and suspend the system: Suspend and then hibernate the system: 13.4.6. Changing the power button behavior When you press the power button on your computer, it suspends or shuts down the system by default. You can customize this behavior according to your preferences. 13.4.6.1. Changing the behavior of the power button when pressing the button and GNOME is not running When you press the power button in a non-graphical systemd target, it shuts down the system by default. You can customize this behavior according to your preferences. Prerequisites Administrative access. Procedure Edit the /etc/systemd/logind.conf configuration file and set the HandlePowerKey variable to one of the following options: poweroff Shut down the computer. reboot Reboot the system. halt Initiate a system halt. kexec Initiate a kexec reboot. suspend Suspend the system. hibernate Initiate system hibernation. ignore Do nothing. For example, to reboot the system upon pressing the power button, use this setting: 13.4.6.2. Changing the behavior of the power button when pressing the button and GNOME is running On the graphical login screen or in the graphical user session, pressing the power button suspends the machine by default. This happens both when the user physically presses the power button and when a virtual power button is pressed from a remote console. You can select a different power button behavior. Procedure Create a local database for system-wide settings in the /etc/dconf/db/local.d/01-power file with the following content: Replace <value> with one of the following power button actions: nothing Does nothing. suspend Suspends the system. hibernate Hibernates the system. interactive Shows a pop-up query asking the user what to do. With interactive mode, the system powers off automatically after 60 seconds when pressing the power button. However, you can choose a different behavior from the pop-up query. Optional: Override the user's setting, and prevent the user from changing it. Enter the following configuration in the /etc/dconf/db/local.d/locks/01-power file: Update the system databases: Log out and back in again for the system-wide settings to take effect. | [
"DefaultTimeoutStartSec= required value",
"systemctl list-units --type service UNIT LOAD ACTIVE SUB DESCRIPTION abrt-ccpp.service loaded active exited Install ABRT coredump hook abrt-oops.service loaded active running ABRT kernel log watcher abrtd.service loaded active running ABRT Automated Bug Reporting Tool systemd-vconsole-setup.service loaded active exited Setup Virtual Console tog-pegasus.service loaded active running OpenPegasus CIM Server LOAD = Reflects whether the unit definition was properly loaded. ACTIVE = The high-level unit activation state, or a generalization of SUB. SUB = The low-level unit activation state, values depend on unit type. 46 loaded units listed. Pass --all to see loaded but inactive units, too. To show all installed unit files use 'systemctl list-unit-files'",
"systemctl list-units --type service --all",
"systemctl list-unit-files --type service UNIT FILE STATE abrt-ccpp.service enabled abrt-oops.service enabled abrtd.service enabled wpa_supplicant.service disabled ypbind.service disabled 208 unit files listed.",
"systemctl status <name> .service",
"systemctl is-active <name> .service",
"systemctl is-enabled <name> .service",
"systemctl list-dependencies --after <name> .service",
"systemctl list-dependencies --after gdm.service gdm.service ├─dbus.socket ├─[email protected] ├─livesys.service ├─plymouth-quit.service ├─system.slice ├─systemd-journald.socket ├─systemd-user-sessions.service └─basic.target [output truncated]",
"systemctl list-dependencies --before <name> .service",
"systemctl list-dependencies --before gdm.service gdm.service ├─dracut-shutdown.service ├─graphical.target │ ├─systemd-readahead-done.service │ ├─systemd-readahead-done.timer │ └─systemd-update-utmp-runlevel.service └─shutdown.target ├─systemd-reboot.service └─final.target └─systemd-reboot.service",
"*systemctl start <systemd_unit> *",
"systemctl stop <name> .service",
"systemctl restart <name> .service",
"systemctl try-restart <name> .service",
"systemctl reload <name> .service",
"systemctl status <systemd_unit>",
"systemctl unmask <systemd_unit>",
"systemctl enable <systemd_unit>",
"systemctl disable <name> .service",
"systemctl mask <name> .service",
"systemctl get-default graphical.target",
"systemctl list-units --type target",
"systemctl set-default <name> .target",
"Example: systemctl set-default multi-user.target Removed /etc/systemd/system/default.target Created symlink /etc/systemd/system/default.target -> /usr/lib/systemd/system/multi-user.target",
"systemctl get-default multi-user.target",
"systemctl isolate default.target",
"systemctl list-units --type target",
"systemctl isolate <name> .target",
"Example: systemctl isolate multi-user.target",
"systemctl rescue Broadcast message from root@localhost on pts/0 (Fri 2023-03-24 18:23:15 CEST): The system is going down to rescue mode NOW!",
"systemctl --no-wall rescue",
"linux (USDroot)/vmlinuz-5.14.0-70.22.1.e19_0.x86_64 root=/dev/mapper/rhel-root ro crash kernel=auto resume=/dev/mapper/rhel-swap rd.lvm.lv/swap rhgb quiet",
"linux (USDroot)/vmlinuz-5.14.0-70.22.1.e19_0.x86_64 root=/dev/mapper/rhel-root ro crash kernel=auto resume=/dev/mapper/rhel-swap rd.lvm.lv/swap rhgb quiet systemd.unit= <name> .target",
"shutdown --poweroff hh:mm",
"shutdown --halt +m",
"shutdown -c",
"systemctl poweroff",
"systemctl halt",
"systemctl reboot",
"systemctl suspend",
"systemctl hibernate",
"systemctl hybrid-sleep",
"systemctl suspend-then-hibernate",
"HandlePowerKey=reboot",
"[org/gnome/settings-daemon/plugins/power] power-button-action=<value>",
"/org/gnome/settings-daemon/plugins/power/power-button-action",
"dconf update"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/configuring_basic_system_settings/managing-systemd_configuring-basic-system-settings |
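The operations in this chapter compose naturally into a single workflow for one unit. The sequence below is only an illustrative sketch, not an additional procedure from the chapter; it reuses httpd.service from the examples above, assumes the corresponding package is installed, and must be run as root.

# Make sure the unit is not masked, then enable it to start at boot and
# start it immediately.
systemctl unmask httpd.service
systemctl enable --now httpd.service

# Confirm the unit is running and enabled.
systemctl is-active httpd.service
systemctl is-enabled httpd.service

# After a configuration change, reload if the service supports it,
# otherwise restart.
systemctl reload-or-restart httpd.service

# Later, stop the unit and prevent it from starting at boot.
systemctl disable --now httpd.service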
Chapter 9. Installing a cluster on IBM Cloud in a disconnected environment | Chapter 9. Installing a cluster on IBM Cloud in a disconnected environment In OpenShift Container Platform 4.18, you can install a cluster in a restricted network by creating an internal mirror of the installation release content that is accessible to an existing Virtual Private Cloud (VPC) on IBM Cloud(R). 9.1. Prerequisites You reviewed details about the OpenShift Container Platform installation and update processes. You configured an IBM Cloud account to host the cluster. You have a container image registry that is accessible to the internet and your restricted network. The container image registry should mirror the contents of the OpenShift image registry and contain the installation media. For more information, see Mirroring images for a disconnected installation by using the oc-mirror plugin v2 . You have an existing VPC on IBM Cloud(R) that meets the following requirements: The VPC contains the mirror registry or has firewall rules or a peering connection to access the mirror registry that is hosted elsewhere. The VPC can access IBM Cloud(R) service endpoints using a public endpoint. If network restrictions limit access to public service endpoints, evaluate those services for alternate endpoints that might be available. For more information see Access to IBM service endpoints . You cannot use the VPC that the installation program provisions by default. If you plan on configuring endpoint gateways to use IBM Cloud(R) Virtual Private Endpoints, consider the following requirements: Endpoint gateway support is currently limited to the us-east and us-south regions. The VPC must allow traffic to and from the endpoint gateways. You can use the VPC's default security group, or a new security group, to allow traffic on port 443. For more information, see Allowing endpoint gateway traffic . If you use a firewall, you configured it to allow the sites that your cluster requires access to. You configured the ccoctl utility before you installed the cluster. For more information, see Configuring IAM for IBM Cloud . 9.2. About installations in restricted networks In OpenShift Container Platform 4.18, you can perform an installation that does not require an active connection to the internet to obtain software components. Restricted network installations can be completed using installer-provisioned infrastructure or user-provisioned infrastructure, depending on the cloud platform to which you are installing the cluster. 9.2.1. Required internet access and an installation host You complete the installation using a bastion host or portable device that can access both the internet and your closed network. You must use a host with internet access to: Download the installation program, the OpenShift CLI ( oc ), and the CCO utility ( ccoctl ). Use the installation program to locate the Red Hat Enterprise Linux CoreOS (RHCOS) image and create the installation configuration file. Use oc to extract ccoctl from the CCO container image. Use oc and ccoctl to configure IAM for IBM Cloud(R). 9.2.2. Access to a mirror registry To complete a restricted network installation, you must create a registry that mirrors the contents of the OpenShift image registry and contains the installation media. You can create this registry on a mirror host, which can access both the internet and your restricted network, or by using other methods that meet your organization's security restrictions. 
For more information on mirroring images for a disconnected installation, see "Additional resources". 9.2.3. Access to IBM service endpoints The installation program requires access to the following IBM Cloud(R) service endpoints: Cloud Object Storage DNS Services Global Search Global Tagging Identity Services Resource Controller Resource Manager VPC Note If you are specifying an IBM(R) Key Protect for IBM Cloud(R) root key as part of the installation process, the service endpoint for Key Protect is also required. By default, the public endpoint is used to access the service. If network restrictions limit access to public service endpoints, you can override the default behavior. Before deploying the cluster, you can update the installation configuration file ( install-config.yaml ) to specify the URI of an alternate service endpoint. For more information on usage, see "Additional resources". 9.2.4. Additional limits Clusters in restricted networks have the following additional limitations and restrictions: The ClusterVersion status includes an Unable to retrieve available updates error. By default, you cannot use the contents of the Developer Catalog because you cannot access the required image stream tags. Additional resources Mirroring images for a disconnected installation by using the oc-mirror plugin v2 Additional IBM Cloud configuration parameters 9.3. About using a custom VPC In OpenShift Container Platform 4.18, you can deploy a cluster into the subnets of an existing IBM(R) Virtual Private Cloud (VPC). Deploying OpenShift Container Platform into an existing VPC can help you avoid limit constraints in new accounts or more easily abide by the operational constraints that your company's guidelines set. If you cannot obtain the infrastructure creation permissions that are required to create the VPC yourself, use this installation option. Because the installation program cannot know what other components are in your existing subnets, it cannot choose subnet CIDRs and so forth. You must configure networking for the subnets to which you will install the cluster. 9.3.1. Requirements for using your VPC You must correctly configure the existing VPC and its subnets before you install the cluster. The installation program does not create the following components: NAT gateways Subnets Route tables VPC network The installation program cannot: Subdivide network ranges for the cluster to use Set route tables for the subnets Set VPC options like DHCP Note The installation program requires that you use the cloud-provided DNS server. Using a custom DNS server is not supported and causes the installation to fail. 9.3.2. VPC validation The VPC and all of the subnets must be in an existing resource group. The cluster is deployed to the existing VPC. As part of the installation, specify the following in the install-config.yaml file: The name of the existing resource group that contains the VPC and subnets ( networkResourceGroupName ) The name of the existing VPC ( vpcName ) The subnets that were created for control plane machines and compute machines ( controlPlaneSubnets and computeSubnets ) Note Additional installer-provisioned cluster resources are deployed to a separate resource group ( resourceGroupName ). You can specify this resource group before installing the cluster. If undefined, a new resource group is created for the cluster. To ensure that the subnets that you provide are suitable, the installation program confirms the following: All of the subnets that you specify exist. 
For each availability zone in the region, you specify: One subnet for control plane machines. One subnet for compute machines. The machine CIDR that you specified contains the subnets for the compute machines and control plane machines. Note Subnet IDs are not supported. 9.3.3. Isolation between clusters If you deploy OpenShift Container Platform to an existing network, the isolation of cluster services is reduced in the following ways: You can install multiple OpenShift Container Platform clusters in the same VPC. ICMP ingress is allowed to the entire network. TCP port 22 ingress (SSH) is allowed to the entire network. Control plane TCP 6443 ingress (Kubernetes API) is allowed to the entire network. Control plane TCP 22623 ingress (MCS) is allowed to the entire network. 9.3.4. Allowing endpoint gateway traffic If you are using IBM Cloud(R) Virtual Private endpoints, your Virtual Private Cloud (VPC) must be configured to allow traffic to and from the endpoint gateways. A VPC's default security group is configured to allow all outbound traffic to endpoint gateways. Therefore, the simplest way to allow traffic between your VPC and endpoint gateways is to modify the default security group to allow inbound traffic on port 443. Note If you choose to configure a new security group, the security group must be configured to allow both inbound and outbound traffic. Prerequisites You have installed the IBM Cloud(R) Command Line Interface utility ( ibmcloud ). Procedure Obtain the identifier for the default security group by running the following command: USD DEFAULT_SG=USD(ibmcloud is vpc <your_vpc_name> --output JSON | jq -r '.default_security_group.id') Add a rule that allows inbound traffic on port 443 by running the following command: USD ibmcloud is security-group-rule-add USDDEFAULT_SG inbound tcp --remote 0.0.0.0/0 --port-min 443 --port-max 443 Note Be sure that your endpoint gateways are configured to use this security group. 9.4. Generating a key pair for cluster node SSH access During an OpenShift Container Platform installation, you can provide an SSH public key to the installation program. The key is passed to the Red Hat Enterprise Linux CoreOS (RHCOS) nodes through their Ignition config files and is used to authenticate SSH access to the nodes. The key is added to the ~/.ssh/authorized_keys list for the core user on each node, which enables password-less authentication. After the key is passed to the nodes, you can use the key pair to SSH in to the RHCOS nodes as the user core . To access the nodes through SSH, the private key identity must be managed by SSH for your local user. If you want to SSH in to your cluster nodes to perform installation debugging or disaster recovery, you must provide the SSH public key during the installation process. The ./openshift-install gather command also requires the SSH public key to be in place on the cluster nodes. Important Do not skip this procedure in production environments, where disaster recovery and debugging is required. Note You must use a local key, not one that you configured with platform-specific approaches. Procedure If you do not have an existing SSH key pair on your local machine to use for authentication onto your cluster nodes, create one. For example, on a computer that uses a Linux operating system, run the following command: USD ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1 1 Specify the path and file name, such as ~/.ssh/id_ed25519 , of the new SSH key. 
If you have an existing key pair, ensure your public key is in your ~/.ssh directory. Note If you plan to install an OpenShift Container Platform cluster that uses the RHEL cryptographic libraries that have been submitted to NIST for FIPS 140-2/140-3 Validation on only the x86_64, ppc64le, and s390x architectures, do not create a key that uses the ed25519 algorithm. Instead, create a key that uses the rsa or ecdsa algorithm. View the public SSH key: USD cat <path>/<file_name>.pub For example, run the following to view the ~/.ssh/id_ed25519.pub public key: USD cat ~/.ssh/id_ed25519.pub Add the SSH private key identity to the SSH agent for your local user, if it has not already been added. SSH agent management of the key is required for password-less SSH authentication onto your cluster nodes, or if you want to use the ./openshift-install gather command. Note On some distributions, default SSH private key identities such as ~/.ssh/id_rsa and ~/.ssh/id_dsa are managed automatically. If the ssh-agent process is not already running for your local user, start it as a background task: USD eval "USD(ssh-agent -s)" Example output Agent pid 31874 Note If your cluster is in FIPS mode, only use FIPS-compliant algorithms to generate the SSH key. The key must be either RSA or ECDSA. Add your SSH private key to the ssh-agent: USD ssh-add <path>/<file_name> 1 1 Specify the path and file name for your SSH private key, such as ~/.ssh/id_ed25519. Example output Identity added: /home/<you>/<path>/<file_name> (<computer_name>) Next steps When you install OpenShift Container Platform, provide the SSH public key to the installation program. 9.5. Exporting the API key You must set the API key you created as a global variable; the installation program ingests the variable during startup to set the API key. Prerequisites You have created either a user API key or service ID API key for your IBM Cloud(R) account. Procedure Export your API key for your account as a global variable: USD export IC_API_KEY=<api_key> Important You must set the variable name exactly as specified; the installation program expects the variable name to be present during startup. 9.6. Downloading the RHCOS cluster image The installation program requires the Red Hat Enterprise Linux CoreOS (RHCOS) image to install the cluster. While optional, downloading the Red Hat Enterprise Linux CoreOS (RHCOS) image before deploying removes the need for internet access when creating the cluster. Use the installation program to locate and download the Red Hat Enterprise Linux CoreOS (RHCOS) image. Prerequisites The host running the installation program has internet access. Procedure Change to the directory that contains the installation program and run the following command: USD ./openshift-install coreos print-stream-json Use the output of the command to find the location of the IBM Cloud(R) image. Example output: "release": "415.92.202311241643-0", "formats": { "qcow2.gz": { "disk": { "location": "https://rhcos.mirror.openshift.com/art/storage/prod/streams/4.15-9.2/builds/415.92.202311241643-0/x86_64/rhcos-415.92.202311241643-0-ibmcloud.x86_64.qcow2.gz", "sha256": "6b562dee8431bec3b93adeac1cfefcd5e812d41e3b7d78d3e28319870ffc9eae", "uncompressed-sha256": "5a0f9479505e525a30367b6a6a6547c86a8f03136f453c1da035f3aa5daa8bc9" Download and extract the image archive. Make the image available on the host that the installation program uses to create the cluster. 9.7.
Manually creating the installation configuration file Installing the cluster requires that you manually create the installation configuration file. Prerequisites You have obtained the OpenShift Container Platform installation program and the pull secret for your cluster. You have the imageContentSourcePolicy.yaml file that was created when you mirrored your registry. You have obtained the contents of the certificate for your mirror registry. Procedure Create an installation directory to store your required installation assets in: USD mkdir <installation_directory> Important You must create a directory. Some installation assets, like bootstrap X.509 certificates have short expiration intervals, so you must not reuse an installation directory. If you want to reuse individual files from another cluster installation, you can copy them into your directory. However, the file names for the installation assets might change between releases. Use caution when copying installation files from an earlier OpenShift Container Platform version. Customize the sample install-config.yaml file template that is provided and save it in the <installation_directory> . Note You must name this configuration file install-config.yaml . When customizing the sample template, be sure to provide the information that is required for an installation in a restricted network: Update the pullSecret value to contain the authentication information for your registry: pullSecret: '{"auths":{"<mirror_host_name>:5000": {"auth": "<credentials>","email": "[email protected]"}}}' For <mirror_host_name> , specify the registry domain name that you specified in the certificate for your mirror registry, and for <credentials> , specify the base64-encoded user name and password for your mirror registry. Add the additionalTrustBundle parameter and value. additionalTrustBundle: | -----BEGIN CERTIFICATE----- ZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZ -----END CERTIFICATE----- The value must be the contents of the certificate file that you used for your mirror registry. The certificate file can be an existing, trusted certificate authority, or the self-signed certificate that you generated for the mirror registry. Define the network and subnets for the VPC to install the cluster in under the parent platform.ibmcloud field: vpcName: <existing_vpc> controlPlaneSubnets: <control_plane_subnet> computeSubnets: <compute_subnet> For platform.ibmcloud.vpcName , specify the name for the existing IBM Cloud Virtual Private Cloud (VPC) network. For platform.ibmcloud.controlPlaneSubnets and platform.ibmcloud.computeSubnets , specify the existing subnets to deploy the control plane machines and compute machines, respectively. Add the image content resources, which resemble the following YAML excerpt: imageContentSources: - mirrors: - <mirror_host_name>:5000/<repo_name>/release source: quay.io/openshift-release-dev/ocp-release - mirrors: - <mirror_host_name>:5000/<repo_name>/release source: registry.redhat.io/ocp/release For these values, use the imageContentSourcePolicy.yaml file that was created when you mirrored the registry. If network restrictions limit the use of public endpoints to access the required IBM Cloud(R) services, add the serviceEndpoints stanza to platform.ibmcloud to specify an alternate service endpoint. Note You can specify only one alternate service endpoint for each service. Example of using alternate services endpoints # ... 
serviceEndpoints: - name: IAM url: <iam_alternate_endpoint_url> - name: VPC url: <vpc_alternate_endpoint_url> - name: ResourceController url: <resource_controller_alternate_endpoint_url> - name: ResourceManager url: <resource_manager_alternate_endpoint_url> - name: DNSServices url: <dns_services_alternate_endpoint_url> - name: COS url: <cos_alternate_endpoint_url> - name: GlobalSearch url: <global_search_alternate_endpoint_url> - name: GlobalTagging url: <global_tagging_alternate_endpoint_url> # ... Optional: Set the publishing strategy to Internal : publish: Internal By setting this option, you create an internal Ingress Controller and a private load balancer. Note If you use the default value of External , your network must be able to access the public endpoint for IBM Cloud(R) Internet Services (CIS). CIS is not enabled for Virtual Private Endpoints. Back up the install-config.yaml file so that you can use it to install multiple clusters. Important The install-config.yaml file is consumed during the step of the installation process. You must back it up now. 9.7.1. Configuring the cluster-wide proxy during installation Production environments can deny direct access to the internet and instead have an HTTP or HTTPS proxy available. You can configure a new OpenShift Container Platform cluster to use a proxy by configuring the proxy settings in the install-config.yaml file. Prerequisites You have an existing install-config.yaml file. You reviewed the sites that your cluster requires access to and determined whether any of them need to bypass the proxy. By default, all cluster egress traffic is proxied, including calls to hosting cloud provider APIs. You added sites to the Proxy object's spec.noProxy field to bypass the proxy if necessary. Note The Proxy object status.noProxy field is populated with the values of the networking.machineNetwork[].cidr , networking.clusterNetwork[].cidr , and networking.serviceNetwork[] fields from your installation configuration. For installations on Amazon Web Services (AWS), Google Cloud Platform (GCP), Microsoft Azure, and Red Hat OpenStack Platform (RHOSP), the Proxy object status.noProxy field is also populated with the instance metadata endpoint ( 169.254.169.254 ). Procedure Edit your install-config.yaml file and add the proxy settings. For example: apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5 1 A proxy URL to use for creating HTTP connections outside the cluster. The URL scheme must be http . 2 A proxy URL to use for creating HTTPS connections outside the cluster. 3 A comma-separated list of destination domain names, IP addresses, or other network CIDRs to exclude from proxying. Preface a domain with . to match subdomains only. For example, .y.com matches x.y.com , but not y.com . Use * to bypass the proxy for all destinations. 4 If provided, the installation program generates a config map that is named user-ca-bundle in the openshift-config namespace that contains one or more additional CA certificates that are required for proxying HTTPS connections. 
The Cluster Network Operator then creates a trusted-ca-bundle config map that merges these contents with the Red Hat Enterprise Linux CoreOS (RHCOS) trust bundle, and this config map is referenced in the trustedCA field of the Proxy object. The additionalTrustBundle field is required unless the proxy's identity certificate is signed by an authority from the RHCOS trust bundle. 5 Optional: The policy to determine the configuration of the Proxy object to reference the user-ca-bundle config map in the trustedCA field. The allowed values are Proxyonly and Always . Use Proxyonly to reference the user-ca-bundle config map only when http/https proxy is configured. Use Always to always reference the user-ca-bundle config map. The default value is Proxyonly . Note The installation program does not support the proxy readinessEndpoints field. Note If the installer times out, restart and then complete the deployment by using the wait-for command of the installer. For example: USD ./openshift-install wait-for install-complete --log-level debug Save the file and reference it when installing OpenShift Container Platform. The installation program creates a cluster-wide proxy that is named cluster that uses the proxy settings in the provided install-config.yaml file. If no proxy settings are provided, a cluster Proxy object is still created, but it will have a nil spec . Note Only the Proxy object named cluster is supported, and no additional proxies can be created. Additional resources Installation configuration parameters for IBM Cloud(R) 9.7.2. Minimum resource requirements for cluster installation Each cluster machine must meet the following minimum requirements: Table 9.1. Minimum resource requirements Machine Operating System vCPU Virtual RAM Storage Input/Output Per Second (IOPS) Bootstrap RHCOS 4 16 GB 100 GB 300 Control plane RHCOS 4 16 GB 100 GB 300 Compute RHCOS 2 8 GB 100 GB 300 Note For OpenShift Container Platform version 4.18, RHCOS is based on RHEL version 9.4, which updates the micro-architecture requirements. The following list contains the minimum instruction set architectures (ISA) that each architecture requires: x86-64 architecture requires x86-64-v2 ISA ARM64 architecture requires ARMv8.0-A ISA IBM Power architecture requires Power 9 ISA s390x architecture requires z14 ISA For more information, see Architectures (RHEL documentation). If an instance type for your platform meets the minimum requirements for cluster machines, it is supported to use in OpenShift Container Platform. 9.7.3. Tested instance types for IBM Cloud The following IBM Cloud(R) instance types have been tested with OpenShift Container Platform. Example 9.1. Machine series bx2-8x32 bx2d-4x16 bx3d-4x20 cx2-8x16 cx2d-4x8 cx3d-8x20 gx2-8x64x1v100 gx3-16x80x1l4 gx3d-160x1792x8h100 mx2-8x64 mx2d-4x32 mx3d-4x40 ox2-8x64 ux2d-2x56 vx2d-4x56 9.7.4. Sample customized install-config.yaml file for IBM Cloud You can customize the install-config.yaml file to specify more details about your OpenShift Container Platform cluster's platform or modify the values of the required parameters. Important This sample YAML file is provided for reference only. You must obtain your install-config.yaml file by using the installation program and then modify it. 
apiVersion: v1 baseDomain: example.com 1 controlPlane: 2 3 hyperthreading: Enabled 4 name: master platform: ibm-cloud: {} replicas: 3 compute: 5 6 - hyperthreading: Enabled 7 name: worker platform: ibmcloud: {} replicas: 3 metadata: name: test-cluster 8 networking: clusterNetwork: - cidr: 10.128.0.0/14 9 hostPrefix: 23 machineNetwork: - cidr: 10.0.0.0/16 10 networkType: OVNKubernetes 11 serviceNetwork: - 172.30.0.0/16 platform: ibmcloud: region: us-east 12 resourceGroupName: us-east-example-cluster-rg 13 serviceEndpoints: 14 - name: IAM url: https://private.us-east.iam.cloud.ibm.com - name: VPC url: https://us-east.private.iaas.cloud.ibm.com/v1 - name: ResourceController url: https://private.us-east.resource-controller.cloud.ibm.com - name: ResourceManager url: https://private.us-east.resource-controller.cloud.ibm.com - name: DNSServices url: https://api.private.dns-svcs.cloud.ibm.com/v1 - name: COS url: https://s3.direct.us-east.cloud-object-storage.appdomain.cloud - name: GlobalSearch url: https://api.private.global-search-tagging.cloud.ibm.com - name: GlobalTagging url: https://tags.private.global-search-tagging.cloud.ibm.com networkResourceGroupName: us-east-example-existing-network-rg 15 vpcName: us-east-example-network-1 16 controlPlaneSubnets: 17 - us-east-example-network-1-cp-us-east-1 - us-east-example-network-1-cp-us-east-2 - us-east-example-network-1-cp-us-east-3 computeSubnets: 18 - us-east-example-network-1-compute-us-east-1 - us-east-example-network-1-compute-us-east-2 - us-east-example-network-1-compute-us-east-3 credentialsMode: Manual pullSecret: '{"auths":{"<local_registry>": {"auth": "<credentials>","email": "[email protected]"}}}' 19 fips: false 20 sshKey: ssh-ed25519 AAAA... 21 additionalTrustBundle: | 22 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- imageContentSources: 23 - mirrors: - <local_registry>/<local_repository_name>/release source: quay.io/openshift-release-dev/ocp-release - mirrors: - <local_registry>/<local_repository_name>/release source: quay.io/openshift-release-dev/ocp-v4.0-art-dev 1 8 12 Required. 2 5 If you do not provide these parameters and values, the installation program provides the default value. 3 6 The controlPlane section is a single mapping, but the compute section is a sequence of mappings. To meet the requirements of the different data structures, the first line of the compute section must begin with a hyphen, - , and the first line of the controlPlane section must not. Only one control plane pool is used. 4 7 Enables or disables simultaneous multithreading, also known as Hyper-Threading. By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. You can disable it by setting the parameter value to Disabled . If you disable simultaneous multithreading in some cluster machines, you must disable it in all cluster machines. Important If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance. Use larger machine types, such as n1-standard-8 , for your machines if you disable simultaneous multithreading. 9 The machine CIDR must contain the subnets for the compute machines and control plane machines. 10 The CIDR must contain the subnets defined in platform.ibmcloud.controlPlaneSubnets and platform.ibmcloud.computeSubnets . 11 The cluster network plugin to install. The default value OVNKubernetes is the only supported value. 13 The name of an existing resource group. 
All installer-provisioned cluster resources are deployed to this resource group. If undefined, a new resource group is created for the cluster. 14 Based on the network restrictions of the VPC, specify alternate service endpoints as needed. This overrides the default public endpoint for the service. 15 Specify the name of the resource group that contains the existing virtual private cloud (VPC). The existing VPC and subnets should be in this resource group. The cluster will be installed to this VPC. 16 Specify the name of an existing VPC. 17 Specify the name of the existing subnets to which to deploy the control plane machines. The subnets must belong to the VPC that you specified. Specify a subnet for each availability zone in the region. 18 Specify the name of the existing subnets to which to deploy the compute machines. The subnets must belong to the VPC that you specified. Specify a subnet for each availability zone in the region. 19 For <local_registry> , specify the registry domain name, and optionally the port, that your mirror registry uses to serve content. For example, registry.example.com or registry.example.com:5000. For <credentials> , specify the base64-encoded user name and password for your mirror registry. 20 Enables or disables FIPS mode. By default, FIPS mode is not enabled. If FIPS mode is enabled, the Red Hat Enterprise Linux CoreOS (RHCOS) machines that OpenShift Container Platform runs on bypass the default Kubernetes cryptography suite and use the cryptography modules that are provided with RHCOS instead. Important The use of FIPS Validated or Modules in Process cryptographic libraries is only supported on OpenShift Container Platform deployments on the x86_64 architecture. 21 Optional: provide the sshKey value that you use to access the machines in your cluster. 22 Provide the contents of the certificate file that you used for your mirror registry. 23 Provide these values from the metadata.name: release-0 section of the imageContentSourcePolicy.yaml file that was created when you mirrored the registry. Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. 9.8. Installing the OpenShift CLI You can install the OpenShift CLI ( oc ) to interact with OpenShift Container Platform from a command-line interface. You can install oc on Linux, Windows, or macOS. Important If you installed an earlier version of oc , you cannot use it to complete all of the commands in OpenShift Container Platform 4.18. Download and install the new version of oc . Installing the OpenShift CLI on Linux You can install the OpenShift CLI ( oc ) binary on Linux by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the architecture from the Product Variant drop-down list. Select the appropriate version from the Version drop-down list. Click Download Now to the OpenShift v4.18 Linux Clients entry and save the file. Unpack the archive: USD tar xvf <file> Place the oc binary in a directory that is on your PATH . To check your PATH , execute the following command: USD echo USDPATH Verification After you install the OpenShift CLI, it is available using the oc command: USD oc <command> Installing the OpenShift CLI on Windows You can install the OpenShift CLI ( oc ) binary on Windows by using the following procedure. 
Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version from the Version drop-down list. Click Download Now next to the OpenShift v4.18 Windows Client entry and save the file. Unzip the archive with a ZIP program. Move the oc binary to a directory that is on your PATH . To check your PATH , open the command prompt and execute the following command: C:\> path Verification After you install the OpenShift CLI, it is available using the oc command: C:\> oc <command> Installing the OpenShift CLI on macOS You can install the OpenShift CLI ( oc ) binary on macOS by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version from the Version drop-down list. Click Download Now next to the OpenShift v4.18 macOS Clients entry and save the file. Note For macOS arm64, choose the OpenShift v4.18 macOS arm64 Client entry. Unpack and unzip the archive. Move the oc binary to a directory on your PATH. To check your PATH , open a terminal and execute the following command: USD echo USDPATH Verification Verify your installation by using an oc command: USD oc <command> 9.9. Manually creating IAM Installing the cluster requires that the Cloud Credential Operator (CCO) operate in manual mode. While the installation program configures the CCO for manual mode, you must specify the identity and access management secrets for your cloud provider. You can use the Cloud Credential Operator (CCO) utility ( ccoctl ) to create the required IBM Cloud(R) resources. Prerequisites You have configured the ccoctl binary. You have an existing install-config.yaml file. Procedure Edit the install-config.yaml configuration file so that it contains the credentialsMode parameter set to Manual . Example install-config.yaml configuration file apiVersion: v1 baseDomain: cluster1.example.com credentialsMode: Manual 1 compute: - architecture: amd64 hyperthreading: Enabled 1 This line is added to set the credentialsMode parameter to Manual . To generate the manifests, run the following command from the directory that contains the installation program: USD ./openshift-install create manifests --dir <installation_directory> From the directory that contains the installation program, set a USDRELEASE_IMAGE variable with the release image from your installation file by running the following command: USD RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}') Extract the list of CredentialsRequest custom resources (CRs) from the OpenShift Container Platform release image by running the following command: USD oc adm release extract \ --from=USDRELEASE_IMAGE \ --credentials-requests \ --included \ 1 --install-config=<path_to_directory_with_installation_configuration>/install-config.yaml \ 2 --to=<path_to_directory_for_credentials_requests> 3 1 The --included parameter includes only the manifests that your specific cluster configuration requires. 2 Specify the location of the install-config.yaml file. 3 Specify the path to the directory where you want to store the CredentialsRequest objects. If the specified directory does not exist, this command creates it. This command creates a YAML file for each CredentialsRequest object. 
Sample CredentialsRequest object apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: labels: controller-tools.k8s.io: "1.0" name: openshift-image-registry-ibmcos namespace: openshift-cloud-credential-operator spec: secretRef: name: installer-cloud-credentials namespace: openshift-image-registry providerSpec: apiVersion: cloudcredential.openshift.io/v1 kind: IBMCloudProviderSpec policies: - attributes: - name: serviceName value: cloud-object-storage roles: - crn:v1:bluemix:public:iam::::role:Viewer - crn:v1:bluemix:public:iam::::role:Operator - crn:v1:bluemix:public:iam::::role:Editor - crn:v1:bluemix:public:iam::::serviceRole:Reader - crn:v1:bluemix:public:iam::::serviceRole:Writer - attributes: - name: resourceType value: resource-group roles: - crn:v1:bluemix:public:iam::::role:Viewer Create the service ID for each credential request, assign the policies defined, create an API key, and generate the secret: USD ccoctl ibmcloud create-service-id \ --credentials-requests-dir=<path_to_credential_requests_directory> \ 1 --name=<cluster_name> \ 2 --output-dir=<installation_directory> \ 3 --resource-group-name=<resource_group_name> 4 1 Specify the directory containing the files for the component CredentialsRequest objects. 2 Specify the name of the OpenShift Container Platform cluster. 3 Optional: Specify the directory in which you want the ccoctl utility to create objects. By default, the utility creates objects in the directory in which the commands are run. 4 Optional: Specify the name of the resource group used for scoping the access policies. Note If your cluster uses Technology Preview features that are enabled by the TechPreviewNoUpgrade feature set, you must include the --enable-tech-preview parameter. If an incorrect resource group name is provided, the installation fails during the bootstrap phase. To find the correct resource group name, run the following command: USD grep resourceGroupName <installation_directory>/manifests/cluster-infrastructure-02-config.yml Verification Ensure that the appropriate secrets were generated in your cluster's manifests directory. 9.10. Deploying the cluster You can install OpenShift Container Platform on a compatible cloud platform. Important You can run the create cluster command of the installation program only once, during initial installation. Prerequisites You have configured an account with the cloud platform that hosts your cluster. You have the OpenShift Container Platform installation program and the pull secret for your cluster. If the Red Hat Enterprise Linux CoreOS (RHCOS) image is available locally, the host running the installation program does not require internet access. You have verified that the cloud provider account on your host has the correct permissions to deploy the cluster. An account with incorrect permissions causes the installation process to fail with an error message that displays the missing permissions. Procedure Export the OPENSHIFT_INSTALL_OS_IMAGE_OVERRIDE variable to specify the location of the Red Hat Enterprise Linux CoreOS (RHCOS) image by running the following command: USD export OPENSHIFT_INSTALL_OS_IMAGE_OVERRIDE="<path_to_image>/rhcos-<image_version>-ibmcloud.x86_64.qcow2.gz" Change to the directory that contains the installation program and initialize the cluster deployment: USD ./openshift-install create cluster --dir <installation_directory> \ 1 --log-level=info 2 1 For <installation_directory> , specify the location of your customized ./install-config.yaml file. 
2 To view different installation details, specify warn , debug , or error instead of info . Verification When the cluster deployment completes successfully: The terminal displays directions for accessing your cluster, including a link to the web console and credentials for the kubeadmin user. Credential information also outputs to <installation_directory>/.openshift_install.log . Important Do not delete the installation program or the files that the installation program creates. Both are required to delete the cluster. Example output ... INFO Install complete! INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: "kubeadmin", and password: "password" INFO Time elapsed: 36m22s Important The Ignition config files that the installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information. It is recommended that you use Ignition config files within 12 hours after they are generated because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation. 9.11. Logging in to the cluster by using the CLI You can log in to your cluster as a default system user by exporting the cluster kubeconfig file. The kubeconfig file contains information about the cluster that is used by the CLI to connect a client to the correct cluster and API server. The file is specific to a cluster and is created during OpenShift Container Platform installation. Prerequisites You deployed an OpenShift Container Platform cluster. You installed the oc CLI. Procedure Export the kubeadmin credentials: USD export KUBECONFIG=<installation_directory>/auth/kubeconfig 1 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. Verify you can run oc commands successfully using the exported configuration: USD oc whoami Example output system:admin Additional resources Accessing the web console 9.12. Post installation Complete the following steps to complete the configuration of your cluster. 9.12.1. Disabling the default OperatorHub catalog sources Operator catalogs that source content provided by Red Hat and community projects are configured for OperatorHub by default during an OpenShift Container Platform installation. In a restricted network environment, you must disable the default catalogs as a cluster administrator. Procedure Disable the sources for the default catalogs by adding disableAllDefaultSources: true to the OperatorHub object: USD oc patch OperatorHub cluster --type json \ -p '[{"op": "add", "path": "/spec/disableAllDefaultSources", "value": true}]' Tip Alternatively, you can use the web console to manage catalog sources. 
From the Administration Cluster Settings Configuration OperatorHub page, click the Sources tab, where you can create, update, delete, disable, and enable individual sources. 9.12.2. Installing the policy resources into the cluster Mirroring the OpenShift Container Platform content using the oc-mirror OpenShift CLI (oc) plugin creates resources, which include catalogSource-certified-operator-index.yaml and imageContentSourcePolicy.yaml . The ImageContentSourcePolicy resource associates the mirror registry with the source registry and redirects image pull requests from the online registries to the mirror registry. The CatalogSource resource is used by Operator Lifecycle Manager (OLM) Classic to retrieve information about the available Operators in the mirror registry, which lets users discover and install Operators. Note OLM v1 uses the ClusterCatalog resource to retrieve information about the available cluster extensions in the mirror registry. The oc-mirror plugin v1 does not generate ClusterCatalog resources automatically; you must manually create them. The oc-mirror plugin v2 does, however, generate ClusterCatalog resources automatically. For more information on creating and applying ClusterCatalog resources, see "Adding a catalog to a cluster" in "Extensions". After you install the cluster, you must install these resources into the cluster. Prerequisites You have mirrored the image set to the registry mirror in the disconnected environment. You have access to the cluster as a user with the cluster-admin role. Procedure Log in to the OpenShift CLI as a user with the cluster-admin role. Apply the YAML files from the results directory to the cluster: USD oc apply -f ./oc-mirror-workspace/results-<id>/ Verification Verify that the ImageContentSourcePolicy resources were successfully installed: USD oc get imagecontentsourcepolicy Verify that the CatalogSource resources were successfully installed: USD oc get catalogsource --all-namespaces 9.13. Telemetry access for OpenShift Container Platform In OpenShift Container Platform 4.18, the Telemetry service, which runs by default to provide metrics about cluster health and the success of updates, requires internet access. If your cluster is connected to the internet, Telemetry runs automatically, and your cluster is registered to OpenShift Cluster Manager . After you confirm that your OpenShift Cluster Manager inventory is correct, either maintained automatically by Telemetry or manually by using OpenShift Cluster Manager, use subscription watch to track your OpenShift Container Platform subscriptions at the account or multi-cluster level. Additional resources About remote health monitoring 9.14. steps Customize your cluster . Optional: Opt out of remote health reporting . | [
"DEFAULT_SG=USD(ibmcloud is vpc <your_vpc_name> --output JSON | jq -r '.default_security_group.id')",
"ibmcloud is security-group-rule-add USDDEFAULT_SG inbound tcp --remote 0.0.0.0/0 --port-min 443 --port-max 443",
"ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1",
"cat <path>/<file_name>.pub",
"cat ~/.ssh/id_ed25519.pub",
"eval \"USD(ssh-agent -s)\"",
"Agent pid 31874",
"ssh-add <path>/<file_name> 1",
"Identity added: /home/<you>/<path>/<file_name> (<computer_name>)",
"export IC_API_KEY=<api_key>",
"./openshift-install coreos print-stream-json",
"\"release\": \"415.92.202311241643-0\", \"formats\": { \"qcow2.gz\": { \"disk\": { \"location\": \"https://rhcos.mirror.openshift.com/art/storage/prod/streams/4.15-9.2/builds/415.92.202311241643-0/x86_64/rhcos-415.92.202311241643-0-ibmcloud.x86_64.qcow2.gz\", \"sha256\": \"6b562dee8431bec3b93adeac1cfefcd5e812d41e3b7d78d3e28319870ffc9eae\", \"uncompressed-sha256\": \"5a0f9479505e525a30367b6a6a6547c86a8f03136f453c1da035f3aa5daa8bc9\"",
"mkdir <installation_directory>",
"pullSecret: '{\"auths\":{\"<mirror_host_name>:5000\": {\"auth\": \"<credentials>\",\"email\": \"[email protected]\"}}}'",
"additionalTrustBundle: | -----BEGIN CERTIFICATE----- ZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZ -----END CERTIFICATE-----",
"vpcName: <existing_vpc> controlPlaneSubnets: <control_plane_subnet> computeSubnets: <compute_subnet>",
"imageContentSources: - mirrors: - <mirror_host_name>:5000/<repo_name>/release source: quay.io/openshift-release-dev/ocp-release - mirrors: - <mirror_host_name>:5000/<repo_name>/release source: registry.redhat.io/ocp/release",
"serviceEndpoints: - name: IAM url: <iam_alternate_endpoint_url> - name: VPC url: <vpc_alternate_endpoint_url> - name: ResourceController url: <resource_controller_alternate_endpoint_url> - name: ResourceManager url: <resource_manager_alternate_endpoint_url> - name: DNSServices url: <dns_services_alternate_endpoint_url> - name: COS url: <cos_alternate_endpoint_url> - name: GlobalSearch url: <global_search_alternate_endpoint_url> - name: GlobalTagging url: <global_tagging_alternate_endpoint_url>",
"publish: Internal",
"apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5",
"./openshift-install wait-for install-complete --log-level debug",
"apiVersion: v1 baseDomain: example.com 1 controlPlane: 2 3 hyperthreading: Enabled 4 name: master platform: ibm-cloud: {} replicas: 3 compute: 5 6 - hyperthreading: Enabled 7 name: worker platform: ibmcloud: {} replicas: 3 metadata: name: test-cluster 8 networking: clusterNetwork: - cidr: 10.128.0.0/14 9 hostPrefix: 23 machineNetwork: - cidr: 10.0.0.0/16 10 networkType: OVNKubernetes 11 serviceNetwork: - 172.30.0.0/16 platform: ibmcloud: region: us-east 12 resourceGroupName: us-east-example-cluster-rg 13 serviceEndpoints: 14 - name: IAM url: https://private.us-east.iam.cloud.ibm.com - name: VPC url: https://us-east.private.iaas.cloud.ibm.com/v1 - name: ResourceController url: https://private.us-east.resource-controller.cloud.ibm.com - name: ResourceManager url: https://private.us-east.resource-controller.cloud.ibm.com - name: DNSServices url: https://api.private.dns-svcs.cloud.ibm.com/v1 - name: COS url: https://s3.direct.us-east.cloud-object-storage.appdomain.cloud - name: GlobalSearch url: https://api.private.global-search-tagging.cloud.ibm.com - name: GlobalTagging url: https://tags.private.global-search-tagging.cloud.ibm.com networkResourceGroupName: us-east-example-existing-network-rg 15 vpcName: us-east-example-network-1 16 controlPlaneSubnets: 17 - us-east-example-network-1-cp-us-east-1 - us-east-example-network-1-cp-us-east-2 - us-east-example-network-1-cp-us-east-3 computeSubnets: 18 - us-east-example-network-1-compute-us-east-1 - us-east-example-network-1-compute-us-east-2 - us-east-example-network-1-compute-us-east-3 credentialsMode: Manual pullSecret: '{\"auths\":{\"<local_registry>\": {\"auth\": \"<credentials>\",\"email\": \"[email protected]\"}}}' 19 fips: false 20 sshKey: ssh-ed25519 AAAA... 21 additionalTrustBundle: | 22 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- imageContentSources: 23 - mirrors: - <local_registry>/<local_repository_name>/release source: quay.io/openshift-release-dev/ocp-release - mirrors: - <local_registry>/<local_repository_name>/release source: quay.io/openshift-release-dev/ocp-v4.0-art-dev",
"tar xvf <file>",
"echo USDPATH",
"oc <command>",
"C:\\> path",
"C:\\> oc <command>",
"echo USDPATH",
"oc <command>",
"apiVersion: v1 baseDomain: cluster1.example.com credentialsMode: Manual 1 compute: - architecture: amd64 hyperthreading: Enabled",
"./openshift-install create manifests --dir <installation_directory>",
"RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}')",
"oc adm release extract --from=USDRELEASE_IMAGE --credentials-requests --included \\ 1 --install-config=<path_to_directory_with_installation_configuration>/install-config.yaml \\ 2 --to=<path_to_directory_for_credentials_requests> 3",
"apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: labels: controller-tools.k8s.io: \"1.0\" name: openshift-image-registry-ibmcos namespace: openshift-cloud-credential-operator spec: secretRef: name: installer-cloud-credentials namespace: openshift-image-registry providerSpec: apiVersion: cloudcredential.openshift.io/v1 kind: IBMCloudProviderSpec policies: - attributes: - name: serviceName value: cloud-object-storage roles: - crn:v1:bluemix:public:iam::::role:Viewer - crn:v1:bluemix:public:iam::::role:Operator - crn:v1:bluemix:public:iam::::role:Editor - crn:v1:bluemix:public:iam::::serviceRole:Reader - crn:v1:bluemix:public:iam::::serviceRole:Writer - attributes: - name: resourceType value: resource-group roles: - crn:v1:bluemix:public:iam::::role:Viewer",
"ccoctl ibmcloud create-service-id --credentials-requests-dir=<path_to_credential_requests_directory> \\ 1 --name=<cluster_name> \\ 2 --output-dir=<installation_directory> \\ 3 --resource-group-name=<resource_group_name> 4",
"grep resourceGroupName <installation_directory>/manifests/cluster-infrastructure-02-config.yml",
"export OPENSHIFT_INSTALL_OS_IMAGE_OVERRIDE=\"<path_to_image>/rhcos-<image_version>-ibmcloud.x86_64.qcow2.gz\"",
"./openshift-install create cluster --dir <installation_directory> \\ 1 --log-level=info 2",
"INFO Install complete! INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: \"kubeadmin\", and password: \"password\" INFO Time elapsed: 36m22s",
"export KUBECONFIG=<installation_directory>/auth/kubeconfig 1",
"oc whoami",
"system:admin",
"oc patch OperatorHub cluster --type json -p '[{\"op\": \"add\", \"path\": \"/spec/disableAllDefaultSources\", \"value\": true}]'",
"oc apply -f ./oc-mirror-workspace/results-<id>/",
"oc get imagecontentsourcepolicy",
"oc get catalogsource --all-namespaces"
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.18/html/installing_on_ibm_cloud/installing-ibm-cloud-restricted |
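To make the overall flow of the restricted-network IBM Cloud installation above easier to follow, the following condensed sketch strings together commands that appear earlier in this section. It reuses the same placeholder names (<installation_directory>, <cluster_name>, and so on), assumes that install-config.yaml has already been customized and saved as described above, and omits other steps; it is an illustration, not a replacement for the full procedure.

    # Generate manifests and extract the CredentialsRequest objects for manual credentials mode.
    ./openshift-install create manifests --dir <installation_directory>
    RELEASE_IMAGE=$(./openshift-install version | awk '/release image/ {print $3}')
    oc adm release extract --from=$RELEASE_IMAGE --credentials-requests --included \
        --install-config=<path_to_directory_with_installation_configuration>/install-config.yaml \
        --to=<path_to_directory_for_credentials_requests>

    # Create the IBM Cloud service IDs, API keys, and secrets with ccoctl.
    ccoctl ibmcloud create-service-id \
        --credentials-requests-dir=<path_to_directory_for_credentials_requests> \
        --name=<cluster_name> \
        --output-dir=<installation_directory>

    # Point the installer at the locally available RHCOS image and deploy the cluster.
    export OPENSHIFT_INSTALL_OS_IMAGE_OVERRIDE="<path_to_image>/rhcos-<image_version>-ibmcloud.x86_64.qcow2.gz"
    ./openshift-install create cluster --dir <installation_directory> --log-level=info

    # Log in, disable the default OperatorHub catalog sources, and apply the oc-mirror results.
    export KUBECONFIG=<installation_directory>/auth/kubeconfig
    oc whoami
    oc patch OperatorHub cluster --type json \
        -p '[{"op": "add", "path": "/spec/disableAllDefaultSources", "value": true}]'
    oc apply -f ./oc-mirror-workspace/results-<id>/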
Release Notes for Streams for Apache Kafka 2.9 on RHEL | Release Notes for Streams for Apache Kafka 2.9 on RHEL Red Hat Streams for Apache Kafka 2.9 Highlights of what's new and what's changed with this release of Streams for Apache Kafka on Red Hat Enterprise Linux | null | https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.9/html/release_notes_for_streams_for_apache_kafka_2.9_on_rhel/index |
17.7. Changing the Trust Settings of a CA Certificate | 17.7. Changing the Trust Settings of a CA Certificate Certificate System subsystems use the CA certificates in their certificate databases to validate certificates received during an SSL-enabled communication. It can be necessary to change the trust settings on a CA stored in the certificate database, temporarily or permanently. For example, if there is a problem with access or compromised certificates, marking the CA certificate as untrusted prevents entities with certificates signed by that CA from authenticating to the Certificate System. When the problem is resolved, the CA can be marked as trusted again. To untrust a CA permanently, consider removing its certificate from the trust database. For instructions, see Section 17.6.3, "Deleting Certificates from the Database" . 17.7.1. Changing Trust Settings through the Console Note pkiconsole is being deprecated. To change the trust setting of a CA certificate, do the following: Open the subsystem console. In the Configuration tab, select System Keys and Certificates from the left navigation tree. Select the CA certificates tab. Select the CA certificate to modify, and click Edit . A prompt opens which reads The Certificate chain is (un)trusted, are you sure you want to (un)trust it? Clicking yes changes the trust setting of the certificate chain; pressing no preserves the original trust relationship. 17.7.2. Changing Trust Settings Using certutil To change the trust setting of a certificate using certutil , do the following: Open the instance's certificate databases directory. List the certificates in the database by running the certutil command with the -L option. For example: Change the trust settings for the certificate by running the certutil command with the -M option. For example: List the certificates again to confirm that the certificate trust was changed. For information about using the certutil command, see http://www.mozilla.org/projects/security/pki/nss/tools/certutil.html . | [
"pkiconsole https://server.example.com: secure_port / subsystem_type",
"cd /var/lib/pki/ instance_name /alias",
"certutil -L -d . Certificate Authority - Example Domain CT,c, subsystemCert cert- instance_name u,u,u Server-Cert cert- instance_name u,u,u",
"certutil -M -n cert_nickname -t trust -d .",
"certutil -M -n \"Certificate Authority - Example Domain\" -t TCu,TCu,TCu -d .",
"certutil -L -d . Certificate Authority - Example Domain CTu,CTu,CTu subsystemCert cert- instance_name u,u,u Server-Cert cert- instance_name u,u,u"
] | https://docs.redhat.com/en/documentation/red_hat_certificate_system/10/html/administration_guide/changing_the_trust_settings_of_a_ca_certificate |
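Collected into a single session, the certutil procedure above looks like the following. The comments summarizing the trust-flag letters reflect general NSS certutil conventions rather than definitions given in this chapter, so treat them as an informal guide:

    # Work inside the instance's certificate database directory.
    cd /var/lib/pki/instance_name/alias

    # List the certificates and their current trust attributes
    # (three comma-separated fields: SSL, S/MIME, code signing).
    certutil -L -d .

    # Mark the CA chain as trusted for all three usages.
    # Informally: C = trusted CA, T = trusted CA for client authentication,
    # c = valid CA, u = certificate with an associated private key.
    certutil -M -n "Certificate Authority - Example Domain" -t TCu,TCu,TCu -d .

    # Confirm that the trust settings changed.
    certutil -L -d .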
2.8.9. IPTables | 2.8.9. IPTables Included with Red Hat Enterprise Linux are advanced tools for network packet filtering - the process of controlling network packets as they enter, move through, and exit the network stack within the kernel. Kernel versions prior to 2.4 relied on ipchains for packet filtering and used lists of rules applied to packets at each step of the filtering process. The 2.4 kernel introduced iptables (also called netfilter ), which is similar to ipchains but greatly expands the scope and control available for filtering network packets. This chapter focuses on packet filtering basics, explains various options available with iptables commands, and explains how filtering rules can be preserved between system reboots. Important The default firewall mechanism in the 2.4 and later kernels is iptables , but iptables cannot be used if ipchains is already running. If ipchains is present at boot time, the kernel issues an error and fails to start iptables . The functionality of ipchains is not affected by these errors. 2.8.9.1. Packet Filtering The Linux kernel uses the Netfilter facility to filter packets, allowing some of them to be received by or pass through the system while stopping others. This facility is built in to the Linux kernel, and has five built-in tables or rules lists , as follows: filter - The default table for handling network packets. nat - Used to alter packets that create a new connection and used for Network Address Translation ( NAT ). mangle - Used for specific types of packet alteration. raw - Used mainly for configuring exemptions from connection tracking in combination with the NOTRACK target. security - Used for Mandatory Access Control (MAC) networking rules, such as those enabled by the SECMARK and CONNSECMARK targets. Each table has a group of built-in chains , which correspond to the actions performed on the packet by netfilter . The built-in chains for the filter table are as follows: INPUT - Applies to network packets that are targeted for the host. OUTPUT - Applies to locally-generated network packets. FORWARD - Applies to network packets routed through the host. The built-in chains for the nat table are as follows: PREROUTING - Applies to network packets when they arrive. OUTPUT - Applies to locally-generated network packets before they are sent out. POSTROUTING - Applies to network packets before they are sent out. The built-in chains for the mangle table are as follows: INPUT - Applies to network packets targeted for the host. OUTPUT - Applies to locally-generated network packets before they are sent out. FORWARD - Applies to network packets routed through the host. PREROUTING - Applies to incoming network packets before they are routed. POSTROUTING - Applies to network packets before they are sent out. The built-in chains for the raw table are as follows: OUTPUT - Applies to locally-generated network packets before they are sent out. PREROUTING - Applies to incoming network packets before they are routed. The built-in chains for the security table are as follows: INPUT - Applies to network packets targeted for the host. OUTPUT - Applies to locally-generated network packets before they are sent out. FORWARD - Applies to network packets routed through the host. Every network packet received by or sent from a Linux system is subject to at least one table. However, a packet may be subjected to multiple rules within each table before emerging at the end of the chain. 
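A few illustrative commands show how these tables and chains are inspected and populated in practice; the port, rule, and policy below are arbitrary examples rather than recommendations from this guide:

    # List the rules in the built-in chains of the filter table (the default table).
    iptables -L -n -v --line-numbers

    # List the PREROUTING, OUTPUT, and POSTROUTING chains of the nat table.
    iptables -t nat -L -n -v

    # Append a rule to the INPUT chain of the filter table that accepts inbound SSH traffic.
    iptables -A INPUT -p tcp --dport 22 -j ACCEPT

    # Set the default policy for the INPUT chain.
    iptables -P INPUT DROP

    # Save the current rules to /etc/sysconfig/iptables so they persist across reboots.
    service iptables save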
The structure and purpose of these rules may vary, but they usually seek to identify a packet coming from or going to a particular IP address, or set of addresses, when using a particular protocol and network service. The following image outlines how the flow of packets is examined by the iptables subsystem: Figure 2.6. Packet filtering in IPTables Important By default, firewall rules are saved in the /etc/sysconfig/iptables or /etc/sysconfig/ip6tables files. The iptables service starts before any DNS-related services when a Linux system is booted. This means that firewall rules can only reference numeric IP addresses (for example, 192.168.0.1). Domain names (for example, host.example.com) in such rules produce errors. Regardless of their destination, when packets match a particular rule in one of the tables, a target or action is applied to them. If the rule specifies an ACCEPT target for a matching packet, the packet skips the rest of the rule checks and is allowed to continue to its destination. If a rule specifies a DROP target, that packet is refused access to the system and nothing is sent back to the host that sent the packet. If a rule specifies a QUEUE target, the packet is passed to user-space. If a rule specifies the optional REJECT target, the packet is dropped, but an error packet is sent to the packet's originator. Every chain has a default policy to ACCEPT , DROP , REJECT , or QUEUE . If none of the rules in the chain apply to the packet, then the packet is dealt with in accordance with the default policy. The iptables command configures these tables, as well as sets up new tables if necessary. Note The netfilter modules are not loaded by default. Therefore a user will not see all of them by looking in the /proc/ directory as it only shows what is being used or has been loaded already. This means that there is no way to see what features of netfilter are available before you attempt to use it. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/security_guide/sect-Security_Guide-IPTables |
3.7. Configuration Validation | 3.7. Configuration Validation The cluster configuration is automatically validated according to the cluster schema at /usr/share/cluster/cluster.rng during startup time and when a configuration is reloaded. Also, you can validate a cluster configuration any time by using the ccs_config_validate command. For information on configuration validation when using the ccs command, see Section 6.1.6, "Configuration Validation" . An annotated schema is available for viewing at /usr/share/doc/cman-X.Y.ZZ/cluster_conf.html (for example /usr/share/doc/cman-3.0.12/cluster_conf.html ). Configuration validation checks for the following basic errors: XML validity - Checks that the configuration file is a valid XML file. Configuration options - Checks to make sure that options (XML elements and attributes) are valid. Option values - Checks that the options contain valid data (limited). The following examples show a valid configuration and invalid configurations that illustrate the validation checks: Valid configuration - Example 3.3, " cluster.conf Sample Configuration: Valid File" Invalid XML - Example 3.4, " cluster.conf Sample Configuration: Invalid XML" Invalid option - Example 3.5, " cluster.conf Sample Configuration: Invalid Option" Invalid option value - Example 3.6, " cluster.conf Sample Configuration: Invalid Option Value" Example 3.3. cluster.conf Sample Configuration: Valid File Example 3.4. cluster.conf Sample Configuration: Invalid XML In this example, the last line of the configuration (annotated as "INVALID" here) is missing a slash - it is <cluster> instead of </cluster> . Example 3.5. cluster.conf Sample Configuration: Invalid Option In this example, the second line of the configuration (annotated as "INVALID" here) contains an invalid XML element - it is loging instead of logging . Example 3.6. cluster.conf Sample Configuration: Invalid Option Value In this example, the fourth line of the configuration (annotated as "INVALID" here) contains an invalid value for the XML attribute, nodeid in the clusternode line for node-01.example.com . The value is a negative value ("-1") instead of a positive value ("1"). For the nodeid attribute, the value must be a positive value. | [
"<cluster name=\"mycluster\" config_version=\"1\"> <logging debug=\"off\"/> <clusternodes> <clusternode name=\"node-01.example.com\" nodeid=\"1\"> <fence> </fence> </clusternode> <clusternode name=\"node-02.example.com\" nodeid=\"2\"> <fence> </fence> </clusternode> <clusternode name=\"node-03.example.com\" nodeid=\"3\"> <fence> </fence> </clusternode> </clusternodes> <fencedevices> </fencedevices> <rm> </rm> </cluster>",
"<cluster name=\"mycluster\" config_version=\"1\"> <logging debug=\"off\"/> <clusternodes> <clusternode name=\"node-01.example.com\" nodeid=\"1\"> <fence> </fence> </clusternode> <clusternode name=\"node-02.example.com\" nodeid=\"2\"> <fence> </fence> </clusternode> <clusternode name=\"node-03.example.com\" nodeid=\"3\"> <fence> </fence> </clusternode> </clusternodes> <fencedevices> </fencedevices> <rm> </rm> <cluster> <----------------INVALID",
"<cluster name=\"mycluster\" config_version=\"1\"> <loging debug=\"off\"/> <----------------INVALID <clusternodes> <clusternode name=\"node-01.example.com\" nodeid=\"1\"> <fence> </fence> </clusternode> <clusternode name=\"node-02.example.com\" nodeid=\"2\"> <fence> </fence> </clusternode> <clusternode name=\"node-03.example.com\" nodeid=\"3\"> <fence> </fence> </clusternode> </clusternodes> <fencedevices> </fencedevices> <rm> </rm> <cluster>",
"<cluster name=\"mycluster\" config_version=\"1\"> <loging debug=\"off\"/> <clusternodes> <clusternode name=\"node-01.example.com\" nodeid=\"-1\"> <--------INVALID <fence> </fence> </clusternode> <clusternode name=\"node-02.example.com\" nodeid=\"2\"> <fence> </fence> </clusternode> <clusternode name=\"node-03.example.com\" nodeid=\"3\"> <fence> </fence> </clusternode> </clusternodes> <fencedevices> </fencedevices> <rm> </rm> <cluster>"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/cluster_administration/s1-config-validation-CA |
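A short sketch of running the checks described above from a shell; the xmllint invocation is a generic RELAX NG validation of a candidate file against the published schema and is an assumption beyond the cluster tools named in this section:

    # Validate the active cluster configuration against /usr/share/cluster/cluster.rng.
    ccs_config_validate

    # Validate an edited copy of cluster.conf before distributing it,
    # using xmllint as a generic RELAX NG validator.
    xmllint --noout --relaxng /usr/share/cluster/cluster.rng /etc/cluster/cluster.conf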
8.224. util-linux-ng | 8.224. util-linux-ng 8.224.1. RHBA-2013:1648 - util-linux-ng bug fix and enhancement update Updated util-linux-ng packages that fix several bugs and add various enhancements are now available for Red Hat Enterprise Linux 6. The util-linux-ng packages contain a large variety of low-level system utilities that are necessary for a Linux operating system to function. Bug Fixes BZ# 885313 Previously, the hexdump utility terminated with a segmentation fault when iterating over an empty format string. This bug has now been fixed and hexdump no longer crashes in this scenario. BZ# 911756 Previously, the libblkid library incorrectly detected certain disks as a Silicon Image Medley RAID device. Consequently, this caused problems in certain systems after a weekly reboot. This update adds a checksum count from the superblock record and a new superblock definition from the dmraid tool, which makes the signature recognition of Silicon Image Medley RAID devices more robust. BZ# 864585 Previously, the "mount -av" command, which triggers mounting filesystems with helpers like the /sbin/mount.nfs file, printed the message "nothing was mounted", even though the helper mounted a filesystem. This bug has been fixed and the incorrect message is no longer printed in this scenario. BZ# 872291 Previously, the hwclock(8) manual page contained a reference to the non-existing adjtimex utility. This update fixes the hwclock(8) manual page. BZ# 915844 Previously, the mount(8) manual page incorrectly described the "relatime" mount option. With this update, the description of the "relatime" mount option has been improved to better describe when the kernel updates the atime. BZ# 917678 Due to a regression in the code, if a symbolic link was used for a mount point in the /etc/fstab configuration file, mount attempts to that mount point failed. This update ensures that all paths in /etc/fstab are made canonical and such mount points can now be mounted as expected. BZ# 966735 Prior to this update, the lscpu command accepted only sequentially assigned logical CPU numbers. Consequently, lscpu did not properly list CPUs after a CPU eject operation. After this update, the lscpu command does not expect sequentially assigned CPU numbers and works properly on systems with a hot-plug CPU. Enhancements BZ# 816342 Previously, the kernel could not always determine the correct CLOCAL flag, and some machines required manual settings. With this update, the new -L[={always,auto,never}] option has been added to the agetty utility to allow complete control over the CLOCAL terminal flag. BZ# 846790 Previously, the kill(1) manual page did not include information about the interaction between the kill utility and threads. With this update, the kill(1) manual page has been improved to explicitly explain the interaction between the kill system call and threads. BZ# 870854 The default kill character "@" conflicted with login user names on IPA systems with the "user@domain" convention. With this update, the agetty utility has been improved to accept the "--kill-chars" and "--erase-chars" options to control special kill and erase terminal characters. BZ# 947062 With this update, the "blkdiscard" command has been introduced to Red Hat Enterprise Linux 6 to discard device sectors. The "discard" support is important, for example, on thinly-provisioned storage to improve disk efficiency by reclaiming free space so that the storage can re-use the free space for other areas. 
Users of util-linux-ng are advised to upgrade to these updated packages, which fix these bugs and add these enhancements. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.5_technical_notes/util-linux-ng |
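For the blkdiscard command introduced by BZ#947062, a minimal illustrative invocation follows; the device name is a placeholder, and the command irreversibly discards the data on the target device:

    # Discard (TRIM/UNMAP) all sectors of a thinly provisioned or SSD-backed block device.
    blkdiscard /dev/sdX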
8.9. pNFS | 8.9. pNFS Support for Parallel NFS (pNFS) as part of the NFS v4.1 standard is available as of Red Hat Enterprise Linux 6.4. The pNFS architecture improves the scalability of NFS, with possible improvements to performance. That is, when a server implements pNFS as well, a client is able to access data through multiple servers concurrently. It supports three storage protocols or layouts: files, objects, and blocks. Note The protocol allows for three possible pNFS layout types: files, objects, and blocks. While the Red Hat Enterprise Linux 6.4 client only supported the files layout type, Red Hat Enterprise Linux 7 supports the files layout type, with objects and blocks layout types being included as a technology preview. pNFS Flex Files Flexible Files is a new layout for pNFS that enables the aggregation of standalone NFSv3 and NFSv4 servers into a scale-out name space. The Flex Files feature is part of the NFSv4.2 standard as described in the RFC 7862 specification. Red Hat Enterprise Linux can mount NFS shares from Flex Files servers since Red Hat Enterprise Linux 7.4. Mounting pNFS Shares To enable pNFS functionality, mount shares from a pNFS-enabled server with NFS version 4.1 or later: After the server is pNFS-enabled, the nfs_layout_nfsv41_files kernel module is automatically loaded on the first mount. The mount entry in the output should contain minorversion=1 . Use the following command to verify that the module was loaded: To mount an NFS share with the Flex Files feature from a server that supports Flex Files, use NFS version 4.2 or later: Verify that the nfs_layout_flexfiles module has been loaded: Additional Resources For more information on pNFS, refer to: http://www.pnfs.com . | [
"mount -t nfs -o v4.1 server:/remote-export /local-directory",
"lsmod | grep nfs_layout_nfsv41_files",
"mount -t nfs -o v4.2 server:/remote-export /local-directory",
"lsmod | grep nfs_layout_flexfiles"
] | https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/storage_administration_guide/nfs-pnfs |
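A combined sketch of the mount and verification steps above, using the same server:/remote-export and /local-directory placeholders; checking the recorded mount options is one straightforward way to confirm that NFS version 4.1 was negotiated:

    # Mount the share with NFS version 4.1 or later so pNFS can be used.
    mount -t nfs -o v4.1 server:/remote-export /local-directory

    # The files layout module loads automatically on the first pNFS mount.
    lsmod | grep nfs_layout_nfsv41_files

    # The options recorded for the mount should include minorversion=1 (NFS v4.1).
    mount | grep /local-directory
    nfsstat -m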
Chapter 2. The pcsd Web UI | Chapter 2. The pcsd Web UI This chapter provides an overview of configuring a Red Hat High Availability cluster with the pcsd Web UI. 2.1. pcsd Web UI Setup To set up your system to use the pcsd Web UI to configure a cluster, use the following procedure. Install the Pacemaker configuration tools, as described in Section 1.2, "Installing Pacemaker configuration tools" . On each node that will be part of the cluster, use the passwd command to set the password for user hacluster , using the same password on each node. Start and enable the pcsd daemon on each node: On one node of the cluster, authenticate the nodes that will constitute the cluster with the following command. After executing this command, you will be prompted for a Username and a Password . Specify hacluster as the Username . On any system, open a browser to the following URL, specifying one of the nodes you have authorized (note that this uses the https protocol). This brings up the pcsd Web UI login screen. Log in as user hacluster . This brings up the Manage Clusters page as shown in Figure 2.1, "Manage Clusters page" . Figure 2.1. Manage Clusters page | [
"systemctl start pcsd.service systemctl enable pcsd.service",
"pcs cluster auth node1 node2 ... nodeN",
"https:// nodename :2224"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/high_availability_add-on_reference/ch-pcsd-HAAR |
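The setup above, collected into one illustrative sequence. The -u and -p options to pcs cluster auth are a non-interactive convenience and, like the node names, are assumptions beyond the interactive procedure shown in this chapter:

    # On every node that will be part of the cluster:
    passwd hacluster                   # use the same password on each node
    systemctl start pcsd.service
    systemctl enable pcsd.service

    # On one node, authenticate all of the cluster nodes.
    pcs cluster auth node1 node2 node3 -u hacluster -p <password>

    # Then open the Web UI from any browser, using one of the authorized nodes:
    #   https://node1:2224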
Chapter 10. General Configuration | Chapter 10. General Configuration 10.1. JBoss Data Virtualization Settings The following types of JBoss Data Virtualization settings are available for viewing and modification: Buffer service settings Cache settings (including result set and prepared plan cache settings) Runtime engine deployer settings Authorization validator and policy decider settings Transport and SSL settings Translator settings To view all of the available settings for JBoss Data Virtualization, run the following command within the Management CLI: Note For more information about translator and transport (including SSL) settings, see Section 10.4, "Managing Transport and SSL Settings Using Management CLI" and Section 10.5, "Managing Translator Settings Using Management CLI" . | [
"/subsystem=teiid:read-resource-description"
] | https://docs.redhat.com/en/documentation/red_hat_jboss_data_virtualization/6.4/html/administration_and_configuration_guide/chap-general_configuration |
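A brief sketch of issuing that operation from a shell; the <EAP_HOME> path and the --connect and --command options are standard JBoss EAP Management CLI usage rather than anything defined in this chapter:

    # Run the Management CLI operation non-interactively against the running server.
    <EAP_HOME>/bin/jboss-cli.sh --connect \
        --command="/subsystem=teiid:read-resource-description"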
Chapter 21. Replacing Networker nodes | Chapter 21. Replacing Networker nodes In certain circumstances, a Red Hat OpenStack Platform (RHOSP) node with a Networker profile in a high availability cluster might fail. (For more information, see Tagging nodes into profiles in the Director Installation and Usage guide.) In these situations, you must remove the node from the cluster and replace it with a new Networker node that runs the Networking service (neutron) agents. The topics in this section are: Section 21.1, "Preparing to replace network nodes" Section 21.2, "Replacing a Networker node" Section 21.3, "Rescheduling nodes and cleaning up the Networking service" 21.1. Preparing to replace network nodes Replacing a Networker node on a Red Hat OpenStack Platform (RHOSP) overcloud requires that you perform several preparation steps. Completing all of the required preparation steps helps you to avoid complications during the Networker node replacement process. Prerequisites Your RHOSP deployment is highly available with three or more Networker nodes. Procedure Log in to your undercloud as the stack user. Source the undercloud credentials file: Check the current status of the overcloud stack on the undercloud: The overcloud stack and its subsequent child stacks should have a status of either CREATE_COMPLETE or UPDATE_COMPLETE . Ensure that you have a recent backup image of the undercloud node by running the Relax-and-Recover tool. For more information, see the Backing up and restoring the undercloud and control plane nodes guide. Log in to a Controller node as root. Open an interactive bash shell on the container and check the status of the Galera cluster: Ensure that the Controller nodes are in Master mode. Sample output Log in to the RHOSP director node and check the nova-compute service: The output should show all non-maintenance mode nodes as up. Make sure all undercloud services are running: 21.2. Replacing a Networker node In certain circumstances, a Red Hat OpenStack Platform (RHOSP) node with a Networker profile in a high availability cluster might fail. Replacing a Networker node requires running the openstack overcloud deploy command to update the overcloud with the new node. Prerequisites Your RHOSP deployment is highly available with three or more Networker nodes. The node that you add must be able to connect to the other nodes in the cluster over the network. You have performed the steps described in Section 21.1, "Preparing to replace network nodes" . Procedure Log in to your undercloud as the stack user. Source the undercloud credentials file: Example Identify the index of the node to remove: Sample output Set the node into maintenance mode by using the baremetal node maintenance set command. Example Create a JSON file to add the new node to the node pool that contains RHOSP director. Example For more information, see Adding nodes to the overcloud in the Director Installation and Usage guide. Run the openstack overcloud node import command to register the new node. Example After registering the new node, launch the introspection process by using the following commands: Tag the new node with the Networker profile by using the openstack baremetal node set command. Example Create a ~/templates/remove-networker.yaml environment file that defines the index of the node that you intend to remove: Example Create a ~/templates/node-count-networker.yaml environment file and set the total count of Networker nodes in the file. 
Example Run the openstack overcloud deploy command and include the core heat templates, environment files, and the environment files that you modified. Important The order of the environment files is important because the parameters and resources defined in subsequent environment files take precedence. RHOSP director removes the old Networker node, creates a new one, and updates the overcloud stack. Verification Check the status of the overcloud stack: Verify that the new Networker node is listed, and the old one is removed. Sample output Additional resources Adding nodes to the overcloud in the Director Installation and Usage guide Registering nodes for the overcloud in the Director Installation and Usage guide baremetal node manage in the Command Line Interface Reference overcloud node introspect in the Command Line Interface Reference Environment files in the Advanced Overcloud Customization guide Including environment files in overcloud creation in the Advanced Overcloud Customization guide 21.3. Rescheduling nodes and cleaning up the Networking service As a part of replacing a Red Hat OpenStack Platform (RHOSP) Networker node, remove all Networking service agents on the removed node from the database. Doing so ensures that the Networking service does not identify the agents as out-of-service ("dead"). For ML2/OVS users, removing agents from the removed node enables the DHCP resources to be automatically rescheduled to other Networker nodes. Prerequisites Your RHOSP deployment is highly available with three or more Networker nodes. Procedure Log in to your undercloud as the stack user. Source the overcloud credentials file: Example Verify that the RHOSP Networking service processes exist, and are marked out-of-service ( xxx ) for the overcloud-networker-1 . Sample output for ML2/OVN Sample output for ML2/OVS Capture the UUIDs of the agents registered for overcloud-networker-1 . Delete any remaining overcloud-networker-1 agents from the database. Sample output Additional resources network agent list in the Command Line Interface Reference | [
"source ~/stackrc",
"openstack stack list --nested",
"pcs status",
"* Container bundle set: galera-bundle [cluster.common.tag/rhosp16-openstack-mariadb:pcmklatest]: * galera-bundle-0 (ocf::heartbeat:galera): Master controller-0 * galera-bundle-1 (ocf::heartbeat:galera): Master controller-1 * galera-bundle-2 (ocf::heartbeat:galera): Master controller-2",
"sudo systemctl status tripleo_nova_compute openstack baremetal node list",
"sudo systemctl -t service",
"source ~/stackrc",
"openstack baremetal node list -c UUID -c Name -c \"Instance UUID\"",
"+--------------------------------------+------+--------------------------------------+ | UUID | Name | Instance UUID | +--------------------------------------+------+--------------------------------------+ | 36404147-7c8a-41e6-8c72-6af1e339da2a | None | 7bee57cf-4a58-4eaf-b851-f3203f6e5e05 | | 91eb9ac5-7d52-453c-a017-0f2fb289c3cd | None | None | | 75b25e9a-948d-424a-9b3b-0f2fb289c3cd | None | None | | 038727da-6a5c-425f-bd45-16aa2bc4ba91 | None | 763bfec2-9354-466a-ae65-1fdf45d35c61 | | dc2292e6-4056-46e0-8848-165d06fcc948 | None | 2017b481-706f-44e1-852a-57fb03ecef11 | | c7eadcea-e377-4392-9fc3-716f1bd57527 | None | 5f73c7d7-4826-49a5-b6be-0a95c6bdd2f8 | | da3a8d19-8a59-4e9d-923a-29254d688f6d | None | cfefaf60-8311-4bc3-9416-46852e2cb83f | | 807cb6ce-6b94-4cd1-9969-d390650854c7 | None | c07c13e6-a845-4791-9628-c8514585fe27 | | 0c245daa-7817-4ae9-a883-fed2e9c68d6c | None | 844c9a88-713a-4ff1-8737-30858d724593 | | e6499ef7-3db2-4ab4-bfa7-feb44c6591c6 | None | aef7c27a-f0b4-4814-b0ff-d3f792321212 | | 7545385c-bc49-4eb9-b13c-201368ce1c62 | None | c2e40164-c659-4849-a28f-a7b270ed2970 | +--------------------------------------+------+--------------------------------------+",
"openstack baremetal node maintenance set e6499ef7-3db2-4ab4-bfa7-ef59539bf972",
"{ \"nodes\":[ { \"mac\":[ \"dd:dd:dd:dd:dd:dd\" ], \"cpu\":\"4\", \"memory\":\"6144\", \"disk\":\"40\", \"arch\":\"x86_64\", \"pm_type\":\"ipmi\", \"pm_user\":\"admin\", \"pm_password\":\"p@55w0rd!\", \"pm_addr\":\"192.168.24.207\" } ] }",
"openstack overcloud node import newnode.json",
"openstack baremetal node manage <node> openstack overcloud node introspect <node> --provide",
"openstack baremetal node set --property capabilities='profile:networker,boot_option:local' 91eb9ac5-7d52-453c-a017-c0e3d823efd0",
"parameters: NetworkerRemovalPolicies: [{'resource_list': ['1']}]",
"parameter_defaults: OvercloudNetworkerFlavor: networker NetworkerCount: 3",
"openstack overcloud deploy --templates -e <your_environment_files> -e /home/stack/templates/node-count-networker.yaml -e /home/stack/templates/remove-networker.yaml",
"openstack stack list --nested",
"openstack server list -c ID -c Name -c Status",
"+--------------------------------------+------------------------+--------+ | ID | Name | Status | +--------------------------------------+------------------------+--------+ | 861408be-4027-4f53-87a6-cd3cf206ba7a | overcloud-compute-0 | ACTIVE | | 0966e9ae-f553-447a-9929-c4232432f718 | overcloud-compute-1 | ACTIVE | | 9c08fa65-b38c-4b2e-bd47-33870bff06c7 | overcloud-compute-2 | ACTIVE | | a7f0f5e1-e7ce-4513-ad2b-81146bc8c5af | overcloud-controller-0 | ACTIVE | | cfefaf60-8311-4bc3-9416-6a824a40a9ae | overcloud-controller-1 | ACTIVE | | 97a055d4-aefd-481c-82b7-4a5f384036d2 | overcloud-controller-2 | ACTIVE | | 844c9a88-713a-4ff1-8737-6410bf551d4f | overcloud-networker-0 | ACTIVE | | c2e40164-c659-4849-a28f-507eb7edb79f | overcloud-networker-2 | ACTIVE | | 425a0828-b42f-43b0-940c-7fb02522753a | overcloud-networker-3 | ACTIVE | +--------------------------------------+------------------------+--------+",
"source ~/overcloudrc",
"openstack network agent list -c ID -c Binary -c Host -c Alive | grep overcloud-networker-1",
"+--------------------------------------+-----------------------+-------+-------------------------------+ | ID | Host | Alive | Binary | +--------------------------------------+-----------------------+-------+-------------------------------+ | 26316f47-4a30-4baf-ba00-d33c9a9e0844 | overcloud-networker-1 | xxx | ovn-controller | +--------------------------------------+-----------------------+-------+-------------------------------+",
"+--------------------------------------+-----------------------+-------+------------------------+ | ID | Host | Alive | Binary | +--------------------------------------+-----------------------+-------+------------------------+ | 8377-66d75323e466c-b838-1149e10441ee | overcloud-networker-1 | xxx | neutron-metadata-agent | | b55d-797668c336707-a2cf-cba875eeda21 | overcloud-networker-1 | xxx | neutron-l3-agent | | 9dcb-00a9e32ecde42-9458-01cfa9742862 | overcloud-networker-1 | xxx | neutron-ovs-agent | | be83-e4d9329846540-9ae6-1540947b2ffd | overcloud-networker-1 | xxx | neutron-dhcp-agent | +--------------------------------------+-----------------------+-------+------------------------+",
"AGENT_UUIDS=USD(openstack network agent list -c ID -c Host -c Alive -c Binary -f value | grep overcloud-networker-1 | cut -d\\ -f1)",
"for agent in USDAGENT_UUIDS; do neutron agent-delete USDagent ; done",
"Deleted agent(s): 26316f47-4a30-4baf-ba00-d33c9a9e0844"
] | https://docs.redhat.com/en/documentation/red_hat_openstack_platform/16.2/html/networking_guide/replace-network-nodes_rhosp-network |
Chapter 22. OpenShift SDN default CNI network provider | Chapter 22. OpenShift SDN default CNI network provider 22.1. About the OpenShift SDN default CNI network provider OpenShift Container Platform uses a software-defined networking (SDN) approach to provide a unified cluster network that enables communication between pods across the OpenShift Container Platform cluster. This pod network is established and maintained by the OpenShift SDN, which configures an overlay network using Open vSwitch (OVS). 22.1.1. OpenShift SDN network isolation modes OpenShift SDN provides three SDN modes for configuring the pod network: Network policy mode allows project administrators to configure their own isolation policies using NetworkPolicy objects. Network policy is the default mode in OpenShift Container Platform 4.11. Multitenant mode provides project-level isolation for pods and services. Pods from different projects cannot send packets to or receive packets from pods and services of a different project. You can disable isolation for a project, allowing it to send network traffic to all pods and services in the entire cluster and receive network traffic from those pods and services. Subnet mode provides a flat pod network where every pod can communicate with every other pod and service. The network policy mode provides the same functionality as subnet mode. 22.1.2. Supported default CNI network provider feature matrix OpenShift Container Platform offers two supported choices, OpenShift SDN and OVN-Kubernetes, for the default Container Network Interface (CNI) network provider. The following table summarizes the current feature support for both network providers: Table 22.1. Default CNI network provider feature comparison Feature OpenShift SDN OVN-Kubernetes Egress IPs Supported Supported Egress firewall [1] Supported Supported Egress router Supported Supported [2] Hybrid networking Not supported Supported IPsec encryption Not supported Supported IPv6 Not supported Supported [3] [4] Kubernetes network policy Supported Supported Kubernetes network policy logs Not supported Supported Multicast Supported Supported Hardware offloading Not supported Supported Egress firewall is also known as egress network policy in OpenShift SDN. This is not the same as network policy egress. Egress router for OVN-Kubernetes supports only redirect mode. IPv6 is supported only on bare metal clusters. IPv6 single stack does not support Kubernetes NMState . 22.2. Configuring egress IPs for a project As a cluster administrator, you can configure the OpenShift SDN Container Network Interface (CNI) cluster network provider to assign one or more egress IP addresses to a project. 22.2.1. Egress IP address architectural design and implementation The OpenShift Container Platform egress IP address functionality allows you to ensure that the traffic from one or more pods in one or more namespaces has a consistent source IP address for services outside the cluster network. For example, you might have a pod that periodically queries a database that is hosted on a server outside of your cluster. To enforce access requirements for the server, a packet filtering device is configured to allow traffic only from specific IP addresses. To ensure that you can reliably allow access to the server from only that specific pod, you can configure a specific egress IP address for the pod that makes the requests to the server. 
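After the assignment procedures later in this section are complete, you can confirm which source address external hosts actually observe by comparing the pod IP with the address reported by an external echo service. The following commands are a minimal sketch; the test-pod name, the project1 namespace, the presence of curl in the pod image, and the use of an external service such as https://ifconfig.me are assumptions for illustration only:
# Address the pod uses on the cluster network
oc get pod test-pod -n project1 -o jsonpath='{.status.podIP}{"\n"}'
# Address that an external host observes; this should match the configured egress IP
oc exec -n project1 test-pod -- curl -s https://ifconfig.me
If the second command reports the node's primary IP address rather than the expected egress IP, verify that a node in the cluster is actually hosting the egress IP address.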
An egress IP address assigned to a namespace is different from an egress router, which is used to send traffic to specific destinations. In some cluster configurations, application pods and ingress router pods run on the same node. If you configure an egress IP address for an application project in this scenario, the IP address is not used when you send a request to a route from the application project. An egress IP address is implemented as an additional IP address on the primary network interface of a node and must be in the same subnet as the primary IP address of the node. The additional IP address must not be assigned to any other node in the cluster. Important Egress IP addresses must not be configured in any Linux network configuration files, such as ifcfg-eth0 . 22.2.1.1. Platform support Support for the egress IP address functionality on various platforms is summarized in the following table: Platform Supported Bare metal Yes VMware vSphere Yes Red Hat OpenStack Platform (RHOSP) No Amazon Web Services (AWS) Yes Google Cloud Platform (GCP) Yes Microsoft Azure Yes Important The assignment of egress IP addresses to control plane nodes with the EgressIP feature is not supported on a cluster provisioned on Amazon Web Services (AWS). ( BZ#2039656 ) 22.2.1.2. Public cloud platform considerations For clusters provisioned on public cloud infrastructure, there is a constraint on the absolute number of assignable IP addresses per node. The maximum number of assignable IP addresses per node, or the IP capacity , can be described in the following formula: IP capacity = public cloud default capacity - sum(current IP assignments) While the Egress IPs capability manages the IP address capacity per node, it is important to plan for this constraint in your deployments. For example, for a cluster installed on bare-metal infrastructure with 8 nodes you can configure 150 egress IP addresses. However, if a public cloud provider limits IP address capacity to 10 IP addresses per node, the total number of assignable IP addresses is only 80. To achieve the same IP address capacity in this example cloud provider, you would need to allocate 7 additional nodes. To confirm the IP capacity and subnets for any node in your public cloud environment, you can enter the oc get node <node_name> -o yaml command. The cloud.network.openshift.io/egress-ipconfig annotation includes capacity and subnet information for the node. The annotation value is an array with a single object with fields that provide the following information for the primary network interface: interface : Specifies the interface ID on AWS and Azure and the interface name on GCP. ifaddr : Specifies the subnet mask for one or both IP address families. capacity : Specifies the IP address capacity for the node. On AWS, the IP address capacity is provided per IP address family. On Azure and GCP, the IP address capacity includes both IPv4 and IPv6 addresses. The following examples illustrate the annotation from nodes on several public cloud providers. The annotations are indented for readability. 
Example cloud.network.openshift.io/egress-ipconfig annotation on AWS cloud.network.openshift.io/egress-ipconfig: [ { "interface":"eni-078d267045138e436", "ifaddr":{"ipv4":"10.0.128.0/18"}, "capacity":{"ipv4":14,"ipv6":15} } ] Example cloud.network.openshift.io/egress-ipconfig annotation on GCP cloud.network.openshift.io/egress-ipconfig: [ { "interface":"nic0", "ifaddr":{"ipv4":"10.0.128.0/18"}, "capacity":{"ip":14} } ] The following sections describe the IP address capacity for supported public cloud environments for use in your capacity calculation. 22.2.1.2.1. Amazon Web Services (AWS) IP address capacity limits On AWS, constraints on IP address assignments depend on the instance type configured. For more information, see IP addresses per network interface per instance type 22.2.1.2.2. Google Cloud Platform (GCP) IP address capacity limits On GCP, the networking model implements additional node IP addresses through IP address aliasing, rather than IP address assignments. However, IP address capacity maps directly to IP aliasing capacity. The following capacity limits exist for IP aliasing assignment: Per node, the maximum number of IP aliases, both IPv4 and IPv6, is 10. Per VPC, the maximum number of IP aliases is unspecified, but OpenShift Container Platform scalability testing reveals the maximum to be approximately 15,000. For more information, see Per instance quotas and Alias IP ranges overview . 22.2.1.2.3. Microsoft Azure IP address capacity limits On Azure, the following capacity limits exist for IP address assignment: Per NIC, the maximum number of assignable IP addresses, for both IPv4 and IPv6, is 256. Per virtual network, the maximum number of assigned IP addresses cannot exceed 65,536. For more information, see Networking limits . 22.2.1.3. Limitations The following limitations apply when using egress IP addresses with the OpenShift SDN cluster network provider: You cannot use manually assigned and automatically assigned egress IP addresses on the same nodes. If you manually assign egress IP addresses from an IP address range, you must not make that range available for automatic IP assignment. You cannot share egress IP addresses across multiple namespaces using the OpenShift SDN egress IP address implementation. If you need to share IP addresses across namespaces, the OVN-Kubernetes cluster network provider egress IP address implementation allows you to span IP addresses across multiple namespaces. Note If you use OpenShift SDN in multitenant mode, you cannot use egress IP addresses with any namespace that is joined to another namespace by the projects that are associated with them. For example, if project1 and project2 are joined by running the oc adm pod-network join-projects --to=project1 project2 command, neither project can use an egress IP address. For more information, see BZ#1645577 . 22.2.1.4. IP address assignment approaches You can assign egress IP addresses to namespaces by setting the egressIPs parameter of the NetNamespace object. After an egress IP address is associated with a project, OpenShift SDN allows you to assign egress IP addresses to hosts in two ways: In the automatically assigned approach, an egress IP address range is assigned to a node. In the manually assigned approach, a list of one or more egress IP address is assigned to a node. Namespaces that request an egress IP address are matched with nodes that can host those egress IP addresses, and then the egress IP addresses are assigned to those nodes. 
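Before choosing an assignment approach, it can help to review what is already configured. The following commands are a minimal sketch that reads the egressIPs and egressCIDRs fields described in this section; the node name is a placeholder, and the annotation query is only meaningful on public cloud platforms:
# Egress IP addresses currently requested for each namespace
oc get netnamespace -o custom-columns=NAME:.metadata.name,EGRESS_IPS:.egressIPs
# Egress IP addresses and CIDR ranges currently hosted by each node
oc get hostsubnet -o custom-columns=NODE:.host,EGRESS_CIDRS:.egressCIDRs,EGRESS_IPS:.egressIPs
# On a public cloud platform, review the per-node capacity annotation described earlier
oc get node <node_name> -o jsonpath="{.metadata.annotations['cloud\.network\.openshift\.io/egress-ipconfig']}"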
If the egressIPs parameter is set on a NetNamespace object, but no node hosts that egress IP address, then egress traffic from the namespace will be dropped. High availability of nodes is automatic. If a node that hosts an egress IP address is unreachable and there are nodes that are able to host that egress IP address, then the egress IP address will move to a new node. When the unreachable node comes back online, the egress IP address automatically moves to balance egress IP addresses across nodes. 22.2.1.4.1. Considerations when using automatically assigned egress IP addresses When using the automatic assignment approach for egress IP addresses, the following considerations apply: You set the egressCIDRs parameter of each node's HostSubnet resource to indicate the range of egress IP addresses that can be hosted by a node. OpenShift Container Platform sets the egressIPs parameter of the HostSubnet resource based on the IP address range you specify. If the node hosting the namespace's egress IP address is unreachable, OpenShift Container Platform will reassign the egress IP address to another node with a compatible egress IP address range. The automatic assignment approach works best for clusters installed in environments with flexibility in associating additional IP addresses with nodes. 22.2.1.4.2. Considerations when using manually assigned egress IP addresses This approach allows you to control which nodes can host an egress IP address. Note If your cluster is installed on public cloud infrastructure, you must ensure that each node that you assign egress IP addresses to has sufficient spare capacity to host the IP addresses. For more information, see "Platform considerations" in a previous section. When using the manual assignment approach for egress IP addresses, the following considerations apply: You set the egressIPs parameter of each node's HostSubnet resource to indicate the IP addresses that can be hosted by a node. Multiple egress IP addresses per namespace are supported. If a namespace has multiple egress IP addresses and those addresses are hosted on multiple nodes, the following additional considerations apply: If a pod is on a node that is hosting an egress IP address, that pod always uses the egress IP address on the node. If a pod is not on a node that is hosting an egress IP address, that pod uses an egress IP address at random. 22.2.2. Configuring automatically assigned egress IP addresses for a namespace In OpenShift Container Platform you can enable automatic assignment of an egress IP address for a specific namespace across one or more nodes. Prerequisites You have access to the cluster as a user with the cluster-admin role. You have installed the OpenShift CLI ( oc ). Procedure Update the NetNamespace object with the egress IP address using the following JSON: USD oc patch netnamespace <project_name> --type=merge -p \ '{ "egressIPs": [ "<ip_address>" ] }' where: <project_name> Specifies the name of the project. <ip_address> Specifies one or more egress IP addresses for the egressIPs array. For example, to assign project1 to an IP address of 192.168.1.100 and project2 to an IP address of 192.168.1.101: USD oc patch netnamespace project1 --type=merge -p \ '{"egressIPs": ["192.168.1.100"]}' USD oc patch netnamespace project2 --type=merge -p \ '{"egressIPs": ["192.168.1.101"]}' Note Because OpenShift SDN manages the NetNamespace object, you can make changes only by modifying the existing NetNamespace object. Do not create a new NetNamespace object.
Indicate which nodes can host egress IP addresses by setting the egressCIDRs parameter for each host using the following JSON: USD oc patch hostsubnet <node_name> --type=merge -p \ '{ "egressCIDRs": [ "<ip_address_range>", "<ip_address_range>" ] }' where: <node_name> Specifies a node name. <ip_address_range> Specifies an IP address range in CIDR format. You can specify more than one address range for the egressCIDRs array. For example, to set node1 and node2 to host egress IP addresses in the range 192.168.1.0 to 192.168.1.255: USD oc patch hostsubnet node1 --type=merge -p \ '{"egressCIDRs": ["192.168.1.0/24"]}' USD oc patch hostsubnet node2 --type=merge -p \ '{"egressCIDRs": ["192.168.1.0/24"]}' OpenShift Container Platform automatically assigns specific egress IP addresses to available nodes in a balanced way. In this case, it assigns the egress IP address 192.168.1.100 to node1 and the egress IP address 192.168.1.101 to node2 or vice versa. 22.2.3. Configuring manually assigned egress IP addresses for a namespace In OpenShift Container Platform you can associate one or more egress IP addresses with a namespace. Prerequisites You have access to the cluster as a user with the cluster-admin role. You have installed the OpenShift CLI ( oc ). Procedure Update the NetNamespace object by specifying the following JSON object with the desired IP addresses: USD oc patch netnamespace <project_name> --type=merge -p \ '{ "egressIPs": [ "<ip_address>" ] }' where: <project_name> Specifies the name of the project. <ip_address> Specifies one or more egress IP addresses for the egressIPs array. For example, to assign the project1 project to the IP addresses 192.168.1.100 and 192.168.1.101 : USD oc patch netnamespace project1 --type=merge \ -p '{"egressIPs": ["192.168.1.100","192.168.1.101"]}' To provide high availability, set the egressIPs value to two or more IP addresses on different nodes. If multiple egress IP addresses are set, then pods use all egress IP addresses roughly equally. Note Because OpenShift SDN manages the NetNamespace object, you can make changes only by modifying the existing NetNamespace object. Do not create a new NetNamespace object. Manually assign the egress IP address to the node hosts. If your cluster is installed on public cloud infrastructure, you must confirm that the node has available IP address capacity. Set the egressIPs parameter on the HostSubnet object on the node host. Using the following JSON, include as many IP addresses as you want to assign to that node host: USD oc patch hostsubnet <node_name> --type=merge -p \ '{ "egressIPs": [ "<ip_address>", "<ip_address>" ] }' where: <node_name> Specifies a node name. <ip_address> Specifies an IP address. You can specify more than one IP address for the egressIPs array. For example, to specify that node1 should have the egress IPs 192.168.1.100 , 192.168.1.101 , and 192.168.1.102 : USD oc patch hostsubnet node1 --type=merge -p \ '{"egressIPs": ["192.168.1.100", "192.168.1.101", "192.168.1.102"]}' In the example, all egress traffic for project1 will be routed to the node hosting the specified egress IP, and then connected through Network Address Translation (NAT) to that IP address. 22.2.4. Additional resources If you are configuring manual egress IP address assignment, see Platform considerations for information about IP capacity planning. 22.3. 
Configuring an egress firewall for a project As a cluster administrator, you can create an egress firewall for a project that restricts egress traffic leaving your OpenShift Container Platform cluster. 22.3.1. How an egress firewall works in a project As a cluster administrator, you can use an egress firewall to limit the external hosts that some or all pods can access from within the cluster. An egress firewall supports the following scenarios: A pod can only connect to internal hosts and cannot initiate connections to the public internet. A pod can only connect to the public internet and cannot initiate connections to internal hosts that are outside the OpenShift Container Platform cluster. A pod cannot reach specified internal subnets or hosts outside the OpenShift Container Platform cluster. A pod can connect to only specific external hosts. For example, you can allow one project access to a specified IP range but deny the same access to a different project. Or you can restrict application developers from updating from Python pip mirrors, and force updates to come only from approved sources. Note Egress firewall does not apply to the host network namespace. Pods with host networking enabled are unaffected by egress firewall rules. You configure an egress firewall policy by creating an EgressNetworkPolicy custom resource (CR) object. The egress firewall matches network traffic that meets any of the following criteria: An IP address range in CIDR format A DNS name that resolves to an IP address Important If your egress firewall includes a deny rule for 0.0.0.0/0 , access to your OpenShift Container Platform API servers is blocked. To ensure that pods can access the OpenShift Container Platform API servers, you must include the built-in join network 100.64.0.0/16 of Open Virtual Network (OVN) to allow access when using node ports together with an EgressFirewall. You must also include the IP address range that the API servers listen on in your egress firewall rules, as in the following example: apiVersion: network.openshift.io/v1 kind: EgressNetworkPolicy metadata: name: default namespace: <namespace> 1 spec: egress: - to: cidrSelector: <api_server_address_range> 2 type: Allow # ... - to: cidrSelector: 0.0.0.0/0 3 type: Deny 1 The namespace for the egress firewall. 2 The IP address range that includes your OpenShift Container Platform API servers. 3 A global deny rule prevents access to the OpenShift Container Platform API servers. To find the IP address for your API servers, run oc get ep kubernetes -n default . For more information, see BZ#1988324 . Important You must have OpenShift SDN configured to use either the network policy or multitenant mode to configure an egress firewall. If you use network policy mode, an egress firewall is compatible with only one policy per namespace and will not work with projects that share a network, such as global projects. Warning Egress firewall rules do not apply to traffic that goes through routers. Any user with permission to create a Route CR object can bypass egress firewall policy rules by creating a route that points to a forbidden destination. 22.3.1.1. Limitations of an egress firewall An egress firewall has the following limitations: No project can have more than one EgressNetworkPolicy object. Important The creation of more than one EgressNetworkPolicy object is allowed, however it should not be done. When you create more than one EgressNetworkPolicy object, the following message is returned: dropping all rules . 
In actuality, all external traffic is dropped, which can cause security risks for your organization. A maximum of one EgressNetworkPolicy object with a maximum of 1,000 rules can be defined per project. The default project cannot use an egress firewall. When using the OpenShift SDN default Container Network Interface (CNI) network provider in multitenant mode, the following limitations apply: Global projects cannot use an egress firewall. You can make a project global by using the oc adm pod-network make-projects-global command. Projects merged by using the oc adm pod-network join-projects command cannot use an egress firewall in any of the joined projects. Violating any of these restrictions results in a broken egress firewall for the project. Consequently, all external network traffic is dropped, which can cause security risks for your organization. An Egress Firewall resource can be created in the kube-node-lease , kube-public , kube-system , openshift and openshift- projects. 22.3.1.2. Matching order for egress firewall policy rules The egress firewall policy rules are evaluated in the order that they are defined, from first to last. The first rule that matches an egress connection from a pod applies. Any subsequent rules are ignored for that connection. 22.3.1.3. How Domain Name Server (DNS) resolution works If you use DNS names in any of your egress firewall policy rules, proper resolution of the domain names is subject to the following restrictions: Domain name updates are polled based on a time-to-live (TTL) duration. By default, the duration is 30 seconds. When the egress firewall controller queries the local name servers for a domain name, if the response includes a TTL that is less than 30 seconds, the controller sets the duration to the returned value. If the TTL in the response is greater than 30 minutes, the controller sets the duration to 30 minutes. If the TTL is between 30 seconds and 30 minutes, the controller ignores the value and sets the duration to 30 seconds. The pod must resolve the domain from the same local name servers when necessary. Otherwise, the IP addresses for the domain known by the egress firewall controller and the pod can be different. If the IP addresses for a hostname differ, the egress firewall might not be enforced consistently. Because the egress firewall controller and pods asynchronously poll the same local name server, the pod might obtain the updated IP address before the egress controller does, which causes a race condition. Due to this current limitation, domain name usage in EgressNetworkPolicy objects is only recommended for domains with infrequent IP address changes. Note The egress firewall always allows pods access to the external interface of the node that the pod is on for DNS resolution. If you use domain names in your egress firewall policy and your DNS resolution is not handled by a DNS server on the local node, then you must add egress firewall rules that allow access to your DNS server's IP addresses. 22.3.2. EgressNetworkPolicy custom resource (CR) object You can define one or more rules for an egress firewall. A rule is either an Allow rule or a Deny rule, with a specification for the traffic that the rule applies to. The following YAML describes an EgressNetworkPolicy CR object: EgressNetworkPolicy object apiVersion: network.openshift.io/v1 kind: EgressNetworkPolicy metadata: name: <name> 1 spec: egress: 2 ... 1 A name for your egress firewall policy.
2 A collection of one or more egress network policy rules as described in the following section. 22.3.2.1. EgressNetworkPolicy rules The following YAML describes an egress firewall rule object. The egress stanza expects an array of one or more objects. Egress policy rule stanza egress: - type: <type> 1 to: 2 cidrSelector: <cidr> 3 dnsName: <dns_name> 4 1 The type of rule. The value must be either Allow or Deny . 2 A stanza describing an egress traffic match rule. A value for either the cidrSelector field or the dnsName field for the rule. You cannot use both fields in the same rule. 3 An IP address range in CIDR format. 4 A domain name. 22.3.2.2. Example EgressNetworkPolicy CR objects The following example defines several egress firewall policy rules: apiVersion: network.openshift.io/v1 kind: EgressNetworkPolicy metadata: name: default spec: egress: 1 - type: Allow to: cidrSelector: 1.2.3.0/24 - type: Allow to: dnsName: www.example.com - type: Deny to: cidrSelector: 0.0.0.0/0 1 A collection of egress firewall policy rule objects. 22.3.3. Creating an egress firewall policy object As a cluster administrator, you can create an egress firewall policy object for a project. Important If the project already has an EgressNetworkPolicy object defined, you must edit the existing policy to make changes to the egress firewall rules. Prerequisites A cluster that uses the OpenShift SDN default Container Network Interface (CNI) network provider plugin. Install the OpenShift CLI ( oc ). You must log in to the cluster as a cluster administrator. Procedure Create a policy rule: Create a <policy_name>.yaml file where <policy_name> describes the egress policy rules. In the file you created, define an egress policy object. Enter the following command to create the policy object. Replace <policy_name> with the name of the policy and <project> with the project that the rule applies to. USD oc create -f <policy_name>.yaml -n <project> In the following example, a new EgressNetworkPolicy object is created in a project named project1 : USD oc create -f default.yaml -n project1 Example output egressnetworkpolicy.network.openshift.io/v1 created Optional: Save the <policy_name>.yaml file so that you can make changes later. 22.4. Editing an egress firewall for a project As a cluster administrator, you can modify network traffic rules for an existing egress firewall. 22.4.1. Viewing an EgressNetworkPolicy object You can view an EgressNetworkPolicy object in your cluster. Prerequisites A cluster using the OpenShift SDN default Container Network Interface (CNI) network provider plugin. Install the OpenShift Command-line Interface (CLI), commonly known as oc . You must log in to the cluster. Procedure Optional: To view the names of the EgressNetworkPolicy objects defined in your cluster, enter the following command: USD oc get egressnetworkpolicy --all-namespaces To inspect a policy, enter the following command. Replace <policy_name> with the name of the policy to inspect. USD oc describe egressnetworkpolicy <policy_name> Example output Name: default Namespace: project1 Created: 20 minutes ago Labels: <none> Annotations: <none> Rule: Allow to 1.2.3.0/24 Rule: Allow to www.example.com Rule: Deny to 0.0.0.0/0 22.5. Editing an egress firewall for a project As a cluster administrator, you can modify network traffic rules for an existing egress firewall. 22.5.1. Editing an EgressNetworkPolicy object As a cluster administrator, you can update the egress firewall for a project. 
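Before you edit an existing policy, it can be useful to confirm how the current rules behave from inside the project. The following commands are a minimal sketch that assumes the example policy shown earlier (an Allow rule for www.example.com followed by a Deny rule for 0.0.0.0/0 ) is active in project1 , and that a pod named test-pod with curl available exists in that project; the pod name and test destinations are illustrative only:
# Expected to succeed because of the Allow rule for www.example.com
oc exec -n project1 test-pod -- curl -sS --max-time 5 http://www.example.com
# Expected to be blocked by the final Deny rule for 0.0.0.0/0
oc exec -n project1 test-pod -- curl -sS --max-time 5 http://203.0.113.99 || echo "connection blocked as expected"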
Prerequisites A cluster using the OpenShift SDN default Container Network Interface (CNI) network provider plugin. Install the OpenShift CLI ( oc ). You must log in to the cluster as a cluster administrator. Procedure Find the name of the EgressNetworkPolicy object for the project. Replace <project> with the name of the project. USD oc get -n <project> egressnetworkpolicy Optional: If you did not save a copy of the EgressNetworkPolicy object when you created the egress network firewall, enter the following command to create a copy. USD oc get -n <project> egressnetworkpolicy <name> -o yaml > <filename>.yaml Replace <project> with the name of the project. Replace <name> with the name of the object. Replace <filename> with the name of the file to save the YAML to. After making changes to the policy rules, enter the following command to replace the EgressNetworkPolicy object. Replace <filename> with the name of the file containing the updated EgressNetworkPolicy object. USD oc replace -f <filename>.yaml 22.6. Removing an egress firewall from a project As a cluster administrator, you can remove an egress firewall from a project to remove all restrictions on network traffic from the project that leaves the OpenShift Container Platform cluster. 22.6.1. Removing an EgressNetworkPolicy object As a cluster administrator, you can remove an egress firewall from a project. Prerequisites A cluster using the OpenShift SDN default Container Network Interface (CNI) network provider plugin. Install the OpenShift CLI ( oc ). You must log in to the cluster as a cluster administrator. Procedure Find the name of the EgressNetworkPolicy object for the project. Replace <project> with the name of the project. USD oc get -n <project> egressnetworkpolicy Enter the following command to delete the EgressNetworkPolicy object. Replace <project> with the name of the project and <name> with the name of the object. USD oc delete -n <project> egressnetworkpolicy <name> 22.7. Considerations for the use of an egress router pod 22.7.1. About an egress router pod The OpenShift Container Platform egress router pod redirects traffic to a specified remote server from a private source IP address that is not used for any other purpose. An egress router pod can send network traffic to servers that are set up to allow access only from specific IP addresses. Note The egress router pod is not intended for every outgoing connection. Creating large numbers of egress router pods can exceed the limits of your network hardware. For example, creating an egress router pod for every project or application could exceed the number of local MAC addresses that the network interface can handle before reverting to filtering MAC addresses in software. Important The egress router image is not compatible with Amazon AWS, Azure Cloud, or any other cloud platform that does not support layer 2 manipulations due to their incompatibility with macvlan traffic. 22.7.1.1. Egress router modes In redirect mode , an egress router pod configures iptables rules to redirect traffic from its own IP address to one or more destination IP addresses. Client pods that need to use the reserved source IP address must be configured to access the service for the egress router rather than connecting directly to the destination IP. You can access the destination service and port from the application pod by using the curl command. For example: USD curl <router_service_IP> <port> In HTTP proxy mode , an egress router pod runs as an HTTP proxy on port 8080 . 
This mode only works for clients that are connecting to HTTP-based or HTTPS-based services, but usually requires fewer changes to the client pods to get them to work. Many programs can be told to use an HTTP proxy by setting an environment variable. In DNS proxy mode , an egress router pod runs as a DNS proxy for TCP-based services from its own IP address to one or more destination IP addresses. To make use of the reserved source IP address, client pods must be modified to connect to the egress router pod rather than connecting directly to the destination IP address. This modification ensures that external destinations treat traffic as though it were coming from a known source. Redirect mode works for all services except for HTTP and HTTPS. For HTTP and HTTPS services, use HTTP proxy mode. For TCP-based services with IP addresses or domain names, use DNS proxy mode. 22.7.1.2. Egress router pod implementation The egress router pod setup is performed by an initialization container. That container runs in a privileged context so that it can configure the macvlan interface and set up iptables rules. After the initialization container finishes setting up the iptables rules, it exits. The egress router pod then executes the container to handle the egress router traffic. The image used varies depending on the egress router mode. The environment variables determine which addresses the egress-router image uses. The image configures the macvlan interface to use EGRESS_SOURCE as its IP address, with EGRESS_GATEWAY as the IP address for the gateway. Network Address Translation (NAT) rules are set up so that connections to the cluster IP address of the pod on any TCP or UDP port are redirected to the same port on the IP address specified by the EGRESS_DESTINATION variable. If only some of the nodes in your cluster are capable of claiming the specified source IP address and using the specified gateway, you can specify a nodeName or nodeSelector to identify which nodes are acceptable. 22.7.1.3. Deployment considerations An egress router pod adds an additional IP address and MAC address to the primary network interface of the node. As a result, you might need to configure your hypervisor or cloud provider to allow the additional address. Red Hat OpenStack Platform (RHOSP) If you deploy OpenShift Container Platform on RHOSP, you must allow traffic from the IP and MAC addresses of the egress router pod on your OpenStack environment. If you do not allow the traffic, then communication will fail: USD openstack port set --allowed-address \ ip_address=<ip_address>,mac_address=<mac_address> <neutron_port_uuid> Red Hat Virtualization (RHV) If you are using RHV , you must select No Network Filter for the Virtual network interface controller (vNIC). VMware vSphere If you are using VMware vSphere, see the VMware documentation for securing vSphere standard switches . View and change VMware vSphere default settings by selecting the host virtual switch from the vSphere Web Client. Specifically, ensure that the following are enabled: MAC Address Changes Forged Transmits Promiscuous Mode Operation 22.7.1.4. Failover configuration To avoid downtime, you can deploy an egress router pod with a Deployment resource, as in the following example. To create a new Service object for the example deployment, use the oc expose deployment/egress-demo-controller command.
apiVersion: apps/v1 kind: Deployment metadata: name: egress-demo-controller spec: replicas: 1 1 selector: matchLabels: name: egress-router template: metadata: name: egress-router labels: name: egress-router annotations: pod.network.openshift.io/assign-macvlan: "true" spec: 2 initContainers: ... containers: ... 1 Ensure that replicas is set to 1 , because only one pod can use a given egress source IP address at any time. This means that only a single copy of the router runs on a node. 2 Specify the Pod object template for the egress router pod. 22.7.2. Additional resources Deploying an egress router in redirection mode Deploying an egress router in HTTP proxy mode Deploying an egress router in DNS proxy mode 22.8. Deploying an egress router pod in redirect mode As a cluster administrator, you can deploy an egress router pod that is configured to redirect traffic to specified destination IP addresses. 22.8.1. Egress router pod specification for redirect mode Define the configuration for an egress router pod in the Pod object. The following YAML describes the fields for the configuration of an egress router pod in redirect mode: apiVersion: v1 kind: Pod metadata: name: egress-1 labels: name: egress-1 annotations: pod.network.openshift.io/assign-macvlan: "true" 1 spec: initContainers: - name: egress-router image: registry.redhat.io/openshift4/ose-egress-router securityContext: privileged: true env: - name: EGRESS_SOURCE 2 value: <egress_router> - name: EGRESS_GATEWAY 3 value: <egress_gateway> - name: EGRESS_DESTINATION 4 value: <egress_destination> - name: EGRESS_ROUTER_MODE value: init containers: - name: egress-router-wait image: registry.redhat.io/openshift4/ose-pod 1 The annotation tells OpenShift Container Platform to create a macvlan network interface on the primary network interface controller (NIC) and move that macvlan interface into the pod's network namespace. You must include the quotation marks around the "true" value. To have OpenShift Container Platform create the macvlan interface on a different NIC interface, set the annotation value to the name of that interface. For example, eth1 . 2 IP address from the physical network that the node is on that is reserved for use by the egress router pod. Optional: You can include the subnet length, the /24 suffix, so that a proper route to the local subnet is set. If you do not specify a subnet length, then the egress router can access only the host specified with the EGRESS_GATEWAY variable and no other hosts on the subnet. 3 Same value as the default gateway used by the node. 4 External server to direct traffic to. Using this example, connections to the pod are redirected to 203.0.113.25 , with a source IP address of 192.168.12.99 . Example egress router pod specification apiVersion: v1 kind: Pod metadata: name: egress-multi labels: name: egress-multi annotations: pod.network.openshift.io/assign-macvlan: "true" spec: initContainers: - name: egress-router image: registry.redhat.io/openshift4/ose-egress-router securityContext: privileged: true env: - name: EGRESS_SOURCE value: 192.168.12.99/24 - name: EGRESS_GATEWAY value: 192.168.12.1 - name: EGRESS_DESTINATION value: | 80 tcp 203.0.113.25 8080 tcp 203.0.113.26 80 8443 tcp 203.0.113.26 443 203.0.113.27 - name: EGRESS_ROUTER_MODE value: init containers: - name: egress-router-wait image: registry.redhat.io/openshift4/ose-pod 22.8.2. 
Egress destination configuration format When an egress router pod is deployed in redirect mode, you can specify redirection rules by using one or more of the following formats: <port> <protocol> <ip_address> - Incoming connections to the given <port> should be redirected to the same port on the given <ip_address> . <protocol> is either tcp or udp . <port> <protocol> <ip_address> <remote_port> - As above, except that the connection is redirected to a different <remote_port> on <ip_address> . <ip_address> - If the last line is a single IP address, then any connections on any other port will be redirected to the corresponding port on that IP address. If there is no fallback IP address then connections on other ports are rejected. In the example that follows several rules are defined: The first line redirects traffic from local port 80 to port 80 on 203.0.113.25 . The second and third lines redirect local ports 8080 and 8443 to remote ports 80 and 443 on 203.0.113.26 . The last line matches traffic for any ports not specified in the rules. Example configuration 80 tcp 203.0.113.25 8080 tcp 203.0.113.26 80 8443 tcp 203.0.113.26 443 203.0.113.27 22.8.3. Deploying an egress router pod in redirect mode In redirect mode , an egress router pod sets up iptables rules to redirect traffic from its own IP address to one or more destination IP addresses. Client pods that need to use the reserved source IP address must be configured to access the service for the egress router rather than connecting directly to the destination IP. You can access the destination service and port from the application pod by using the curl command. For example: USD curl <router_service_IP> <port> Prerequisites Install the OpenShift CLI ( oc ). Log in as a user with cluster-admin privileges. Procedure Create an egress router pod. To ensure that other pods can find the IP address of the egress router pod, create a service to point to the egress router pod, as in the following example: apiVersion: v1 kind: Service metadata: name: egress-1 spec: ports: - name: http port: 80 - name: https port: 443 type: ClusterIP selector: name: egress-1 Your pods can now connect to this service. Their connections are redirected to the corresponding ports on the external server, using the reserved egress IP address. 22.8.4. Additional resources Configuring an egress router destination mappings with a ConfigMap 22.9. Deploying an egress router pod in HTTP proxy mode As a cluster administrator, you can deploy an egress router pod configured to proxy traffic to specified HTTP and HTTPS-based services. 22.9.1. Egress router pod specification for HTTP mode Define the configuration for an egress router pod in the Pod object. The following YAML describes the fields for the configuration of an egress router pod in HTTP mode: apiVersion: v1 kind: Pod metadata: name: egress-1 labels: name: egress-1 annotations: pod.network.openshift.io/assign-macvlan: "true" 1 spec: initContainers: - name: egress-router image: registry.redhat.io/openshift4/ose-egress-router securityContext: privileged: true env: - name: EGRESS_SOURCE 2 value: <egress-router> - name: EGRESS_GATEWAY 3 value: <egress-gateway> - name: EGRESS_ROUTER_MODE value: http-proxy containers: - name: egress-router-pod image: registry.redhat.io/openshift4/ose-egress-http-proxy env: - name: EGRESS_HTTP_PROXY_DESTINATION 4 value: |- ... ... 
1 The annotation tells OpenShift Container Platform to create a macvlan network interface on the primary network interface controller (NIC) and move that macvlan interface into the pod's network namespace. You must include the quotation marks around the "true" value. To have OpenShift Container Platform create the macvlan interface on a different NIC interface, set the annotation value to the name of that interface. For example, eth1 . 2 IP address from the physical network that the node is on that is reserved for use by the egress router pod. Optional: You can include the subnet length, the /24 suffix, so that a proper route to the local subnet is set. If you do not specify a subnet length, then the egress router can access only the host specified with the EGRESS_GATEWAY variable and no other hosts on the subnet. 3 Same value as the default gateway used by the node. 4 A string or YAML multi-line string specifying how to configure the proxy. Note that this is specified as an environment variable in the HTTP proxy container, not with the other environment variables in the init container. 22.9.2. Egress destination configuration format When an egress router pod is deployed in HTTP proxy mode, you can specify redirection rules by using one or more of the following formats. Each line in the configuration specifies one group of connections to allow or deny: An IP address allows connections to that IP address, such as 192.168.1.1 . A CIDR range allows connections to that CIDR range, such as 192.168.1.0/24 . A hostname allows proxying to that host, such as www.example.com . A domain name preceded by *. allows proxying to that domain and all of its subdomains, such as *.example.com . A ! followed by any of the match expressions denies the connection instead. If the last line is * , then anything that is not explicitly denied is allowed. Otherwise, anything that is not allowed is denied. You can also use * to allow connections to all remote destinations. Example configuration !*.example.com !192.168.1.0/24 192.168.2.1 * 22.9.3. Deploying an egress router pod in HTTP proxy mode In HTTP proxy mode , an egress router pod runs as an HTTP proxy on port 8080 . This mode only works for clients that are connecting to HTTP-based or HTTPS-based services, but usually requires fewer changes to the client pods to get them to work. Many programs can be told to use an HTTP proxy by setting an environment variable. Prerequisites Install the OpenShift CLI ( oc ). Log in as a user with cluster-admin privileges. Procedure Create an egress router pod. To ensure that other pods can find the IP address of the egress router pod, create a service to point to the egress router pod, as in the following example: apiVersion: v1 kind: Service metadata: name: egress-1 spec: ports: - name: http-proxy port: 8080 1 type: ClusterIP selector: name: egress-1 1 Ensure the http port is set to 8080 . To configure the client pod (not the egress proxy pod) to use the HTTP proxy, set the http_proxy or https_proxy variables: apiVersion: v1 kind: Pod metadata: name: app-1 labels: name: app-1 spec: containers: env: - name: http_proxy value: http://egress-1:8080/ 1 - name: https_proxy value: http://egress-1:8080/ ... 1 The service created in the previous step. Note Using the http_proxy and https_proxy environment variables is not necessary for all setups. If the above does not create a working setup, then consult the documentation for the tool or software you are running in the pod. 22.9.4.
Additional resources Configuring an egress router destination mappings with a ConfigMap 22.10. Deploying an egress router pod in DNS proxy mode As a cluster administrator, you can deploy an egress router pod configured to proxy traffic to specified DNS names and IP addresses. 22.10.1. Egress router pod specification for DNS mode Define the configuration for an egress router pod in the Pod object. The following YAML describes the fields for the configuration of an egress router pod in DNS mode: apiVersion: v1 kind: Pod metadata: name: egress-1 labels: name: egress-1 annotations: pod.network.openshift.io/assign-macvlan: "true" 1 spec: initContainers: - name: egress-router image: registry.redhat.io/openshift4/ose-egress-router securityContext: privileged: true env: - name: EGRESS_SOURCE 2 value: <egress-router> - name: EGRESS_GATEWAY 3 value: <egress-gateway> - name: EGRESS_ROUTER_MODE value: dns-proxy containers: - name: egress-router-pod image: registry.redhat.io/openshift4/ose-egress-dns-proxy securityContext: privileged: true env: - name: EGRESS_DNS_PROXY_DESTINATION 4 value: |- ... - name: EGRESS_DNS_PROXY_DEBUG 5 value: "1" ... 1 The annotation tells OpenShift Container Platform to create a macvlan network interface on the primary network interface controller (NIC) and move that macvlan interface into the pod's network namespace. You must include the quotation marks around the "true" value. To have OpenShift Container Platform create the macvlan interface on a different NIC interface, set the annotation value to the name of that interface. For example, eth1 . 2 IP address from the physical network that the node is on that is reserved for use by the egress router pod. Optional: You can include the subnet length, the /24 suffix, so that a proper route to the local subnet is set. If you do not specify a subnet length, then the egress router can access only the host specified with the EGRESS_GATEWAY variable and no other hosts on the subnet. 3 Same value as the default gateway used by the node. 4 Specify a list of one or more proxy destinations. 5 Optional: Specify to output the DNS proxy log output to stdout . 22.10.2. Egress destination configuration format When the router is deployed in DNS proxy mode, you specify a list of port and destination mappings. A destination may be either an IP address or a DNS name. An egress router pod supports the following formats for specifying port and destination mappings: Port and remote address You can specify a source port and a destination host by using the two field format: <port> <remote_address> . The host can be an IP address or a DNS name. If a DNS name is provided, DNS resolution occurs at runtime. For a given host, the proxy connects to the specified source port on the destination host when connecting to the destination host IP address. Port and remote address pair example 80 172.16.12.11 100 example.com Port, remote address, and remote port You can specify a source port, a destination host, and a destination port by using the three field format: <port> <remote_address> <remote_port> . The three field format behaves identically to the two field version, with the exception that the destination port can be different than the source port. Port, remote address, and remote port example 8080 192.168.60.252 80 8443 web.example.com 443 22.10.3. Deploying an egress router pod in DNS proxy mode In DNS proxy mode , an egress router pod acts as a DNS proxy for TCP-based services from its own IP address to one or more destination IP addresses. 
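As with the other modes, client pods are configured to connect to the egress router service rather than to the external destination. For example, after you complete the procedure that follows, a client pod can reach the first mapped destination through the egress-dns-svc service. This is a minimal sketch; the client pod name app-1 , the presence of curl in its image, and an HTTP service listening on the mapped destination port are assumptions for illustration:
oc exec app-1 -- curl -s --max-time 5 http://egress-dns-svc:80/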
Prerequisites Install the OpenShift CLI ( oc ). Log in as a user with cluster-admin privileges. Procedure Create an egress router pod. Create a service for the egress router pod: Create a file named egress-router-service.yaml that contains the following YAML. Set spec.ports to the list of ports that you defined previously for the EGRESS_DNS_PROXY_DESTINATION environment variable. apiVersion: v1 kind: Service metadata: name: egress-dns-svc spec: ports: ... type: ClusterIP selector: name: egress-dns-proxy For example: apiVersion: v1 kind: Service metadata: name: egress-dns-svc spec: ports: - name: con1 protocol: TCP port: 80 targetPort: 80 - name: con2 protocol: TCP port: 100 targetPort: 100 type: ClusterIP selector: name: egress-dns-proxy To create the service, enter the following command: USD oc create -f egress-router-service.yaml Pods can now connect to this service. The connections are proxied to the corresponding ports on the external server, using the reserved egress IP address. 22.10.4. Additional resources Configuring an egress router destination mappings with a ConfigMap 22.11. Configuring an egress router pod destination list from a config map As a cluster administrator, you can define a ConfigMap object that specifies destination mappings for an egress router pod. The specific format of the configuration depends on the type of egress router pod. For details on the format, refer to the documentation for the specific egress router pod. 22.11.1. Configuring an egress router destination mappings with a config map For a large or frequently-changing set of destination mappings, you can use a config map to externally maintain the list. An advantage of this approach is that permission to edit the config map can be delegated to users without cluster-admin privileges. Because the egress router pod requires a privileged container, it is not possible for users without cluster-admin privileges to edit the pod definition directly. Note The egress router pod does not automatically update when the config map changes. You must restart the egress router pod to get updates. Prerequisites Install the OpenShift CLI ( oc ). Log in as a user with cluster-admin privileges. Procedure Create a file containing the mapping data for the egress router pod, as in the following example: You can put blank lines and comments into this file. Create a ConfigMap object from the file: USD oc delete configmap egress-routes --ignore-not-found USD oc create configmap egress-routes \ --from-file=destination=my-egress-destination.txt In the command, the egress-routes value is the name of the ConfigMap object to create and my-egress-destination.txt is the name of the file that the data is read from. Tip You can alternatively apply the following YAML to create the config map: apiVersion: v1 kind: ConfigMap metadata: name: egress-routes data: destination: | # Egress routes for Project "Test", version 3 80 tcp 203.0.113.25 8080 tcp 203.0.113.26 80 8443 tcp 203.0.113.26 443 # Fallback 203.0.113.27 Create an egress router pod definition and specify the configMapKeyRef stanza for the EGRESS_DESTINATION field in the environment stanza: ... env: - name: EGRESS_DESTINATION valueFrom: configMapKeyRef: name: egress-routes key: destination ... 22.11.2. Additional resources Redirect mode HTTP proxy mode DNS proxy mode 22.12. Enabling multicast for a project 22.12.1. About multicast With IP multicast, data is broadcast to many IP addresses simultaneously. 
Important At this time, multicast is best used for low-bandwidth coordination or service discovery and not a high-bandwidth solution. By default, network policies affect all connections in a namespace. However, multicast is unaffected by network policies. If multicast is enabled in the same namespace as your network policies, it is always allowed, even if there is a deny-all network policy. Cluster administrators should consider the implications to the exemption of multicast from network policies before enabling it. Multicast traffic between OpenShift Container Platform pods is disabled by default. If you are using the OpenShift SDN default Container Network Interface (CNI) network provider, you can enable multicast on a per-project basis. When using the OpenShift SDN network plugin in networkpolicy isolation mode: Multicast packets sent by a pod will be delivered to all other pods in the project, regardless of NetworkPolicy objects. Pods might be able to communicate over multicast even when they cannot communicate over unicast. Multicast packets sent by a pod in one project will never be delivered to pods in any other project, even if there are NetworkPolicy objects that allow communication between the projects. When using the OpenShift SDN network plugin in multitenant isolation mode: Multicast packets sent by a pod will be delivered to all other pods in the project. Multicast packets sent by a pod in one project will be delivered to pods in other projects only if each project is joined together and multicast is enabled in each joined project. 22.12.2. Enabling multicast between pods You can enable multicast between pods for your project. Prerequisites Install the OpenShift CLI ( oc ). You must log in to the cluster with a user that has the cluster-admin role. Procedure Run the following command to enable multicast for a project. Replace <namespace> with the namespace for the project you want to enable multicast for. USD oc annotate netnamespace <namespace> \ netnamespace.network.openshift.io/multicast-enabled=true Verification To verify that multicast is enabled for a project, complete the following procedure: Change your current project to the project that you enabled multicast for. Replace <project> with the project name. USD oc project <project> Create a pod to act as a multicast receiver: USD cat <<EOF| oc create -f - apiVersion: v1 kind: Pod metadata: name: mlistener labels: app: multicast-verify spec: containers: - name: mlistener image: registry.access.redhat.com/ubi8 command: ["/bin/sh", "-c"] args: ["dnf -y install socat hostname && sleep inf"] ports: - containerPort: 30102 name: mlistener protocol: UDP EOF Create a pod to act as a multicast sender: USD cat <<EOF| oc create -f - apiVersion: v1 kind: Pod metadata: name: msender labels: app: multicast-verify spec: containers: - name: msender image: registry.access.redhat.com/ubi8 command: ["/bin/sh", "-c"] args: ["dnf -y install socat && sleep inf"] EOF In a new terminal window or tab, start the multicast listener. Get the IP address for the Pod: USD POD_IP=USD(oc get pods mlistener -o jsonpath='{.status.podIP}') Start the multicast listener by entering the following command: USD oc exec mlistener -i -t -- \ socat UDP4-RECVFROM:30102,ip-add-membership=224.1.0.1:USDPOD_IP,fork EXEC:hostname Start the multicast transmitter. 
Get the pod network IP address range: USD CIDR=USD(oc get Network.config.openshift.io cluster \ -o jsonpath='{.status.clusterNetwork[0].cidr}') To send a multicast message, enter the following command: USD oc exec msender -i -t -- \ /bin/bash -c "echo | socat STDIO UDP4-DATAGRAM:224.1.0.1:30102,range=USDCIDR,ip-multicast-ttl=64" If multicast is working, the command returns the following output: mlistener 22.13. Disabling multicast for a project 22.13.1. Disabling multicast between pods You can disable multicast between pods for your project. Prerequisites Install the OpenShift CLI ( oc ). You must log in to the cluster with a user that has the cluster-admin role. Procedure Disable multicast by running the following command: USD oc annotate netnamespace <namespace> \ 1 netnamespace.network.openshift.io/multicast-enabled- 1 The namespace for the project you want to disable multicast for. 22.14. Configuring network isolation using OpenShift SDN When your cluster is configured to use the multitenant isolation mode for the OpenShift SDN CNI plugin, each project is isolated by default. Network traffic is not allowed between pods or services in different projects in multitenant isolation mode. You can change the behavior of multitenant isolation for a project in two ways: You can join one or more projects, allowing network traffic between pods and services in different projects. You can disable network isolation for a project. It will be globally accessible, accepting network traffic from pods and services in all other projects. A globally accessible project can access pods and services in all other projects. 22.14.1. Prerequisites You must have a cluster configured to use the OpenShift SDN Container Network Interface (CNI) plugin in multitenant isolation mode. 22.14.2. Joining projects You can join two or more projects to allow network traffic between pods and services in different projects. Prerequisites Install the OpenShift CLI ( oc ). You must log in to the cluster with a user that has the cluster-admin role. Procedure Use the following command to join projects to an existing project network: USD oc adm pod-network join-projects --to=<project1> <project2> <project3> Alternatively, instead of specifying specific project names, you can use the --selector=<project_selector> option to specify projects based upon an associated label. Optional: Run the following command to view the pod networks that you have joined together: USD oc get netnamespaces Projects in the same pod-network have the same network ID in the NETID column. 22.14.3. Isolating a project You can isolate a project so that pods and services in other projects cannot access its pods and services. Prerequisites Install the OpenShift CLI ( oc ). You must log in to the cluster with a user that has the cluster-admin role. Procedure To isolate the projects in the cluster, run the following command: USD oc adm pod-network isolate-projects <project1> <project2> Alternatively, instead of specifying specific project names, you can use the --selector=<project_selector> option to specify projects based upon an associated label. 22.14.4. Disabling network isolation for a project You can disable network isolation for a project. Prerequisites Install the OpenShift CLI ( oc ). You must log in to the cluster with a user that has the cluster-admin role. 
Procedure Run the following command for the project: USD oc adm pod-network make-projects-global <project1> <project2> Alternatively, instead of specifying specific project names, you can use the --selector=<project_selector> option to specify projects based upon an associated label. 22.15. Configuring kube-proxy The Kubernetes network proxy (kube-proxy) runs on each node and is managed by the Cluster Network Operator (CNO). kube-proxy maintains network rules for forwarding connections for endpoints associated with services. 22.15.1. About iptables rules synchronization The synchronization period determines how frequently the Kubernetes network proxy (kube-proxy) syncs the iptables rules on a node. A sync begins when either of the following events occurs: An event occurs, such as service or endpoint is added to or removed from the cluster. The time since the last sync exceeds the sync period defined for kube-proxy. 22.15.2. kube-proxy configuration parameters You can modify the following kubeProxyConfig parameters. Note Because of performance improvements introduced in OpenShift Container Platform 4.3 and greater, adjusting the iptablesSyncPeriod parameter is no longer necessary. Table 22.2. Parameters Parameter Description Values Default iptablesSyncPeriod The refresh period for iptables rules. A time interval, such as 30s or 2m . Valid suffixes include s , m , and h and are described in the Go time package documentation. 30s proxyArguments.iptables-min-sync-period The minimum duration before refreshing iptables rules. This parameter ensures that the refresh does not happen too frequently. By default, a refresh starts as soon as a change that affects iptables rules occurs. A time interval, such as 30s or 2m . Valid suffixes include s , m , and h and are described in the Go time package 0s 22.15.3. Modifying the kube-proxy configuration You can modify the Kubernetes network proxy configuration for your cluster. Prerequisites Install the OpenShift CLI ( oc ). Log in to a running cluster with the cluster-admin role. Procedure Edit the Network.operator.openshift.io custom resource (CR) by running the following command: USD oc edit network.operator.openshift.io cluster Modify the kubeProxyConfig parameter in the CR with your changes to the kube-proxy configuration, such as in the following example CR: apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec: kubeProxyConfig: iptablesSyncPeriod: 30s proxyArguments: iptables-min-sync-period: ["30s"] Save the file and exit the text editor. The syntax is validated by the oc command when you save the file and exit the editor. If your modifications contain a syntax error, the editor opens the file and displays an error message. Enter the following command to confirm the configuration update: USD oc get networks.operator.openshift.io -o yaml Example output apiVersion: v1 items: - apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 defaultNetwork: type: OpenShiftSDN kubeProxyConfig: iptablesSyncPeriod: 30s proxyArguments: iptables-min-sync-period: - 30s serviceNetwork: - 172.30.0.0/16 status: {} kind: List Optional: Enter the following command to confirm that the Cluster Network Operator accepted the configuration change: USD oc get clusteroperator network Example output NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE network 4.1.0-0.9 True False False 1m The AVAILABLE field is True when the configuration update is applied successfully. | [
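If you prefer to apply the same change from a script instead of an interactive editor, the kubeProxyConfig stanza can also be supplied as a merge patch, for example with oc patch network.operator.openshift.io cluster --type=merge -p '<patch>'. The following sketch shows the shape of such a patch; it reuses the illustrative interval values from the example above, and depending on your oc client you might need to supply the patch body as JSON rather than YAML.

Example merge patch for the kube-proxy configuration

spec:
  kubeProxyConfig:
    iptablesSyncPeriod: 30s
    proxyArguments:
      iptables-min-sync-period: ["30s"]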
"IP capacity = public cloud default capacity - sum(current IP assignments)",
"cloud.network.openshift.io/egress-ipconfig: [ { \"interface\":\"eni-078d267045138e436\", \"ifaddr\":{\"ipv4\":\"10.0.128.0/18\"}, \"capacity\":{\"ipv4\":14,\"ipv6\":15} } ]",
"cloud.network.openshift.io/egress-ipconfig: [ { \"interface\":\"nic0\", \"ifaddr\":{\"ipv4\":\"10.0.128.0/18\"}, \"capacity\":{\"ip\":14} } ]",
"oc patch netnamespace <project_name> --type=merge -p '{ \"egressIPs\": [ \"<ip_address>\" ] }'",
"oc patch netnamespace project1 --type=merge -p '{\"egressIPs\": [\"192.168.1.100\"]}' oc patch netnamespace project2 --type=merge -p '{\"egressIPs\": [\"192.168.1.101\"]}'",
"oc patch hostsubnet <node_name> --type=merge -p '{ \"egressCIDRs\": [ \"<ip_address_range>\", \"<ip_address_range>\" ] }'",
"oc patch hostsubnet node1 --type=merge -p '{\"egressCIDRs\": [\"192.168.1.0/24\"]}' oc patch hostsubnet node2 --type=merge -p '{\"egressCIDRs\": [\"192.168.1.0/24\"]}'",
"oc patch netnamespace <project_name> --type=merge -p '{ \"egressIPs\": [ \"<ip_address>\" ] }'",
"oc patch netnamespace project1 --type=merge -p '{\"egressIPs\": [\"192.168.1.100\",\"192.168.1.101\"]}'",
"oc patch hostsubnet <node_name> --type=merge -p '{ \"egressIPs\": [ \"<ip_address>\", \"<ip_address>\" ] }'",
"oc patch hostsubnet node1 --type=merge -p '{\"egressIPs\": [\"192.168.1.100\", \"192.168.1.101\", \"192.168.1.102\"]}'",
"apiVersion: network.openshift.io/v1 kind: EgressNetworkPolicy metadata: name: default namespace: <namespace> 1 spec: egress: - to: cidrSelector: <api_server_address_range> 2 type: Allow - to: cidrSelector: 0.0.0.0/0 3 type: Deny",
"apiVersion: network.openshift.io/v1 kind: EgressNetworkPolicy metadata: name: <name> 1 spec: egress: 2",
"egress: - type: <type> 1 to: 2 cidrSelector: <cidr> 3 dnsName: <dns_name> 4",
"apiVersion: network.openshift.io/v1 kind: EgressNetworkPolicy metadata: name: default spec: egress: 1 - type: Allow to: cidrSelector: 1.2.3.0/24 - type: Allow to: dnsName: www.example.com - type: Deny to: cidrSelector: 0.0.0.0/0",
"oc create -f <policy_name>.yaml -n <project>",
"oc create -f default.yaml -n project1",
"egressnetworkpolicy.network.openshift.io/v1 created",
"oc get egressnetworkpolicy --all-namespaces",
"oc describe egressnetworkpolicy <policy_name>",
"Name: default Namespace: project1 Created: 20 minutes ago Labels: <none> Annotations: <none> Rule: Allow to 1.2.3.0/24 Rule: Allow to www.example.com Rule: Deny to 0.0.0.0/0",
"oc get -n <project> egressnetworkpolicy",
"oc get -n <project> egressnetworkpolicy <name> -o yaml > <filename>.yaml",
"oc replace -f <filename>.yaml",
"oc get -n <project> egressnetworkpolicy",
"oc delete -n <project> egressnetworkpolicy <name>",
"curl <router_service_IP> <port>",
"openstack port set --allowed-address ip_address=<ip_address>,mac_address=<mac_address> <neutron_port_uuid>",
"apiVersion: apps/v1 kind: Deployment metadata: name: egress-demo-controller spec: replicas: 1 1 selector: matchLabels: name: egress-router template: metadata: name: egress-router labels: name: egress-router annotations: pod.network.openshift.io/assign-macvlan: \"true\" spec: 2 initContainers: containers:",
"apiVersion: v1 kind: Pod metadata: name: egress-1 labels: name: egress-1 annotations: pod.network.openshift.io/assign-macvlan: \"true\" 1 spec: initContainers: - name: egress-router image: registry.redhat.io/openshift4/ose-egress-router securityContext: privileged: true env: - name: EGRESS_SOURCE 2 value: <egress_router> - name: EGRESS_GATEWAY 3 value: <egress_gateway> - name: EGRESS_DESTINATION 4 value: <egress_destination> - name: EGRESS_ROUTER_MODE value: init containers: - name: egress-router-wait image: registry.redhat.io/openshift4/ose-pod",
"apiVersion: v1 kind: Pod metadata: name: egress-multi labels: name: egress-multi annotations: pod.network.openshift.io/assign-macvlan: \"true\" spec: initContainers: - name: egress-router image: registry.redhat.io/openshift4/ose-egress-router securityContext: privileged: true env: - name: EGRESS_SOURCE value: 192.168.12.99/24 - name: EGRESS_GATEWAY value: 192.168.12.1 - name: EGRESS_DESTINATION value: | 80 tcp 203.0.113.25 8080 tcp 203.0.113.26 80 8443 tcp 203.0.113.26 443 203.0.113.27 - name: EGRESS_ROUTER_MODE value: init containers: - name: egress-router-wait image: registry.redhat.io/openshift4/ose-pod",
"80 tcp 203.0.113.25 8080 tcp 203.0.113.26 80 8443 tcp 203.0.113.26 443 203.0.113.27",
"curl <router_service_IP> <port>",
"apiVersion: v1 kind: Service metadata: name: egress-1 spec: ports: - name: http port: 80 - name: https port: 443 type: ClusterIP selector: name: egress-1",
"apiVersion: v1 kind: Pod metadata: name: egress-1 labels: name: egress-1 annotations: pod.network.openshift.io/assign-macvlan: \"true\" 1 spec: initContainers: - name: egress-router image: registry.redhat.io/openshift4/ose-egress-router securityContext: privileged: true env: - name: EGRESS_SOURCE 2 value: <egress-router> - name: EGRESS_GATEWAY 3 value: <egress-gateway> - name: EGRESS_ROUTER_MODE value: http-proxy containers: - name: egress-router-pod image: registry.redhat.io/openshift4/ose-egress-http-proxy env: - name: EGRESS_HTTP_PROXY_DESTINATION 4 value: |-",
"!*.example.com !192.168.1.0/24 192.168.2.1 *",
"apiVersion: v1 kind: Service metadata: name: egress-1 spec: ports: - name: http-proxy port: 8080 1 type: ClusterIP selector: name: egress-1",
"apiVersion: v1 kind: Pod metadata: name: app-1 labels: name: app-1 spec: containers: env: - name: http_proxy value: http://egress-1:8080/ 1 - name: https_proxy value: http://egress-1:8080/",
"apiVersion: v1 kind: Pod metadata: name: egress-1 labels: name: egress-1 annotations: pod.network.openshift.io/assign-macvlan: \"true\" 1 spec: initContainers: - name: egress-router image: registry.redhat.io/openshift4/ose-egress-router securityContext: privileged: true env: - name: EGRESS_SOURCE 2 value: <egress-router> - name: EGRESS_GATEWAY 3 value: <egress-gateway> - name: EGRESS_ROUTER_MODE value: dns-proxy containers: - name: egress-router-pod image: registry.redhat.io/openshift4/ose-egress-dns-proxy securityContext: privileged: true env: - name: EGRESS_DNS_PROXY_DESTINATION 4 value: |- - name: EGRESS_DNS_PROXY_DEBUG 5 value: \"1\"",
"80 172.16.12.11 100 example.com",
"8080 192.168.60.252 80 8443 web.example.com 443",
"apiVersion: v1 kind: Service metadata: name: egress-dns-svc spec: ports: type: ClusterIP selector: name: egress-dns-proxy",
"apiVersion: v1 kind: Service metadata: name: egress-dns-svc spec: ports: - name: con1 protocol: TCP port: 80 targetPort: 80 - name: con2 protocol: TCP port: 100 targetPort: 100 type: ClusterIP selector: name: egress-dns-proxy",
"oc create -f egress-router-service.yaml",
"Egress routes for Project \"Test\", version 3 80 tcp 203.0.113.25 8080 tcp 203.0.113.26 80 8443 tcp 203.0.113.26 443 Fallback 203.0.113.27",
"oc delete configmap egress-routes --ignore-not-found",
"oc create configmap egress-routes --from-file=destination=my-egress-destination.txt",
"apiVersion: v1 kind: ConfigMap metadata: name: egress-routes data: destination: | # Egress routes for Project \"Test\", version 3 80 tcp 203.0.113.25 8080 tcp 203.0.113.26 80 8443 tcp 203.0.113.26 443 # Fallback 203.0.113.27",
"env: - name: EGRESS_DESTINATION valueFrom: configMapKeyRef: name: egress-routes key: destination",
"oc annotate netnamespace <namespace> netnamespace.network.openshift.io/multicast-enabled=true",
"oc project <project>",
"cat <<EOF| oc create -f - apiVersion: v1 kind: Pod metadata: name: mlistener labels: app: multicast-verify spec: containers: - name: mlistener image: registry.access.redhat.com/ubi8 command: [\"/bin/sh\", \"-c\"] args: [\"dnf -y install socat hostname && sleep inf\"] ports: - containerPort: 30102 name: mlistener protocol: UDP EOF",
"cat <<EOF| oc create -f - apiVersion: v1 kind: Pod metadata: name: msender labels: app: multicast-verify spec: containers: - name: msender image: registry.access.redhat.com/ubi8 command: [\"/bin/sh\", \"-c\"] args: [\"dnf -y install socat && sleep inf\"] EOF",
"POD_IP=USD(oc get pods mlistener -o jsonpath='{.status.podIP}')",
"oc exec mlistener -i -t -- socat UDP4-RECVFROM:30102,ip-add-membership=224.1.0.1:USDPOD_IP,fork EXEC:hostname",
"CIDR=USD(oc get Network.config.openshift.io cluster -o jsonpath='{.status.clusterNetwork[0].cidr}')",
"oc exec msender -i -t -- /bin/bash -c \"echo | socat STDIO UDP4-DATAGRAM:224.1.0.1:30102,range=USDCIDR,ip-multicast-ttl=64\"",
"mlistener",
"oc annotate netnamespace <namespace> \\ 1 netnamespace.network.openshift.io/multicast-enabled-",
"oc adm pod-network join-projects --to=<project1> <project2> <project3>",
"oc get netnamespaces",
"oc adm pod-network isolate-projects <project1> <project2>",
"oc adm pod-network make-projects-global <project1> <project2>",
"oc edit network.operator.openshift.io cluster",
"apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec: kubeProxyConfig: iptablesSyncPeriod: 30s proxyArguments: iptables-min-sync-period: [\"30s\"]",
"oc get networks.operator.openshift.io -o yaml",
"apiVersion: v1 items: - apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 defaultNetwork: type: OpenShiftSDN kubeProxyConfig: iptablesSyncPeriod: 30s proxyArguments: iptables-min-sync-period: - 30s serviceNetwork: - 172.30.0.0/16 status: {} kind: List",
"oc get clusteroperator network",
"NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE network 4.1.0-0.9 True False False 1m"
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.11/html/networking/openshift-sdn-default-cni-network-provider |
Chapter 1. OpenShift Container Platform installation overview | Chapter 1. OpenShift Container Platform installation overview 1.1. About OpenShift Container Platform installation The OpenShift Container Platform installation program offers four methods for deploying a cluster which are detailed in the following list: Interactive : You can deploy a cluster with the web-based Assisted Installer . This is an ideal approach for clusters with networks connected to the internet. The Assisted Installer is the easiest way to install OpenShift Container Platform, it provides smart defaults, and it performs pre-flight validations before installing the cluster. It also provides a RESTful API for automation and advanced configuration scenarios. Local Agent-based : You can deploy a cluster locally with the Agent-based Installer for disconnected environments or restricted networks. It provides many of the benefits of the Assisted Installer, but you must download and configure the Agent-based Installer first. Configuration is done with a command-line interface. This approach is ideal for disconnected environments. Automated : You can deploy a cluster on installer-provisioned infrastructure. The installation program uses each cluster host's baseboard management controller (BMC) for provisioning. You can deploy clusters in connected or disconnected environments. Full control : You can deploy a cluster on infrastructure that you prepare and maintain, which provides maximum customizability. You can deploy clusters in connected or disconnected environments. Each method deploys a cluster with the following characteristics: Highly available infrastructure with no single points of failure, which is available by default. Administrators can control what updates are applied and when. 1.1.1. About the installation program You can use the installation program to deploy each type of cluster. The installation program generates the main assets, such as Ignition config files for the bootstrap, control plane, and compute machines. You can start an OpenShift Container Platform cluster with these three machine configurations, provided you correctly configured the infrastructure. The OpenShift Container Platform installation program uses a set of targets and dependencies to manage cluster installations. The installation program has a set of targets that it must achieve, and each target has a set of dependencies. Because each target is only concerned with its own dependencies, the installation program can act to achieve multiple targets in parallel with the ultimate target being a running cluster. The installation program recognizes and uses existing components instead of running commands to create them again because the program meets the dependencies. Figure 1.1. OpenShift Container Platform installation targets and dependencies 1.1.2. About Red Hat Enterprise Linux CoreOS (RHCOS) Post-installation, each cluster machine uses Red Hat Enterprise Linux CoreOS (RHCOS) as the operating system. RHCOS is the immutable container host version of Red Hat Enterprise Linux (RHEL) and features a RHEL kernel with SELinux enabled by default. RHCOS includes the kubelet , which is the Kubernetes node agent, and the CRI-O container runtime, which is optimized for Kubernetes. Every control plane machine in an OpenShift Container Platform 4.15 cluster must use RHCOS, which includes a critical first-boot provisioning tool called Ignition. This tool enables the cluster to configure the machines. 
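After installation, this machine configuration is managed declaratively: changes are expressed as MachineConfig objects that the Machine Config Operator, which is introduced below, renders into Ignition and applies to the nodes. The following minimal sketch is not part of the installation flow and is included only to illustrate what such an Ignition-driven configuration looks like; the object name, file path, and file contents are arbitrary placeholder values.

Example MachineConfig object (illustration only)

apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfig
metadata:
  name: 99-worker-example-file        # placeholder name
  labels:
    machineconfiguration.openshift.io/role: worker
spec:
  config:
    ignition:
      version: 3.2.0
    storage:
      files:
      - path: /etc/example-note       # placeholder file
        mode: 420                     # decimal for octal 0644
        contents:
          # base64 for the text "Hello from Ignition"
          source: data:text/plain;charset=utf-8;base64,SGVsbG8gZnJvbSBJZ25pdGlvbgo=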
Operating system updates are delivered as a bootable container image, using OSTree as a backend, that is deployed across the cluster by the Machine Config Operator. Actual operating system changes are made in-place on each machine as an atomic operation by using rpm-ostree . Together, these technologies enable OpenShift Container Platform to manage the operating system like it manages any other application on the cluster, by in-place upgrades that keep the entire platform up to date. These in-place updates can reduce the burden on operations teams. If you use RHCOS as the operating system for all cluster machines, the cluster manages all aspects of its components and machines, including the operating system. Because of this, only the installation program and the Machine Config Operator can change machines. The installation program uses Ignition config files to set the exact state of each machine, and the Machine Config Operator completes more changes to the machines, such as the application of new certificates or keys, after installation. 1.1.3. Glossary of common terms for OpenShift Container Platform installing The glossary defines common terms that relate to the installation content. Read the following list of terms to better understand the installation process. Assisted Installer An installer hosted at console.redhat.com that provides a web-based user interface or a RESTful API for creating a cluster configuration. The Assisted Installer generates a discovery image. Cluster machines boot with the discovery image, which installs RHCOS and an agent. Together, the Assisted Installer and agent provide preinstallation validation and installation for the cluster. Agent-based Installer An installer similar to the Assisted Installer, but you must download the Agent-based Installer first. The Agent-based Installer is ideal for disconnected environments. Bootstrap node A temporary machine that runs a minimal Kubernetes configuration required to deploy the OpenShift Container Platform control plane. Control plane A container orchestration layer that exposes the API and interfaces to define, deploy, and manage the lifecycle of containers. Also known as control plane machines. Compute node Nodes that are responsible for executing workloads for cluster users. Also known as worker nodes. Disconnected installation In some situations, parts of a data center might not have access to the internet, even through proxy servers. You can still install the OpenShift Container Platform in these environments, but you must download the required software and images and make them available to the disconnected environment. The OpenShift Container Platform installation program A program that provisions the infrastructure and deploys a cluster. Installer-provisioned infrastructure The installation program deploys and configures the infrastructure that the cluster runs on. Ignition config files A file that the Ignition tool uses to configure Red Hat Enterprise Linux CoreOS (RHCOS) during operating system initialization. The installation program generates different Ignition configuration files to initialize bootstrap, control plane, and worker nodes. Kubernetes manifests Specifications of a Kubernetes API object in a JSON or YAML format. A configuration file can include deployments, config maps, secrets, daemonsets, and so on. Kubelet A primary node agent that runs on each node in the cluster to ensure that containers are running in a pod. Load balancers A load balancer serves as the single point of contact for clients. 
Load balancers for the API distribute incoming traffic across control plane nodes. Machine Config Operator An Operator that manages and applies configurations and updates of the base operating system and container runtime, including everything between the kernel and kubelet, for the nodes in the cluster. Operators The preferred method of packaging, deploying, and managing a Kubernetes application in an OpenShift Container Platform cluster. An operator takes human operational knowledge and encodes it into software that is easily packaged and shared with customers. User-provisioned infrastructure You can install OpenShift Container Platform on infrastructure that you provide. You can use the installation program to generate the assets required to provision the cluster infrastructure, create the cluster infrastructure, and then deploy the cluster to the infrastructure that you provided. 1.1.4. Installation process Except for the Assisted Installer, when you install an OpenShift Container Platform cluster, you must download the installation program from the appropriate Cluster Type page on the OpenShift Cluster Manager Hybrid Cloud Console. This console manages: REST API for accounts. Registry tokens, which are the pull secrets that you use to obtain the required components. Cluster registration, which associates the cluster identity to your Red Hat account to facilitate the gathering of usage metrics. In OpenShift Container Platform 4.15, the installation program is a Go binary file that performs a series of file transformations on a set of assets. The way you interact with the installation program differs depending on your installation type. Consider the following installation use cases: To deploy a cluster with the Assisted Installer, you must configure the cluster settings by using the Assisted Installer . There is no installation program to download and configure. After you finish setting the cluster configuration, you download a discovery ISO and then boot cluster machines with that image. You can install clusters with the Assisted Installer on Nutanix, vSphere, and bare metal with full integration, and other platforms without integration. If you install on bare metal, you must provide all of the cluster infrastructure and resources, including the networking, load balancing, storage, and individual cluster machines. To deploy clusters with the Agent-based Installer, you can download the Agent-based Installer first. You can then configure the cluster and generate a discovery image. You boot cluster machines with the discovery image, which installs an agent that communicates with the installation program and handles the provisioning for you instead of you interacting with the installation program or setting up a provisioner machine yourself. You must provide all of the cluster infrastructure and resources, including the networking, load balancing, storage, and individual cluster machines. This approach is ideal for disconnected environments. For clusters with installer-provisioned infrastructure, you delegate the infrastructure bootstrapping and provisioning to the installation program instead of doing it yourself. The installation program creates all of the networking, machines, and operating systems that are required to support the cluster, except if you install on bare metal. If you install on bare metal, you must provide all of the cluster infrastructure and resources, including the bootstrap machine, networking, load balancing, storage, and individual cluster machines. 
If you provision and manage the infrastructure for your cluster, you must provide all of the cluster infrastructure and resources, including the bootstrap machine, networking, load balancing, storage, and individual cluster machines. For the installation program, the program uses three sets of files during installation: an installation configuration file that is named install-config.yaml , Kubernetes manifests, and Ignition config files for your machine types. Important You can modify Kubernetes and the Ignition config files that control the underlying RHCOS operating system during installation. However, no validation is available to confirm the suitability of any modifications that you make to these objects. If you modify these objects, you might render your cluster non-functional. Because of this risk, modifying Kubernetes and Ignition config files is not supported unless you are following documented procedures or are instructed to do so by Red Hat support. The installation configuration file is transformed into Kubernetes manifests, and then the manifests are wrapped into Ignition config files. The installation program uses these Ignition config files to create the cluster. The installation configuration files are all pruned when you run the installation program, so be sure to back up all the configuration files that you want to use again. Important You cannot modify the parameters that you set during installation, but you can modify many cluster attributes after installation. The installation process with the Assisted Installer Installation with the Assisted Installer involves creating a cluster configuration interactively by using the web-based user interface or the RESTful API. The Assisted Installer user interface prompts you for required values and provides reasonable default values for the remaining parameters, unless you change them in the user interface or with the API. The Assisted Installer generates a discovery image, which you download and use to boot the cluster machines. The image installs RHCOS and an agent, and the agent handles the provisioning for you. You can install OpenShift Container Platform with the Assisted Installer and full integration on Nutanix, vSphere, and bare metal. Additionally, you can install OpenShift Container Platform with the Assisted Installer on other platforms without integration. OpenShift Container Platform manages all aspects of the cluster, including the operating system itself. Each machine boots with a configuration that references resources hosted in the cluster that it joins. This configuration allows the cluster to manage itself as updates are applied. If possible, use the Assisted Installer feature to avoid having to download and configure the Agent-based Installer. The installation process with Agent-based infrastructure Agent-based installation is similar to using the Assisted Installer, except that you must initially download and install the Agent-based Installer . An Agent-based installation is useful when you want the convenience of the Assisted Installer, but you need to install a cluster in a disconnected environment. If possible, use the Agent-based installation feature to avoid having to create a provisioner machine with a bootstrap VM, and then provision and maintain the cluster infrastructure. The installation process with installer-provisioned infrastructure The default installation type uses installer-provisioned infrastructure. 
By default, the installation program acts as an installation wizard, prompting you for values that it cannot determine on its own and providing reasonable default values for the remaining parameters. You can also customize the installation process to support advanced infrastructure scenarios. The installation program provisions the underlying infrastructure for the cluster. You can install either a standard cluster or a customized cluster. With a standard cluster, you provide minimum details that are required to install the cluster. With a customized cluster, you can specify more details about the platform, such as the number of machines that the control plane uses, the type of virtual machine that the cluster deploys, or the CIDR range for the Kubernetes service network. If possible, use this feature to avoid having to provision and maintain the cluster infrastructure. In all other environments, you use the installation program to generate the assets that you require to provision your cluster infrastructure. With installer-provisioned infrastructure clusters, OpenShift Container Platform manages all aspects of the cluster, including the operating system itself. Each machine boots with a configuration that references resources hosted in the cluster that it joins. This configuration allows the cluster to manage itself as updates are applied. The installation process with user-provisioned infrastructure You can also install OpenShift Container Platform on infrastructure that you provide. You use the installation program to generate the assets that you require to provision the cluster infrastructure, create the cluster infrastructure, and then deploy the cluster to the infrastructure that you provided. If you do not use infrastructure that the installation program provisioned, you must manage and maintain the cluster resources yourself. The following list details some of these self-managed resources: The underlying infrastructure for the control plane and compute machines that make up the cluster Load balancers Cluster networking, including the DNS records and required subnets Storage for the cluster infrastructure and applications If your cluster uses user-provisioned infrastructure, you have the option of adding RHEL compute machines to your cluster. Installation process details When a cluster is provisioned, each machine in the cluster requires information about the cluster. OpenShift Container Platform uses a temporary bootstrap machine during initial configuration to provide the required information to the permanent control plane. The temporary bootstrap machine boots by using an Ignition config file that describes how to create the cluster. The bootstrap machine creates the control plane machines that make up the control plane. The control plane machines then create the compute machines, which are also known as worker machines. The following figure illustrates this process: Figure 1.2. Creating the bootstrap, control plane, and compute machines After the cluster machines initialize, the bootstrap machine is destroyed. All clusters use the bootstrap process to initialize the cluster, but if you provision the infrastructure for your cluster, you must complete many of the steps manually. Important The Ignition config files that the installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. 
If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information. Consider using Ignition config files within 12 hours after they are generated, because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation. Bootstrapping a cluster involves the following steps: The bootstrap machine boots and starts hosting the remote resources required for the control plane machines to boot. If you provision the infrastructure, this step requires manual intervention. The bootstrap machine starts a single-node etcd cluster and a temporary Kubernetes control plane. The control plane machines fetch the remote resources from the bootstrap machine and finish booting. If you provision the infrastructure, this step requires manual intervention. The temporary control plane schedules the production control plane to the production control plane machines. The Cluster Version Operator (CVO) comes online and installs the etcd Operator. The etcd Operator scales up etcd on all control plane nodes. The temporary control plane shuts down and passes control to the production control plane. The bootstrap machine injects OpenShift Container Platform components into the production control plane. The installation program shuts down the bootstrap machine. If you provision the infrastructure, this step requires manual intervention. The control plane sets up the compute nodes. The control plane installs additional services in the form of a set of Operators. The result of this bootstrapping process is a running OpenShift Container Platform cluster. The cluster then downloads and configures remaining components needed for the day-to-day operations, including the creation of compute machines in supported environments. Additional resources Red Hat OpenShift Network Calculator 1.1.5. Verifying node state after installation The OpenShift Container Platform installation completes when the following installation health checks are successful: The provisioner can access the OpenShift Container Platform web console. All control plane nodes are ready. All cluster Operators are available. Note After the installation completes, the specific cluster Operators responsible for the worker nodes continuously attempt to provision all worker nodes. Some time is required before all worker nodes report as READY . For installations on bare metal, wait a minimum of 60 minutes before troubleshooting a worker node. For installations on all other platforms, wait a minimum of 40 minutes before troubleshooting a worker node. A DEGRADED state for the cluster Operators responsible for the worker nodes depends on the Operators' own resources and not on the state of the nodes. After your installation completes, you can continue to monitor the condition of the nodes in your cluster. Prerequisites The installation program resolves successfully in the terminal. 
Procedure Show the status of all worker nodes: USD oc get nodes Example output NAME STATUS ROLES AGE VERSION example-compute1.example.com Ready worker 13m v1.21.6+bb8d50a example-compute2.example.com Ready worker 13m v1.21.6+bb8d50a example-compute4.example.com Ready worker 14m v1.21.6+bb8d50a example-control1.example.com Ready master 52m v1.21.6+bb8d50a example-control2.example.com Ready master 55m v1.21.6+bb8d50a example-control3.example.com Ready master 55m v1.21.6+bb8d50a Show the phase of all worker machine nodes: USD oc get machines -A Example output NAMESPACE NAME PHASE TYPE REGION ZONE AGE openshift-machine-api example-zbbt6-master-0 Running 95m openshift-machine-api example-zbbt6-master-1 Running 95m openshift-machine-api example-zbbt6-master-2 Running 95m openshift-machine-api example-zbbt6-worker-0-25bhp Running 49m openshift-machine-api example-zbbt6-worker-0-8b4c2 Running 49m openshift-machine-api example-zbbt6-worker-0-jkbqt Running 49m openshift-machine-api example-zbbt6-worker-0-qrl5b Running 49m Additional resources Getting the BareMetalHost resource Following the progress of the installation Validating an installation Agent-based Installer Assisted Installer for OpenShift Container Platform Installation scope The scope of the OpenShift Container Platform installation program is intentionally narrow. It is designed for simplicity and ensured success. You can complete many more configuration tasks after installation completes. Additional resources See Available cluster customizations for details about OpenShift Container Platform configuration resources. 1.1.6. OpenShift Local overview OpenShift Local supports rapid application development to get started building OpenShift Container Platform clusters. OpenShift Local is designed to run on a local computer to simplify setup and testing, and to emulate the cloud development environment locally with all of the tools needed to develop container-based applications. Regardless of the programming language you use, OpenShift Local hosts your application and brings a minimal, preconfigured Red Hat OpenShift Container Platform cluster to your local PC without the need for a server-based infrastructure. On a hosted environment, OpenShift Local can create microservices, convert them into images, and run them in Kubernetes-hosted containers directly on your laptop or desktop running Linux, macOS, or Windows 10 or later. For more information about OpenShift Local, see Red Hat OpenShift Local Overview . 1.2. Supported platforms for OpenShift Container Platform clusters In OpenShift Container Platform 4.15, you can install a cluster that uses installer-provisioned infrastructure on the following platforms: Alibaba Cloud Amazon Web Services (AWS) Bare metal Google Cloud Platform (GCP) IBM Cloud(R) Microsoft Azure Microsoft Azure Stack Hub Nutanix Red Hat OpenStack Platform (RHOSP) The latest OpenShift Container Platform release supports both the latest RHOSP long-life release and intermediate release. For complete RHOSP release compatibility, see the OpenShift Container Platform on RHOSP support matrix . VMware vSphere For these clusters, all machines, including the computer that you run the installation process on, must have direct internet access to pull images for platform containers and provide telemetry data to Red Hat. Important After installation, the following changes are not supported: Mixing cloud provider platforms. Mixing cloud provider components. 
For example, using a persistent storage framework from another platform on the platform where you installed the cluster. In OpenShift Container Platform 4.15, you can install a cluster that uses user-provisioned infrastructure on the following platforms: AWS Azure Azure Stack Hub Bare metal GCP IBM Power(R) IBM Z(R) or IBM(R) LinuxONE RHOSP The latest OpenShift Container Platform release supports both the latest RHOSP long-life release and intermediate release. For complete RHOSP release compatibility, see the OpenShift Container Platform on RHOSP support matrix . VMware Cloud on AWS VMware vSphere Depending on the supported cases for the platform, you can perform installations on user-provisioned infrastructure, so that you can run machines with full internet access, place your cluster behind a proxy, or perform a disconnected installation. In a disconnected installation, you can download the images that are required to install a cluster, place them in a mirror registry, and use that data to install your cluster. While you require internet access to pull images for platform containers, with a disconnected installation on vSphere or bare metal infrastructure, your cluster machines do not require direct internet access. The OpenShift Container Platform 4.x Tested Integrations page contains details about integration testing for different platforms. Additional resources See Supported installation methods for different platforms for more information about the types of installations that are available for each supported platform. See Selecting a cluster installation method and preparing it for users for information about choosing an installation method and preparing the required resources. Red Hat OpenShift Network Calculator can help you design your cluster network during both the deployment and expansion phases. It addresses common questions related to the cluster network and provides output in a convenient JSON format. | [
"oc get nodes",
"NAME STATUS ROLES AGE VERSION example-compute1.example.com Ready worker 13m v1.21.6+bb8d50a example-compute2.example.com Ready worker 13m v1.21.6+bb8d50a example-compute4.example.com Ready worker 14m v1.21.6+bb8d50a example-control1.example.com Ready master 52m v1.21.6+bb8d50a example-control2.example.com Ready master 55m v1.21.6+bb8d50a example-control3.example.com Ready master 55m v1.21.6+bb8d50a",
"oc get machines -A",
"NAMESPACE NAME PHASE TYPE REGION ZONE AGE openshift-machine-api example-zbbt6-master-0 Running 95m openshift-machine-api example-zbbt6-master-1 Running 95m openshift-machine-api example-zbbt6-master-2 Running 95m openshift-machine-api example-zbbt6-worker-0-25bhp Running 49m openshift-machine-api example-zbbt6-worker-0-8b4c2 Running 49m openshift-machine-api example-zbbt6-worker-0-jkbqt Running 49m openshift-machine-api example-zbbt6-worker-0-qrl5b Running 49m"
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.15/html/installation_overview/ocp-installation-overview |
3.6. Testing the New Client | 3.6. Testing the New Client Check that the client can obtain information about users defined on the server. For example, to check the default admin user: | [
"[user@client ~]USD id admin uid=1254400000(admin) gid=1254400000(admins) groups=1254400000(admins)"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/linux_domain_identity_authentication_and_policy_guide/client-test |
Chapter 1. Introduction | Chapter 1. Introduction 1.1. What Is Red Hat Single Sign-On? Red Hat Single Sign-On is an integrated sign-on solution available as a Red Hat JBoss Middleware for OpenShift containerized image. The Red Hat Single Sign-On for OpenShift image provides an authentication server for users to centrally log in, log out, register, and manage user accounts for web applications, mobile applications, and RESTful web services. Red Hat Single Sign-On for OpenShift on OpenJDK is only available on the following platforms: x86_64 . For other available platforms, see Red Hat Single Sign-On for OpenShift on Eclipse OpenJ9 . Red Hat offers multiple OpenShift application templates utilizing the Red Hat Single Sign-On for OpenShift image version number 7.4.10.GA. These define the resources needed to develop Red Hat Single Sign-On 7.4.10.GA server based deployment and can be split into the following two categories: Templates using HTTPS and JGroups keystores and a truststore for the Red Hat Single Sign-On server, all prepared beforehand. These secure the TLS communication using passthrough TLS termination : sso74-https : Red Hat Single Sign-On 7.4.10.GA backed by internal H2 database on the same pod. sso74-postgresql : Red Hat Single Sign-On 7.4.10.GA backed by ephemeral PostgreSQL database on a separate pod. sso74-postgresql-persistent : Red Hat Single Sign-On 7.4.10.GA backed by persistent PostgreSQL database on a separate pod. Note Templates for using Red Hat Single Sign-On with MySQL / MariaDB databases have been removed and are not available since Red Hat Single Sign-On version 7.4. Templates using OpenShift's internal service serving x509 certificate secrets to automatically create the HTTPS keystore used for serving secure content. The JGroups cluster traffic is authenticated using the AUTH protocol and encrypted using the ASYM_ENCRYPT protocol. The Red Hat Single Sign-On server truststore is also created automatically, containing the /var/run/secrets/kubernetes.io/serviceaccount/service-ca.crt CA certificate file, which is used to sign the certificate for HTTPS keystore. Moreover, the truststore for the Red Hat Single Sign-On server is pre-populated with the all known, trusted CA certificate files found in the Java system path. These templates secure the TLS communication using re-encryption TLS termination : sso74-x509-https : Red Hat Single Sign-On 7.4.10.GA with auto-generated HTTPS keystore and Red Hat Single Sign-On truststore, backed by internal H2 database. The ASYM_ENCRYPT JGroups protocol is used for encryption of cluster traffic. sso74-x509-postgresql-persistent : Red Hat Single Sign-On 7.4.10.GA with auto-generated HTTPS keystore and Red Hat Single Sign-On truststore, backed by persistent PostgreSQL database. The ASYM_ENCRYPT JGroups protocol is used for encryption of cluster traffic. Other templates that integrate with Red Hat Single Sign-On are also available: eap64-sso-s2i : Red Hat Single Sign-On-enabled Red Hat JBoss Enterprise Application Platform 6.4. eap71-sso-s2i : Red Hat Single Sign-On-enabled Red Hat JBoss Enterprise Application Platform 7.1. datavirt63-secure-s2i : Red Hat Single Sign-On-enabled Red Hat JBoss Data Virtualization 6.3. These templates contain environment variables specific to Red Hat Single Sign-On that enable automatic Red Hat Single Sign-On client registration when deployed. See Automatic and Manual Red Hat Single Sign-On Client Registration Methods for more information. 
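For background, the sso74-x509-https and sso74-x509-postgresql-persistent templates described above rely on OpenShift's service serving certificate feature: an annotation on the service asks the platform to generate a TLS certificate, signed by the CA whose certificate is available at /var/run/secrets/kubernetes.io/serviceaccount/service-ca.crt, and the templates build the HTTPS keystore from that secret automatically. The following sketch only illustrates the underlying annotation; the service name, secret name, port, and selector are hypothetical values, and the templates create the equivalent resources for you.

Example service annotation used for an auto-generated serving certificate (illustration only)

apiVersion: v1
kind: Service
metadata:
  name: sso                           # hypothetical service name
  annotations:
    # requests a serving certificate stored in the named secret
    service.alpha.openshift.io/serving-cert-secret-name: sso-x509-https-secret   # hypothetical secret name
spec:
  ports:
  - name: sso-https
    port: 8443
    targetPort: 8443
  selector:
    deploymentConfig: sso             # hypothetical selector label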
| null | https://docs.redhat.com/en/documentation/red_hat_single_sign-on/7.4/html/red_hat_single_sign-on_for_openshift_on_openjdk/introduction |
Chapter 65. Test scenario template | Chapter 65. Test scenario template Before specifying test scenario definitions, you need to create a test scenario template. The header of the test scenario table defines the template for each scenario. You need to set the types of the instance and property headers for both the GIVEN and EXPECT sections. Instance headers map to a particular data object (a fact), whereas the property headers map to a particular field of the corresponding data object. Using the test scenarios designer, you can create test scenario templates for both rule-based and DMN-based test scenarios. 65.1. Creating a test scenario template for rule-based test scenarios Create a test scenario template for rule-based test scenarios by following the procedure below to validate your rules and data. Procedure In Business Central, go to Menu Design Projects and click the project for which you want to create the test scenario. Click Add Asset Test Scenario . Enter a Test Scenario name and select the appropriate Package . The package you select must contain all the required data objects and rule assets have been assigned or will be assigned. Select RULE as the Source type . Click Ok to create and open the test scenario in the test scenarios designer. To map the GIVEN column header to a data object: Figure 65.1. Test scenario GIVEN header cells Select an instance header cell in the GIVEN section. Select the data object from the Test Tools tab. Click Insert Data Object . To map the EXPECT column header to a data object: Figure 65.2. Test scenario EXPECT header cells Select an instance header cell in the EXPECT section. Select the data object from the Test Tools tab. Click Insert Data Object . To map a data object field to a property cell: Select an instance header cell or property header cell. Select the data object field from the Test Tools tab. Click Insert Data Object . To insert more properties of the data object, right-click the property header and select Insert column right or Insert column left as required. To define a java method to a property cell during test scenarios execution: Select an instance header cell or property header cell. Select the data object field from the Test Tools tab. Click Insert Data Object . Use the MVEL expression with the prefix # to define a java method for test scenario execution. To insert more properties of the data object, right-click the property header cell and select Insert column right or Insert column left as required. Use the contextual menu to add or remove columns and rows as needed. For more details about the expression syntax in rule-based scenarios, see Section 70.1, "Expression syntax in rule-based test scenarios" . 65.2. Using aliases in rule-based test scenarios In the test scenarios designer, once you map a header cell with a data object, the data object is removed from the Test Tools tab. You can re-map a data object to another header cell by using an alias. Aliases enable you to specify multiple instances of the same data object in a test scenario. You can also create property aliases to rename the used properties directly in the table. Procedure In the test scenarios designer in Business Central, double-click a header cell and manually change the name. Ensure that the aliases are uniquely named. The instance now appears in the list of data objects in the Test Tools tab. 
| null | https://docs.redhat.com/en/documentation/red_hat_process_automation_manager/7.13/html/developing_decision_services_in_red_hat_process_automation_manager/test-designer-create-test-scenario-template-con |
Chapter 4. Tracing | Chapter 4. Tracing 4.1. Tracing requests Distributed tracing records the path of a request through the various services that make up an application. It is used to tie information about different units of work together, to understand a whole chain of events in a distributed transaction. The units of work might be executed in different processes or hosts. 4.1.1. Distributed tracing overview As a service owner, you can use distributed tracing to instrument your services to gather insights into your service architecture. You can use distributed tracing for monitoring, network profiling, and troubleshooting the interaction between components in modern, cloud-native, microservices-based applications. With distributed tracing you can perform the following functions: Monitor distributed transactions Optimize performance and latency Perform root cause analysis Red Hat OpenShift distributed tracing consists of two main components: Red Hat OpenShift distributed tracing platform - This component is based on the open source Jaeger project . Red Hat OpenShift distributed tracing data collection - This component is based on the open source OpenTelemetry project . Both of these components are based on the vendor-neutral OpenTracing APIs and instrumentation. 4.1.2. Additional resources for OpenShift Container Platform Red Hat OpenShift distributed tracing architecture Installing distributed tracing 4.2. Using Red Hat OpenShift distributed tracing You can use Red Hat OpenShift distributed tracing with OpenShift Serverless to monitor and troubleshoot serverless applications. 4.2.1. Using Red Hat OpenShift distributed tracing to enable distributed tracing Red Hat OpenShift distributed tracing is made up of several components that work together to collect, store, and display tracing data. Prerequisites You have access to an OpenShift Container Platform account with cluster administrator access. You have installed Red Hat OpenShift distributed tracing by following the OpenShift Container Platform "Installing distributed tracing" documentation. You have installed the OpenShift CLI ( oc ). You have created a project or have access to a project with the appropriate roles and permissions to create applications and other workloads in OpenShift Container Platform. 
Procedure Create an OpenTelemetryCollector custom resource (CR): Example OpenTelemetryCollector CR apiVersion: opentelemetry.io/v1alpha1 kind: OpenTelemetryCollector metadata: name: cluster-collector namespace: <namespace> spec: mode: deployment config: | receivers: zipkin: processors: exporters: jaeger: endpoint: jaeger-all-in-one-inmemory-collector-headless.tracing-system.svc:14250 tls: ca_file: "/var/run/secrets/kubernetes.io/serviceaccount/service-ca.crt" logging: service: pipelines: traces: receivers: [zipkin] processors: [] exporters: [jaeger, logging] Verify that you have two pods running in the namespace where Red Hat OpenShift distributed tracing is installed: USD oc get pods -n <namespace> Example output NAME READY STATUS RESTARTS AGE cluster-collector-collector-85c766b5c-b5g99 1/1 Running 0 5m56s jaeger-all-in-one-inmemory-ccbc9df4b-ndkl5 2/2 Running 0 15m Verify that the following headless services have been created: USD oc get svc -n <namespace> | grep headless Example output cluster-collector-collector-headless ClusterIP None <none> 9411/TCP 7m28s jaeger-all-in-one-inmemory-collector-headless ClusterIP None <none> 9411/TCP,14250/TCP,14267/TCP,14268/TCP 16m These services are used to configure Jaeger, Knative Serving, and Knative Eventing. The name of the Jaeger service may vary. Install the OpenShift Serverless Operator by following the "Installing the OpenShift Serverless Operator" documentation. Install Knative Serving by creating the following KnativeServing CR: Example KnativeServing CR apiVersion: operator.knative.dev/v1beta1 kind: KnativeServing metadata: name: knative-serving namespace: knative-serving spec: config: tracing: backend: "zipkin" zipkin-endpoint: "http://cluster-collector-collector-headless.tracing-system.svc:9411/api/v2/spans" debug: "false" sample-rate: "0.1" 1 1 The sample-rate defines sampling probability. Using sample-rate: "0.1" means that 1 in 10 traces are sampled. Install Knative Eventing by creating the following KnativeEventing CR: Example KnativeEventing CR apiVersion: operator.knative.dev/v1beta1 kind: KnativeEventing metadata: name: knative-eventing namespace: knative-eventing spec: config: tracing: backend: "zipkin" zipkin-endpoint: "http://cluster-collector-collector-headless.tracing-system.svc:9411/api/v2/spans" debug: "false" sample-rate: "0.1" 1 1 The sample-rate defines sampling probability. Using sample-rate: "0.1" means that 1 in 10 traces are sampled. Create a Knative service: Example service apiVersion: serving.knative.dev/v1 kind: Service metadata: name: helloworld-go spec: template: metadata: labels: app: helloworld-go annotations: autoscaling.knative.dev/minScale: "1" autoscaling.knative.dev/target: "1" spec: containers: - image: quay.io/openshift-knative/helloworld:v1.2 imagePullPolicy: Always resources: requests: cpu: "200m" env: - name: TARGET value: "Go Sample v1" Make some requests to the service: Example HTTPS request USD curl https://helloworld-go.example.com Get the URL for the Jaeger web console: Example command USD oc get route jaeger-all-in-one-inmemory -o jsonpath='{.spec.host}' -n <namespace> You can now examine traces by using the Jaeger console. 4.3. Using Jaeger distributed tracing If you do not want to install all of the components of Red Hat OpenShift distributed tracing, you can still use distributed tracing on OpenShift Container Platform with OpenShift Serverless. 4.3.1. 
Configuring Jaeger to enable distributed tracing To enable distributed tracing using Jaeger, you must install and configure Jaeger as a standalone integration. Prerequisites You have cluster administrator permissions on OpenShift Container Platform, or you have cluster or dedicated administrator permissions on Red Hat OpenShift Service on AWS or OpenShift Dedicated. You have installed the OpenShift Serverless Operator, Knative Serving, and Knative Eventing. You have installed the Red Hat OpenShift distributed tracing platform Operator. You have installed the OpenShift CLI ( oc ). You have created a project or have access to a project with the appropriate roles and permissions to create applications and other workloads. Procedure Create and apply a Jaeger custom resource (CR) that contains the following: Jaeger CR apiVersion: jaegertracing.io/v1 kind: Jaeger metadata: name: jaeger namespace: default Enable tracing for Knative Serving, by editing the KnativeServing CR and adding a YAML configuration for tracing: Tracing YAML example for Serving apiVersion: operator.knative.dev/v1beta1 kind: KnativeServing metadata: name: knative-serving namespace: knative-serving spec: config: tracing: sample-rate: "0.1" 1 backend: zipkin 2 zipkin-endpoint: "http://jaeger-collector.default.svc.cluster.local:9411/api/v2/spans" 3 debug: "false" 4 1 The sample-rate defines sampling probability. Using sample-rate: "0.1" means that 1 in 10 traces are sampled. 2 backend must be set to zipkin . 3 The zipkin-endpoint must point to your jaeger-collector service endpoint. To get this endpoint, substitute the namespace where the Jaeger CR is applied. 4 Debugging should be set to false . Enabling debug mode by setting debug: "true" allows all spans to be sent to the server, bypassing sampling. Enable tracing for Knative Eventing by editing the KnativeEventing CR: Tracing YAML example for Eventing apiVersion: operator.knative.dev/v1beta1 kind: KnativeEventing metadata: name: knative-eventing namespace: knative-eventing spec: config: tracing: sample-rate: "0.1" 1 backend: zipkin 2 zipkin-endpoint: "http://jaeger-collector.default.svc.cluster.local:9411/api/v2/spans" 3 debug: "false" 4 1 The sample-rate defines sampling probability. Using sample-rate: "0.1" means that 1 in 10 traces are sampled. 2 Set backend to zipkin . 3 Point the zipkin-endpoint to your jaeger-collector service endpoint. To get this endpoint, substitute the namespace where the Jaeger CR is applied. 4 Debugging should be set to false . Enabling debug mode by setting debug: "true" allows all spans to be sent to the server, bypassing sampling. Verification You can access the Jaeger web console to see tracing data, by using the jaeger route. Get the jaeger route's hostname by entering the following command: USD oc get route jaeger -n default Example output NAME HOST/PORT PATH SERVICES PORT TERMINATION WILDCARD jaeger jaeger-default.apps.example.com jaeger-query <all> reencrypt None Open the endpoint address in your browser to view the console. | [
"apiVersion: opentelemetry.io/v1alpha1 kind: OpenTelemetryCollector metadata: name: cluster-collector namespace: <namespace> spec: mode: deployment config: | receivers: zipkin: processors: exporters: jaeger: endpoint: jaeger-all-in-one-inmemory-collector-headless.tracing-system.svc:14250 tls: ca_file: \"/var/run/secrets/kubernetes.io/serviceaccount/service-ca.crt\" logging: service: pipelines: traces: receivers: [zipkin] processors: [] exporters: [jaeger, logging]",
"oc get pods -n <namespace>",
"NAME READY STATUS RESTARTS AGE cluster-collector-collector-85c766b5c-b5g99 1/1 Running 0 5m56s jaeger-all-in-one-inmemory-ccbc9df4b-ndkl5 2/2 Running 0 15m",
"oc get svc -n <namespace> | grep headless",
"cluster-collector-collector-headless ClusterIP None <none> 9411/TCP 7m28s jaeger-all-in-one-inmemory-collector-headless ClusterIP None <none> 9411/TCP,14250/TCP,14267/TCP,14268/TCP 16m",
"apiVersion: operator.knative.dev/v1beta1 kind: KnativeServing metadata: name: knative-serving namespace: knative-serving spec: config: tracing: backend: \"zipkin\" zipkin-endpoint: \"http://cluster-collector-collector-headless.tracing-system.svc:9411/api/v2/spans\" debug: \"false\" sample-rate: \"0.1\" 1",
"apiVersion: operator.knative.dev/v1beta1 kind: KnativeEventing metadata: name: knative-eventing namespace: knative-eventing spec: config: tracing: backend: \"zipkin\" zipkin-endpoint: \"http://cluster-collector-collector-headless.tracing-system.svc:9411/api/v2/spans\" debug: \"false\" sample-rate: \"0.1\" 1",
"apiVersion: serving.knative.dev/v1 kind: Service metadata: name: helloworld-go spec: template: metadata: labels: app: helloworld-go annotations: autoscaling.knative.dev/minScale: \"1\" autoscaling.knative.dev/target: \"1\" spec: containers: - image: quay.io/openshift-knative/helloworld:v1.2 imagePullPolicy: Always resources: requests: cpu: \"200m\" env: - name: TARGET value: \"Go Sample v1\"",
"curl https://helloworld-go.example.com",
"oc get route jaeger-all-in-one-inmemory -o jsonpath='{.spec.host}' -n <namespace>",
"apiVersion: jaegertracing.io/v1 kind: Jaeger metadata: name: jaeger namespace: default",
"apiVersion: operator.knative.dev/v1beta1 kind: KnativeServing metadata: name: knative-serving namespace: knative-serving spec: config: tracing: sample-rate: \"0.1\" 1 backend: zipkin 2 zipkin-endpoint: \"http://jaeger-collector.default.svc.cluster.local:9411/api/v2/spans\" 3 debug: \"false\" 4",
"apiVersion: operator.knative.dev/v1beta1 kind: KnativeEventing metadata: name: knative-eventing namespace: knative-eventing spec: config: tracing: sample-rate: \"0.1\" 1 backend: zipkin 2 zipkin-endpoint: \"http://jaeger-collector.default.svc.cluster.local:9411/api/v2/spans\" 3 debug: \"false\" 4",
"oc get route jaeger -n default",
"NAME HOST/PORT PATH SERVICES PORT TERMINATION WILDCARD jaeger jaeger-default.apps.example.com jaeger-query <all> reencrypt None"
] | https://docs.redhat.com/en/documentation/red_hat_openshift_serverless/1.35/html/observability/tracing |
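A note on verifying the tracing setup above: because sample-rate is set to "0.1", only about one in ten requests produces a trace, so a single request to the Knative service may not appear in the Jaeger console. The following sketch sends a batch of requests and then queries the Jaeger query API for the list of reporting services; it reuses the helloworld-go hostname and the jaeger-all-in-one-inmemory route name from the examples above, and the /api/services endpoint and the TLS settings of the route are assumptions to adjust for your environment.

# Send a batch of requests so that at least a few of them are sampled (sample-rate 0.1 keeps roughly 1 in 10).
for i in $(seq 1 50); do curl -s -o /dev/null https://helloworld-go.example.com; done

# Look up the Jaeger console route and list the services that have reported traces.
# Add -k to curl if the route terminates TLS with a self-signed certificate.
JAEGER_HOST=$(oc get route jaeger-all-in-one-inmemory -o jsonpath='{.spec.host}' -n <namespace>)
curl -s "https://${JAEGER_HOST}/api/services"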
10.7. Managing Host Groups | 10.7. Managing Host Groups Host groups are a way of centralizing control over important management tasks, particularly access control. All groups in Identity Management are essentially static groups, meaning that the members of the group are manually and explicitly added to the group. Tangentially, IdM allows nested groups , where a group is a member of another group. In that case, all of the group members of the member group automatically belong to the parent group, as well. Because groups are easy to create, it is possible to be very flexible in what groups to create and how they are organized. Groups can be defined around organizational divisions like departments, physical locations, or IdM or infrastructure usage guidelines for access controls. 10.7.1. Creating Host Groups 10.7.1.1. Creating Host Groups from the Web UI Open the Identity tab, and select the Host Groups subtab. Click the Add link at the top of the groups list. Enter the name and a description for the group. Click the Add and Edit button to go immediately to the member selection page. Select the members, as described in Section 10.7.2.2, "Adding Host Group Members from the Web UI" . 10.7.1.2. Creating Host Groups from the Command Line New groups are created using the hostgroup-add command. (This adds only the group; members are added separately.) Two attributes are always required: the group name and the group description. If those attributes are not given as arguments, then the script prompts for them. 10.7.2. Adding Host Group Members 10.7.2.1. Showing and Changing Group Members Members can be added to a group through the group configuration. There are tabs for all the member types which can belong to the group, and an administrator picks all of the matching entries and adds them as members. However, it is also possible for an entity to be added to a group through its own configuration. Each entry has a list of tabs that displays group types that the entry can join. The list of all groups of that type is displayed, and the entity can be added to multiple groups at the same time. Figure 10.2. Member Of... 10.7.2.2. Adding Host Group Members from the Web UI Open the Identity tab, and select the Host Groups subtab. Click the name of the group to which to add members. Click the Add link at the top of the task area. Click the checkbox by the names of the hosts to add, and click the right arrows button, >> , to move the hosts to the selection box. Click the Add button. 10.7.2.3. Adding Host Group Members from the Command Line Members are added to a host group using the hostgroup-add-member command. This command can add both hosts as group members and other groups as group members. The syntax of the hostgroup-add-member command requires only the group name and a comma-separated list of hosts to add: For example, this adds three hosts to the caligroup group: Likewise, other groups can be added as members, which creates nested groups: | [
"ipa hostgroup-add groupName --desc=\" description \"",
"ipa hostgroup-add-member groupName [--hosts= list ] [--hostgroups= list ]",
"ipa hostgroup-add-member caligroup --hosts=ipaserver.example.com,client1.example.com,client2.example.com Group name: caligroup Description: for machines in california GID: 387115842 Member hosts: ipaserver.example.com,client1.example.com,client2.example.com ------------------------- Number of members added 3 -------------------------",
"ipa hostgroup-add-member caligroup --groups=mountainview,sandiego Group name: caligroup Description: for machines in california GID: 387115842 Member groups: mountainview,sandiego ------------------------- Number of members added 2 -------------------------"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/identity_management_guide/host-groups |
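A quick way to confirm the membership changes described above is to query the entries back from the command line. The following sketch assumes the caligroup host group and the client1.example.com host from the examples above, and that you hold a valid Kerberos ticket for an administrative user; the exact output fields can vary between IdM versions.

# Obtain a Kerberos ticket for an administrative user before running ipa commands.
kinit admin

# Display the host group, including its member hosts and member groups.
ipa hostgroup-show caligroup

# Display a host entry; its group memberships appear in the output (for example, "Member of host-groups").
ipa host-show client1.example.com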
Chapter 10. Understanding and creating service accounts | Chapter 10. Understanding and creating service accounts 10.1. Service accounts overview A service account is an OpenShift Container Platform account that allows a component to directly access the API. Service accounts are API objects that exist within each project. Service accounts provide a flexible way to control API access without sharing a regular user's credentials. When you use the OpenShift Container Platform CLI or web console, your API token authenticates you to the API. You can associate a component with a service account so that they can access the API without using a regular user's credentials. For example, service accounts can allow: Replication controllers to make API calls to create or delete pods Applications inside containers to make API calls for discovery purposes External applications to make API calls for monitoring or integration purposes Each service account's user name is derived from its project and name: system:serviceaccount:<project>:<name> Every service account is also a member of two groups: Group Description system:serviceaccounts Includes all service accounts in the system. system:serviceaccounts:<project> Includes all service accounts in the specified project. 10.1.1. Automatically generated image pull secrets By default, OpenShift Container Platform creates an image pull secret for each service account. Note Prior to OpenShift Container Platform 4.16, a long-lived service account API token secret was also generated for each service account that was created. Starting with OpenShift Container Platform 4.16, this service account API token secret is no longer created. After upgrading to 4.17, any existing long-lived service account API token secrets are not deleted and will continue to function. For information about detecting long-lived API tokens that are in use in your cluster or deleting them if they are not needed, see the Red Hat Knowledgebase article Long-lived service account API tokens in OpenShift Container Platform . This image pull secret is necessary to integrate the OpenShift image registry into the cluster's user authentication and authorization system. However, if you do not enable the ImageRegistry capability or if you disable the integrated OpenShift image registry in the Cluster Image Registry Operator's configuration, an image pull secret is not generated for each service account. When the integrated OpenShift image registry is disabled on a cluster that previously had it enabled, the previously generated image pull secrets are deleted automatically. 10.2. Creating service accounts You can create a service account in a project and grant it permissions by binding it to a role. Procedure Optional: To view the service accounts in the current project: USD oc get sa Example output NAME SECRETS AGE builder 1 2d default 1 2d deployer 1 2d To create a new service account in the current project: USD oc create sa <service_account_name> 1 1 To create a service account in a different project, specify -n <project_name> . 
Example output serviceaccount "robot" created Tip You can alternatively apply the following YAML to create the service account: apiVersion: v1 kind: ServiceAccount metadata: name: <service_account_name> namespace: <current_project> Optional: View the secrets for the service account: USD oc describe sa robot Example output Name: robot Namespace: project1 Labels: <none> Annotations: openshift.io/internal-registry-pull-secret-ref: robot-dockercfg-qzbhb Image pull secrets: robot-dockercfg-qzbhb Mountable secrets: robot-dockercfg-qzbhb Tokens: <none> Events: <none> 10.3. Granting roles to service accounts You can grant roles to service accounts in the same way that you grant roles to a regular user account. You can modify the service accounts for the current project. For example, to add the view role to the robot service account in the top-secret project: USD oc policy add-role-to-user view system:serviceaccount:top-secret:robot Tip You can alternatively apply the following YAML to add the role: apiVersion: rbac.authorization.k8s.io/v1 kind: RoleBinding metadata: name: view namespace: top-secret roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: view subjects: - kind: ServiceAccount name: robot namespace: top-secret You can also grant access to a specific service account in a project. For example, from the project to which the service account belongs, use the -z flag and specify the <service_account_name> USD oc policy add-role-to-user <role_name> -z <service_account_name> Important If you want to grant access to a specific service account in a project, use the -z flag. Using this flag helps prevent typos and ensures that access is granted to only the specified service account. Tip You can alternatively apply the following YAML to add the role: apiVersion: rbac.authorization.k8s.io/v1 kind: RoleBinding metadata: name: <rolebinding_name> namespace: <current_project_name> roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: <role_name> subjects: - kind: ServiceAccount name: <service_account_name> namespace: <current_project_name> To modify a different namespace, you can use the -n option to indicate the project namespace it applies to, as shown in the following examples. For example, to allow all service accounts in all projects to view resources in the my-project project: USD oc policy add-role-to-group view system:serviceaccounts -n my-project Tip You can alternatively apply the following YAML to add the role: apiVersion: rbac.authorization.k8s.io/v1 kind: RoleBinding metadata: name: view namespace: my-project roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: view subjects: - apiGroup: rbac.authorization.k8s.io kind: Group name: system:serviceaccounts To allow all service accounts in the managers project to edit resources in the my-project project: USD oc policy add-role-to-group edit system:serviceaccounts:managers -n my-project Tip You can alternatively apply the following YAML to add the role: apiVersion: rbac.authorization.k8s.io/v1 kind: RoleBinding metadata: name: edit namespace: my-project roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: edit subjects: - apiGroup: rbac.authorization.k8s.io kind: Group name: system:serviceaccounts:managers | [
"system:serviceaccount:<project>:<name>",
"oc get sa",
"NAME SECRETS AGE builder 1 2d default 1 2d deployer 1 2d",
"oc create sa <service_account_name> 1",
"serviceaccount \"robot\" created",
"apiVersion: v1 kind: ServiceAccount metadata: name: <service_account_name> namespace: <current_project>",
"oc describe sa robot",
"Name: robot Namespace: project1 Labels: <none> Annotations: openshift.io/internal-registry-pull-secret-ref: robot-dockercfg-qzbhb Image pull secrets: robot-dockercfg-qzbhb Mountable secrets: robot-dockercfg-qzbhb Tokens: <none> Events: <none>",
"oc policy add-role-to-user view system:serviceaccount:top-secret:robot",
"apiVersion: rbac.authorization.k8s.io/v1 kind: RoleBinding metadata: name: view namespace: top-secret roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: view subjects: - kind: ServiceAccount name: robot namespace: top-secret",
"oc policy add-role-to-user <role_name> -z <service_account_name>",
"apiVersion: rbac.authorization.k8s.io/v1 kind: RoleBinding metadata: name: <rolebinding_name> namespace: <current_project_name> roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: <role_name> subjects: - kind: ServiceAccount name: <service_account_name> namespace: <current_project_name>",
"oc policy add-role-to-group view system:serviceaccounts -n my-project",
"apiVersion: rbac.authorization.k8s.io/v1 kind: RoleBinding metadata: name: view namespace: my-project roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: view subjects: - apiGroup: rbac.authorization.k8s.io kind: Group name: system:serviceaccounts",
"oc policy add-role-to-group edit system:serviceaccounts:managers -n my-project",
"apiVersion: rbac.authorization.k8s.io/v1 kind: RoleBinding metadata: name: edit namespace: my-project roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: edit subjects: - apiGroup: rbac.authorization.k8s.io kind: Group name: system:serviceaccounts:managers"
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.17/html/authentication_and_authorization/understanding-and-creating-service-accounts |
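After binding a role to a service account as shown above, you can check the effective permissions without switching credentials. The following sketch uses the robot service account and the top-secret project from the examples above; oc auth can-i with impersonation and oc create token (available in recent oc releases) are standard commands, but the specific role and resources tested here are assumptions for illustration.

# Check whether the robot service account is allowed to list pods in the top-secret project.
# Impersonating a service account with --as requires permission to impersonate.
oc auth can-i list pods -n top-secret --as=system:serviceaccount:top-secret:robot

# Request a short-lived API token for the service account and use it to call the API directly.
TOKEN=$(oc create token robot -n top-secret)
oc get pods -n top-secret --token="$TOKEN"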
Chapter 6. Configuring the system and running tests by using Cockpit | Chapter 6. Configuring the system and running tests by using Cockpit To run the certification tests by using Cockpit you need to upload the test plan to the HUT first. After running the tests, download the results and review them. This chapter contains the following topics: Section 6.1, "Setting up the Cockpit server" Section 6.2, "Adding the host under test to Cockpit" Section 6.3, "Getting authorization on the Red Hat SSO network" Section 6.4, "Downloading test plans in Cockpit from Red Hat certification portal" Section 6.5, "Using the test plan to prepare the host under test for testing" Section 6.6, "Running the certification tests using Cockpit" Section 6.7, "Reviewing and downloading the test results file" Section 6.8, "Submitting the test results from Cockpit to the Red Hat Certification Portal" Section 6.9, "Uploading the results file of the executed test plan to Red Hat Certification portal" 6.1. Setting up the Cockpit server Cockpit is a RHEL tool that lets you change the configuration of your systems as well as monitor their resources from a user-friendly web-based interface. Note You must set up Cockpit on a new system, which is separate from the host under test. Ensure that the Cockpit has access to the host under test. For more information on installing and configuring Cockpit, see Getting Started using the RHEL web console on RHEL 8, Getting Started using the RHEL web console on RHEL 9 and Introducing Cockpit . Prerequisites The Cockpit server has RHEL version 8 or 9 installed. You have installed the Cockpit plugin on your system. You have enabled the Cockpit service. Procedure Log in to the system where you installed Cockpit. Install the Cockpit RPM provided by the Red Hat Certification team. You must run Cockpit on port 9090. 6.2. Adding the host under test to Cockpit Adding the host under test (HUT) to Cockpit lets the two systems communicate by using passwordless SSH. Prerequisites You have the IP address or hostname of the HUT. Procedure Enter http:// <Cockpit_system_IP> :9090/ in your browser to launch the Cockpit web application. Enter the username and password, and then click Login . Click the down-arrow on the logged-in cockpit user name-> Add new host . The dialog box displays. In the Host field, enter the IP address or hostname of the system. In the User name field, enter the name you want to assign to this system. Optional: Select the predefined color or select a new color of your choice for the host added. Click Add . Click Accept key and connect to let Cockpit communicate with the system through passwordless SSH. Enter the Password . Select the Authorize SSH Key checkbox. Click Log in . Verification On the left panel, click Tools -> Red Hat Certification . Verify that the system you just added displays under the Hosts section on the right. 6.3. Getting authorization on the Red Hat SSO network Procedure Enter http://<Cockpit_system_IP>:9090/ in your browser's address bar to launch the Cockpit web application. Enter the username and password, and then click Login . Select Tools Red Hat Certification in the left panel. On the Cockpit homepage, click Authorize , to establish connectivity with the Red Hat system. The Log in to your Red Hat account page displays. Enter your credentials and click . The Grant access to rhcert-cwe page displays. Click Grant access . A confirmation message displays a successful device login. You are now connected to the Cockpit web application. 6.4. 
Downloading test plans in Cockpit from Red Hat certification portal For Non-authorized or limited access users: To download the test plan, see Downloading the test plan from Red Hat Certification portal . For authorized users: Procedure Enter http://<Cockpit_system_IP>:9090/ in your browser's address bar to launch the Cockpit web application. Enter the username and password, and then click Login . Select Tools Red Hat Certification in the left panel. Click the Test Plans tab. A list of Recent Certification Support Cases will appear. Click Download Test Plan . A message displays confirming the successful addition of the test plan. The downloaded test plan will be listed under the File Name of the Test Plan Files section. 6.5. Using the test plan to prepare the host under test for testing Provisioning the host under test performs a number of operations, such as setting up passwordless SSH communication with the cockpit, installing the required packages on your system based on the certification type, and creating a final test plan to run, which is a list of common tests taken from both the test plan provided by Red Hat and tests generated on discovering the system requirements. For instance, required hardware packages will be installed if the test plan is designed for certifying a hardware product. Prerequisites You have downloaded the test plan provided by Red Hat . Procedure Enter http:// <Cockpit_system_IP> :9090/ in your browser address bar to launch the Cockpit web application. Enter the username and password, and then click Login . Select Tools Red Hat Certification in the left panel. Click the Hosts tab, and then click the host under test on which you want to run the tests. Click Provision . A dialog box appears. Click Upload, and then select the new test plan .xml file. Then, click . A successful upload message is displayed. Optionally, if you want to reuse the previously uploaded test plan, then select it again to reupload. Note During the certification process, if you receive a redesigned test plan for the ongoing product certification, then you can upload it following the step. However, you must run rhcert-clean all in the Terminal tab before proceeding. In the Role field, select Host under test and click Submit . By default, the file is uploaded to path:`/var/rhcert/plans/<testplanfile.xml>` 6.6. Running the certification tests using Cockpit Prerequisites You have prepared the host under test . Procedure Enter http:// <Cockpit_system_IP> :9090/ in your browser address bar to launch the Cockpit web application. Enter the username and password, and click Login . Select Tools Red Hat Certification in the left panel. Click the Hosts tab and click on the host on which you want to run the tests. Click the Terminal tab and select Run. A list of recommended tests based on the test plan uploaded displays. The final test plan to run is a list of common tests taken from both the test plan provided by Red Hat and tests generated on discovering the system requirements. When prompted, choose whether to run each test by typing yes or no . You can also run particular tests from the list by typing select . 6.7. Reviewing and downloading the test results file Procedure Enter http:// <Cockpit_system_IP> :9090/ in your browser address bar to launch the Cockpit web application. Enter the username and password, and then click Login . Select Tools Red Hat Certification in the left panel. Click the Result Files tab to view the test results generated. 
Optional: Click Preview to view the results of each test. Click Download beside the result files. By default, the result file is saved as /var/rhcert/save/hostname-date-time.xml . 6.8. Submitting the test results from Cockpit to the Red Hat Certification Portal Procedure Enter http://<Cockpit_system_IP>:9090/ in your browser's address bar to launch the Cockpit web application. Enter the username and password, and then click Login . Select Tools Red Hat Certification in the left panel. Click the Result Files tab and select the case number from the displayed list. For authorized users, click Submit . A message displays confirming the successful upload of the test result file. For non-authorized users, see Uploading the results file of the executed test plan to Red Hat Certification portal . The test result file of the executed test plan will be uploaded to the Red Hat Certification portal. 6.9. Uploading the results file of the executed test plan to Red Hat Certification portal Prerequisites You have downloaded the test results file from either Cockpit or the HUT directly. Procedure Log in to Red Hat Certification portal . On the homepage, enter the product case number in the search bar. Select the case number from the list that is displayed. On the Summary tab, under the Files section, click Upload . Next steps Red Hat will review the results file you submitted and suggest the next steps. For more information, visit Red Hat Certification portal . | [
"yum install redhat-certification-cockpit"
] | https://docs.redhat.com/en/documentation/red_hat_certified_cloud_and_service_provider_certification/2025/html/red_hat_certified_cloud_and_service_provider_certification_for_red_hat_enterprise_linux_for_sap_images_workflow_guide/assembly_cloud-wf-configuring-system-and-running-tests-by-using-Cockpit_cloud-instance-wf-setting-test-environment |
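The procedure above assumes that the Cockpit service is already enabled and reachable on port 9090. On a typical RHEL 8 or 9 system this setup usually amounts to the commands in the following sketch; the firewall step applies only when firewalld is running, and apart from redhat-certification-cockpit (taken from this chapter) the commands use standard RHEL packages and services.

# Enable and start the Cockpit web console; it listens on port 9090 by default.
sudo systemctl enable --now cockpit.socket

# Open the firewall for the cockpit service if firewalld is active.
sudo firewall-cmd --add-service=cockpit --permanent
sudo firewall-cmd --reload

# Install the certification plugin provided by the Red Hat Certification team.
sudo yum install redhat-certification-cockpit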
Chapter 1. Workflow for deploying a single hyperconverged host | Chapter 1. Workflow for deploying a single hyperconverged host Check requirements. Verify that your planned deployment meets support requirements: Requirements , and fill in the installation checklist so that you can refer to it during the deployment process. Install operating systems. Install an operating system on each physical machine that will act as a hyperconverged host: Installing hyperconverged hosts . (Optional) Install an operating system on each physical or virtual machine that will act as an Network-Bound Disk Encryption (NBDE) key server: Installing NBDE key servers . Modify firewall rules for additional software. (Optional) Modify firewall rules for disk encryption: Section 5.1, "Modifying firewall rules for disk encryption" . Configure authentication between hyperconverged hosts. Configure key-based SSH authentication without a password to enable automated configuration of the hosts: Configure key-based SSH without a password . (Optional) Configure disk encryption. Configure NBDE key servers . Configure hyperconverged hosts as NBDE clients . Configure the hyperconverged node. Browse to the Web Console and deploy a single hyperconverged node . | null | https://docs.redhat.com/en/documentation/red_hat_hyperconverged_infrastructure_for_virtualization/1.8/html/deploying_red_hat_hyperconverged_infrastructure_for_virtualization_on_a_single_node/workflow-deploy-single-node |
6.8 Technical Notes | 6.8 Technical Notes Red Hat Enterprise Linux 6.8 Technical Notes for Red Hat Enterprise Linux 6.8 Edition 8 Red Hat Customer Content Services | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.8_technical_notes/index |
Chapter 5. Security realms | Chapter 5. Security realms Security realms integrate Data Grid Server deployments with the network protocols and infrastructure in your environment that control access and verify user identities. 5.1. Creating security realms Add security realms to Data Grid Server configuration to control access to deployments. You can add one or more security realms to your configuration. Note When you add security realms to your configuration, Data Grid Server automatically enables the matching authentication mechanisms for the Hot Rod and REST endpoints. Prerequisites Add socket bindings to your Data Grid Server configuration as required. Create keystores, or have a PEM file, to configure the security realm with TLS/SSL encryption. Data Grid Server can also generate keystores at startup. Provision the resources or services that the security realm configuration relies on. For example, if you add a token realm, you need to provision OAuth services. This procedure demonstrates how to configure multiple property realms. Before you begin, you need to create properties files that add users and assign permissions with the Command Line Interface (CLI). Use the user create commands as follows: Tip Run user create --help for examples and more information. Note Adding credentials to a properties realm with the CLI creates the user only on the server instance to which you are connected. You must manually synchronize credentials in a properties realm to each node in the cluster. Procedure Open your Data Grid Server configuration for editing. Use the security-realms element in the security configuration to create multiple security realms. Add a security realm with the security-realm element and give it a unique name with the name attribute. To follow the example, create one security realm named application-realm and another named management-realm . Provide the TLS/SSL identity for Data Grid Server with the server-identities element and configure a keystore as required. Specify the type of security realm by adding one of the following elements or fields: properties-realm ldap-realm token-realm truststore-realm Specify properties for the type of security realm you are configuring as appropriate. To follow the example, specify the *.properties files you created with the CLI using the path attribute on the user-properties and group-properties elements or fields. If you add multiple different types of security realm to your configuration, include the distributed-realm element or field so that Data Grid Server uses the realms in combination with each other. Configure Data Grid Server endpoints to use the security realm with the security-realm attribute. Save the changes to your configuration.
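Before the configuration examples that follow, here is a sketch of the user create commands referenced at the start of the procedure above. It is run with the Data Grid CLI from the server installation directory; the usernames and passwords are placeholders, and the --users-file and --groups-file options should be checked against user create --help for your Data Grid version.

# Create a user whose credentials and group are written to the application realm properties files.
bin/cli.sh user create appuser -p changeme -g admin \
  --users-file=application-users.properties \
  --groups-file=application-groups.properties

# Create a user for the management realm in a separate pair of properties files.
bin/cli.sh user create mgmtuser -p changeme -g admin \
  --users-file=management-users.properties \
  --groups-file=management-groups.properties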
Multiple property realms XML <server xmlns="urn:infinispan:server:15.0"> <security> <security-realms> <security-realm name="application-realm"> <properties-realm groups-attribute="Roles"> <user-properties path="application-users.properties"/> <group-properties path="application-groups.properties"/> </properties-realm> </security-realm> <security-realm name="management-realm"> <properties-realm groups-attribute="Roles"> <user-properties path="management-users.properties"/> <group-properties path="management-groups.properties"/> </properties-realm> </security-realm> </security-realms> </security> </server> JSON { "server": { "security": { "security-realms": [{ "name": "management-realm", "properties-realm": { "groups-attribute": "Roles", "user-properties": { "digest-realm-name": "management-realm", "path": "management-users.properties" }, "group-properties": { "path": "management-groups.properties" } } }, { "name": "application-realm", "properties-realm": { "groups-attribute": "Roles", "user-properties": { "digest-realm-name": "application-realm", "path": "application-users.properties" }, "group-properties": { "path": "application-groups.properties" } } }] } } } YAML server: security: securityRealms: - name: "management-realm" propertiesRealm: groupsAttribute: "Roles" userProperties: digestRealmName: "management-realm" path: "management-users.properties" groupProperties: path: "management-groups.properties" - name: "application-realm" propertiesRealm: groupsAttribute: "Roles" userProperties: digestRealmName: "application-realm" path: "application-users.properties" groupProperties: path: "application-groups.properties" 5.2. Setting up Kerberos identities Add Kerberos identities to a security realm in your Data Grid Server configuration to use keytab files that contain service principal names and encrypted keys, derived from Kerberos passwords. Prerequisites Have Kerberos service account principals. Note keytab files can contain both user and service account principals. However, Data Grid Server uses service account principals only which means it can provide identity to clients and allow clients to authenticate with Kerberos servers. In most cases, you create unique principals for the Hot Rod and REST endpoints. For example, if you have a "datagrid" server in the "INFINISPAN.ORG" domain you should create the following service principals: hotrod/[email protected] identifies the Hot Rod service. HTTP/[email protected] identifies the REST service. Procedure Create keytab files for the Hot Rod and REST services. Linux Microsoft Windows Copy the keytab files to the server/conf directory of your Data Grid Server installation. Open your Data Grid Server configuration for editing. Add a server-identities definition to the Data Grid server security realm. Specify the location of keytab files that provide service principals to Hot Rod and REST connectors. Name the Kerberos service principals. Save the changes to your configuration. Kerberos identity configuration XML <server xmlns="urn:infinispan:server:15.0"> <security> <security-realms> <security-realm name="kerberos-realm"> <server-identities> <!-- Specifies a keytab file that provides a Kerberos identity. --> <!-- Names the Kerberos service principal for the Hot Rod endpoint. --> <!-- The required="true" attribute specifies that the keytab file must be present when the server starts. 
--> <kerberos keytab-path="hotrod.keytab" principal="hotrod/[email protected]" required="true"/> <!-- Specifies a keytab file and names the Kerberos service principal for the REST endpoint. --> <kerberos keytab-path="http.keytab" principal="HTTP/[email protected]" required="true"/> </server-identities> </security-realm> </security-realms> </security> <endpoints> <endpoint socket-binding="default" security-realm="kerberos-realm"> <hotrod-connector> <authentication> <sasl server-name="datagrid" server-principal="hotrod/[email protected]"/> </authentication> </hotrod-connector> <rest-connector> <authentication server-principal="HTTP/[email protected]"/> </rest-connector> </endpoint> </endpoints> </server> JSON { "server": { "security": { "security-realms": [{ "name": "kerberos-realm", "server-identities": [{ "kerberos": { "principal": "hotrod/[email protected]", "keytab-path": "hotrod.keytab", "required": true }, "kerberos": { "principal": "HTTP/[email protected]", "keytab-path": "http.keytab", "required": true } }] }] }, "endpoints": { "endpoint": { "socket-binding": "default", "security-realm": "kerberos-realm", "hotrod-connector": { "authentication": { "security-realm": "kerberos-realm", "sasl": { "server-name": "datagrid", "server-principal": "hotrod/[email protected]" } } }, "rest-connector": { "authentication": { "server-principal": "HTTP/[email protected]" } } } } } } YAML server: security: securityRealms: - name: "kerberos-realm" serverIdentities: - kerberos: principal: "hotrod/[email protected]" keytabPath: "hotrod.keytab" required: "true" - kerberos: principal: "HTTP/[email protected]" keytabPath: "http.keytab" required: "true" endpoints: endpoint: socketBinding: "default" securityRealm: "kerberos-realm" hotrodConnector: authentication: sasl: serverName: "datagrid" serverPrincipal: "hotrod/[email protected]" restConnector: authentication: securityRealm: "kerberos-realm" serverPrincipal" : "HTTP/[email protected]" 5.3. Property realms Property realms use property files to define users and groups. users.properties contains Data Grid user credentials. Passwords can be pre-digested with the DIGEST-MD5 and DIGEST authentication mechanisms. groups.properties associates users with roles and permissions. users.properties groups.properties Property realm configuration XML <server xmlns="urn:infinispan:server:15.0"> <security> <security-realms> <security-realm name="default"> <!-- groups-attribute configures the "groups.properties" file to contain security authorization roles. --> <properties-realm groups-attribute="Roles"> <user-properties path="users.properties" relative-to="infinispan.server.config.path" plain-text="true"/> <group-properties path="groups.properties" relative-to="infinispan.server.config.path"/> </properties-realm> </security-realm> </security-realms> </security> </server> JSON { "server": { "security": { "security-realms": [{ "name": "default", "properties-realm": { "groups-attribute": "Roles", "user-properties": { "digest-realm-name": "default", "path": "users.properties", "relative-to": "infinispan.server.config.path", "plain-text": true }, "group-properties": { "path": "groups.properties", "relative-to": "infinispan.server.config.path" } } }] } } } YAML server: security: securityRealms: - name: "default" propertiesRealm: # groupsAttribute configures the "groups.properties" file # to contain security authorization roles. 
groupsAttribute: "Roles" userProperties: digestRealmName: "default" path: "users.properties" relative-to: 'infinispan.server.config.path' plainText: "true" groupProperties: path: "groups.properties" relative-to: 'infinispan.server.config.path' 5.3.1. Property realm file structure User properties files are structured as follows: users.properties structure The first three lines are special comments that define the name of the realm ( USDREALM_NAME ), whether the passwords are stored in clear or encrypted format ( USDALGORITHM ) and the timestamp of the last update. User credentials are stored in traditional key/value format: the key corresponds to the username and the value corresponds to the password. Encrypted passwords are represented as semi-colon-separated algorithm/hash pairs, with the hash encoded in Base64. Credentials are encrypted using the realm name. Changing a realm's name requires re-encrypting all the passwords. Use the Data Grid CLI to enter the correct security realm name to the file. 5.4. LDAP realms LDAP realms connect to LDAP servers, such as OpenLDAP, Red Hat Directory Server, Apache Directory Server, or Microsoft Active Directory, to authenticate users and obtain membership information. Note LDAP servers can have different entry layouts, depending on the type of server and deployment. It is beyond the scope of this document to provide examples for all possible configurations. 5.4.1. LDAP connection properties Specify the LDAP connection properties in the LDAP realm configuration. The following properties are required: url Specifies the URL of the LDAP server. The URL should be in format ldap://hostname:port or ldaps://hostname:port for secure connections using TLS. principal Specifies a distinguished name (DN) of a valid user in the LDAp server. The DN uniquely identifies the user within the LDAP directory structure. credential Corresponds to the password associated with the principal mentioned above. Important The principal for LDAP connections must have necessary privileges to perform LDAP queries and access specific attributes. Tip Enabling connection-pooling significantly improves the performance of authentication to LDAP servers. The connection pooling mechanism is provided by the JDK. For more information see Connection Pooling Configuration and Java Tutorials: Pooling . 5.4.2. LDAP realm user authentication methods Configure the user authentication method in the LDAP realm. The LDAP realm can authenticate users in two ways: Hashed password comparison by comparing the hashed password stored in a user's password attribute (usually userPassword ) Direct verification by authenticating against the LDAP server using the supplied credentials Direct verification is the only approach that works with Active Directory, because access to the password attribute is forbidden. Important You cannot use endpoint authentication mechanisms that performs hashing with the direct-verification attribute, since this method requires having the password in clear text. As a result you must use the BASIC authentication mechanism with the REST endpoint and PLAIN with the Hot Rod endpoint to integrate with Active Directory Server. A more secure alternative is to use Kerberos, which allows the SPNEGO , GSSAPI , and GS2-KRB5 authentication mechanisms. The LDAP realm searches the directory to find the entry which corresponds to the authenticated user. 
The rdn-identifier attribute specifies an LDAP attribute that finds the user entry based on a provided identifier, which is typically a username; for example, the uid or sAMAccountName attribute. Add search-recursive="true" to the configuration to search the directory recursively. By default, the search for the user entry uses the (rdn_identifier={0}) filter. You can specify a different filter using the filter-name attribute. 5.4.3. Mapping user entries to their associated groups In the LDAP realm configuration, specify the attribute-mapping element to retrieve and associate all groups that a user is a member of. The membership information is stored typically in two ways: Under group entries that usually have class groupOfNames or groupOfUniqueNames in the member attribute. This is the default behavior in most LDAP installations, except for Active Directory. In this case, you can use an attribute filter. This filter searches for entries that match the supplied filter, which locates groups with a member attribute equal to the user's DN. The filter then extracts the group entry's CN as specified by from , and adds it to the user's Roles . In the user entry in the memberOf attribute. This is typically the case for Active Directory. In this case you should use an attribute reference such as the following: <attribute-reference reference="memberOf" from="cn" to="Roles" /> This reference gets all memberOf attributes from the user's entry, extracts the CN as specified by from , and adds them to the user's groups ( Roles is the internal name used to map the groups). 5.4.4. LDAP realm configuration reference XML <server xmlns="urn:infinispan:server:15.0"> <security> <security-realms> <security-realm name="ldap-realm"> <!-- Specifies connection properties. --> <ldap-realm url="ldap://my-ldap-server:10389" principal="uid=admin,ou=People,dc=infinispan,dc=org" credential="strongPassword" connection-timeout="3000" read-timeout="30000" connection-pooling="true" referral-mode="ignore" page-size="30" direct-verification="true"> <!-- Defines how principals are mapped to LDAP entries. --> <identity-mapping rdn-identifier="uid" search-dn="ou=People,dc=infinispan,dc=org" search-recursive="false"> <!-- Retrieves all the groups of which the user is a member. 
--> <attribute-mapping> <attribute from="cn" to="Roles" filter="(&(objectClass=groupOfNames)(member={1}))" filter-dn="ou=Roles,dc=infinispan,dc=org"/> </attribute-mapping> </identity-mapping> </ldap-realm> </security-realm> </security-realms> </security> </server> JSON { "server": { "security": { "security-realms": [{ "name": "ldap-realm", "ldap-realm": { "url": "ldap://my-ldap-server:10389", "principal": "uid=admin,ou=People,dc=infinispan,dc=org", "credential": "strongPassword", "connection-timeout": "3000", "read-timeout": "30000", "connection-pooling": "true", "referral-mode": "ignore", "page-size": "30", "direct-verification": "true", "identity-mapping": { "rdn-identifier": "uid", "search-dn": "ou=People,dc=infinispan,dc=org", "search-recursive": "false", "attribute-mapping": [{ "from": "cn", "to": "Roles", "filter": "(&(objectClass=groupOfNames)(member={1}))", "filter-dn": "ou=Roles,dc=infinispan,dc=org" }] } } }] } } } YAML server: security: securityRealms: - name: ldap-realm ldapRealm: url: 'ldap://my-ldap-server:10389' principal: 'uid=admin,ou=People,dc=infinispan,dc=org' credential: strongPassword connectionTimeout: '3000' readTimeout: '30000' connectionPooling: true referralMode: ignore pageSize: '30' directVerification: true identityMapping: rdnIdentifier: uid searchDn: 'ou=People,dc=infinispan,dc=org' searchRecursive: false attributeMapping: - filter: '(&(objectClass=groupOfNames)(member={1}))' filterDn: 'ou=Roles,dc=infinispan,dc=org' from: cn to: Roles 5.4.4.1. LDAP realm principal rewriting Principals obtained by SASL authentication mechanisms such as GSSAPI , GS2-KRB5 and Negotiate usually include the domain name, for example [email protected] . Before using these principals in LDAP queries, it is necessary to transform them to ensure their compatibility. This process is called rewriting. Data Grid includes the following transformers: case-principal-transformer rewrites the principal to either all uppercase or all lowercase. For example MyUser would be rewritten as MYUSER in uppercase mode and myuser in lowercase mode. common-name-principal-transformer rewrites principals in the LDAP Distinguished Name format (as defined by RFC 4514 ). It extracts the first attribute of type CN (commonName). For example, DN=CN=myuser,OU=myorg,DC=mydomain would be rewritten as myuser . regex-principal-transformer rewrites principals using a regular expression with capturing groups, allowing, for example, for extractions of any substring. 5.4.4.2. 
LDAP principal rewriting configuration reference Case principal transformer XML <server xmlns="urn:infinispan:server:15.0"> <security> <security-realms> <security-realm name="ldap-realm"> <ldap-realm url="ldap://USD{org.infinispan.test.host.address}:10389" principal="uid=admin,ou=People,dc=infinispan,dc=org" credential="strongPassword"> <name-rewriter> <!-- Defines a rewriter that transforms usernames to lowercase --> <case-principal-transformer uppercase="false"/> </name-rewriter> <!-- further configuration omitted --> </ldap-realm> </security-realm> </security-realms> </security> </server> JSON { "server": { "security": { "security-realms": [{ "name": "ldap-realm", "ldap-realm": { "principal": "uid=admin,ou=People,dc=infinispan,dc=org", "url": "ldap://USD{org.infinispan.test.host.address}:10389", "credential": "strongPassword", "name-rewriter": { "case-principal-transformer": { "uppercase": false } } } }] } } } YAML server: security: securityRealms: - name: "ldap-realm" ldapRealm: principal: "uid=admin,ou=People,dc=infinispan,dc=org" url: "ldap://USD{org.infinispan.test.host.address}:10389" credential: "strongPassword" nameRewriter: casePrincipalTransformer: uppercase: false # further configuration omitted Common name principal transformer XML <server xmlns="urn:infinispan:server:15.0"> <security> <security-realms> <security-realm name="ldap-realm"> <ldap-realm url="ldap://USD{org.infinispan.test.host.address}:10389" principal="uid=admin,ou=People,dc=infinispan,dc=org" credential="strongPassword"> <name-rewriter> <!-- Defines a rewriter that obtains the first CN from a DN --> <common-name-principal-transformer /> </name-rewriter> <!-- further configuration omitted --> </ldap-realm> </security-realm> </security-realms> </security> </server> JSON { "server": { "security": { "security-realms": [{ "name": "ldap-realm", "ldap-realm": { "principal": "uid=admin,ou=People,dc=infinispan,dc=org", "url": "ldap://USD{org.infinispan.test.host.address}:10389", "credential": "strongPassword", "name-rewriter": { "common-name-principal-transformer": {} } } }] } } } YAML server: security: securityRealms: - name: "ldap-realm" ldapRealm: principal: "uid=admin,ou=People,dc=infinispan,dc=org" url: "ldap://USD{org.infinispan.test.host.address}:10389" credential: "strongPassword" nameRewriter: commonNamePrincipalTransformer: ~ # further configuration omitted Regex principal transformer XML <server xmlns="urn:infinispan:server:15.0"> <security> <security-realms> <security-realm name="ldap-realm"> <ldap-realm url="ldap://USD{org.infinispan.test.host.address}:10389" principal="uid=admin,ou=People,dc=infinispan,dc=org" credential="strongPassword"> <name-rewriter> <!-- Defines a rewriter that extracts the username from the principal using a regular expression. 
--> <regex-principal-transformer pattern="(.*)@INFINISPAN\.ORG" replacement="USD1"/> </name-rewriter> <!-- further configuration omitted --> </ldap-realm> </security-realm> </security-realms> </security> </server> JSON { "server": { "security": { "security-realms": [{ "name": "ldap-realm", "ldap-realm": { "principal": "uid=admin,ou=People,dc=infinispan,dc=org", "url": "ldap://USD{org.infinispan.test.host.address}:10389", "credential": "strongPassword", "name-rewriter": { "regex-principal-transformer": { "pattern": "(.*)@INFINISPAN\\.ORG", "replacement": "USD1" } } } }] } } } YAML server: security: securityRealms: - name: "ldap-realm" ldapRealm: principal: "uid=admin,ou=People,dc=infinispan,dc=org" url: "ldap://USD{org.infinispan.test.host.address}:10389" credential: "strongPassword" nameRewriter: regexPrincipalTransformer: pattern: (.*)@INFINISPAN\.ORG replacement: "USD1" # further configuration omitted 5.4.4.3. LDAP user and group mapping process with Data Grid This example illustrates the process of loading and internally mapping LDAP users and groups to Data Grid subjects. The following is a LDIF (LDAP Data Interchange Format) file, which describes multiple LDAP entries: LDIF # Users dn: uid=root,ou=People,dc=infinispan,dc=org objectclass: top objectclass: uidObject objectclass: person uid: root cn: root sn: root userPassword: strongPassword # Groups dn: cn=admin,ou=Roles,dc=infinispan,dc=org objectClass: top objectClass: groupOfNames cn: admin description: the Infinispan admin group member: uid=root,ou=People,dc=infinispan,dc=org dn: cn=monitor,ou=Roles,dc=infinispan,dc=org objectClass: top objectClass: groupOfNames cn: monitor description: the Infinispan monitor group member: uid=root,ou=People,dc=infinispan,dc=org The root user is a member of the admin and monitor groups. When a request to authenticate the user root with the password strongPassword is made on one of the endpoints, the following operations are performed: The username is optionally rewritten using the chosen principal transformer. The realm searches within the ou=People,dc=infinispan,dc=org tree for an entry whose uid attribute is equal to root and finds the entry with DN uid=root,ou=People,dc=infinispan,dc=org , which becomes the user principal. The realm searches within the u=Roles,dc=infinispan,dc=org tree for entries of objectClass=groupOfNames that include uid=root,ou=People,dc=infinispan,dc=org in the member attribute. In this case it finds two entries: cn=admin,ou=Roles,dc=infinispan,dc=org and cn=monitor,ou=Roles,dc=infinispan,dc=org . From these entries, it extracts the cn attributes which become the group principals. The resulting subject will therefore look like: NamePrincipal: uid=root,ou=People,dc=infinispan,dc=org RolePrincipal: admin RolePrincipal: monitor At this point, the global authorization mappers are applied on the above subject to convert the principals into roles. The roles are then expanded into a set of permissions, which are validated against the requested cache and operation. 5.5. Token realms Token realms use external services to validate tokens and require providers that are compatible with RFC-7662 (OAuth2 Token Introspection), such as Red Hat SSO. Token realm configuration XML <server xmlns="urn:infinispan:server:15.0"> <security> <security-realms> <security-realm name="token-realm"> <!-- Specifies the URL of the authentication server. --> <token-realm name="token" auth-server-url="https://oauth-server/auth/"> <!-- Specifies the URL of the token introspection endpoint. 
--> <oauth2-introspection introspection-url="https://oauth-server/auth/realms/infinispan/protocol/openid-connect/token/introspect" client-id="infinispan-server" client-secret="1fdca4ec-c416-47e0-867a-3d471af7050f"/> </token-realm> </security-realm> </security-realms> </security> </server> JSON { "server": { "security": { "security-realms": [{ "name": "token-realm", "token-realm": { "auth-server-url": "https://oauth-server/auth/", "oauth2-introspection": { "client-id": "infinispan-server", "client-secret": "1fdca4ec-c416-47e0-867a-3d471af7050f", "introspection-url": "https://oauth-server/auth/realms/infinispan/protocol/openid-connect/token/introspect" } } }] } } } YAML server: security: securityRealms: - name: token-realm tokenRealm: authServerUrl: 'https://oauth-server/auth/' oauth2Introspection: clientId: infinispan-server clientSecret: '1fdca4ec-c416-47e0-867a-3d471af7050f' introspectionUrl: 'https://oauth-server/auth/realms/infinispan/protocol/openid-connect/token/introspect' 5.6. Trust store realms Trust store realms use certificates, or certificates chains, that verify Data Grid Server and client identities when they negotiate connections. Keystores Contain server certificates that provide a Data Grid Server identity to clients. If you configure a keystore with server certificates, Data Grid Server encrypts traffic using industry standard SSL/TLS protocols. Trust stores Contain client certificates, or certificate chains, that clients present to Data Grid Server. Client trust stores are optional and allow Data Grid Server to perform client certificate authentication. Client certificate authentication You must add the require-ssl-client-auth="true" attribute to the endpoint configuration if you want Data Grid Server to validate or authenticate client certificates. Trust store realm configuration XML <server xmlns="urn:infinispan:server:15.0"> <security> <security-realms> <security-realm name="trust-store-realm"> <server-identities> <ssl> <!-- Provides an SSL/TLS identity with a keystore that contains server certificates. --> <keystore path="server.p12" relative-to="infinispan.server.config.path" keystore-password="secret" alias="server"/> <!-- Configures a trust store that contains client certificates or part of a certificate chain. --> <truststore path="trust.p12" relative-to="infinispan.server.config.path" password="secret"/> </ssl> </server-identities> <!-- Authenticates client certificates against the trust store. If you configure this, the trust store must contain the public certificates for all clients. --> <truststore-realm/> </security-realm> </security-realms> </security> </server> JSON { "server": { "security": { "security-realms": [{ "name": "trust-store-realm", "server-identities": { "ssl": { "keystore": { "path": "server.p12", "relative-to": "infinispan.server.config.path", "keystore-password": "secret", "alias": "server" }, "truststore": { "path": "trust.p12", "relative-to": "infinispan.server.config.path", "password": "secret" } } }, "truststore-realm": {} }] } } } YAML server: security: securityRealms: - name: "trust-store-realm" serverIdentities: ssl: keystore: path: "server.p12" relative-to: "infinispan.server.config.path" keystore-password: "secret" alias: "server" truststore: path: "trust.p12" relative-to: "infinispan.server.config.path" password: "secret" truststoreRealm: ~ 5.7. Distributed security realms Distributed realms combine multiple different types of security realms. 
When users attempt to access the Hot Rod or REST endpoints, Data Grid Server uses each security realm in turn until it finds one that can perform the authentication. Distributed realm configuration XML <server xmlns="urn:infinispan:server:15.0"> <security> <security-realms> <security-realm name="distributed-realm"> <ldap-realm url="ldap://my-ldap-server:10389" principal="uid=admin,ou=People,dc=infinispan,dc=org" credential="strongPassword"> <identity-mapping rdn-identifier="uid" search-dn="ou=People,dc=infinispan,dc=org" search-recursive="false"> <attribute-mapping> <attribute from="cn" to="Roles" filter="(&(objectClass=groupOfNames)(member={1}))" filter-dn="ou=Roles,dc=infinispan,dc=org"/> </attribute-mapping> </identity-mapping> </ldap-realm> <properties-realm groups-attribute="Roles"> <user-properties path="users.properties" relative-to="infinispan.server.config.path"/> <group-properties path="groups.properties" relative-to="infinispan.server.config.path"/> </properties-realm> <distributed-realm/> </security-realm> </security-realms> </security> </server> JSON { "server": { "security": { "security-realms": [{ "name": "distributed-realm", "ldap-realm": { "principal": "uid=admin,ou=People,dc=infinispan,dc=org", "url": "ldap://my-ldap-server:10389", "credential": "strongPassword", "identity-mapping": { "rdn-identifier": "uid", "search-dn": "ou=People,dc=infinispan,dc=org", "search-recursive": false, "attribute-mapping": { "attribute": { "filter": "(&(objectClass=groupOfNames)(member={1}))", "filter-dn": "ou=Roles,dc=infinispan,dc=org", "from": "cn", "to": "Roles" } } } }, "properties-realm": { "groups-attribute": "Roles", "user-properties": { "digest-realm-name": "distributed-realm", "path": "users.properties" }, "group-properties": { "path": "groups.properties" } }, "distributed-realm": {} }] } } } YAML server: security: securityRealms: - name: "distributed-realm" ldapRealm: principal: "uid=admin,ou=People,dc=infinispan,dc=org" url: "ldap://my-ldap-server:10389" credential: "strongPassword" identityMapping: rdnIdentifier: "uid" searchDn: "ou=People,dc=infinispan,dc=org" searchRecursive: "false" attributeMapping: attribute: filter: "(&(objectClass=groupOfNames)(member={1}))" filterDn: "ou=Roles,dc=infinispan,dc=org" from: "cn" to: "Roles" propertiesRealm: groupsAttribute: "Roles" userProperties: digestRealmName: "distributed-realm" path: "users.properties" groupProperties: path: "groups.properties" distributedRealm: ~ 5.8. Aggregate security realms Aggregate realms combine multiple realms: the first one for the authentication steps and the others for loading the identity for the authorization steps. For example, this can be used to authenticate users via a client certificate, and retrieve identity from a properties or LDAP realm. 
Aggregate realm configuration XML <server xmlns="urn:infinispan:server:15.0"> <security> <security-realms> <security-realm name="default" default-realm="aggregate"> <server-identities> <ssl> <keystore path="server.pfx" password="secret" alias="server"/> <truststore path="trust.pfx" password="secret"/> </ssl> </server-identities> <properties-realm name="properties" groups-attribute="Roles"> <user-properties path="users.properties" relative-to="infinispan.server.config.path"/> <group-properties path="groups.properties" relative-to="infinispan.server.config.path"/> </properties-realm> <truststore-realm name="trust"/> <aggregate-realm authentication-realm="trust" authorization-realms="properties"> <name-rewriter> <common-name-principal-transformer/> </name-rewriter> </aggregate-realm> </security-realm> </security-realms> </security> </server> JSON { "server": { "security": { "security-realms": [ { "name": "aggregate-realm", "default-realm": "aggregate", "server-identities": { "ssl": { "keystore": { "path": "server.p12", "relative-to": "infinispan.server.config.path", "keystore-password": "secret", "alias": "server" }, "truststore": { "path": "trust.p12", "relative-to": "infinispan.server.config.path", "password": "secret" } } }, "properties-realm": { "name": "properties", "groups-attribute": "Roles", "user-properties": { "digest-realm-name": "distributed-realm", "path": "users.properties" }, "group-properties": { "path": "groups.properties" } }, "truststore-realm": { "name": "trust" }, "aggregate-realm": { "authentication-realm": "trust", "authorization-realms": ["properties"], "name-rewriter": { "common-name-principal-transformer": {} } } } ] } } } YAML server: security: securityRealms: - name: "aggregate-realm" defaultRealm: "aggregate" serverIdentities: ssl: keystore: path: "server.p12" relative-to: "infinispan.server.config.path" keystore-password: "secret" alias: "server" truststore: path: "trust.p12" relative-to: "infinispan.server.config.path" password: "secret" truststoreRealm: name: "trust" propertiesRealm: name: "properties" groupsAttribute: "Roles" userProperties: digestRealmName: "distributed-realm" path: "users.properties" groupProperties: path: "groups.properties" aggregateRealm: authenticationRealm: "trust" authorizationRealms: - "properties" nameRewriter: common-name-principal-transformer: ~ 5.8.1. Name rewriters Principal names may have different forms, depending on the security realm type: Properties and Token realms may return simple strings Trust and LDAP realms may return X.500-style distinguished names Kerberos realms may return user@domain -style names When you use the aggregate realm, names must be normalized to a common form by using one of the following transformers. 5.8.1.1. Case Principal Transformer The case-principal-transformer transforms a name to all uppercase or all lowercase letters. XML <aggregate-realm authentication-realm="trust" authorization-realms="properties"> <name-rewriter> <case-principal-transformer uppercase="false"/> </name-rewriter> </aggregate-realm> JSON { "aggregate-realm": { "authentication-realm": "trust", "authorization-realms": [ "properties" ], "name-rewriter": { "case-principal-transformer": { "uppercase": "false" } } } } YAML aggregateRealm: authenticationRealm: "trust" authorizationRealms: - "properties" nameRewriter: casePrincipalTransformer: uppercase: false 5.8.1.2. Common Name Principal Transformer The common-name-principal-transformer extracts the first CN element from a DN used by LDAP or certificates.
For example, given a principal in the form CN=app1,CN=serviceA,OU=applications,DC=infinispan,DC=org , the following configuration will extract app1 as the principal. XML <aggregate-realm authentication-realm="trust" authorization-realms="properties"> <name-rewriter> <common-name-principal-transformer/> </name-rewriter> </aggregate-realm> JSON { "aggregate-realm": { "authentication-realm": "trust", "authorization-realms": [ "properties" ], "name-rewriter": { "common-name-principal-transformer": {} } } } YAML aggregateRealm: authenticationRealm: "trust" authorizationRealms: - "properties" nameRewriter: commonNamePrincipalTransformer: ~ 5.8.1.3. Regex Principal Transformer The regex-principal-transformer can perform find and replace using a regular expression. The following example shows how to extract the local-part from a [email protected] identifier. XML <aggregate-realm authentication-realm="trust" authorization-realms="properties"> <name-rewriter> <regex-principal-transformer pattern="([^@]+)@.*" replacement="USD1" replace-all="false"/> </name-rewriter> </aggregate-realm> JSON { "aggregate-realm": { "authentication-realm": "trust", "authorization-realms": [ "properties" ], "name-rewriter": { "regex-principal-transformer": { "pattern" : "([^@]+)@.*", "replacement": "USD1", "replace-all": false } } } } YAML aggregateRealm: authenticationRealm: "trust" authorizationRealms: - "properties" nameRewriter: regexPrincipalTransformer: pattern: "([^@]+)@.*" replacement: "USD1" replaceAll: false 5.9. Security realm caching Security realms implement caching to avoid having to repeatedly retrieve data that usually changes very infrequently. You can control the number of cached entries and how long they are retained with the cache-max-size and cache-lifespan attributes, as shown in the following configuration. Realm caching configuration XML <server xmlns="urn:infinispan:server:15.0"> <security> <security-realms> <security-realm name="default" cache-max-size="1024" cache-lifespan="120000"> </security-realm> </security-realms> </security> </server> JSON { "server": { "security": { "security-realms": [{ "name": "default", "cache-max-size": 1024, "cache-lifespan": 120000 }] } } } YAML server: security: securityRealms: - name: "default" cache-max-size: 1024 cache-lifespan: 120000 5.9.1. Flushing realm caches Use the CLI to flush security realm caches across the whole cluster. | [
"user create <username> -p <changeme> -g <role> --users-file=application-users.properties --groups-file=application-groups.properties user create <username> -p <changeme> -g <role> --users-file=management-users.properties --groups-file=management-groups.properties",
"<server xmlns=\"urn:infinispan:server:15.0\"> <security> <security-realms> <security-realm name=\"application-realm\"> <properties-realm groups-attribute=\"Roles\"> <user-properties path=\"application-users.properties\"/> <group-properties path=\"application-groups.properties\"/> </properties-realm> </security-realm> <security-realm name=\"management-realm\"> <properties-realm groups-attribute=\"Roles\"> <user-properties path=\"management-users.properties\"/> <group-properties path=\"management-groups.properties\"/> </properties-realm> </security-realm> </security-realms> </security> </server>",
"{ \"server\": { \"security\": { \"security-realms\": [{ \"name\": \"management-realm\", \"properties-realm\": { \"groups-attribute\": \"Roles\", \"user-properties\": { \"digest-realm-name\": \"management-realm\", \"path\": \"management-users.properties\" }, \"group-properties\": { \"path\": \"management-groups.properties\" } } }, { \"name\": \"application-realm\", \"properties-realm\": { \"groups-attribute\": \"Roles\", \"user-properties\": { \"digest-realm-name\": \"application-realm\", \"path\": \"application-users.properties\" }, \"group-properties\": { \"path\": \"application-groups.properties\" } } }] } } }",
"server: security: securityRealms: - name: \"management-realm\" propertiesRealm: groupsAttribute: \"Roles\" userProperties: digestRealmName: \"management-realm\" path: \"management-users.properties\" groupProperties: path: \"management-groups.properties\" - name: \"application-realm\" propertiesRealm: groupsAttribute: \"Roles\" userProperties: digestRealmName: \"application-realm\" path: \"application-users.properties\" groupProperties: path: \"application-groups.properties\"",
"ktutil ktutil: addent -password -p [email protected] -k 1 -e aes256-cts Password for [email protected]: [enter your password] ktutil: wkt http.keytab ktutil: quit",
"ktpass -princ HTTP/[email protected] -pass * -mapuser INFINISPAN\\USER_NAME ktab -k http.keytab -a HTTP/[email protected]",
"<server xmlns=\"urn:infinispan:server:15.0\"> <security> <security-realms> <security-realm name=\"kerberos-realm\"> <server-identities> <!-- Specifies a keytab file that provides a Kerberos identity. --> <!-- Names the Kerberos service principal for the Hot Rod endpoint. --> <!-- The required=\"true\" attribute specifies that the keytab file must be present when the server starts. --> <kerberos keytab-path=\"hotrod.keytab\" principal=\"hotrod/[email protected]\" required=\"true\"/> <!-- Specifies a keytab file and names the Kerberos service principal for the REST endpoint. --> <kerberos keytab-path=\"http.keytab\" principal=\"HTTP/[email protected]\" required=\"true\"/> </server-identities> </security-realm> </security-realms> </security> <endpoints> <endpoint socket-binding=\"default\" security-realm=\"kerberos-realm\"> <hotrod-connector> <authentication> <sasl server-name=\"datagrid\" server-principal=\"hotrod/[email protected]\"/> </authentication> </hotrod-connector> <rest-connector> <authentication server-principal=\"HTTP/[email protected]\"/> </rest-connector> </endpoint> </endpoints> </server>",
"{ \"server\": { \"security\": { \"security-realms\": [{ \"name\": \"kerberos-realm\", \"server-identities\": [{ \"kerberos\": { \"principal\": \"hotrod/[email protected]\", \"keytab-path\": \"hotrod.keytab\", \"required\": true }, \"kerberos\": { \"principal\": \"HTTP/[email protected]\", \"keytab-path\": \"http.keytab\", \"required\": true } }] }] }, \"endpoints\": { \"endpoint\": { \"socket-binding\": \"default\", \"security-realm\": \"kerberos-realm\", \"hotrod-connector\": { \"authentication\": { \"security-realm\": \"kerberos-realm\", \"sasl\": { \"server-name\": \"datagrid\", \"server-principal\": \"hotrod/[email protected]\" } } }, \"rest-connector\": { \"authentication\": { \"server-principal\": \"HTTP/[email protected]\" } } } } } }",
"server: security: securityRealms: - name: \"kerberos-realm\" serverIdentities: - kerberos: principal: \"hotrod/[email protected]\" keytabPath: \"hotrod.keytab\" required: \"true\" - kerberos: principal: \"HTTP/[email protected]\" keytabPath: \"http.keytab\" required: \"true\" endpoints: endpoint: socketBinding: \"default\" securityRealm: \"kerberos-realm\" hotrodConnector: authentication: sasl: serverName: \"datagrid\" serverPrincipal: \"hotrod/[email protected]\" restConnector: authentication: securityRealm: \"kerberos-realm\" serverPrincipal\" : \"HTTP/[email protected]\"",
"myuser=a_password user2=another_password",
"myuser=supervisor,reader,writer user2=supervisor",
"<server xmlns=\"urn:infinispan:server:15.0\"> <security> <security-realms> <security-realm name=\"default\"> <!-- groups-attribute configures the \"groups.properties\" file to contain security authorization roles. --> <properties-realm groups-attribute=\"Roles\"> <user-properties path=\"users.properties\" relative-to=\"infinispan.server.config.path\" plain-text=\"true\"/> <group-properties path=\"groups.properties\" relative-to=\"infinispan.server.config.path\"/> </properties-realm> </security-realm> </security-realms> </security> </server>",
"{ \"server\": { \"security\": { \"security-realms\": [{ \"name\": \"default\", \"properties-realm\": { \"groups-attribute\": \"Roles\", \"user-properties\": { \"digest-realm-name\": \"default\", \"path\": \"users.properties\", \"relative-to\": \"infinispan.server.config.path\", \"plain-text\": true }, \"group-properties\": { \"path\": \"groups.properties\", \"relative-to\": \"infinispan.server.config.path\" } } }] } } }",
"server: security: securityRealms: - name: \"default\" propertiesRealm: # groupsAttribute configures the \"groups.properties\" file # to contain security authorization roles. groupsAttribute: \"Roles\" userProperties: digestRealmName: \"default\" path: \"users.properties\" relative-to: 'infinispan.server.config.path' plainText: \"true\" groupProperties: path: \"groups.properties\" relative-to: 'infinispan.server.config.path'",
"#USDREALM_NAME=defaultUSD #USDALGORITHM=encryptedUSD #Wed Jul 31 08:32:08 CEST 2024 admin=algorithm-1\\:hash-1;algorithm-2\\:hash-2;",
"<server xmlns=\"urn:infinispan:server:15.0\"> <security> <security-realms> <security-realm name=\"ldap-realm\"> <!-- Specifies connection properties. --> <ldap-realm url=\"ldap://my-ldap-server:10389\" principal=\"uid=admin,ou=People,dc=infinispan,dc=org\" credential=\"strongPassword\" connection-timeout=\"3000\" read-timeout=\"30000\" connection-pooling=\"true\" referral-mode=\"ignore\" page-size=\"30\" direct-verification=\"true\"> <!-- Defines how principals are mapped to LDAP entries. --> <identity-mapping rdn-identifier=\"uid\" search-dn=\"ou=People,dc=infinispan,dc=org\" search-recursive=\"false\"> <!-- Retrieves all the groups of which the user is a member. --> <attribute-mapping> <attribute from=\"cn\" to=\"Roles\" filter=\"(&(objectClass=groupOfNames)(member={1}))\" filter-dn=\"ou=Roles,dc=infinispan,dc=org\"/> </attribute-mapping> </identity-mapping> </ldap-realm> </security-realm> </security-realms> </security> </server>",
"{ \"server\": { \"security\": { \"security-realms\": [{ \"name\": \"ldap-realm\", \"ldap-realm\": { \"url\": \"ldap://my-ldap-server:10389\", \"principal\": \"uid=admin,ou=People,dc=infinispan,dc=org\", \"credential\": \"strongPassword\", \"connection-timeout\": \"3000\", \"read-timeout\": \"30000\", \"connection-pooling\": \"true\", \"referral-mode\": \"ignore\", \"page-size\": \"30\", \"direct-verification\": \"true\", \"identity-mapping\": { \"rdn-identifier\": \"uid\", \"search-dn\": \"ou=People,dc=infinispan,dc=org\", \"search-recursive\": \"false\", \"attribute-mapping\": [{ \"from\": \"cn\", \"to\": \"Roles\", \"filter\": \"(&(objectClass=groupOfNames)(member={1}))\", \"filter-dn\": \"ou=Roles,dc=infinispan,dc=org\" }] } } }] } } }",
"server: security: securityRealms: - name: ldap-realm ldapRealm: url: 'ldap://my-ldap-server:10389' principal: 'uid=admin,ou=People,dc=infinispan,dc=org' credential: strongPassword connectionTimeout: '3000' readTimeout: '30000' connectionPooling: true referralMode: ignore pageSize: '30' directVerification: true identityMapping: rdnIdentifier: uid searchDn: 'ou=People,dc=infinispan,dc=org' searchRecursive: false attributeMapping: - filter: '(&(objectClass=groupOfNames)(member={1}))' filterDn: 'ou=Roles,dc=infinispan,dc=org' from: cn to: Roles",
"<server xmlns=\"urn:infinispan:server:15.0\"> <security> <security-realms> <security-realm name=\"ldap-realm\"> <ldap-realm url=\"ldap://USD{org.infinispan.test.host.address}:10389\" principal=\"uid=admin,ou=People,dc=infinispan,dc=org\" credential=\"strongPassword\"> <name-rewriter> <!-- Defines a rewriter that transforms usernames to lowercase --> <case-principal-transformer uppercase=\"false\"/> </name-rewriter> <!-- further configuration omitted --> </ldap-realm> </security-realm> </security-realms> </security> </server>",
"{ \"server\": { \"security\": { \"security-realms\": [{ \"name\": \"ldap-realm\", \"ldap-realm\": { \"principal\": \"uid=admin,ou=People,dc=infinispan,dc=org\", \"url\": \"ldap://USD{org.infinispan.test.host.address}:10389\", \"credential\": \"strongPassword\", \"name-rewriter\": { \"case-principal-transformer\": { \"uppercase\": false } } } }] } } }",
"server: security: securityRealms: - name: \"ldap-realm\" ldapRealm: principal: \"uid=admin,ou=People,dc=infinispan,dc=org\" url: \"ldap://USD{org.infinispan.test.host.address}:10389\" credential: \"strongPassword\" nameRewriter: casePrincipalTransformer: uppercase: false # further configuration omitted",
"<server xmlns=\"urn:infinispan:server:15.0\"> <security> <security-realms> <security-realm name=\"ldap-realm\"> <ldap-realm url=\"ldap://USD{org.infinispan.test.host.address}:10389\" principal=\"uid=admin,ou=People,dc=infinispan,dc=org\" credential=\"strongPassword\"> <name-rewriter> <!-- Defines a rewriter that obtains the first CN from a DN --> <common-name-principal-transformer /> </name-rewriter> <!-- further configuration omitted --> </ldap-realm> </security-realm> </security-realms> </security> </server>",
"{ \"server\": { \"security\": { \"security-realms\": [{ \"name\": \"ldap-realm\", \"ldap-realm\": { \"principal\": \"uid=admin,ou=People,dc=infinispan,dc=org\", \"url\": \"ldap://USD{org.infinispan.test.host.address}:10389\", \"credential\": \"strongPassword\", \"name-rewriter\": { \"common-name-principal-transformer\": {} } } }] } } }",
"server: security: securityRealms: - name: \"ldap-realm\" ldapRealm: principal: \"uid=admin,ou=People,dc=infinispan,dc=org\" url: \"ldap://USD{org.infinispan.test.host.address}:10389\" credential: \"strongPassword\" nameRewriter: commonNamePrincipalTransformer: ~ # further configuration omitted",
"<server xmlns=\"urn:infinispan:server:15.0\"> <security> <security-realms> <security-realm name=\"ldap-realm\"> <ldap-realm url=\"ldap://USD{org.infinispan.test.host.address}:10389\" principal=\"uid=admin,ou=People,dc=infinispan,dc=org\" credential=\"strongPassword\"> <name-rewriter> <!-- Defines a rewriter that extracts the username from the principal using a regular expression. --> <regex-principal-transformer pattern=\"(.*)@INFINISPAN\\.ORG\" replacement=\"USD1\"/> </name-rewriter> <!-- further configuration omitted --> </ldap-realm> </security-realm> </security-realms> </security> </server>",
"{ \"server\": { \"security\": { \"security-realms\": [{ \"name\": \"ldap-realm\", \"ldap-realm\": { \"principal\": \"uid=admin,ou=People,dc=infinispan,dc=org\", \"url\": \"ldap://USD{org.infinispan.test.host.address}:10389\", \"credential\": \"strongPassword\", \"name-rewriter\": { \"regex-principal-transformer\": { \"pattern\": \"(.*)@INFINISPAN\\\\.ORG\", \"replacement\": \"USD1\" } } } }] } } }",
"server: security: securityRealms: - name: \"ldap-realm\" ldapRealm: principal: \"uid=admin,ou=People,dc=infinispan,dc=org\" url: \"ldap://USD{org.infinispan.test.host.address}:10389\" credential: \"strongPassword\" nameRewriter: regexPrincipalTransformer: pattern: (.*)@INFINISPAN\\.ORG replacement: \"USD1\" # further configuration omitted",
"Users dn: uid=root,ou=People,dc=infinispan,dc=org objectclass: top objectclass: uidObject objectclass: person uid: root cn: root sn: root userPassword: strongPassword Groups dn: cn=admin,ou=Roles,dc=infinispan,dc=org objectClass: top objectClass: groupOfNames cn: admin description: the Infinispan admin group member: uid=root,ou=People,dc=infinispan,dc=org dn: cn=monitor,ou=Roles,dc=infinispan,dc=org objectClass: top objectClass: groupOfNames cn: monitor description: the Infinispan monitor group member: uid=root,ou=People,dc=infinispan,dc=org",
"<server xmlns=\"urn:infinispan:server:15.0\"> <security> <security-realms> <security-realm name=\"token-realm\"> <!-- Specifies the URL of the authentication server. --> <token-realm name=\"token\" auth-server-url=\"https://oauth-server/auth/\"> <!-- Specifies the URL of the token introspection endpoint. --> <oauth2-introspection introspection-url=\"https://oauth-server/auth/realms/infinispan/protocol/openid-connect/token/introspect\" client-id=\"infinispan-server\" client-secret=\"1fdca4ec-c416-47e0-867a-3d471af7050f\"/> </token-realm> </security-realm> </security-realms> </security> </server>",
"{ \"server\": { \"security\": { \"security-realms\": [{ \"name\": \"token-realm\", \"token-realm\": { \"auth-server-url\": \"https://oauth-server/auth/\", \"oauth2-introspection\": { \"client-id\": \"infinispan-server\", \"client-secret\": \"1fdca4ec-c416-47e0-867a-3d471af7050f\", \"introspection-url\": \"https://oauth-server/auth/realms/infinispan/protocol/openid-connect/token/introspect\" } } }] } } }",
"server: security: securityRealms: - name: token-realm tokenRealm: authServerUrl: 'https://oauth-server/auth/' oauth2Introspection: clientId: infinispan-server clientSecret: '1fdca4ec-c416-47e0-867a-3d471af7050f' introspectionUrl: 'https://oauth-server/auth/realms/infinispan/protocol/openid-connect/token/introspect'",
"<server xmlns=\"urn:infinispan:server:15.0\"> <security> <security-realms> <security-realm name=\"trust-store-realm\"> <server-identities> <ssl> <!-- Provides an SSL/TLS identity with a keystore that contains server certificates. --> <keystore path=\"server.p12\" relative-to=\"infinispan.server.config.path\" keystore-password=\"secret\" alias=\"server\"/> <!-- Configures a trust store that contains client certificates or part of a certificate chain. --> <truststore path=\"trust.p12\" relative-to=\"infinispan.server.config.path\" password=\"secret\"/> </ssl> </server-identities> <!-- Authenticates client certificates against the trust store. If you configure this, the trust store must contain the public certificates for all clients. --> <truststore-realm/> </security-realm> </security-realms> </security> </server>",
"{ \"server\": { \"security\": { \"security-realms\": [{ \"name\": \"trust-store-realm\", \"server-identities\": { \"ssl\": { \"keystore\": { \"path\": \"server.p12\", \"relative-to\": \"infinispan.server.config.path\", \"keystore-password\": \"secret\", \"alias\": \"server\" }, \"truststore\": { \"path\": \"trust.p12\", \"relative-to\": \"infinispan.server.config.path\", \"password\": \"secret\" } } }, \"truststore-realm\": {} }] } } }",
"server: security: securityRealms: - name: \"trust-store-realm\" serverIdentities: ssl: keystore: path: \"server.p12\" relative-to: \"infinispan.server.config.path\" keystore-password: \"secret\" alias: \"server\" truststore: path: \"trust.p12\" relative-to: \"infinispan.server.config.path\" password: \"secret\" truststoreRealm: ~",
"<server xmlns=\"urn:infinispan:server:15.0\"> <security> <security-realms> <security-realm name=\"distributed-realm\"> <ldap-realm url=\"ldap://my-ldap-server:10389\" principal=\"uid=admin,ou=People,dc=infinispan,dc=org\" credential=\"strongPassword\"> <identity-mapping rdn-identifier=\"uid\" search-dn=\"ou=People,dc=infinispan,dc=org\" search-recursive=\"false\"> <attribute-mapping> <attribute from=\"cn\" to=\"Roles\" filter=\"(&(objectClass=groupOfNames)(member={1}))\" filter-dn=\"ou=Roles,dc=infinispan,dc=org\"/> </attribute-mapping> </identity-mapping> </ldap-realm> <properties-realm groups-attribute=\"Roles\"> <user-properties path=\"users.properties\" relative-to=\"infinispan.server.config.path\"/> <group-properties path=\"groups.properties\" relative-to=\"infinispan.server.config.path\"/> </properties-realm> <distributed-realm/> </security-realm> </security-realms> </security> </server>",
"{ \"server\": { \"security\": { \"security-realms\": [{ \"name\": \"distributed-realm\", \"ldap-realm\": { \"principal\": \"uid=admin,ou=People,dc=infinispan,dc=org\", \"url\": \"ldap://my-ldap-server:10389\", \"credential\": \"strongPassword\", \"identity-mapping\": { \"rdn-identifier\": \"uid\", \"search-dn\": \"ou=People,dc=infinispan,dc=org\", \"search-recursive\": false, \"attribute-mapping\": { \"attribute\": { \"filter\": \"(&(objectClass=groupOfNames)(member={1}))\", \"filter-dn\": \"ou=Roles,dc=infinispan,dc=org\", \"from\": \"cn\", \"to\": \"Roles\" } } } }, \"properties-realm\": { \"groups-attribute\": \"Roles\", \"user-properties\": { \"digest-realm-name\": \"distributed-realm\", \"path\": \"users.properties\" }, \"group-properties\": { \"path\": \"groups.properties\" } }, \"distributed-realm\": {} }] } } }",
"server: security: securityRealms: - name: \"distributed-realm\" ldapRealm: principal: \"uid=admin,ou=People,dc=infinispan,dc=org\" url: \"ldap://my-ldap-server:10389\" credential: \"strongPassword\" identityMapping: rdnIdentifier: \"uid\" searchDn: \"ou=People,dc=infinispan,dc=org\" searchRecursive: \"false\" attributeMapping: attribute: filter: \"(&(objectClass=groupOfNames)(member={1}))\" filterDn: \"ou=Roles,dc=infinispan,dc=org\" from: \"cn\" to: \"Roles\" propertiesRealm: groupsAttribute: \"Roles\" userProperties: digestRealmName: \"distributed-realm\" path: \"users.properties\" groupProperties: path: \"groups.properties\" distributedRealm: ~",
"<server xmlns=\"urn:infinispan:server:15.0\"> <security> <security-realms> <security-realm name=\"default\" default-realm=\"aggregate\"> <server-identities> <ssl> <keystore path=\"server.pfx\" password=\"secret\" alias=\"server\"/> <truststore path=\"trust.pfx\" password=\"secret\"/> </ssl> </server-identities> <properties-realm name=\"properties\" groups-attribute=\"Roles\"> <user-properties path=\"users.properties\" relative-to=\"infinispan.server.config.path\"/> <group-properties path=\"groups.properties\" relative-to=\"infinispan.server.config.path\"/> </properties-realm> <truststore-realm name=\"trust\"/> <aggregate-realm authentication-realm=\"trust\" authorization-realms=\"properties\"> <name-rewriter> <common-name-principal-transformer/> </name-rewriter> </aggregate-realm> </security-realm> </security-realms> </security> </server>",
"{ \"server\": { \"security\": { \"security-realms\": [ { \"name\": \"aggregate-realm\", \"default-realm\": \"aggregate\", \"server-identities\": { \"ssl\": { \"keystore\": { \"path\": \"server.p12\", \"relative-to\": \"infinispan.server.config.path\", \"keystore-password\": \"secret\", \"alias\": \"server\" }, \"truststore\": { \"path\": \"trust.p12\", \"relative-to\": \"infinispan.server.config.path\", \"password\": \"secret\" } } }, \"properties-realm\": { \"name\": \"properties\", \"groups-attribute\": \"Roles\", \"user-properties\": { \"digest-realm-name\": \"distributed-realm\", \"path\": \"users.properties\" }, \"group-properties\": { \"path\": \"groups.properties\" } }, \"truststore-realm\": { \"name\": \"trust\" }, \"aggregate-realm\": { \"authentication-realm\": \"trust\", \"authorization-realms\": [\"properties\"], \"name-rewriter\": { \"common-name-principal-transformer\": {} } } } ] } } }",
"server: security: securityRealms: - name: \"aggregate-realm\" defaultRealm: \"aggregate\" serverIdentities: ssl: keystore: path: \"server.p12\" relative-to: \"infinispan.server.config.path\" keystore-password: \"secret\" alias: \"server\" truststore: path: \"trust.p12\" relative-to: \"infinispan.server.config.path\" password: \"secret\" truststoreRealm: name: \"trust\" propertiesRealm: name: \"properties\" groupsAttribute: \"Roles\" userProperties: digestRealmName: \"distributed-realm\" path: \"users.properties\" groupProperties: path: \"groups.properties\" aggregateRealm: authenticationRealm: \"trust\" authorizationRealms: - \"properties\" nameRewriter: common-name-principal-transformer: ~",
"<aggregate-realm authentication-realm=\"trust\" authorization-realms=\"properties\"> <name-rewriter> <case-principal-transformer uppercase=\"false\"/> </name-rewriter> </aggregate-realm>",
"{ \"aggregate-realm\": { \"authentication-realm\": \"trust\", \"authorization-realms\": [ \"properties\" ], \"name-rewriter\": { \"case-principal-transformer\": { \"uppercase\": \"false\" } } } }",
"aggregateRealm: authenticationRealm: \"trust\" authorizationRealms: - \"properties\" nameRewriter: casePrincipalTransformer: uppercase: false",
"<aggregate-realm authentication-realm=\"trust\" authorization-realms=\"properties\"> <name-rewriter> <common-name-principal-transformer/> </name-rewriter> </aggregate-realm>",
"{ \"aggregate-realm\": { \"authentication-realm\": \"trust\", \"authorization-realms\": [ \"properties\" ], \"name-rewriter\": { \"common-name-principal-transformer\": {} } } }",
"aggregateRealm: authenticationRealm: \"trust\" authorizationRealms: - \"properties\" nameRewriter: commonNamePrincipalTransformer: ~",
"<aggregate-realm authentication-realm=\"trust\" authorization-realms=\"properties\"> <name-rewriter> <regex-principal-transformer pattern=\"([^@]+)@.*\" replacement=\"USD1\" replace-all=\"false\"/> </name-rewriter> </aggregate-realm>",
"{ \"aggregate-realm\": { \"authentication-realm\": \"trust\", \"authorization-realms\": [ \"properties\" ], \"name-rewriter\": { \"regex-principal-transformer\": { \"pattern\" : \"([^@]+)@.*\", \"replacement\": \"USD1\", \"replace-all\": false } } } }",
"aggregateRealm: authenticationRealm: \"trust\" authorizationRealms: - \"properties\" nameRewriter: regexPrincipalTransformer: pattern: \"([^@]+)@.*\" replacement: \"USD1\" replaceAll: false",
"<server xmlns=\"urn:infinispan:server:15.0\"> <security> <security-realms> <security-realm name=\"default\" cache-max-size=\"1024\" cache-lifespan=\"120000\"> </security-realm> </security-realms> </security> </server>",
"{ \"server\": { \"security\": { \"security-realms\": [{ \"name\": \"default\", \"cache-max-size\": 1024, \"cache-lifespan\": 120000 }] } } }",
"server: security: securityRealms: - name: \"default\" cache-max-size: 1024 cache-lifespan: 120000",
"[node-1@mycluster//containers/default]> server aclcache flush"
] | https://docs.redhat.com/en/documentation/red_hat_data_grid/8.5/html/data_grid_server_guide/security-realms |
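As a minimal sketch only, the realm cache flush shown above is run from the Data Grid command line interface; the CLI script name, the interactive connect step, and the installation layout are assumptions about a typical Data Grid Server deployment and may differ in your environment.

# Start the CLI from the server installation directory (assumed layout)
bin/cli.sh
# Connect to a running node, then flush the security realm caches across the whole cluster
connect
server aclcache flush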
Chapter 6. Uninstalling CodeReady Workspaces | Chapter 6. Uninstalling CodeReady Workspaces This section describes uninstallation procedures for Red Hat CodeReady Workspaces. The uninstallation process leads to a complete removal of CodeReady Workspaces-related user data. The method previously used to install the CodeReady Workspaces instance determines the uninstallation method. For CodeReady Workspaces installed using OperatorHub, for the OpenShift Web Console method see Section 6.1, "Uninstalling CodeReady Workspaces after OperatorHub installation using the OpenShift web console" . For CodeReady Workspaces installed using OperatorHub, for the CLI method see Section 6.2, "Uninstalling CodeReady Workspaces after OperatorHub installation using OpenShift CLI" . For CodeReady Workspaces installed using crwctl, see Section 6.3, "Uninstalling CodeReady Workspaces after crwctl installation" 6.1. Uninstalling CodeReady Workspaces after OperatorHub installation using the OpenShift web console This section describes how to uninstall CodeReady Workspaces from a cluster using the OpenShift Administrator Perspective main menu. Prerequisites CodeReady Workspaces was installed on an OpenShift cluster using OperatorHub. Procedure Navigate to the OpenShift web console and select the Administrator Perspective. In the Home > Projects section, navigate to the project containing the CodeReady Workspaces instance. Note The default project name is <openshift-workspaces> . In the Operators > Installed Operators section, click Red Hat CodeReady Workspaces in the list of installed operators. In the Red Hat CodeReady Workspaces Cluster tab, click the displayed Red Hat CodeReady Workspaces Cluster, and select the Delete cluster option in the Actions drop-down menu on the top right. Note The default Red Hat CodeReady Workspaces checluster Custom Resource name is <codeready-workspaces> . In the Operators > Installed Operators section, click Red Hat CodeReady Workspaces in the list of installed operators and select the Uninstall Operator option in the Actions drop-down menu on the top right. In the Home > Projects section, navigate to the project containing the CodeReady Workspaces instance, and select the Delete Project option in the Actions drop-down menu on the top right. 6.2. Uninstalling CodeReady Workspaces after OperatorHub installation using OpenShift CLI This section provides instructions on how to uninstall a CodeReady Workspaces instance using oc commands. Prerequisites CodeReady Workspaces was installed on an OpenShift cluster using OperatorHub. The oc tool is available. Procedure The following procedure provides command-line outputs as examples. Note that output in the user terminal may differ. To uninstall a CodeReady Workspaces instance from a cluster: Sign in to the cluster: Switch to the project where the CodeReady Workspaces instance is deployed: Obtain the checluster Custom Resource name. The following shows a checluster Custom Resource named codeready-workspaces : Delete the CodeReady Workspaces cluster: Obtain the name of the CodeReady Workspaces cluster service version (CSV) module. The following detects a CSV module named codeready.v2.15 : Delete the CodeReady Workspaces CSV: 6.3. Uninstalling CodeReady Workspaces after crwctl installation This section describes how to uninstall an instance of Red Hat CodeReady Workspaces that was installed using the crwctl tool. Prerequisites The crwctl tool is available. The oc tool is available. 
The crwctl tool installed the CodeReady Workspaces instance on OpenShift. Procedure Sign in to the OpenShift cluster: Export the name of the CodeReady Workspaces namespace to remove: Export your user access token and Keycloak URLs: Stop the server using the UAT: Delete your project and your CodeReady Workspaces deployment: Verify that the removal was successful by listing the information about the project: Remove a specified ClusterRoleBinding : | [
"oc login -u <username> -p <password> <cluster_URL>",
"oc project <codeready-workspaces_project>",
"oc get checluster NAME AGE codeready-workspaces 27m",
"oc delete checluster codeready-workspaces checluster.org.eclipse.che \"codeready-workspaces\" deleted",
"oc get csv NAME DISPLAY VERSION REPLACES PHASE codeready.v2.15 Red Hat CodeReady Workspaces 2.15 codeready.v2.14 Succeeded",
"oc delete csv codeready.v2.15 clusterserviceversion.operators.coreos.com \"codeready.v2.15\" deleted",
"oc login -u <username> -p <password> <cluster_URL>",
"export codereadyNamespace=<codeready-namespace-to-remove>",
"export KEYCLOAK_BASE_URL=\"http://USDKEYCLOAK_URL/auth\"",
"export USER_ACCESS_TOKEN=USD(curl -X POST USDKEYCLOAK_BASE_URL/realms/codeready/protocol/openid-connect/token -H \"Content-Type: application/x-www-form-urlencoded\" -d \"username=admin\" -d \"password=admin\" -d \"grant_type=password\" -d \"client_id=codeready-public\" | jq -r .access_token)",
"crwctl/bin/crwctl server:stop -n \"USDcodereadyNamespace\" --access-token=USDUSER_ACCESS_TOKEN",
"oc project \"USDcodereadyNamespace\"",
"oc delete deployment codeready-operator",
"oc delete checluster codeready-workspaces",
"oc delete project \"USDcodereadyNamespace\"",
"oc describe project \"USDcodereadyNamespace\"",
"oc delete clusterrolebinding codeready-operator"
] | https://docs.redhat.com/en/documentation/red_hat_codeready_workspaces/2.15/html/installation_guide/uninstalling-codeready-workspaces_crw |
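For reference, the OperatorHub CLI uninstallation steps can be chained into a single shell session. This is a minimal sketch only: it assumes the default project name openshift-workspaces, the default checluster name codeready-workspaces, and the CSV name codeready.v2.15 shown in the example output; substitute the values reported by oc get checluster and oc get csv in your cluster.

oc login -u <username> -p <password> <cluster_URL>
# Switch to the project that contains the CodeReady Workspaces instance
oc project openshift-workspaces
# Delete the CodeReady Workspaces cluster, then the operator cluster service version
oc delete checluster codeready-workspaces
oc delete csv codeready.v2.15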
3.9. Relatime Drive Access Optimization | 3.9. Relatime Drive Access Optimization The POSIX standard requires that operating systems maintain file system metadata that record when each file was last accessed. This timestamp is called atime , and maintaining it requires a constant series of write operations to storage. These writes keep storage devices and their links busy and powered up. Since few applications make use of the atime data, this storage device activity wastes power. Significantly, the write to storage would occur even if the file was not read from storage, but from cache. For some time, the Linux kernel has supported a noatime option for mount and would not write atime data to file systems mounted with this option. However, simply turning off this feature is problematic because some applications rely on atime data and will fail if it is not available. The kernel used in Red Hat Enterprise Linux 6 supports another alternative - relatime . relatime maintains atime data, but not for each time that a file is accessed. With this option enabled, atime data is written to the disk only if the file has been modified since the atime data was last updated ( mtime ), or if the file was last accessed more than a certain amount of time ago (by default, one day). By default, all file systems are now mounted with relatime enabled. You can suppress it for any particular file system by mounting that file system with the norelatime option. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/power_management_guide/relatime |
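To illustrate these mount options, the following sketch shows a hypothetical /etc/fstab entry that disables atime updates entirely with noatime, and a remount command that opts a mounted file system out of the default relatime behavior with norelatime; the device name and mount point are examples only.

# /etc/fstab entry (hypothetical device and mount point) that suppresses all atime updates
/dev/sda2  /data  ext4  defaults,noatime  1 2
# Remount an already-mounted file system without relatime, restoring per-access atime writes
mount -o remount,norelatime /data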
Chapter 4. Installing a cluster on OpenStack on your own infrastructure | Chapter 4. Installing a cluster on OpenStack on your own infrastructure In OpenShift Container Platform version 4.16, you can install a cluster on Red Hat OpenStack Platform (RHOSP) that runs on user-provisioned infrastructure. Using your own infrastructure allows you to integrate your cluster with existing infrastructure and modifications. The process requires more labor on your part than installer-provisioned installations, because you must create all RHOSP resources, like Nova servers, Neutron ports, and security groups. However, Red Hat provides Ansible playbooks to help you in the deployment process. 4.1. Prerequisites You reviewed details about the OpenShift Container Platform installation and update processes. You read the documentation on selecting a cluster installation method and preparing it for users . You verified that OpenShift Container Platform 4.16 is compatible with your RHOSP version by using the Supported platforms for OpenShift clusters section. You can also compare platform support across different versions by viewing the OpenShift Container Platform on RHOSP support matrix . You have an RHOSP account where you want to install OpenShift Container Platform. You understand performance and scalability practices for cluster scaling, control plane sizing, and etcd. For more information, see Recommended practices for scaling the cluster . On the machine from which you run the installation program, you have: A single directory in which you can keep the files you create during the installation process Python 3 4.2. Internet access for OpenShift Container Platform In OpenShift Container Platform 4.16, you require access to the internet to install your cluster. You must have internet access to: Access OpenShift Cluster Manager to download the installation program and perform subscription management. If the cluster has internet access and you do not disable Telemetry, that service automatically entitles your cluster. Access Quay.io to obtain the packages that are required to install your cluster. Obtain the packages that are required to perform cluster updates. Important If your cluster cannot have direct internet access, you can perform a restricted network installation on some types of infrastructure that you provision. During that process, you download the required content and use it to populate a mirror registry with the installation packages. With some installation types, the environment that you install your cluster in will not require internet access. Before you update the cluster, you update the content of the mirror registry. 4.3. Resource guidelines for installing OpenShift Container Platform on RHOSP To support an OpenShift Container Platform installation, your Red Hat OpenStack Platform (RHOSP) quota must meet the following requirements: Table 4.1. Recommended resources for a default OpenShift Container Platform cluster on RHOSP Resource Value Floating IP addresses 3 Ports 15 Routers 1 Subnets 1 RAM 88 GB vCPUs 22 Volume storage 275 GB Instances 7 Security groups 3 Security group rules 60 Server groups 2 - plus 1 for each additional availability zone in each machine pool A cluster might function with fewer than recommended resources, but its performance is not guaranteed. Important If RHOSP object storage (Swift) is available and operated by a user account with the swiftoperator role, it is used as the default backend for the OpenShift Container Platform image registry. 
In this case, the volume storage requirement is 175 GB. Swift space requirements vary depending on the size of the image registry. Note By default, your security group and security group rule quotas might be low. If you encounter problems, run openstack quota set --secgroups 3 --secgroup-rules 60 <project> as an administrator to increase them. An OpenShift Container Platform deployment comprises control plane machines, compute machines, and a bootstrap machine. 4.3.1. Control plane machines By default, the OpenShift Container Platform installation process creates three control plane machines. Each machine requires: An instance from the RHOSP quota A port from the RHOSP quota A flavor with at least 16 GB memory and 4 vCPUs At least 100 GB storage space from the RHOSP quota 4.3.2. Compute machines By default, the OpenShift Container Platform installation process creates three compute machines. Each machine requires: An instance from the RHOSP quota A port from the RHOSP quota A flavor with at least 8 GB memory and 2 vCPUs At least 100 GB storage space from the RHOSP quota Tip Compute machines host the applications that you run on OpenShift Container Platform; aim to run as many as you can. 4.3.3. Bootstrap machine During installation, a bootstrap machine is temporarily provisioned to stand up the control plane. After the production control plane is ready, the bootstrap machine is deprovisioned. The bootstrap machine requires: An instance from the RHOSP quota A port from the RHOSP quota A flavor with at least 16 GB memory and 4 vCPUs At least 100 GB storage space from the RHOSP quota 4.4. Downloading playbook dependencies The Ansible playbooks that simplify the installation process on user-provisioned infrastructure require several Python modules. On the machine where you will run the installer, add the modules' repositories and then download them. Note These instructions assume that you are using Red Hat Enterprise Linux (RHEL) 8. Prerequisites Python 3 is installed on your machine. Procedure On a command line, add the repositories: Register with Red Hat Subscription Manager: USD sudo subscription-manager register # If not done already Pull the latest subscription data: USD sudo subscription-manager attach --pool=USDYOUR_POOLID # If not done already Disable the current repositories: USD sudo subscription-manager repos --disable=* # If not done already Add the required repositories: USD sudo subscription-manager repos \ --enable=rhel-8-for-x86_64-baseos-rpms \ --enable=openstack-16-tools-for-rhel-8-x86_64-rpms \ --enable=ansible-2.9-for-rhel-8-x86_64-rpms \ --enable=rhel-8-for-x86_64-appstream-rpms Install the modules: USD sudo yum install python3-openstackclient ansible python3-openstacksdk python3-netaddr ansible-collections-openstack Ensure that the python command points to python3 : USD sudo alternatives --set python /usr/bin/python3 4.5. Downloading the installation playbooks Download Ansible playbooks that you can use to install OpenShift Container Platform on your own Red Hat OpenStack Platform (RHOSP) infrastructure. Prerequisites The curl command-line tool is available on your machine. 
Procedure To download the playbooks to your working directory, run the following script from a command line: USD xargs -n 1 curl -O <<< ' https://raw.githubusercontent.com/openshift/installer/release-4.16/upi/openstack/bootstrap.yaml https://raw.githubusercontent.com/openshift/installer/release-4.16/upi/openstack/common.yaml https://raw.githubusercontent.com/openshift/installer/release-4.16/upi/openstack/compute-nodes.yaml https://raw.githubusercontent.com/openshift/installer/release-4.16/upi/openstack/control-plane.yaml https://raw.githubusercontent.com/openshift/installer/release-4.16/upi/openstack/down-bootstrap.yaml https://raw.githubusercontent.com/openshift/installer/release-4.16/upi/openstack/down-compute-nodes.yaml https://raw.githubusercontent.com/openshift/installer/release-4.16/upi/openstack/down-control-plane.yaml https://raw.githubusercontent.com/openshift/installer/release-4.16/upi/openstack/down-network.yaml https://raw.githubusercontent.com/openshift/installer/release-4.16/upi/openstack/down-security-groups.yaml https://raw.githubusercontent.com/openshift/installer/release-4.16/upi/openstack/down-containers.yaml https://raw.githubusercontent.com/openshift/installer/release-4.16/upi/openstack/inventory.yaml https://raw.githubusercontent.com/openshift/installer/release-4.16/upi/openstack/network.yaml https://raw.githubusercontent.com/openshift/installer/release-4.16/upi/openstack/security-groups.yaml https://raw.githubusercontent.com/openshift/installer/release-4.16/upi/openstack/update-network-resources.yaml' The playbooks are downloaded to your machine. Important During the installation process, you can modify the playbooks to configure your deployment. Retain all playbooks for the life of your cluster. You must have the playbooks to remove your OpenShift Container Platform cluster from RHOSP. Important You must match any edits you make in the bootstrap.yaml , compute-nodes.yaml , control-plane.yaml , network.yaml , and security-groups.yaml files to the corresponding playbooks that are prefixed with down- . For example, edits to the bootstrap.yaml file must be reflected in the down-bootstrap.yaml file, too. If you do not edit both files, the supported cluster removal process will fail. 4.6. Obtaining the installation program Before you install OpenShift Container Platform, download the installation file on the host you are using for installation. Prerequisites You have a computer that runs Linux or macOS, with at least 1.2 GB of local disk space. Procedure Go to the Cluster Type page on the Red Hat Hybrid Cloud Console. If you have a Red Hat account, log in with your credentials. If you do not, create an account. Tip You can also download the binaries for a specific OpenShift Container Platform release . Select your infrastructure provider from the Run it yourself section of the page. Select your host operating system and architecture from the dropdown menus under OpenShift Installer and click Download Installer . Place the downloaded file in the directory where you want to store the installation configuration files. Important The installation program creates several files on the computer that you use to install your cluster. You must keep the installation program and the files that the installation program creates after you finish installing the cluster. Both of the files are required to delete the cluster. Deleting the files created by the installation program does not remove your cluster, even if the cluster failed during installation. 
To remove your cluster, complete the OpenShift Container Platform uninstallation procedures for your specific cloud provider. Extract the installation program. For example, on a computer that uses a Linux operating system, run the following command: USD tar -xvf openshift-install-linux.tar.gz Download your installation pull secret from Red Hat OpenShift Cluster Manager . This pull secret allows you to authenticate with the services that are provided by the included authorities, including Quay.io, which serves the container images for OpenShift Container Platform components. Tip Alternatively, you can retrieve the installation program from the Red Hat Customer Portal , where you can specify a version of the installation program to download. However, you must have an active subscription to access this page. 4.7. Generating a key pair for cluster node SSH access During an OpenShift Container Platform installation, you can provide an SSH public key to the installation program. The key is passed to the Red Hat Enterprise Linux CoreOS (RHCOS) nodes through their Ignition config files and is used to authenticate SSH access to the nodes. The key is added to the ~/.ssh/authorized_keys list for the core user on each node, which enables password-less authentication. After the key is passed to the nodes, you can use the key pair to SSH in to the RHCOS nodes as the user core . To access the nodes through SSH, the private key identity must be managed by SSH for your local user. If you want to SSH in to your cluster nodes to perform installation debugging or disaster recovery, you must provide the SSH public key during the installation process. The ./openshift-install gather command also requires the SSH public key to be in place on the cluster nodes. Important Do not skip this procedure in production environments, where disaster recovery and debugging is required. Note You must use a local key, not one that you configured with platform-specific approaches such as AWS key pairs . Procedure If you do not have an existing SSH key pair on your local machine to use for authentication onto your cluster nodes, create one. For example, on a computer that uses a Linux operating system, run the following command: USD ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1 1 Specify the path and file name, such as ~/.ssh/id_ed25519 , of the new SSH key. If you have an existing key pair, ensure your public key is in the your ~/.ssh directory. Note If you plan to install an OpenShift Container Platform cluster that uses the RHEL cryptographic libraries that have been submitted to NIST for FIPS 140-2/140-3 Validation on only the x86_64 , ppc64le , and s390x architectures, do not create a key that uses the ed25519 algorithm. Instead, create a key that uses the rsa or ecdsa algorithm. View the public SSH key: USD cat <path>/<file_name>.pub For example, run the following to view the ~/.ssh/id_ed25519.pub public key: USD cat ~/.ssh/id_ed25519.pub Add the SSH private key identity to the SSH agent for your local user, if it has not already been added. SSH agent management of the key is required for password-less SSH authentication onto your cluster nodes, or if you want to use the ./openshift-install gather command. Note On some distributions, default SSH private key identities such as ~/.ssh/id_rsa and ~/.ssh/id_dsa are managed automatically. 
If the ssh-agent process is not already running for your local user, start it as a background task: USD eval "USD(ssh-agent -s)" Example output Agent pid 31874 Note If your cluster is in FIPS mode, only use FIPS-compliant algorithms to generate the SSH key. The key must be either RSA or ECDSA. Add your SSH private key to the ssh-agent : USD ssh-add <path>/<file_name> 1 1 Specify the path and file name for your SSH private key, such as ~/.ssh/id_ed25519 Example output Identity added: /home/<you>/<path>/<file_name> (<computer_name>) steps When you install OpenShift Container Platform, provide the SSH public key to the installation program. 4.8. Creating the Red Hat Enterprise Linux CoreOS (RHCOS) image The OpenShift Container Platform installation program requires that a Red Hat Enterprise Linux CoreOS (RHCOS) image be present in the Red Hat OpenStack Platform (RHOSP) cluster. Retrieve the latest RHCOS image, then upload it using the RHOSP CLI. Prerequisites The RHOSP CLI is installed. Procedure Log in to the Red Hat Customer Portal's Product Downloads page . Under Version , select the most recent release of OpenShift Container Platform 4.16 for Red Hat Enterprise Linux (RHEL) 8. Important The RHCOS images might not change with every release of OpenShift Container Platform. You must download images with the highest version that is less than or equal to the OpenShift Container Platform version that you install. Use the image versions that match your OpenShift Container Platform version if they are available. Download the Red Hat Enterprise Linux CoreOS (RHCOS) - OpenStack Image (QCOW) . Decompress the image. Note You must decompress the RHOSP image before the cluster can use it. The name of the downloaded file might not contain a compression extension, like .gz or .tgz . To find out if or how the file is compressed, in a command line, enter: USD file <name_of_downloaded_file> From the image that you downloaded, create an image that is named rhcos in your cluster by using the RHOSP CLI: USD openstack image create --container-format=bare --disk-format=qcow2 --file rhcos-USD{RHCOS_VERSION}-openstack.qcow2 rhcos Important Depending on your RHOSP environment, you might be able to upload the image in either .raw or .qcow2 formats . If you use Ceph, you must use the .raw format. Warning If the installation program finds multiple images with the same name, it chooses one of them at random. To avoid this behavior, create unique names for resources in RHOSP. After you upload the image to RHOSP, it is usable in the installation process. 4.9. Verifying external network access The OpenShift Container Platform installation process requires external network access. You must provide an external network value to it, or deployment fails. Before you begin the process, verify that a network with the external router type exists in Red Hat OpenStack Platform (RHOSP). Prerequisites Configure OpenStack's networking service to have DHCP agents forward instances' DNS queries Procedure Using the RHOSP CLI, verify the name and ID of the 'External' network: USD openstack network list --long -c ID -c Name -c "Router Type" Example output +--------------------------------------+----------------+-------------+ | ID | Name | Router Type | +--------------------------------------+----------------+-------------+ | 148a8023-62a7-4672-b018-003462f8d7dc | public_network | External | +--------------------------------------+----------------+-------------+ A network with an external router type appears in the network list. 
If at least one does not, see Creating a default floating IP network and Creating a default provider network . Note If the Neutron trunk service plugin is enabled, a trunk port is created by default. For more information, see Neutron trunk port . 4.10. Enabling access to the environment At deployment, all OpenShift Container Platform machines are created in a Red Hat OpenStack Platform (RHOSP)-tenant network. Therefore, they are not accessible directly in most RHOSP deployments. You can configure OpenShift Container Platform API and application access by using floating IP addresses (FIPs) during installation. You can also complete an installation without configuring FIPs, but the installer will not configure a way to reach the API or applications externally. 4.10.1. Enabling access with floating IP addresses Create floating IP (FIP) addresses for external access to the OpenShift Container Platform API, cluster applications, and the bootstrap process. Procedure Using the Red Hat OpenStack Platform (RHOSP) CLI, create the API FIP: USD openstack floating ip create --description "API <cluster_name>.<base_domain>" <external_network> Using the Red Hat OpenStack Platform (RHOSP) CLI, create the apps, or Ingress, FIP: USD openstack floating ip create --description "Ingress <cluster_name>.<base_domain>" <external_network> By using the Red Hat OpenStack Platform (RHOSP) CLI, create the bootstrap FIP: USD openstack floating ip create --description "bootstrap machine" <external_network> Add records that follow these patterns to your DNS server for the API and Ingress FIPs: api.<cluster_name>.<base_domain>. IN A <API_FIP> *.apps.<cluster_name>.<base_domain>. IN A <apps_FIP> Note If you do not control the DNS server, you can access the cluster by adding the cluster domain names such as the following to your /etc/hosts file: <api_floating_ip> api.<cluster_name>.<base_domain> <application_floating_ip> grafana-openshift-monitoring.apps.<cluster_name>.<base_domain> <application_floating_ip> prometheus-k8s-openshift-monitoring.apps.<cluster_name>.<base_domain> <application_floating_ip> oauth-openshift.apps.<cluster_name>.<base_domain> <application_floating_ip> console-openshift-console.apps.<cluster_name>.<base_domain> application_floating_ip integrated-oauth-server-openshift-authentication.apps.<cluster_name>.<base_domain> The cluster domain names in the /etc/hosts file grant access to the web console and the monitoring interface of your cluster locally. You can also use the kubectl or oc . You can access the user applications by using the additional entries pointing to the <application_floating_ip>. This action makes the API and applications accessible to only you, which is not suitable for production deployment, but does allow installation for development and testing. Add the FIPs to the inventory.yaml file as the values of the following variables: os_api_fip os_bootstrap_fip os_ingress_fip If you use these values, you must also enter an external network as the value of the os_external_network variable in the inventory.yaml file. Tip You can make OpenShift Container Platform resources available outside of the cluster by assigning a floating IP address and updating your firewall configuration. 4.10.2. Completing installation without floating IP addresses You can install OpenShift Container Platform on Red Hat OpenStack Platform (RHOSP) without providing floating IP addresses. 
In the inventory.yaml file, do not define the following variables: os_api_fip os_bootstrap_fip os_ingress_fip If you cannot provide an external network, you can also leave os_external_network blank. If you do not provide a value for os_external_network , a router is not created for you, and, without additional action, the installer will fail to retrieve an image from Glance. Later in the installation process, when you create network resources, you must configure external connectivity on your own. If you run the installer with the wait-for command from a system that cannot reach the cluster API due to a lack of floating IP addresses or name resolution, installation fails. To prevent installation failure in these cases, you can use a proxy network or run the installer from a system that is on the same network as your machines. Note You can enable name resolution by creating DNS records for the API and Ingress ports. For example: api.<cluster_name>.<base_domain>. IN A <api_port_IP> *.apps.<cluster_name>.<base_domain>. IN A <ingress_port_IP> If you do not control the DNS server, you can add the record to your /etc/hosts file. This action makes the API accessible to only you, which is not suitable for production deployment but does allow installation for development and testing. 4.11. Defining parameters for the installation program The OpenShift Container Platform installation program relies on a file that is called clouds.yaml . The file describes Red Hat OpenStack Platform (RHOSP) configuration parameters, including the project name, log in information, and authorization service URLs. Procedure Create the clouds.yaml file: If your RHOSP distribution includes the Horizon web UI, generate a clouds.yaml file in it. Important Remember to add a password to the auth field. You can also keep secrets in a separate file from clouds.yaml . If your RHOSP distribution does not include the Horizon web UI, or you do not want to use Horizon, create the file yourself. For detailed information about clouds.yaml , see Config files in the RHOSP documentation. clouds: shiftstack: auth: auth_url: http://10.10.14.42:5000/v3 project_name: shiftstack username: <username> password: <password> user_domain_name: Default project_domain_name: Default dev-env: region_name: RegionOne auth: username: <username> password: <password> project_name: 'devonly' auth_url: 'https://10.10.14.22:5001/v2.0' If your RHOSP installation uses self-signed certificate authority (CA) certificates for endpoint authentication: Copy the certificate authority file to your machine. Add the cacerts key to the clouds.yaml file. The value must be an absolute, non-root-accessible path to the CA certificate: clouds: shiftstack: ... cacert: "/etc/pki/ca-trust/source/anchors/ca.crt.pem" Tip After you run the installer with a custom CA certificate, you can update the certificate by editing the value of the ca-cert.pem key in the cloud-provider-config keymap. On a command line, run: USD oc edit configmap -n openshift-config cloud-provider-config Place the clouds.yaml file in one of the following locations: The value of the OS_CLIENT_CONFIG_FILE environment variable The current directory A Unix-specific user configuration directory, for example ~/.config/openstack/clouds.yaml A Unix-specific site configuration directory, for example /etc/openstack/clouds.yaml The installation program searches for clouds.yaml in that order. 4.12. 
Creating network resources on RHOSP Create the network resources that an OpenShift Container Platform on Red Hat OpenStack Platform (RHOSP) installation on your own infrastructure requires. To save time, run supplied Ansible playbooks that generate security groups, networks, subnets, routers, and ports. Prerequisites You downloaded the modules in "Downloading playbook dependencies". You downloaded the playbooks in "Downloading the installation playbooks". Procedure For a dual stack cluster deployment, edit the inventory.yaml file and uncomment the os_subnet6 attribute. To ensure that your network resources have unique names on the RHOSP deployment, create an environment variable and JSON file for use in the Ansible playbooks: Create an environment variable that has a unique name value by running the following command: USD export OS_NET_ID="openshift-USD(dd if=/dev/urandom count=4 bs=1 2>/dev/null |hexdump -e '"%02x"')" Verify that the variable is set by running the following command on a command line: USD echo USDOS_NET_ID Create a JSON object that includes the variable in a file called netid.json by running the following command: USD echo "{\"os_net_id\": \"USDOS_NET_ID\"}" | tee netid.json On a command line, create the network resources by running the following command: USD ansible-playbook -i inventory.yaml network.yaml Note The API and Ingress VIP fields will be overwritten in the inventory.yaml playbook with the IP addresses assigned to the network ports. Note The resources created by the network.yaml playbook are deleted by the down-network.yaml playbook. 4.13. Creating the installation configuration file You can customize the OpenShift Container Platform cluster you install on Red Hat OpenStack Platform (RHOSP). Prerequisites You have the OpenShift Container Platform installation program and the pull secret for your cluster. Procedure Create the install-config.yaml file. Change to the directory that contains the installation program and run the following command: USD ./openshift-install create install-config --dir <installation_directory> 1 1 For <installation_directory> , specify the directory name to store the files that the installation program creates. When specifying the directory: Verify that the directory has the execute permission. This permission is required to run Terraform binaries under the installation directory. Use an empty directory. Some installation assets, such as bootstrap X.509 certificates, have short expiration intervals, therefore you must not reuse an installation directory. If you want to reuse individual files from another cluster installation, you can copy them into your directory. However, the file names for the installation assets might change between releases. Use caution when copying installation files from an earlier OpenShift Container Platform version. At the prompts, provide the configuration details for your cloud: Optional: Select an SSH key to use to access your cluster machines. Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. Select openstack as the platform to target. Specify the Red Hat OpenStack Platform (RHOSP) external network name to use for installing the cluster. Specify the floating IP address to use for external access to the OpenShift API. Specify a RHOSP flavor with at least 16 GB RAM to use for control plane nodes and 8 GB RAM for compute nodes. Select the base domain to deploy the cluster to. 
All DNS records will be sub-domains of this base and will also include the cluster name. Enter a name for your cluster. The name must be 14 or fewer characters long. Modify the install-config.yaml file. You can find more information about the available parameters in the "Installation configuration parameters" section. Back up the install-config.yaml file so that you can use it to install multiple clusters. Important The install-config.yaml file is consumed during the installation process. If you want to reuse the file, you must back it up now. You now have the file install-config.yaml in the directory that you specified. Additional resources Installation configuration parameters for OpenStack 4.13.1. Custom subnets in RHOSP deployments Optionally, you can deploy a cluster on a Red Hat OpenStack Platform (RHOSP) subnet of your choice. The subnet's GUID is passed as the value of platform.openstack.machinesSubnet in the install-config.yaml file. This subnet is used as the cluster's primary subnet. By default, nodes and ports are created on it. You can create nodes and ports on a different RHOSP subnet by setting the value of the platform.openstack.machinesSubnet property to the subnet's UUID. Before you run the OpenShift Container Platform installer with a custom subnet, verify that your configuration meets the following requirements: The subnet that is used by platform.openstack.machinesSubnet has DHCP enabled. The CIDR of platform.openstack.machinesSubnet matches the CIDR of networking.machineNetwork . The installation program user has permission to create ports on this network, including ports with fixed IP addresses. Clusters that use custom subnets have the following limitations: If you plan to install a cluster that uses floating IP addresses, the platform.openstack.machinesSubnet subnet must be attached to a router that is connected to the externalNetwork network. If the platform.openstack.machinesSubnet value is set in the install-config.yaml file, the installation program does not create a private network or subnet for your RHOSP machines. You cannot use the platform.openstack.externalDNS property at the same time as a custom subnet. To add DNS to a cluster that uses a custom subnet, configure DNS on the RHOSP network. Note By default, the API VIP takes x.x.x.5 and the Ingress VIP takes x.x.x.7 from your network's CIDR block. To override these default values, set values for platform.openstack.apiVIPs and platform.openstack.ingressVIPs that are outside of the DHCP allocation pool. Important The CIDR ranges for networks are not adjustable after cluster installation. Red Hat does not provide direct guidance on determining the range during cluster installation because it requires careful consideration of the number of created pods per namespace. 4.13.2. Sample customized install-config.yaml file for RHOSP The following example install-config.yaml files demonstrate all of the possible Red Hat OpenStack Platform (RHOSP) customization options. Important This sample file is provided for reference only. You must obtain your install-config.yaml file by using the installation program. Example 4.1. 
Example single stack install-config.yaml file apiVersion: v1 baseDomain: example.com controlPlane: name: master platform: {} replicas: 3 compute: - name: worker platform: openstack: type: ml.large replicas: 3 metadata: name: example networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 10.0.0.0/16 serviceNetwork: - 172.30.0.0/16 networkType: OVNKubernetes platform: openstack: cloud: mycloud externalNetwork: external computeFlavor: m1.xlarge apiFloatingIP: 128.0.0.1 fips: false pullSecret: '{"auths": ...}' sshKey: ssh-ed25519 AAAA... Example 4.2. Example dual stack install-config.yaml file apiVersion: v1 baseDomain: example.com controlPlane: name: master platform: {} replicas: 3 compute: - name: worker platform: openstack: type: ml.large replicas: 3 metadata: name: example networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 - cidr: fd01::/48 hostPrefix: 64 machineNetwork: - cidr: 192.168.25.0/24 - cidr: fd2e:6f44:5dd8:c956::/64 serviceNetwork: - 172.30.0.0/16 - fd02::/112 networkType: OVNKubernetes platform: openstack: cloud: mycloud externalNetwork: external computeFlavor: m1.xlarge apiVIPs: - 192.168.25.10 - fd2e:6f44:5dd8:c956:f816:3eff:fec3:5955 ingressVIPs: - 192.168.25.132 - fd2e:6f44:5dd8:c956:f816:3eff:fe40:aecb controlPlanePort: fixedIPs: - subnet: name: openshift-dual4 - subnet: name: openshift-dual6 network: name: openshift-dual fips: false pullSecret: '{"auths": ...}' sshKey: ssh-ed25519 AAAA... 4.13.3. Setting a custom subnet for machines The IP range that the installation program uses by default might not match the Neutron subnet that you create when you install OpenShift Container Platform. If necessary, update the CIDR value for new machines by editing the installation configuration file. Prerequisites You have the install-config.yaml file that was generated by the OpenShift Container Platform installation program. You have Python 3 installed. Procedure On a command line, browse to the directory that contains the install-config.yaml and inventory.yaml files. From that directory, either run a script to edit the install-config.yaml file or update the file manually: To set the value by using a script, run the following command: USD python -c 'import os import sys import yaml import re re_os_net_id = re.compile(r"{{\s*os_net_id\s*}}") os_net_id = os.getenv("OS_NET_ID") path = "common.yaml" facts = None for _dict in yaml.safe_load(open(path))[0]["tasks"]: if "os_network" in _dict.get("set_fact", {}): facts = _dict["set_fact"] break if not facts: print("Cannot find `os_network` in common.yaml file. 
Make sure OpenStack resource names are defined in one of the tasks.") sys.exit(1) os_network = re_os_net_id.sub(os_net_id, facts["os_network"]) os_subnet = re_os_net_id.sub(os_net_id, facts["os_subnet"]) path = "install-config.yaml" data = yaml.safe_load(open(path)) inventory = yaml.safe_load(open("inventory.yaml"))["all"]["hosts"]["localhost"] machine_net = [{"cidr": inventory["os_subnet_range"]}] api_vips = [inventory["os_apiVIP"]] ingress_vips = [inventory["os_ingressVIP"]] ctrl_plane_port = {"network": {"name": os_network}, "fixedIPs": [{"subnet": {"name": os_subnet}}]} if inventory.get("os_subnet6_range"): 1 os_subnet6 = re_os_net_id.sub(os_net_id, facts["os_subnet6"]) machine_net.append({"cidr": inventory["os_subnet6_range"]}) api_vips.append(inventory["os_apiVIP6"]) ingress_vips.append(inventory["os_ingressVIP6"]) data["networking"]["networkType"] = "OVNKubernetes" data["networking"]["clusterNetwork"].append({"cidr": inventory["cluster_network6_cidr"], "hostPrefix": inventory["cluster_network6_prefix"]}) data["networking"]["serviceNetwork"].append(inventory["service_subnet6_range"]) ctrl_plane_port["fixedIPs"].append({"subnet": {"name": os_subnet6}}) data["networking"]["machineNetwork"] = machine_net data["platform"]["openstack"]["apiVIPs"] = api_vips data["platform"]["openstack"]["ingressVIPs"] = ingress_vips data["platform"]["openstack"]["controlPlanePort"] = ctrl_plane_port del data["platform"]["openstack"]["externalDNS"] open(path, "w").write(yaml.dump(data, default_flow_style=False))' 1 Applies to dual stack (IPv4/IPv6) environments. 4.13.4. Emptying compute machine pools To proceed with an installation that uses your own infrastructure, set the number of compute machines in the installation configuration file to zero. Later, you create these machines manually. Prerequisites You have the install-config.yaml file that was generated by the OpenShift Container Platform installation program. Procedure On a command line, browse to the directory that contains install-config.yaml . From that directory, either run a script to edit the install-config.yaml file or update the file manually: To set the value by using a script, run: USD python -c ' import yaml; path = "install-config.yaml"; data = yaml.safe_load(open(path)); data["compute"][0]["replicas"] = 0; open(path, "w").write(yaml.dump(data, default_flow_style=False))' To set the value manually, open the file and set the value of compute.<first entry>.replicas to 0 . 4.13.5. Cluster deployment on RHOSP provider networks You can deploy your OpenShift Container Platform clusters on Red Hat OpenStack Platform (RHOSP) with a primary network interface on a provider network. Provider networks are commonly used to give projects direct access to a public network that can be used to reach the internet. You can also share provider networks among projects as part of the network creation process. RHOSP provider networks map directly to an existing physical network in the data center. A RHOSP administrator must create them. In the following example, OpenShift Container Platform workloads are connected to a data center by using a provider network: OpenShift Container Platform clusters that are installed on provider networks do not require tenant networks or floating IP addresses. The installer does not create these resources during installation. Example provider network types include flat (untagged) and VLAN (802.1Q tagged). Note A cluster can support as many provider network connections as the network type allows. 
For example, VLAN networks typically support up to 4096 connections. You can learn more about provider and tenant networks in the RHOSP documentation . 4.13.5.1. RHOSP provider network requirements for cluster installation Before you install an OpenShift Container Platform cluster, your Red Hat OpenStack Platform (RHOSP) deployment and provider network must meet a number of conditions: The RHOSP networking service (Neutron) is enabled and accessible through the RHOSP networking API. The RHOSP networking service has the port security and allowed address pairs extensions enabled . The provider network can be shared with other tenants. Tip Use the openstack network create command with the --share flag to create a network that can be shared. The RHOSP project that you use to install the cluster must own the provider network, as well as an appropriate subnet. Tip To create a network for a project that is named "openshift," enter the following command USD openstack network create --project openshift To create a subnet for a project that is named "openshift," enter the following command USD openstack subnet create --project openshift To learn more about creating networks on RHOSP, read the provider networks documentation . If the cluster is owned by the admin user, you must run the installer as that user to create ports on the network. Important Provider networks must be owned by the RHOSP project that is used to create the cluster. If they are not, the RHOSP Compute service (Nova) cannot request a port from that network. Verify that the provider network can reach the RHOSP metadata service IP address, which is 169.254.169.254 by default. Depending on your RHOSP SDN and networking service configuration, you might need to provide the route when you create the subnet. For example: USD openstack subnet create --dhcp --host-route destination=169.254.169.254/32,gateway=192.0.2.2 ... Optional: To secure the network, create role-based access control (RBAC) rules that limit network access to a single project. 4.13.5.2. Deploying a cluster that has a primary interface on a provider network You can deploy an OpenShift Container Platform cluster that has its primary network interface on an Red Hat OpenStack Platform (RHOSP) provider network. Prerequisites Your Red Hat OpenStack Platform (RHOSP) deployment is configured as described by "RHOSP provider network requirements for cluster installation". Procedure In a text editor, open the install-config.yaml file. Set the value of the platform.openstack.apiVIPs property to the IP address for the API VIP. Set the value of the platform.openstack.ingressVIPs property to the IP address for the Ingress VIP. Set the value of the platform.openstack.machinesSubnet property to the UUID of the provider network subnet. Set the value of the networking.machineNetwork.cidr property to the CIDR block of the provider network subnet. Important The platform.openstack.apiVIPs and platform.openstack.ingressVIPs properties must both be unassigned IP addresses from the networking.machineNetwork.cidr block. Section of an installation configuration file for a cluster that relies on a RHOSP provider network ... platform: openstack: apiVIPs: 1 - 192.0.2.13 ingressVIPs: 2 - 192.0.2.23 machinesSubnet: fa806b2f-ac49-4bce-b9db-124bc64209bf # ... networking: machineNetwork: - cidr: 192.0.2.0/24 1 2 In OpenShift Container Platform 4.12 and later, the apiVIP and ingressVIP configuration settings are deprecated. 
Instead, use a list format to enter values in the apiVIPs and ingressVIPs configuration settings. Warning You cannot set the platform.openstack.externalNetwork or platform.openstack.externalDNS parameters while using a provider network for the primary network interface. When you deploy the cluster, the installer uses the install-config.yaml file to deploy the cluster on the provider network. Tip You can add additional networks, including provider networks, to the platform.openstack.additionalNetworkIDs list. After you deploy your cluster, you can attach pods to additional networks. For more information, see Understanding multiple networks . 4.14. Creating the Kubernetes manifest and Ignition config files Because you must modify some cluster definition files and manually start the cluster machines, you must generate the Kubernetes manifest and Ignition config files that the cluster needs to configure the machines. The installation configuration file transforms into the Kubernetes manifests. The manifests wrap into the Ignition configuration files, which are later used to configure the cluster machines. Important The Ignition config files that the OpenShift Container Platform installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information. It is recommended that you use Ignition config files within 12 hours after they are generated because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation. Prerequisites You obtained the OpenShift Container Platform installation program. You created the install-config.yaml installation configuration file. Procedure Change to the directory that contains the OpenShift Container Platform installation program and generate the Kubernetes manifests for the cluster: USD ./openshift-install create manifests --dir <installation_directory> 1 1 For <installation_directory> , specify the installation directory that contains the install-config.yaml file you created. Remove the Kubernetes manifest files that define the control plane machines, compute machine sets, and control plane machine sets: USD rm -f openshift/99_openshift-cluster-api_master-machines-*.yaml openshift/99_openshift-cluster-api_worker-machineset-*.yaml openshift/99_openshift-machine-api_master-control-plane-machine-set.yaml Because you create and manage these resources yourself, you do not have to initialize them. You can preserve the compute machine set files to create compute machines by using the machine API, but you must update references to them to match your environment. Check that the mastersSchedulable parameter in the <installation_directory>/manifests/cluster-scheduler-02-config.yml Kubernetes manifest file is set to false . This setting prevents pods from being scheduled on the control plane machines: Open the <installation_directory>/manifests/cluster-scheduler-02-config.yml file. 
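The manifest is a small Scheduler resource. The following sketch is representative only; the exact fields that the installation program generates can vary slightly between OpenShift Container Platform versions, but spec.mastersSchedulable is the field to check:
apiVersion: config.openshift.io/v1
kind: Scheduler
metadata:
  creationTimestamp: null
  name: cluster
spec:
  mastersSchedulable: false
status: {}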
Locate the mastersSchedulable parameter and ensure that it is set to false . Save and exit the file. To create the Ignition configuration files, run the following command from the directory that contains the installation program: USD ./openshift-install create ignition-configs --dir <installation_directory> 1 1 For <installation_directory> , specify the same installation directory. Ignition config files are created for the bootstrap, control plane, and compute nodes in the installation directory. The kubeadmin-password and kubeconfig files are created in the ./<installation_directory>/auth directory: Export the metadata file's infraID key as an environment variable: USD export INFRA_ID=USD(jq -r .infraID metadata.json) Tip Extract the infraID key from metadata.json and use it as a prefix for all of the RHOSP resources that you create. By doing so, you avoid name conflicts when making multiple deployments in the same project. 4.15. Preparing the bootstrap Ignition files The OpenShift Container Platform installation process relies on bootstrap machines that are created from a bootstrap Ignition configuration file. Edit the file and upload it. Then, create a secondary bootstrap Ignition configuration file that Red Hat OpenStack Platform (RHOSP) uses to download the primary file. Prerequisites You have the bootstrap Ignition file that the installer program generates, bootstrap.ign . The infrastructure ID from the installer's metadata file is set as an environment variable ( USDINFRA_ID ). If the variable is not set, see Creating the Kubernetes manifest and Ignition config files . You have an HTTP(S)-accessible way to store the bootstrap Ignition file. The documented procedure uses the RHOSP image service (Glance), but you can also use the RHOSP storage service (Swift), Amazon S3, an internal HTTP server, or an ad hoc Nova server. Procedure Run the following Python script. The script modifies the bootstrap Ignition file to set the hostname and, if available, CA certificate file when it runs: import base64 import json import os with open('bootstrap.ign', 'r') as f: ignition = json.load(f) files = ignition['storage'].get('files', []) infra_id = os.environ.get('INFRA_ID', 'openshift').encode() hostname_b64 = base64.standard_b64encode(infra_id + b'-bootstrap\n').decode().strip() files.append( { 'path': '/etc/hostname', 'mode': 420, 'contents': { 'source': 'data:text/plain;charset=utf-8;base64,' + hostname_b64 } }) ca_cert_path = os.environ.get('OS_CACERT', '') if ca_cert_path: with open(ca_cert_path, 'r') as f: ca_cert = f.read().encode() ca_cert_b64 = base64.standard_b64encode(ca_cert).decode().strip() files.append( { 'path': '/opt/openshift/tls/cloud-ca-cert.pem', 'mode': 420, 'contents': { 'source': 'data:text/plain;charset=utf-8;base64,' + ca_cert_b64 } }) ignition['storage']['files'] = files; with open('bootstrap.ign', 'w') as f: json.dump(ignition, f) Using the RHOSP CLI, create an image that uses the bootstrap Ignition file: USD openstack image create --disk-format=raw --container-format=bare --file bootstrap.ign <image_name> Get the image's details: USD openstack image show <image_name> Make a note of the file value; it follows the pattern v2/images/<image_ID>/file . Note Verify that the image you created is active. Retrieve the image service's public address: USD openstack catalog show image Combine the public address with the image file value and save the result as the storage location. The location follows the pattern <image_service_public_URL>/v2/images/<image_ID>/file . 
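For example, with hypothetical values: if openstack catalog show image reports a public endpoint of https://10.0.0.50:9292 and the file value is v2/images/1d0dedf8-4a2c-4c3e-9b6a-2f0e8b5f6a7d/file, the storage location to record is:
https://10.0.0.50:9292/v2/images/1d0dedf8-4a2c-4c3e-9b6a-2f0e8b5f6a7d/file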
Generate an auth token and save the token ID: USD openstack token issue -c id -f value Insert the following content into a file called USDINFRA_ID-bootstrap-ignition.json and edit the placeholders to match your own values: { "ignition": { "config": { "merge": [{ "source": "<storage_url>", 1 "httpHeaders": [{ "name": "X-Auth-Token", 2 "value": "<token_ID>" 3 }] }] }, "security": { "tls": { "certificateAuthorities": [{ "source": "data:text/plain;charset=utf-8;base64,<base64_encoded_certificate>" 4 }] } }, "version": "3.2.0" } } 1 Replace the value of ignition.config.merge.source with the bootstrap Ignition file storage URL. 2 Set name in httpHeaders to "X-Auth-Token" . 3 Set value in httpHeaders to your token's ID. 4 If the bootstrap Ignition file server uses a self-signed certificate, include the base64-encoded certificate. Save the secondary Ignition config file. The bootstrap Ignition data will be passed to RHOSP during installation. Warning The bootstrap Ignition file contains sensitive information, like clouds.yaml credentials. Ensure that you store it in a secure place, and delete it after you complete the installation process. 4.16. Creating control plane Ignition config files on RHOSP Installing OpenShift Container Platform on Red Hat OpenStack Platform (RHOSP) on your own infrastructure requires control plane Ignition config files. You must create multiple config files. Note As with the bootstrap Ignition configuration, you must explicitly define a hostname for each control plane machine. Prerequisites The infrastructure ID from the installation program's metadata file is set as an environment variable ( USDINFRA_ID ). If the variable is not set, see "Creating the Kubernetes manifest and Ignition config files". Procedure On a command line, run the following Python script: USD for index in USD(seq 0 2); do MASTER_HOSTNAME="USDINFRA_ID-master-USDindex\n" python -c "import base64, json, sys; ignition = json.load(sys.stdin); storage = ignition.get('storage', {}); files = storage.get('files', []); files.append({'path': '/etc/hostname', 'mode': 420, 'contents': {'source': 'data:text/plain;charset=utf-8;base64,' + base64.standard_b64encode(b'USDMASTER_HOSTNAME').decode().strip(), 'verification': {}}, 'filesystem': 'root'}); storage['files'] = files; ignition['storage'] = storage json.dump(ignition, sys.stdout)" <master.ign >"USDINFRA_ID-master-USDindex-ignition.json" done You now have three control plane Ignition files: <INFRA_ID>-master-0-ignition.json , <INFRA_ID>-master-1-ignition.json , and <INFRA_ID>-master-2-ignition.json . 4.17. Updating network resources on RHOSP Update the network resources that an OpenShift Container Platform on Red Hat OpenStack Platform (RHOSP) installation on your own infrastructure requires. Prerequisites Python 3 is installed on your machine. You downloaded the modules in "Downloading playbook dependencies". You downloaded the playbooks in "Downloading the installation playbooks". Procedure Optional: Add an external network value to the inventory.yaml playbook: Example external network value in the inventory.yaml Ansible Playbook ... # The public network providing connectivity to the cluster. If not # provided, the cluster external connectivity must be provided in another # way. # Required for os_api_fip, os_ingress_fip, os_bootstrap_fip. os_external_network: 'external' ... Important If you did not provide a value for os_external_network in the inventory.yaml file, you must ensure that VMs can access Glance and an external connection yourself. 
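If you are not certain that the name you set for os_external_network matches an existing RHOSP network, you can confirm it before you run the playbooks. In this check, 'external' is only the example network name that this document uses:
USD openstack network show external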
Optional: Add external network and floating IP (FIP) address values to the inventory.yaml playbook: Example FIP values in the inventory.yaml Ansible Playbook ... # OpenShift API floating IP address. If this value is non-empty, the # corresponding floating IP will be attached to the Control Plane to # serve the OpenShift API. os_api_fip: '203.0.113.23' # OpenShift Ingress floating IP address. If this value is non-empty, the # corresponding floating IP will be attached to the worker nodes to serve # the applications. os_ingress_fip: '203.0.113.19' # If this value is non-empty, the corresponding floating IP will be # attached to the bootstrap machine. This is needed for collecting logs # in case of install failure. os_bootstrap_fip: '203.0.113.20' Important If you do not define values for os_api_fip and os_ingress_fip , you must perform postinstallation network configuration. If you do not define a value for os_bootstrap_fip , the installation program cannot download debugging information from failed installations. See "Enabling access to the environment" for more information. On a command line, create security groups by running the security-groups.yaml playbook: USD ansible-playbook -i inventory.yaml security-groups.yaml On a command line, update the network resources by running the update-network-resources.yaml playbook: USD ansible-playbook -i inventory.yaml update-network-resources.yaml 1 1 This playbook will add tags to the network, subnets, ports, and router. It also attaches floating IP addresses to the API and Ingress ports and sets the security groups for those ports. Optional: If you want to control the default resolvers that Nova servers use, run the RHOSP CLI command: USD openstack subnet set --dns-nameserver <server_1> --dns-nameserver <server_2> "USDINFRA_ID-nodes" Optional: You can use the inventory.yaml file that you created to customize your installation. For example, you can deploy a cluster that uses bare metal machines. 4.17.1. Deploying a cluster with bare metal machines If you want your cluster to use bare metal machines, modify the inventory.yaml file. Your cluster can have both control plane and compute machines running on bare metal, or just compute machines. Note Be sure that your install-config.yaml file reflects whether the RHOSP network that you use for bare metal workers supports floating IP addresses or not. Prerequisites The RHOSP Bare Metal service (Ironic) is enabled and accessible via the RHOSP Compute API. Bare metal is available as a RHOSP flavor . If your cluster runs on an RHOSP version that is more than 16.1.6 and less than 16.2.4, bare metal workers do not function due to a known issue that causes the metadata service to be unavailable for services on OpenShift Container Platform nodes. The RHOSP network supports both VM and bare metal server attachment. If you want to deploy the machines on a pre-existing network, a RHOSP subnet is provisioned. If you want to deploy the machines on an installer-provisioned network, the RHOSP Bare Metal service (Ironic) is able to listen for and interact with Preboot eXecution Environment (PXE) boot machines that run on tenant networks. You created an inventory.yaml file as part of the OpenShift Container Platform installation process. Procedure In the inventory.yaml file, edit the flavors for machines: If you want to use bare-metal control plane machines, change the value of os_flavor_master to a bare metal flavor. Change the value of os_flavor_worker to a bare metal flavor. 
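If you do not know the exact flavor names that are available to your project, list them first and choose the bare metal flavors from the output. The flavor names in the following example file are placeholders only:
USD openstack flavor list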
An example bare metal inventory.yaml file all: hosts: localhost: ansible_connection: local ansible_python_interpreter: "{{ansible_playbook_python}}" # User-provided values os_subnet_range: '10.0.0.0/16' os_flavor_master: 'my-bare-metal-flavor' 1 os_flavor_worker: 'my-bare-metal-flavor' 2 os_image_rhcos: 'rhcos' os_external_network: 'external' ... 1 If you want to have bare-metal control plane machines, change this value to a bare metal flavor. 2 Change this value to a bare metal flavor to use for compute machines. Use the updated inventory.yaml file to complete the installation process. Machines that are created during deployment use the flavor that you added to the file. Note The installer may time out while waiting for bare metal machines to boot. If the installer times out, restart and then complete the deployment by using the wait-for command of the installer. For example: USD ./openshift-install wait-for install-complete --log-level debug 4.18. Creating the bootstrap machine on RHOSP Create a bootstrap machine and give it the network access it needs to run on Red Hat OpenStack Platform (RHOSP). Red Hat provides an Ansible playbook that you run to simplify this process. Prerequisites You downloaded the modules in "Downloading playbook dependencies". You downloaded the playbooks in "Downloading the installation playbooks". The inventory.yaml , common.yaml , and bootstrap.yaml Ansible playbooks are in a common directory. The metadata.json file that the installation program created is in the same directory as the Ansible playbooks. Procedure On a command line, change the working directory to the location of the playbooks. On a command line, run the bootstrap.yaml playbook: USD ansible-playbook -i inventory.yaml bootstrap.yaml After the bootstrap server is active, view the logs to verify that the Ignition files were received: USD openstack console log show "USDINFRA_ID-bootstrap" 4.19. Creating the control plane machines on RHOSP Create three control plane machines by using the Ignition config files that you generated. Red Hat provides an Ansible playbook that you run to simplify this process. Prerequisites You downloaded the modules in "Downloading playbook dependencies". You downloaded the playbooks in "Downloading the installation playbooks". The infrastructure ID from the installation program's metadata file is set as an environment variable ( USDINFRA_ID ). The inventory.yaml , common.yaml , and control-plane.yaml Ansible playbooks are in a common directory. You have the three Ignition files that were created in "Creating control plane Ignition config files". Procedure On a command line, change the working directory to the location of the playbooks. If the control plane Ignition config files are not already in your working directory, copy them into it. On a command line, run the control-plane.yaml playbook: USD ansible-playbook -i inventory.yaml control-plane.yaml Run the following command to monitor the bootstrapping process: USD openshift-install wait-for bootstrap-complete You will see messages that confirm that the control plane machines are running and have joined the cluster: INFO API v1.29.4 up INFO Waiting up to 30m0s for bootstrapping to complete... ... INFO It is now safe to remove the bootstrap resources 4.20. Logging in to the cluster by using the CLI You can log in to your cluster as a default system user by exporting the cluster kubeconfig file. 
The kubeconfig file contains information about the cluster that is used by the CLI to connect a client to the correct cluster and API server. The file is specific to a cluster and is created during OpenShift Container Platform installation. Prerequisites You deployed an OpenShift Container Platform cluster. You installed the oc CLI. Procedure Export the kubeadmin credentials: USD export KUBECONFIG=<installation_directory>/auth/kubeconfig 1 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. Verify you can run oc commands successfully using the exported configuration: USD oc whoami Example output system:admin 4.21. Deleting bootstrap resources from RHOSP Delete the bootstrap resources that you no longer need. Prerequisites You downloaded the modules in "Downloading playbook dependencies". You downloaded the playbooks in "Downloading the installation playbooks". The inventory.yaml , common.yaml , and down-bootstrap.yaml Ansible playbooks are in a common directory. The control plane machines are running. If you do not know the status of the machines, see "Verifying cluster status". Procedure On a command line, change the working directory to the location of the playbooks. On a command line, run the down-bootstrap.yaml playbook: USD ansible-playbook -i inventory.yaml down-bootstrap.yaml The bootstrap port, server, and floating IP address are deleted. Warning If you did not disable the bootstrap Ignition file URL earlier, do so now. 4.22. Creating compute machines on RHOSP After standing up the control plane, create compute machines. Red Hat provides an Ansible playbook that you run to simplify this process. Prerequisites You downloaded the modules in "Downloading playbook dependencies". You downloaded the playbooks in "Downloading the installation playbooks". The inventory.yaml , common.yaml , and compute-nodes.yaml Ansible playbooks are in a common directory. The metadata.json file that the installation program created is in the same directory as the Ansible playbooks. The control plane is active. Procedure On a command line, change the working directory to the location of the playbooks. On a command line, run the playbook: USD ansible-playbook -i inventory.yaml compute-nodes.yaml steps Approve the certificate signing requests for the machines. 4.23. Approving the certificate signing requests for your machines When you add machines to a cluster, two pending certificate signing requests (CSRs) are generated for each machine that you added. You must confirm that these CSRs are approved or, if necessary, approve them yourself. The client requests must be approved first, followed by the server requests. Prerequisites You added machines to your cluster. Procedure Confirm that the cluster recognizes the machines: USD oc get nodes Example output NAME STATUS ROLES AGE VERSION master-0 Ready master 63m v1.29.4 master-1 Ready master 63m v1.29.4 master-2 Ready master 64m v1.29.4 The output lists all of the machines that you created. Note The preceding output might not include the compute nodes, also known as worker nodes, until some CSRs are approved. 
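While the compute nodes join the cluster, you can keep a live view of incoming certificate signing requests by using the standard watch flag of the CLI; press Ctrl+C to stop watching and continue with the review:
USD oc get csr --watch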
Review the pending CSRs and ensure that you see the client requests with the Pending or Approved status for each machine that you added to the cluster: USD oc get csr Example output NAME AGE REQUESTOR CONDITION csr-8b2br 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending csr-8vnps 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending ... In this example, two machines are joining the cluster. You might see more approved CSRs in the list. If the CSRs were not approved, after all of the pending CSRs for the machines you added are in Pending status, approve the CSRs for your cluster machines: Note Because the CSRs rotate automatically, approve your CSRs within an hour of adding the machines to the cluster. If you do not approve them within an hour, the certificates will rotate, and more than two certificates will be present for each node. You must approve all of these certificates. After the client CSR is approved, the Kubelet creates a secondary CSR for the serving certificate, which requires manual approval. Then, subsequent serving certificate renewal requests are automatically approved by the machine-approver if the Kubelet requests a new certificate with identical parameters. Note For clusters running on platforms that are not machine API enabled, such as bare metal and other user-provisioned infrastructure, you must implement a method of automatically approving the kubelet serving certificate requests (CSRs). If a request is not approved, then the oc exec , oc rsh , and oc logs commands cannot succeed, because a serving certificate is required when the API server connects to the kubelet. Any operation that contacts the Kubelet endpoint requires this certificate approval to be in place. The method must watch for new CSRs, confirm that the CSR was submitted by the node-bootstrapper service account in the system:node or system:admin groups, and confirm the identity of the node. To approve them individually, run the following command for each valid CSR: USD oc adm certificate approve <csr_name> 1 1 <csr_name> is the name of a CSR from the list of current CSRs. To approve all pending CSRs, run the following command: USD oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve Note Some Operators might not become available until some CSRs are approved. Now that your client requests are approved, you must review the server requests for each machine that you added to the cluster: USD oc get csr Example output NAME AGE REQUESTOR CONDITION csr-bfd72 5m26s system:node:ip-10-0-50-126.us-east-2.compute.internal Pending csr-c57lv 5m26s system:node:ip-10-0-95-157.us-east-2.compute.internal Pending ... If the remaining CSRs are not approved, and are in the Pending status, approve the CSRs for your cluster machines: To approve them individually, run the following command for each valid CSR: USD oc adm certificate approve <csr_name> 1 1 <csr_name> is the name of a CSR from the list of current CSRs. To approve all pending CSRs, run the following command: USD oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs oc adm certificate approve After all client and server CSRs have been approved, the machines have the Ready status. 
Verify this by running the following command: USD oc get nodes Example output NAME STATUS ROLES AGE VERSION master-0 Ready master 73m v1.29.4 master-1 Ready master 73m v1.29.4 master-2 Ready master 74m v1.29.4 worker-0 Ready worker 11m v1.29.4 worker-1 Ready worker 11m v1.29.4 Note It can take a few minutes after approval of the server CSRs for the machines to transition to the Ready status. Additional information For more information on CSRs, see Certificate Signing Requests . 4.24. Verifying a successful installation Verify that the OpenShift Container Platform installation is complete. Prerequisites You have the installation program ( openshift-install ) Procedure On a command line, enter: USD openshift-install --log-level debug wait-for install-complete The program outputs the console URL, as well as the administrator's login information. 4.25. Telemetry access for OpenShift Container Platform In OpenShift Container Platform 4.16, the Telemetry service, which runs by default to provide metrics about cluster health and the success of updates, requires internet access. If your cluster is connected to the internet, Telemetry runs automatically, and your cluster is registered to OpenShift Cluster Manager . After you confirm that your OpenShift Cluster Manager inventory is correct, either maintained automatically by Telemetry or manually by using OpenShift Cluster Manager, use subscription watch to track your OpenShift Container Platform subscriptions at the account or multi-cluster level. Additional resources See About remote health monitoring for more information about the Telemetry service 4.26. steps Customize your cluster . If necessary, you can opt out of remote health reporting . If you need to enable external access to node ports, configure ingress cluster traffic by using a node port . If you did not configure RHOSP to accept application traffic over floating IP addresses, configure RHOSP access with floating IP addresses . | [
"sudo subscription-manager register # If not done already",
"sudo subscription-manager attach --pool=USDYOUR_POOLID # If not done already",
"sudo subscription-manager repos --disable=* # If not done already",
"sudo subscription-manager repos --enable=rhel-8-for-x86_64-baseos-rpms --enable=openstack-16-tools-for-rhel-8-x86_64-rpms --enable=ansible-2.9-for-rhel-8-x86_64-rpms --enable=rhel-8-for-x86_64-appstream-rpms",
"sudo yum install python3-openstackclient ansible python3-openstacksdk python3-netaddr ansible-collections-openstack",
"sudo alternatives --set python /usr/bin/python3",
"xargs -n 1 curl -O <<< ' https://raw.githubusercontent.com/openshift/installer/release-4.16/upi/openstack/bootstrap.yaml https://raw.githubusercontent.com/openshift/installer/release-4.16/upi/openstack/common.yaml https://raw.githubusercontent.com/openshift/installer/release-4.16/upi/openstack/compute-nodes.yaml https://raw.githubusercontent.com/openshift/installer/release-4.16/upi/openstack/control-plane.yaml https://raw.githubusercontent.com/openshift/installer/release-4.16/upi/openstack/down-bootstrap.yaml https://raw.githubusercontent.com/openshift/installer/release-4.16/upi/openstack/down-compute-nodes.yaml https://raw.githubusercontent.com/openshift/installer/release-4.16/upi/openstack/down-control-plane.yaml https://raw.githubusercontent.com/openshift/installer/release-4.16/upi/openstack/down-network.yaml https://raw.githubusercontent.com/openshift/installer/release-4.16/upi/openstack/down-security-groups.yaml https://raw.githubusercontent.com/openshift/installer/release-4.16/upi/openstack/down-containers.yaml https://raw.githubusercontent.com/openshift/installer/release-4.16/upi/openstack/inventory.yaml https://raw.githubusercontent.com/openshift/installer/release-4.16/upi/openstack/network.yaml https://raw.githubusercontent.com/openshift/installer/release-4.16/upi/openstack/security-groups.yaml https://raw.githubusercontent.com/openshift/installer/release-4.16/upi/openstack/update-network-resources.yaml'",
"tar -xvf openshift-install-linux.tar.gz",
"ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1",
"cat <path>/<file_name>.pub",
"cat ~/.ssh/id_ed25519.pub",
"eval \"USD(ssh-agent -s)\"",
"Agent pid 31874",
"ssh-add <path>/<file_name> 1",
"Identity added: /home/<you>/<path>/<file_name> (<computer_name>)",
"file <name_of_downloaded_file>",
"openstack image create --container-format=bare --disk-format=qcow2 --file rhcos-USD{RHCOS_VERSION}-openstack.qcow2 rhcos",
"openstack network list --long -c ID -c Name -c \"Router Type\"",
"+--------------------------------------+----------------+-------------+ | ID | Name | Router Type | +--------------------------------------+----------------+-------------+ | 148a8023-62a7-4672-b018-003462f8d7dc | public_network | External | +--------------------------------------+----------------+-------------+",
"openstack floating ip create --description \"API <cluster_name>.<base_domain>\" <external_network>",
"openstack floating ip create --description \"Ingress <cluster_name>.<base_domain>\" <external_network>",
"openstack floating ip create --description \"bootstrap machine\" <external_network>",
"api.<cluster_name>.<base_domain>. IN A <API_FIP> *.apps.<cluster_name>.<base_domain>. IN A <apps_FIP>",
"api.<cluster_name>.<base_domain>. IN A <api_port_IP> *.apps.<cluster_name>.<base_domain>. IN A <ingress_port_IP>",
"clouds: shiftstack: auth: auth_url: http://10.10.14.42:5000/v3 project_name: shiftstack username: <username> password: <password> user_domain_name: Default project_domain_name: Default dev-env: region_name: RegionOne auth: username: <username> password: <password> project_name: 'devonly' auth_url: 'https://10.10.14.22:5001/v2.0'",
"clouds: shiftstack: cacert: \"/etc/pki/ca-trust/source/anchors/ca.crt.pem\"",
"oc edit configmap -n openshift-config cloud-provider-config",
"export OS_NET_ID=\"openshift-USD(dd if=/dev/urandom count=4 bs=1 2>/dev/null |hexdump -e '\"%02x\"')\"",
"echo USDOS_NET_ID",
"echo \"{\\\"os_net_id\\\": \\\"USDOS_NET_ID\\\"}\" | tee netid.json",
"ansible-playbook -i inventory.yaml network.yaml",
"./openshift-install create install-config --dir <installation_directory> 1",
"apiVersion: v1 baseDomain: example.com controlPlane: name: master platform: {} replicas: 3 compute: - name: worker platform: openstack: type: ml.large replicas: 3 metadata: name: example networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 10.0.0.0/16 serviceNetwork: - 172.30.0.0/16 networkType: OVNKubernetes platform: openstack: cloud: mycloud externalNetwork: external computeFlavor: m1.xlarge apiFloatingIP: 128.0.0.1 fips: false pullSecret: '{\"auths\": ...}' sshKey: ssh-ed25519 AAAA",
"apiVersion: v1 baseDomain: example.com controlPlane: name: master platform: {} replicas: 3 compute: - name: worker platform: openstack: type: ml.large replicas: 3 metadata: name: example networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 - cidr: fd01::/48 hostPrefix: 64 machineNetwork: - cidr: 192.168.25.0/24 - cidr: fd2e:6f44:5dd8:c956::/64 serviceNetwork: - 172.30.0.0/16 - fd02::/112 networkType: OVNKubernetes platform: openstack: cloud: mycloud externalNetwork: external computeFlavor: m1.xlarge apiVIPs: - 192.168.25.10 - fd2e:6f44:5dd8:c956:f816:3eff:fec3:5955 ingressVIPs: - 192.168.25.132 - fd2e:6f44:5dd8:c956:f816:3eff:fe40:aecb controlPlanePort: fixedIPs: - subnet: name: openshift-dual4 - subnet: name: openshift-dual6 network: name: openshift-dual fips: false pullSecret: '{\"auths\": ...}' sshKey: ssh-ed25519 AAAA",
"python -c 'import os import sys import yaml import re re_os_net_id = re.compile(r\"{{\\s*os_net_id\\s*}}\") os_net_id = os.getenv(\"OS_NET_ID\") path = \"common.yaml\" facts = None for _dict in yaml.safe_load(open(path))[0][\"tasks\"]: if \"os_network\" in _dict.get(\"set_fact\", {}): facts = _dict[\"set_fact\"] break if not facts: print(\"Cannot find `os_network` in common.yaml file. Make sure OpenStack resource names are defined in one of the tasks.\") sys.exit(1) os_network = re_os_net_id.sub(os_net_id, facts[\"os_network\"]) os_subnet = re_os_net_id.sub(os_net_id, facts[\"os_subnet\"]) path = \"install-config.yaml\" data = yaml.safe_load(open(path)) inventory = yaml.safe_load(open(\"inventory.yaml\"))[\"all\"][\"hosts\"][\"localhost\"] machine_net = [{\"cidr\": inventory[\"os_subnet_range\"]}] api_vips = [inventory[\"os_apiVIP\"]] ingress_vips = [inventory[\"os_ingressVIP\"]] ctrl_plane_port = {\"network\": {\"name\": os_network}, \"fixedIPs\": [{\"subnet\": {\"name\": os_subnet}}]} if inventory.get(\"os_subnet6_range\"): 1 os_subnet6 = re_os_net_id.sub(os_net_id, facts[\"os_subnet6\"]) machine_net.append({\"cidr\": inventory[\"os_subnet6_range\"]}) api_vips.append(inventory[\"os_apiVIP6\"]) ingress_vips.append(inventory[\"os_ingressVIP6\"]) data[\"networking\"][\"networkType\"] = \"OVNKubernetes\" data[\"networking\"][\"clusterNetwork\"].append({\"cidr\": inventory[\"cluster_network6_cidr\"], \"hostPrefix\": inventory[\"cluster_network6_prefix\"]}) data[\"networking\"][\"serviceNetwork\"].append(inventory[\"service_subnet6_range\"]) ctrl_plane_port[\"fixedIPs\"].append({\"subnet\": {\"name\": os_subnet6}}) data[\"networking\"][\"machineNetwork\"] = machine_net data[\"platform\"][\"openstack\"][\"apiVIPs\"] = api_vips data[\"platform\"][\"openstack\"][\"ingressVIPs\"] = ingress_vips data[\"platform\"][\"openstack\"][\"controlPlanePort\"] = ctrl_plane_port del data[\"platform\"][\"openstack\"][\"externalDNS\"] open(path, \"w\").write(yaml.dump(data, default_flow_style=False))'",
"python -c ' import yaml; path = \"install-config.yaml\"; data = yaml.safe_load(open(path)); data[\"compute\"][0][\"replicas\"] = 0; open(path, \"w\").write(yaml.dump(data, default_flow_style=False))'",
"openstack network create --project openshift",
"openstack subnet create --project openshift",
"openstack subnet create --dhcp --host-route destination=169.254.169.254/32,gateway=192.0.2.2",
"platform: openstack: apiVIPs: 1 - 192.0.2.13 ingressVIPs: 2 - 192.0.2.23 machinesSubnet: fa806b2f-ac49-4bce-b9db-124bc64209bf # networking: machineNetwork: - cidr: 192.0.2.0/24",
"./openshift-install create manifests --dir <installation_directory> 1",
"rm -f openshift/99_openshift-cluster-api_master-machines-*.yaml openshift/99_openshift-cluster-api_worker-machineset-*.yaml openshift/99_openshift-machine-api_master-control-plane-machine-set.yaml",
"./openshift-install create ignition-configs --dir <installation_directory> 1",
". ├── auth │ ├── kubeadmin-password │ └── kubeconfig ├── bootstrap.ign ├── master.ign ├── metadata.json └── worker.ign",
"export INFRA_ID=USD(jq -r .infraID metadata.json)",
"import base64 import json import os with open('bootstrap.ign', 'r') as f: ignition = json.load(f) files = ignition['storage'].get('files', []) infra_id = os.environ.get('INFRA_ID', 'openshift').encode() hostname_b64 = base64.standard_b64encode(infra_id + b'-bootstrap\\n').decode().strip() files.append( { 'path': '/etc/hostname', 'mode': 420, 'contents': { 'source': 'data:text/plain;charset=utf-8;base64,' + hostname_b64 } }) ca_cert_path = os.environ.get('OS_CACERT', '') if ca_cert_path: with open(ca_cert_path, 'r') as f: ca_cert = f.read().encode() ca_cert_b64 = base64.standard_b64encode(ca_cert).decode().strip() files.append( { 'path': '/opt/openshift/tls/cloud-ca-cert.pem', 'mode': 420, 'contents': { 'source': 'data:text/plain;charset=utf-8;base64,' + ca_cert_b64 } }) ignition['storage']['files'] = files; with open('bootstrap.ign', 'w') as f: json.dump(ignition, f)",
"openstack image create --disk-format=raw --container-format=bare --file bootstrap.ign <image_name>",
"openstack image show <image_name>",
"openstack catalog show image",
"openstack token issue -c id -f value",
"{ \"ignition\": { \"config\": { \"merge\": [{ \"source\": \"<storage_url>\", 1 \"httpHeaders\": [{ \"name\": \"X-Auth-Token\", 2 \"value\": \"<token_ID>\" 3 }] }] }, \"security\": { \"tls\": { \"certificateAuthorities\": [{ \"source\": \"data:text/plain;charset=utf-8;base64,<base64_encoded_certificate>\" 4 }] } }, \"version\": \"3.2.0\" } }",
"for index in USD(seq 0 2); do MASTER_HOSTNAME=\"USDINFRA_ID-master-USDindex\\n\" python -c \"import base64, json, sys; ignition = json.load(sys.stdin); storage = ignition.get('storage', {}); files = storage.get('files', []); files.append({'path': '/etc/hostname', 'mode': 420, 'contents': {'source': 'data:text/plain;charset=utf-8;base64,' + base64.standard_b64encode(b'USDMASTER_HOSTNAME').decode().strip(), 'verification': {}}, 'filesystem': 'root'}); storage['files'] = files; ignition['storage'] = storage json.dump(ignition, sys.stdout)\" <master.ign >\"USDINFRA_ID-master-USDindex-ignition.json\" done",
"# The public network providing connectivity to the cluster. If not # provided, the cluster external connectivity must be provided in another # way. # Required for os_api_fip, os_ingress_fip, os_bootstrap_fip. os_external_network: 'external'",
"# OpenShift API floating IP address. If this value is non-empty, the # corresponding floating IP will be attached to the Control Plane to # serve the OpenShift API. os_api_fip: '203.0.113.23' # OpenShift Ingress floating IP address. If this value is non-empty, the # corresponding floating IP will be attached to the worker nodes to serve # the applications. os_ingress_fip: '203.0.113.19' # If this value is non-empty, the corresponding floating IP will be # attached to the bootstrap machine. This is needed for collecting logs # in case of install failure. os_bootstrap_fip: '203.0.113.20'",
"ansible-playbook -i inventory.yaml security-groups.yaml",
"ansible-playbook -i inventory.yaml update-network-resources.yaml 1",
"openstack subnet set --dns-nameserver <server_1> --dns-nameserver <server_2> \"USDINFRA_ID-nodes\"",
"all: hosts: localhost: ansible_connection: local ansible_python_interpreter: \"{{ansible_playbook_python}}\" # User-provided values os_subnet_range: '10.0.0.0/16' os_flavor_master: 'my-bare-metal-flavor' 1 os_flavor_worker: 'my-bare-metal-flavor' 2 os_image_rhcos: 'rhcos' os_external_network: 'external'",
"./openshift-install wait-for install-complete --log-level debug",
"ansible-playbook -i inventory.yaml bootstrap.yaml",
"openstack console log show \"USDINFRA_ID-bootstrap\"",
"ansible-playbook -i inventory.yaml control-plane.yaml",
"openshift-install wait-for bootstrap-complete",
"INFO API v1.29.4 up INFO Waiting up to 30m0s for bootstrapping to complete INFO It is now safe to remove the bootstrap resources",
"export KUBECONFIG=<installation_directory>/auth/kubeconfig 1",
"oc whoami",
"system:admin",
"ansible-playbook -i inventory.yaml down-bootstrap.yaml",
"ansible-playbook -i inventory.yaml compute-nodes.yaml",
"oc get nodes",
"NAME STATUS ROLES AGE VERSION master-0 Ready master 63m v1.29.4 master-1 Ready master 63m v1.29.4 master-2 Ready master 64m v1.29.4",
"oc get csr",
"NAME AGE REQUESTOR CONDITION csr-8b2br 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending csr-8vnps 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending",
"oc adm certificate approve <csr_name> 1",
"oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve",
"oc get csr",
"NAME AGE REQUESTOR CONDITION csr-bfd72 5m26s system:node:ip-10-0-50-126.us-east-2.compute.internal Pending csr-c57lv 5m26s system:node:ip-10-0-95-157.us-east-2.compute.internal Pending",
"oc adm certificate approve <csr_name> 1",
"oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs oc adm certificate approve",
"oc get nodes",
"NAME STATUS ROLES AGE VERSION master-0 Ready master 73m v1.29.4 master-1 Ready master 73m v1.29.4 master-2 Ready master 74m v1.29.4 worker-0 Ready worker 11m v1.29.4 worker-1 Ready worker 11m v1.29.4",
"openshift-install --log-level debug wait-for install-complete"
] | https://docs.redhat.com/en/documentation/openshift_container_platform_installation/4.16/html/installing_on_openstack/installing-openstack-user |
Preface | Preface The Red Hat build of Cryostat is a container-native implementation of JDK Flight Recorder (JFR) that you can use to securely monitor the Java Virtual Machine (JVM) performance in workloads that run on an OpenShift Container Platform cluster. You can use Cryostat 3.0 to start, stop, retrieve, archive, import, and export JFR data for JVMs inside your containerized applications by using a web console or an HTTP API. Depending on your use case, you can store and analyze your recordings directly on your Red Hat OpenShift cluster by using the built-in tools that Cryostat provides or you can export recordings to an external monitoring application to perform a more in-depth analysis of your recorded data. Important Red Hat build of Cryostat is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . | null | https://docs.redhat.com/en/documentation/red_hat_build_of_cryostat/3/html/configuring_advanced_cryostat_configurations/preface-cryostat |
4.7.4. Testing the Fence Configuration | 4.7.4. Testing the Fence Configuration As of Red Hat Enterprise Linux Release 6.4, you can test the fence configuration for each node in a cluster with the fence_check utility. The following example shows the output of a successful execution of this command. For information on this utility, see the fence_check (8) man page. | [
"fence_check fence_check run at Wed Jul 23 09:13:57 CDT 2014 pid: 4769 Testing host-098 method 1: success Testing host-099 method 1: success Testing host-100 method 1: success"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/cluster_administration/s2-fence-configurationtest-conga-ca |
8.4.3. Remote Node Resource Options | 8.4.3. Remote Node Resource Options You configure a remote node as a cluster resource with the pcs resource create command, specifying ocf:pacemaker:remote as the resource type. Table 8.5, "Resource Options for Remote Nodes" describes the resource options you can configure for a remote resource. Table 8.5. Resource Options for Remote Nodes Field Default Description reconnect_interval 0 Time in seconds to wait before attempting to reconnect to a remote node after an active connection to the remote node has been severed. This wait is recurring. If reconnect fails after the wait period, a new reconnect attempt will be made after observing the wait time. When this option is in use, Pacemaker will keep attempting to reach out and connect to the remote node indefinitely after each wait interval. server Server location to connect to. This can be an IP address or host name. port TCP port to connect to. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/configuring_the_red_hat_high_availability_add-on_with_pacemaker/remote_node_options |
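For illustration, a remote node resource using the options in Table 8.5 might be created as follows; this is a minimal sketch, and the resource name, server address, port, and interval shown are placeholder values rather than recommendations:
pcs resource create remote-node1 ocf:pacemaker:remote server=192.0.2.50 port=3121 reconnect_interval=60
Any option that is omitted keeps the default listed in the table.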
Chapter 2. Installing and using Python | Chapter 2. Installing and using Python In RHEL 9, Python 3.9 is the default Python implementation. Since RHEL 9.2, Python 3.11 is available as the python3.11 package suite, and since RHEL 9.4, Python 3.12 as the python3.12 package suite. The unversioned python command points to the default Python 3.9 version. 2.1. Installing Python 3 The default Python implementation is usually installed by default. To install it manually, use the following procedure. Procedure To install Python 3.9 , use: To install Python 3.11 , use: To install Python 3.12 , use: Verification steps To verify the Python version installed on your system, use the --version option with the python command specific for your required version of Python . For Python 3.9 : For Python 3.11 : For Python 3.12 : 2.2. Installing additional Python 3 packages Packages prefixed with python3- contain add-on modules for the default Python 3.9 version. Packages prefixed with python3.11- contain add-on modules for Python 3.11 . Packages prefixed with python3.12- contain add-on modules for Python 3.12 . Procedure To install the Requests module for Python 3.9 , use: To install the pip package installer from Python 3.9 , use: To install the pip package installer from Python 3.11 , use: To install the pip package installer from Python 3.12 , use: Additional resources Upstream documentation about Python add-on modules 2.3. Installing additional Python 3 tools for developers Additional Python tools for developers are distributed mostly through the CodeReady Linux Builder (CRB) repository. The python3-pytest package and its dependencies are available in the AppStream repository. The CRB repository contains, for example, the following packages: python3*-idle python3*-debug python3*-Cython python3.11-pytest and its dependencies python3.12-pytest and its dependencies. Important The content in the CodeReady Linux Builder repository is unsupported by Red Hat. Note Not all upstream Python -related packages are available in RHEL. To install packages from the CRB repository, use the following procedure. Procedure Enable the CodeReady Linux Builder repository: Install the python3*-Cython package: For Python 3.9 : For Python 3.11 : For Python 3.12 : Additional resources How to enable and make use of content within CodeReady Linux Builder Package manifest 2.4. Using Python The following procedure contains examples of running the Python interpreter or Python -related commands. Prerequisites Ensure that Python is installed. If you want to download and install third-party applications for Python 3.11 or Python 3.12 , install the python3.11-pip or python3.12-pip package. Procedure To run the Python 3.9 interpreter or related commands, use, for example: To run the Python 3.11 interpreter or related commands, use, for example: To run the Python 3.12 interpreter or related commands, use, for example: | [
"dnf install python3",
"dnf install python3.11",
"dnf install python3.12",
"python3 --version",
"python3.11 --version",
"python3.12 --version",
"dnf install python3-requests",
"dnf install python3-pip",
"dnf install python3.11-pip",
"dnf install python3.12-pip",
"subscription-manager repos --enable codeready-builder-for-rhel-9-x86_64-rpms",
"dnf install python3-Cython",
"dnf install python3.11-Cython",
"dnf install python3.12-Cython",
"python3 python3 -m venv --help python3 -m pip install package pip3 install package",
"python3.11 python3.11 -m venv --help python3.11 -m pip install package pip3.11 install package",
"python3.12 python3.12 -m venv --help python3.12 -m pip install package pip3.12 install package"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/9/html/installing_and_using_dynamic_programming_languages/assembly_installing-and-using-python_installing-and-using-dynamic-programming-languages |
Chapter 16. Kerberos PKINIT authentication in IdM | Chapter 16. Kerberos PKINIT authentication in IdM Public Key Cryptography for Initial Authentication in Kerberos (PKINIT) is a preauthentication mechanism for Kerberos. The Identity Management (IdM) server includes a mechanism for Kerberos PKINIT authentication. 16.1. Default PKINIT configuration The default PKINIT configuration on your IdM servers depends on the certificate authority (CA) configuration. Table 16.1. Default PKINIT configuration in IdM CA configuration PKINIT configuration Without a CA, no external PKINIT certificate provided Local PKINIT: IdM only uses PKINIT for internal purposes on servers. Without a CA, external PKINIT certificate provided to IdM IdM configures PKINIT by using the external Kerberos key distribution center (KDC) certificate and CA certificate. With an Integrated CA IdM configures PKINIT by using the certificate signed by the IdM CA. 16.2. Displaying the current PKINIT configuration IdM provides multiple commands you can use to query the PKINIT configuration in your domain. Procedure To determine the PKINIT status in your domain, use the ipa pkinit-status command: The command displays the PKINIT configuration status as enabled or disabled : enabled : PKINIT is configured using a certificate signed by the integrated IdM CA or an external PKINIT certificate. disabled : IdM only uses PKINIT for internal purposes on IdM servers. To list the IdM servers with active Kerberos key distribution centers (KDCs) that support PKINIT for IdM clients, use the ipa config-show command on any server: 16.3. Configuring PKINIT in IdM If your IdM servers are running with PKINIT disabled, use these steps to enable it. For example, a server is running with PKINIT disabled if you passed the --no-pkinit option with the ipa-server-install or ipa-replica-install utilities. Prerequisites Ensure that all IdM servers with a certificate authority (CA) installed are running on the same domain level. Procedure Check if PKINIT is enabled on the server: If PKINIT is disabled, you will see the following output: You can also use the command to find all the servers where PKINIT is enabled if you omit the --server <server_fqdn> parameter. If you are using IdM without CA: On the IdM server, install the CA certificate that signed the Kerberos key distribution center (KDC) certificate: To update all IPA hosts, repeat the ipa-certupdate command on all replicas and clients: Check if the CA certificate has already been added using the ipa-cacert-manage list command. For example: Use the ipa-server-certinstall utility to install an external KDC certificate. The KDC certificate must meet the following conditions: It is issued with the common name CN= fully_qualified_domain_name,certificate_subject_base . It includes the Kerberos principal krbtgt/ REALM_NAME@REALM_NAME . It contains the Object Identifier (OID) for KDC authentication: 1.3.6.1.5.2.3.5. See your PKINIT status: If you are using IdM with a CA certificate, enable PKINIT as follows: If you are using an IdM CA, the command requests a PKINIT KDC certificate from the CA. Additional resources ipa-server-certinstall(1) man page on your system 16.4. Additional resources For details on Kerberos PKINIT, PKINIT configuration in the MIT Kerberos Documentation. | [
"ipa pkinit-status Server name: server1.example.com PKINIT status: enabled [...output truncated...] Server name: server2.example.com PKINIT status: disabled [...output truncated...]",
"ipa config-show Maximum username length: 32 Home directory base: /home Default shell: /bin/sh Default users group: ipausers [...output truncated...] IPA masters capable of PKINIT: server1.example.com [...output truncated...]",
"kinit admin Password for [email protected]: ipa pkinit-status --server=server.idm.example.com 1 server matched ---------------- Server name: server.idm.example.com PKINIT status:enabled ---------------------------- Number of entries returned 1 ----------------------------",
"ipa pkinit-status --server server.idm.example.com ----------------- 0 servers matched ----------------- ---------------------------- Number of entries returned 0 ----------------------------",
"ipa-cacert-manage install -t CT,C,C ca.pem",
"ipa-certupdate",
"ipa-cacert-manage list CN=CA,O=Example Organization The ipa-cacert-manage command was successful",
"ipa-server-certinstall --kdc kdc.pem kdc.key systemctl restart krb5kdc.service",
"ipa pkinit-status Server name: server1.example.com PKINIT status: enabled [...output truncated...] Server name: server2.example.com PKINIT status: disabled [...output truncated...]",
"ipa-pkinit-manage enable Configuring Kerberos KDC (krb5kdc) [1/1]: installing X509 Certificate for PKINIT Done configuring Kerberos KDC (krb5kdc). The ipa-pkinit-manage command was successful"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/9/html/managing_idm_users_groups_hosts_and_access_control_rules/kerberos-pkinit-authentication-in-idm_managing-users-groups-hosts |
4.9. Configuring Global Cluster Resources | 4.9. Configuring Global Cluster Resources You can configure global resources that can be used by any service running in the cluster, and you can configure resources that are available only to a specific service. To add a global cluster resource, follow the steps in this section. You can add a resource that is local to a particular service when you configure the service, as described in Section 4.10, "Adding a Cluster Service to the Cluster" . From the cluster-specific page, you can add resources to that cluster by clicking on Resources along the top of the cluster display. This displays the resources that have been configured for that cluster. Click Add . This displays the Add Resource to Cluster drop-down menu. Click the drop-down box under Add Resource to Cluster and select the type of resource to configure. Enter the resource parameters for the resource you are adding. Appendix B, HA Resource Parameters describes resource parameters. Click Submit . Clicking Submit returns to the Resources page, which displays the added resource (and other resources). To modify an existing resource, perform the following steps. From the luci Resources page, click on the name of the resource to modify. This displays the parameters for that resource. Edit the resource parameters. Click Apply . To delete an existing resource, perform the following steps. From the luci Resources page, click the check box for any resources to delete. Click Delete . As of the Red Hat Enterprise Linux 6.6 release, you can sort the columns in a resource list by clicking on the header for the sort category. Clicking on the Name/IP header once sorts the resources alphabetically, according to resource name. Clicking on the Name/IP header a second time sorts the resources in reverse alphabetic order, according to resource name. Clicking on the Type header once sorts the resources alphabetically, according to resource type. Clicking on the Type header a second time sorts the resources in reverse alphabetic order, according to resource type. Clicking on the In Use header once sorts the resources so that they are grouped according to whether they are in use or not. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/cluster_administration/s1-config-add-resource-conga-ca
3.3.8. Cache Volumes | 3.3.8. Cache Volumes As of the Red Hat Enterprise Linux 6.7 release, LVM supports the use of fast block devices (such as SSD drives) as write-back or write-through caches for larger, slower block devices. Users can create cache logical volumes to improve the performance of their existing logical volumes or create new cache logical volumes composed of a small and fast device coupled with a large and slow device. For information on creating LVM cache volumes, see Section 5.4.7, "Creating LVM Cache Logical Volumes" . | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/logical_volume_manager_administration/cache_volumes
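As a minimal sketch of cache volume creation (the volume group, origin logical volume, and fast device names here are placeholders; see Section 5.4.7 for the authoritative procedure), a cache pool is created on the fast device and then attached to the existing logical volume:
lvcreate --type cache-pool -L 1G -n fast_pool VG /dev/fast_ssd
lvconvert --type cache --cachepool VG/fast_pool VG/origin_lv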
3.5. Tracking Tag History | 3.5. Tracking Tag History The ETL Service collects tag information as displayed in the Administration Portal every minute and stores this data in the tags historical tables. The ETL Service tracks five types of changes: A tag is created in the Administration Portal - the ETL Service copies the tag details, position in the tag tree, and relation to other objects in the tag tree. An entity is attached to the tag tree in the Administration Portal - the ETL Service replicates the addition to the ovirt_engine_history database as a new entry. A tag is updated - the ETL Service replicates the change of tag details to the ovirt_engine_history database as a new entry. An entity or tag branch is removed from the Administration Portal - the ovirt_engine_history database flags the corresponding tag and relations as removed in new entries. Removed tags and relations are only flagged as removed or detached. A tag branch is moved - the corresponding tag and relations are updated as new entries. Moved tags and relations are only flagged as updated. | null | https://docs.redhat.com/en/documentation/red_hat_virtualization/4.4/html/data_warehouse_guide/tracking_tag_history
Chapter 22. Write Barriers | Chapter 22. Write Barriers A write barrier is a kernel mechanism used to ensure that file system metadata is correctly written and ordered on persistent storage, even when storage devices with volatile write caches lose power. File systems with write barriers enabled also ensure that data transmitted via fsync() is persistent throughout a power loss. Enabling write barriers incurs a substantial performance penalty for some applications. Specifically, applications that use fsync() heavily or create and delete many small files will likely run much slower. 22.1. Importance of Write Barriers File systems take great care to safely update metadata, ensuring consistency. Journalled file systems bundle metadata updates into transactions and send them to persistent storage in the following manner: First, the file system sends the body of the transaction to the storage device. Then, the file system sends a commit block. If the transaction and its corresponding commit block are written to disk, the file system assumes that the transaction will survive any power failure. However, file system integrity during power failure becomes more complex for storage devices with extra caches. Storage target devices like local S-ATA or SAS drives may have write caches ranging from 32MB to 64MB in size (with modern drives). Hardware RAID controllers often contain internal write caches. Further, high end arrays, like those from NetApp, IBM, Hitachi and EMC (among others), also have large caches. Storage devices with write caches report I/O as "complete" when the data is in cache; if the cache loses power, it loses its data as well. Worse, as the cache de-stages to persistent storage, it may change the original metadata ordering. When this occurs, the commit block may be present on disk without having the complete, associated transaction in place. As a result, the journal may replay these uninitialized transaction blocks into the file system during post-power-loss recovery; this will cause data inconsistency and corruption. How Write Barriers Work Write barriers are implemented in the Linux kernel via storage write cache flushes before and after the I/O, which is order-critical . After the transaction is written, the storage cache is flushed, the commit block is written, and the cache is flushed again. This ensures that: The disk contains all the data. No re-ordering has occurred. With barriers enabled, an fsync() call will also issue a storage cache flush. This guarantees that file data is persistent on disk even if power loss occurs shortly after fsync() returns. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/storage_administration_guide/ch-writebarriers |
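Write barriers are typically controlled per file system at mount time. The ext4 example below is illustrative only; option names differ between file systems, and barriers should only be disabled when the storage write cache is battery-backed or disabled:
mount -o barrier=1 /dev/sda5 /mnt/data   # write barriers enabled (the ext4 default)
mount -o barrier=0 /dev/sda5 /mnt/data   # write barriers disabled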
Chapter 5. Kafka Bridge interface | Chapter 5. Kafka Bridge interface The Kafka Bridge provides a RESTful interface that allows HTTP-based clients to interact with a Kafka cluster. It offers the advantages of a HTTP API connection to Streams for Apache Kafka for clients to produce and consume messages without the requirement to use the native Kafka protocol. The API has two main resources - consumers and topics - that are exposed and made accessible through endpoints to interact with consumers and producers in your Kafka cluster. The resources relate only to the Kafka Bridge, not the consumers and producers connected directly to Kafka. 5.1. HTTP requests The Kafka Bridge supports HTTP requests to a Kafka cluster, with methods to perform operations such as the following: Send messages to a topic. Retrieve messages from topics. Retrieve a list of partitions for a topic. Create and delete consumers. Subscribe consumers to topics, so that they start receiving messages from those topics. Retrieve a list of topics that a consumer is subscribed to. Unsubscribe consumers from topics. Assign partitions to consumers. Commit a list of consumer offsets. Seek on a partition, so that a consumer starts receiving messages from the first or last offset position, or a given offset position. The methods provide JSON responses and HTTP response code error handling. Messages can be sent in JSON or binary formats. Additional resources To view the API documentation, including example requests and responses, see Using the Streams for Apache Kafka Bridge . 5.2. Supported clients for the Kafka Bridge You can use the Kafka Bridge to integrate both internal and external HTTP client applications with your Kafka cluster. Internal clients Internal clients are container-based HTTP clients running in the same OpenShift cluster as the Kafka Bridge itself. Internal clients can access the Kafka Bridge on the host and port defined in the KafkaBridge custom resource. External clients External clients are HTTP clients running outside the OpenShift cluster in which the Kafka Bridge is deployed and running. External clients can access the Kafka Bridge through an OpenShift Route, a loadbalancer service, or using an Ingress. HTTP internal and external client integration | null | https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.7/html/streams_for_apache_kafka_on_openshift_overview/overview-components-kafka-bridge_str |
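For example, a client could produce a message to a topic with an HTTP request along the following lines; the bridge host, port, topic, and payload are placeholders, and the authoritative endpoint and media type definitions are in the API documentation referenced above:
curl -X POST http://<bridge-host>:8080/topics/<topic-name> -H 'Content-Type: application/vnd.kafka.json.v2+json' -d '{"records":[{"value":{"status":"created"}}]}'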
Chapter 13. Valgrind | Chapter 13. Valgrind Valgrind is an instrumentation framework that ships with a number of tools for profiling applications. It can be used to detect various memory errors and memory-management problems, such as the use of uninitialized memory or an improper allocation and freeing of memory, or to identify the use of improper arguments in system calls. For a complete list of profiling tools that are distributed with the Red Hat Developer Toolset version of Valgrind , see Table 13.1, "Tools Distributed with Valgrind for Red Hat Developer Toolset" . Valgrind profiles an application by rewriting it and instrumenting the rewritten binary. This allows you to profile your application without the need to recompile it, but it also makes Valgrind significantly slower than other profilers, especially when performing extremely detailed runs. It is therefore not suited to debugging time-specific issues, or kernel-space debugging. Red Hat Developer Toolset is distributed with Valgrind 3.19.0 . This version is more recent than the version included in the previous release of Red Hat Developer Toolset and provides numerous bug fixes and enhancements. Table 13.1. Tools Distributed with Valgrind for Red Hat Developer Toolset Name Description Memcheck Detects memory management problems by intercepting system calls and checking all read and write operations. Cachegrind Identifies the sources of cache misses by simulating the level 1 instruction cache ( I1 ), level 1 data cache ( D1 ), and unified level 2 cache ( L2 ). Callgrind Generates a call graph representing the function call history. Helgrind Detects synchronization errors in multithreaded C, C++, and Fortran programs that use POSIX threading primitives. DRD Detects errors in multithreaded C and C++ programs that use POSIX threading primitives or any other threading concepts that are built on top of these POSIX threading primitives. Massif Monitors heap and stack usage. 13.1. Installing Valgrind In Red Hat Developer Toolset, Valgrind is provided by the devtoolset-12-valgrind package and is automatically installed with devtoolset-12-perftools . For detailed instructions on how to install Red Hat Developer Toolset and related packages to your system, see Section 1.5, "Installing Red Hat Developer Toolset" . Note Note that if you use Valgrind in combination with the GNU Debugger , it is recommended that you use the version of GDB that is included in Red Hat Developer Toolset to ensure that all features are fully supported. 13.2. Using Valgrind To run any of the Valgrind tools on a program you want to profile: See Table 13.1, "Tools Distributed with Valgrind for Red Hat Developer Toolset" for a list of tools that are distributed with Valgrind . The argument of the --tool command line option must be specified in lower case, and if this option is omitted, Valgrind uses Memcheck by default. For example, to run Cachegrind on a program to identify the sources of cache misses: Note that you can execute any command using the scl utility, causing it to be run with the Red Hat Developer Toolset binaries used in preference to the Red Hat Enterprise Linux system equivalent. This allows you to run a shell session with Red Hat Developer Toolset Valgrind as default: Note To verify the version of Valgrind you are using at any point: Red Hat Developer Toolset's valgrind executable path will begin with /opt . Alternatively, you can use the following command to confirm that the version number matches that for Red Hat Developer Toolset Valgrind : 13.3.
Additional Resources For more information about Valgrind and its features, see the resources listed below. Installed Documentation valgrind (1) - The manual page for the valgrind utility provides detailed information on how to use Valgrind. To display the manual page for the version included in Red Hat Developer Toolset: Valgrind Documentation - HTML documentation for Valgrind is located at /opt/rh/devtoolset-12/root/usr/share/doc/devtoolset-12-valgrind-3.19.0/html/index.html . Online Documentation Red Hat Enterprise Linux 7 Developer Guide - The Developer Guide for Red Hat Enterprise Linux 7 provides more information about Valgrind and its Eclipse plug-in. Red Hat Enterprise Linux 7 Performance Tuning Guide - The Performance Tuning Guide for Red Hat Enterprise Linux 7 provide more detailed information about using Valgrind to profile applications. See Also Chapter 1, Red Hat Developer Toolset - An overview of Red Hat Developer Toolset and more information on how to install it on your system. Chapter 11, memstomp - Instructions on using the memstomp utility to identify calls to library functions with overlapping memory regions that are not allowed by various standards. Chapter 12, SystemTap - An introduction to the SystemTap tool and instructions on how to use it to monitor the activities of a running system. Chapter 14, OProfile - Instructions on using the OProfile tool to determine which sections of code consume the greatest amount of CPU time and why. Chapter 15, Dyninst - Instructions on using the Dyninst library to instrument a user-space executable. | [
"scl enable devtoolset-12 'valgrind --tool= tool program argument ...'",
"scl enable devtoolset-12 'valgrind --tool=cachegrind program argument ...'",
"scl enable devtoolset-12 'bash'",
"which valgrind",
"valgrind --version",
"scl enable devtoolset-12 'man valgrind'"
] | https://docs.redhat.com/en/documentation/red_hat_developer_toolset/12/html/user_guide/chap-valgrind |
Making open source more inclusive | Making open source more inclusive Red Hat is committed to replacing problematic language in our code, documentation, and web properties. We are beginning with these four terms: master, slave, blacklist, and whitelist. Because of the enormity of this endeavor, these changes will be implemented gradually over several upcoming releases. For more details, see our CTO Chris Wright's message . | null | https://docs.redhat.com/en/documentation/red_hat_jboss_core_services/2.4.57/html/red_hat_jboss_core_services_apache_http_server_2.4.57_service_pack_2_release_notes/making-open-source-more-inclusive_2.4.57-release-notes |
Chapter 3. Monitoring Camel K operator | Chapter 3. Monitoring Camel K operator Red Hat Integration - Camel K monitoring is based on the OpenShift monitoring system . This chapter explains how to use the available options for monitoring Red Hat Integration - Camel K operator at runtime. You can use the Prometheus Operator that is already deployed as part of OpenShift Monitoring to monitor your own applications. Section 3.1, "Camel K Operator metrics" Section 3.2, "Enabling Camel K Operator monitoring" Section 3.3, "Camel K operator alerts" 3.1. Camel K Operator metrics The Camel K operator monitoring endpoint exposes the following metrics: Table 3.1. Camel K operator metrics Name Type Description Buckets Labels camel_k_reconciliation_duration_seconds HistogramVec Reconciliation request duration 0.25s, 0.5s, 1s, 5s namespace , group , version , kind , result : Reconciled | Errored | Requeued , tag : "" | PlatformError | UserError camel_k_build_duration_seconds HistogramVec Build duration 30s, 1m, 1.5m, 2m, 5m, 10m result : Succeeded | Error camel_k_build_recovery_attempts Histogram Build recovery attempts 0, 1, 2, 3, 4, 5 result : Succeeded | Error camel_k_build_queue_duration_seconds Histogram Build queue duration 5s, 15s, 30s, 1m, 5m, N/A camel_k_integration_first_readiness_seconds Histogram Time to first integration readiness 5s, 10s, 30s, 1m, 2m N/A 3.2. Enabling Camel K Operator monitoring OpenShift 4.3 or higher includes an embedded Prometheus Operator already deployed as part of OpenShift Monitoring. This section explains how to enable monitoring of your own application services in OpenShift Monitoring. Prerequisites You must have cluster administrator access to an OpenShift cluster on which the Camel K Operator is installed. See Installing Camel K . You must have already enabled monitoring of your own services in OpenShift. See Enabling user workload monitoring in OpenShift . Procedure Create a PodMonitor resource targeting the operator metrics endpoint, so that the Prometheus server can scrape the metrics exposed by the operator. operator-pod-monitor.yaml apiVersion: monitoring.coreos.com/v1 kind: PodMonitor metadata: name: camel-k-operator labels: app: "camel-k" camel.apache.org/component: operator spec: selector: matchLabels: app: "camel-k" camel.apache.org/component: operator podMetricsEndpoints: - port: metrics Create PodMonitor resource. Additional Resources For more information about the discovery mechanism and the relationship between the operator resources see Prometheus Operator getting started guide . In case your operator metrics are not discovered, you can find more information in Troubleshooting ServiceMonitor changes , which also applies to PodMonitor resources troubleshooting. 3.3. Camel K operator alerts You can create a PrometheusRule resource so that the AlertManager instance from the OpenShift monitoring stack can trigger alerts, based on the metrics exposed by the Camel K operator. Example You can create a PrometheusRule resource with alerting rules based on the exposed metrics as shown below. 
apiVersion: monitoring.coreos.com/v1 kind: PrometheusRule metadata: name: camel-k-operator spec: groups: - name: camel-k-operator rules: - alert: CamelKReconciliationDuration expr: | ( 1 - sum(rate(camel_k_reconciliation_duration_seconds_bucket{le="0.5"}[5m])) by (job) / sum(rate(camel_k_reconciliation_duration_seconds_count[5m])) by (job) ) * 100 > 10 for: 1m labels: severity: warning annotations: message: | {{ printf "%0.0f" USDvalue }}% of the reconciliation requests for {{ USDlabels.job }} have their duration above 0.5s. - alert: CamelKReconciliationFailure expr: | sum(rate(camel_k_reconciliation_duration_seconds_count{result="Errored"}[5m])) by (job) / sum(rate(camel_k_reconciliation_duration_seconds_count[5m])) by (job) * 100 > 1 for: 10m labels: severity: warning annotations: message: | {{ printf "%0.0f" USDvalue }}% of the reconciliation requests for {{ USDlabels.job }} have failed. - alert: CamelKSuccessBuildDuration2m expr: | ( 1 - sum(rate(camel_k_build_duration_seconds_bucket{le="120",result="Succeeded"}[5m])) by (job) / sum(rate(camel_k_build_duration_seconds_count{result="Succeeded"}[5m])) by (job) ) * 100 > 10 for: 1m labels: severity: warning annotations: message: | {{ printf "%0.0f" USDvalue }}% of the successful builds for {{ USDlabels.job }} have their duration above 2m. - alert: CamelKSuccessBuildDuration5m expr: | ( 1 - sum(rate(camel_k_build_duration_seconds_bucket{le="300",result="Succeeded"}[5m])) by (job) / sum(rate(camel_k_build_duration_seconds_count{result="Succeeded"}[5m])) by (job) ) * 100 > 1 for: 1m labels: severity: critical annotations: message: | {{ printf "%0.0f" USDvalue }}% of the successful builds for {{ USDlabels.job }} have their duration above 5m. - alert: CamelKBuildFailure expr: | sum(rate(camel_k_build_duration_seconds_count{result="Failed"}[5m])) by (job) / sum(rate(camel_k_build_duration_seconds_count[5m])) by (job) * 100 > 1 for: 10m labels: severity: warning annotations: message: | {{ printf "%0.0f" USDvalue }}% of the builds for {{ USDlabels.job }} have failed. - alert: CamelKBuildError expr: | sum(rate(camel_k_build_duration_seconds_count{result="Error"}[5m])) by (job) / sum(rate(camel_k_build_duration_seconds_count[5m])) by (job) * 100 > 1 for: 10m labels: severity: critical annotations: message: | {{ printf "%0.0f" USDvalue }}% of the builds for {{ USDlabels.job }} have errored. - alert: CamelKBuildQueueDuration1m expr: | ( 1 - sum(rate(camel_k_build_queue_duration_seconds_bucket{le="60"}[5m])) by (job) / sum(rate(camel_k_build_queue_duration_seconds_count[5m])) by (job) ) * 100 > 1 for: 1m labels: severity: warning annotations: message: | {{ printf "%0.0f" USDvalue }}% of the builds for {{ USDlabels.job }} have been queued for more than 1m. - alert: CamelKBuildQueueDuration5m expr: | ( 1 - sum(rate(camel_k_build_queue_duration_seconds_bucket{le="300"}[5m])) by (job) / sum(rate(camel_k_build_queue_duration_seconds_count[5m])) by (job) ) * 100 > 1 for: 1m labels: severity: critical annotations: message: | {{ printf "%0.0f" USDvalue }}% of the builds for {{ USDlabels.job }} have been queued for more than 5m. Camel K operator alerts Following table shows the alerting rules that are defined in the PrometheusRule resource. Name Severity Description CamelKReconciliationDuration warning More than 10% of the reconciliation requests have their duration above 0.5s over at least 1 min. CamelKReconciliationFailure warning More than 1% of the reconciliation requests have failed over at least 10 min. 
CamelKSuccessBuildDuration2m warning More than 10% of the successful builds have their duration above 2 min over at least 1 min. CamelKSuccessBuildDuration5m critical More than 1% of the successful builds have their duration above 5 min over at least 1 min. CamelKBuildError critical More than 1% of the builds have errored over at least 10 min. CamelKBuildQueueDuration1m warning More than 1% of the builds have been queued for more than 1 min over at least 1 min. CamelKBuildQueueDuration5m critical More than 1% of the builds have been queued for more than 5 min over at least 1 min. You can find more information about alerts in Creating alerting rules from the OpenShift documentation. | [
"apiVersion: monitoring.coreos.com/v1 kind: PodMonitor metadata: name: camel-k-operator labels: app: \"camel-k\" camel.apache.org/component: operator spec: selector: matchLabels: app: \"camel-k\" camel.apache.org/component: operator podMetricsEndpoints: - port: metrics",
"apply -f operator-pod-monitor.yaml",
"apiVersion: monitoring.coreos.com/v1 kind: PrometheusRule metadata: name: camel-k-operator spec: groups: - name: camel-k-operator rules: - alert: CamelKReconciliationDuration expr: | ( 1 - sum(rate(camel_k_reconciliation_duration_seconds_bucket{le=\"0.5\"}[5m])) by (job) / sum(rate(camel_k_reconciliation_duration_seconds_count[5m])) by (job) ) * 100 > 10 for: 1m labels: severity: warning annotations: message: | {{ printf \"%0.0f\" USDvalue }}% of the reconciliation requests for {{ USDlabels.job }} have their duration above 0.5s. - alert: CamelKReconciliationFailure expr: | sum(rate(camel_k_reconciliation_duration_seconds_count{result=\"Errored\"}[5m])) by (job) / sum(rate(camel_k_reconciliation_duration_seconds_count[5m])) by (job) * 100 > 1 for: 10m labels: severity: warning annotations: message: | {{ printf \"%0.0f\" USDvalue }}% of the reconciliation requests for {{ USDlabels.job }} have failed. - alert: CamelKSuccessBuildDuration2m expr: | ( 1 - sum(rate(camel_k_build_duration_seconds_bucket{le=\"120\",result=\"Succeeded\"}[5m])) by (job) / sum(rate(camel_k_build_duration_seconds_count{result=\"Succeeded\"}[5m])) by (job) ) * 100 > 10 for: 1m labels: severity: warning annotations: message: | {{ printf \"%0.0f\" USDvalue }}% of the successful builds for {{ USDlabels.job }} have their duration above 2m. - alert: CamelKSuccessBuildDuration5m expr: | ( 1 - sum(rate(camel_k_build_duration_seconds_bucket{le=\"300\",result=\"Succeeded\"}[5m])) by (job) / sum(rate(camel_k_build_duration_seconds_count{result=\"Succeeded\"}[5m])) by (job) ) * 100 > 1 for: 1m labels: severity: critical annotations: message: | {{ printf \"%0.0f\" USDvalue }}% of the successful builds for {{ USDlabels.job }} have their duration above 5m. - alert: CamelKBuildFailure expr: | sum(rate(camel_k_build_duration_seconds_count{result=\"Failed\"}[5m])) by (job) / sum(rate(camel_k_build_duration_seconds_count[5m])) by (job) * 100 > 1 for: 10m labels: severity: warning annotations: message: | {{ printf \"%0.0f\" USDvalue }}% of the builds for {{ USDlabels.job }} have failed. - alert: CamelKBuildError expr: | sum(rate(camel_k_build_duration_seconds_count{result=\"Error\"}[5m])) by (job) / sum(rate(camel_k_build_duration_seconds_count[5m])) by (job) * 100 > 1 for: 10m labels: severity: critical annotations: message: | {{ printf \"%0.0f\" USDvalue }}% of the builds for {{ USDlabels.job }} have errored. - alert: CamelKBuildQueueDuration1m expr: | ( 1 - sum(rate(camel_k_build_queue_duration_seconds_bucket{le=\"60\"}[5m])) by (job) / sum(rate(camel_k_build_queue_duration_seconds_count[5m])) by (job) ) * 100 > 1 for: 1m labels: severity: warning annotations: message: | {{ printf \"%0.0f\" USDvalue }}% of the builds for {{ USDlabels.job }} have been queued for more than 1m. - alert: CamelKBuildQueueDuration5m expr: | ( 1 - sum(rate(camel_k_build_queue_duration_seconds_bucket{le=\"300\"}[5m])) by (job) / sum(rate(camel_k_build_queue_duration_seconds_count[5m])) by (job) ) * 100 > 1 for: 1m labels: severity: critical annotations: message: | {{ printf \"%0.0f\" USDvalue }}% of the builds for {{ USDlabels.job }} have been queued for more than 5m."
] | https://docs.redhat.com/en/documentation/red_hat_build_of_apache_camel_k/1.10.5/html/developing_and_managing_integrations_using_camel_k/monitoring-camel-k-operator |
Chapter 26. Exchange Property | Chapter 26. Exchange Property Overview The exchange property language provides a convenient way of accessing exchange properties . When you supply a key that matches one of the exchange property names, the exchange property language returns the corresponding value. The exchange property language is part of camel-core . XML example For example, to implement the recipient list pattern when the listOfEndpoints exchange property contains the recipient list, you could define a route as follows: Java example The same recipient list example can be implemented in Java as follows: | [
"<camelContext> <route> <from uri=\"direct:a\"/> <recipientList> <exchangeProperty>listOfEndpoints</exchangeProperty> </recipientList> </route> </camelContext>",
"from(\"direct:a\").recipientList(exchangeProperty(\"listOfEndpoints\"));"
] | https://docs.redhat.com/en/documentation/red_hat_fuse/7.13/html/apache_camel_development_guide/Property |
5.2. Preparing to convert a physical machine | 5.2. Preparing to convert a physical machine Before you use P2V, you must first prepare your conversion server and download and prepare the rhel-6.x-p2v.iso boot media. For full instructions see the Red Hat Enterprise Linux Installation Guide . Note that there is one ISO image for both i386 and x86_64 architectures. 5.2.1. Install virt-v2v on a conversion server A conversion server is any physical server running Red Hat Enterprise Linux 6 or higher with the virt-v2v package installed on it. To install virt-v2v follow the instructions in Chapter 2, Installing virt-v2v . virt-v2v version 0.8.7-6 or higher is required. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/v2v_guide/p2v_migration_moving_workloads_from_physical_to_virtual_machines-preparation_before_the_p2v_migration |
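Because virt-v2v version 0.8.7-6 or higher is required, it can be useful to confirm the installed version on the conversion server before starting a conversion, for example:
rpm -q virt-v2v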
Monitoring APIs | Monitoring APIs OpenShift Container Platform 4.14 Reference guide for monitoring APIs Red Hat OpenShift Documentation Team | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.14/html-single/monitoring_apis/index |
Chapter 15. Managing Packages | Chapter 15. Managing Packages You can use Satellite to install, upgrade, and remove packages on hosts. 15.1. Enabling and Disabling Repositories on Hosts Use this procedure to enable or disable repositories on hosts. Procedure In the Satellite web UI, navigate to Hosts > All Hosts , Select the host name. Click the Content tab. Click the Repository Sets tab. Click the vertical ellipsis. Choose Override to disabled or Override to enabled to disable or enable repositories on hosts. 15.2. Installing Packages on a Host Use this procedure to install packages on a host using the Satellite web UI. The list of packages available for installation depends on the Content View and Lifecycle Environment assigned to the host. Procedure In the Satellite web UI, navigate to Hosts > All Hosts . Select the host you want to install packages on. Select the Content tab, then select the Packages tab. Click the vertical ellipsis at the top of the page and select Install Packages . In the Install packages popup window, select the packages that you want to install on the host. Click Install . By default, the packages are installed using remote execution. 15.3. Upgrading Packages on a Host Use this procedure to upgrade packages on a host using the Satellite web UI. The packages are upgraded through Katello agent or remote execution, depending on your configuration. Procedure In the Satellite web UI, navigate to Hosts > All Hosts . Click the name of the host you want to modify. Select the Content tab, then select the Packages tab. Select the Upgradable filter from the Status box to list the upgradable packages. If a package has more than one available upgrade version, only the latest upgradable version is displayed. Click Upgrade . The remote execution job starts immediately. You can also customize the remote execution by selecting Upgrade via customized remote execution from the dropdown menu. 15.4. Removing Packages from a Host Use this procedure to remove packages from a host using the Satellite web UI. Procedure In the Satellite web UI, navigate to Hosts > All Hosts . Select the host you want to remove packages from. Select the Content tab, then select the Packages tab. Check the packages you want to remove. From the vertical ellipsis at the top, click Remove . You get a REX job notification once the packages are removed. | null | https://docs.redhat.com/en/documentation/red_hat_satellite/6.11/html/managing_hosts/managing-packages_managing-hosts |
Chapter 3. Completing post-installation tasks | Chapter 3. Completing post-installation tasks This section describes how to complete the post-installation tasks. 3.1. Registering your system This section explains how to register your RHEL server to Red Hat Satellite. Note Different steps apply if your system is registered to the Red Hat Customer Portal or your Cloud provider. Prerequisites You must have a valid Red Hat Enterprise Linux for SAP Solutions subscription so your server has access to required packages via a Red Hat Satellite server, the Red Hat Customer Portal, or your Cloud provider. You must have the following information provided to you by your Satellite administrator: An activation key. A string representing the name of the organization. A URL for the Katello client package. You have system administrator access. Procedure Download the Katello client rpm package: Replace the URL with the URL provided by your Satellite administrator. Install the Katello client rpm package: Replace the package name with the name of the package you downloaded. Register your system: Replace your-organization-name with the string representing the name of the organization and replace your-activation-key with the activation key. Both are provided by your Satellite administrator. 3.2. Applying the RHEL release lock For RHEL systems running the SAP HANA database, it is essential that you set the RHEL release lock so that the system remains on the correct RHEL minor release even when doing package updates. Otherwise, the system might be updated to a RHEL release which is not supported by SAP. For RHEL systems not running the SAP HANA database, any RHEL 9 minor release can be used, so applying the RHEL release lock is not necessary in this case. Prerequisites You have system administrator access. Procedure Clear the dnf cache: Set the release lock: Replace 9.x with the supported minor release of RHEL 9 (for example 9.4 ). Additional resources How to tie a system to a specific update of RHEL 3.3. Enabling required repositories You need to enable certain RHEL repositories to have access to packages required for the SAP HANA installation. For more information on which repositories to enable, see RHEL for SAP Subscriptions and Repositories . Prerequisites You have system administrator access. Procedure Disable all repositories and enable the required ones. For systems running the SAP HANA database, enable the e4s repos after ensuring that the RHEL release lock is set properly (example for RHEL 9.4): Note If you intend to use the system for the SAP HANA database only, enabling the sap-netweaver-e4s-rpms repository is not required. For systems running the SAP Application Platform only, if you do not want to restrict your system to a specific RHEL minor release when updating packages, enable the normal repos. In this case, verify that no RHEL release lock is set. Additional resources How to Subscribe to Update Services for SAP Solutions on RHEL 8 and RHEL 9 | [
"wget https://sat.int.example.com/pub/katello-ca-consumer-latest.noarch.rpm",
"dnf install -y katello-ca-consumer-latest.noarch.rpm",
"subscription-manager register --org=\"your-organization-name\" --activationkey=\"your-activation-key\"",
"rm -rf /var/cache/dnf",
"subscription-manager release --set=9.x",
"subscription-manager release Release: 9.4 subscription-manager repos --disable=\\ * --enable=\"rhel-9-for-USD(uname -m)-baseos-e4s-rpms\" --enable=\"rhel-9-for-USD(uname -m)-appstream-e4s-rpms\" --enable=\"rhel-9-for-USD(uname -m)-sap-solutions-e4s-rpms\" --enable=\"rhel-9-for-USD(uname -m)-sap-netweaver-e4s-rpms\""
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux_for_sap_solutions/9/html/installing_rhel_9_for_sap_solutions/proc_completing_post-installation_tasks_configuring-rhel-9-for-sap-hana2-installation |
2.5.2. top | 2.5.2. top While free displays only memory-related information, the top command does a little bit of everything. CPU utilization, process statistics, memory utilization -- top monitors it all. In addition, unlike the free command, top 's default behavior is to run continuously; there is no need to use the watch command. Here is a sample display: The display is divided into two sections. The top section contains information related to overall system status -- uptime, load average, process counts, CPU status, and utilization statistics for both memory and swap space. The lower section displays process-level statistics. It is possible to change what is displayed while top is running. For example, top by default displays both idle and non-idle processes. To display only non-idle processes, press i ; a second press returns to the default display mode. Warning Although top appears like a simple display-only program, this is not the case. That is because top uses single character commands to perform various operations. For example, if you are logged in as root, it is possible to change the priority and even kill any process on your system. Therefore, until you have reviewed top 's help screen (type ? to display it), it is safest to only type q (which exits top ). 2.5.2.1. The GNOME System Monitor -- A Graphical top If you are more comfortable with graphical user interfaces, the GNOME System Monitor may be more to your liking. Like top , the GNOME System Monitor displays information related to overall system status, process counts, memory and swap utilization, and process-level statistics. However, the GNOME System Monitor goes a step further by also including graphical representations of CPU, memory, and swap utilization, along with a tabular disk space utilization listing. An example of the GNOME System Monitor 's Process Listing display appears in Figure 2.1, "The GNOME System Monitor Process Listing Display" . Figure 2.1. The GNOME System Monitor Process Listing Display Additional information can be displayed for a specific process by first clicking on the desired process and then clicking on the More Info button. To display the CPU, memory, and disk usage statistics, click on the System Monitor tab. | [
"14:06:32 up 4 days, 21:20, 4 users, load average: 0.00, 0.00, 0.00 77 processes: 76 sleeping, 1 running, 0 zombie, 0 stopped CPU states: cpu user nice system irq softirq iowait idle total 19.6% 0.0% 0.0% 0.0% 0.0% 0.0% 180.2% cpu00 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 100.0% cpu01 19.6% 0.0% 0.0% 0.0% 0.0% 0.0% 80.3% Mem: 1028548k av, 716604k used, 311944k free, 0k shrd, 131056k buff 324996k actv, 108692k in_d, 13988k in_c Swap: 1020116k av, 5276k used, 1014840k free 382228k cached PID USER PRI NI SIZE RSS SHARE STAT %CPU %MEM TIME CPU COMMAND 17578 root 15 0 13456 13M 9020 S 18.5 1.3 26:35 1 rhn-applet-gu 19154 root 20 0 1176 1176 892 R 0.9 0.1 0:00 1 top 1 root 15 0 168 160 108 S 0.0 0.0 0:09 0 init 2 root RT 0 0 0 0 SW 0.0 0.0 0:00 0 migration/0 3 root RT 0 0 0 0 SW 0.0 0.0 0:00 1 migration/1 4 root 15 0 0 0 0 SW 0.0 0.0 0:00 0 keventd 5 root 34 19 0 0 0 SWN 0.0 0.0 0:00 0 ksoftirqd/0 6 root 35 19 0 0 0 SWN 0.0 0.0 0:00 1 ksoftirqd/1 9 root 15 0 0 0 0 SW 0.0 0.0 0:07 1 bdflush 7 root 15 0 0 0 0 SW 0.0 0.0 1:19 0 kswapd 8 root 15 0 0 0 0 SW 0.0 0.0 0:14 1 kscand 10 root 15 0 0 0 0 SW 0.0 0.0 0:03 1 kupdated 11 root 25 0 0 0 0 SW 0.0 0.0 0:00 0 mdrecoveryd"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/introduction_to_system_administration/s2-resource-tools-top |
Chapter 3. Registering the system for updates using GNOME | Chapter 3. Registering the system for updates using GNOME You must register your system to get software updates for your system. This section explains how you can register your system using GNOME. Prerequisites A valid account with Red Hat customer portal See the Create a Red Hat Login page for new user registration. Activation Key or keys, if you are registering the system with activation key A registration server, if you are registering system using the registration server 3.1. Registering a system using an activation key on GNOME Follow the steps in this procedure to register your system with an activation key. You can get the activation key from your organization administrator. Prerequisites Activation key or keys. See the Activation Keys page for creating new activation keys. Procedure Open the system menu , which is accessible from the upper-right screen corner, and click Settings . Go to About Subscription . If you are not using the Red Hat server: In the Registration Server section, select Custom Address . Enter the server address in the URL field. In the Registration Type section, select Activation Keys . Under Registration Details : Enter your activation keys in the Activation Keys field. Separate your keys by a comma ( , ). Enter the name or ID of your organization in the Organization field. Click Register . 3.2. Unregistering the system using GNOME Follow the steps in this procedure to unregister your system. After unregistering, your system no longer receives software updates. Procedure Open the system menu , which is accessible from the upper-right screen corner, and click Settings . Go to About Subscription . The Registration Details screen appears. Click Unregister . A warning appears about the impact of unregistering the system. Click Unregister . 3.3. Additional resources Registering the system and managing subscriptions Creating Red Hat Customer Portal Activation Keys (Red Hat Knowledgebase) Creating and managing activation keys Registering Systems with Activation keys (Red Hat Knowledgebase) | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/9/html/administering_the_system_using_the_gnome_desktop_environment/registering-the-system-for-updates-using-gnome_administering-the-system-using-the-gnome-desktop-environment |
Chapter 6. Proof of concept deployment using SSL/TLS certificates | Chapter 6. Proof of concept deployment using SSL/TLS certificates Use the following sections to configure a proof of concept Red Hat Quay deployment with SSL/TLS certificates. 6.1. Using SSL/TLS To configure Red Hat Quay with a self-signed certificate, you must create a Certificate Authority (CA) and a primary key file named ssl.cert and ssl.key . 6.1.1. Creating a Certificate Authority To configure Red Hat Quay with a self-signed certificate, you must first create a Certificate Authority (CA). Use the following procedure to create a Certificate Authority (CA). Procedure Generate the root CA key by entering the following command: USD openssl genrsa -out rootCA.key 2048 Generate the root CA certificate by entering the following command: USD openssl req -x509 -new -nodes -key rootCA.key -sha256 -days 1024 -out rootCA.pem Enter the information that will be incorporated into your certificate request, including the server hostname, for example: Country Name (2 letter code) [XX]:IE State or Province Name (full name) []:GALWAY Locality Name (eg, city) [Default City]:GALWAY Organization Name (eg, company) [Default Company Ltd]:QUAY Organizational Unit Name (eg, section) []:DOCS Common Name (eg, your name or your server's hostname) []:quay-server.example.com 6.1.1.1. Signing the certificate Use the following procedure to sign the certificate. Procedure Generate the server key by entering the following command: USD openssl genrsa -out ssl.key 2048 Generate a signing request by entering the following command: USD openssl req -new -key ssl.key -out ssl.csr Enter the information that will be incorporated into your certificate request, including the server hostname, for example: Country Name (2 letter code) [XX]:IE State or Province Name (full name) []:GALWAY Locality Name (eg, city) [Default City]:GALWAY Organization Name (eg, company) [Default Company Ltd]:QUAY Organizational Unit Name (eg, section) []:DOCS Common Name (eg, your name or your server's hostname) []:quay-server.example.com Email Address []: Create a configuration file openssl.cnf , specifying the server hostname, for example: openssl.cnf [req] req_extensions = v3_req distinguished_name = req_distinguished_name [req_distinguished_name] [ v3_req ] basicConstraints = CA:FALSE keyUsage = nonRepudiation, digitalSignature, keyEncipherment subjectAltName = @alt_names [alt_names] DNS.1 = <quay-server.example.com> IP.1 = 192.168.1.112 Use the configuration file to generate the certificate ssl.cert : USD openssl x509 -req -in ssl.csr -CA rootCA.pem -CAkey rootCA.key -CAcreateserial -out ssl.cert -days 356 -extensions v3_req -extfile openssl.cnf 6.2. Configuring SSL/TLS SSL/TLS must be configured by using the command-line interface (CLI) and updating your config.yaml file manually. 6.2.1. Configuring SSL/TLS using the command line interface Use the following procedure to configure SSL/TLS using the CLI. Prerequisites You have created a certificate authority and signed the certificate. Procedure Copy the certificate file and primary key file to your configuration directory, ensuring they are named ssl.cert and ssl.key respectively: cp ~/ssl.cert ~/ssl.key USDQUAY/config Change into the USDQUAY/config directory by entering the following command: USD cd USDQUAY/config Edit the config.yaml file and specify that you want Red Hat Quay to handle TLS/SSL: config.yaml ... SERVER_HOSTNAME: quay-server.example.com ... PREFERRED_URL_SCHEME: https ... 
Optional: Append the contents of the rootCA.pem file to the end of the ssl.cert file by entering the following command: USD cat rootCA.pem >> ssl.cert Stop the Quay container by entering the following command: USD sudo podman stop quay Restart the registry by entering the following command: 6.3. Testing the SSL/TLS configuration Your SSL/TLS configuration can be tested by using the command-line interface (CLI) or a browser. 6.3.1. Testing the SSL/TLS configuration using the CLI Use the following procedure to test your SSL/TLS configuration using the CLI. Procedure Enter the following command to attempt to log in to the Red Hat Quay registry with SSL/TLS enabled: USD sudo podman login quay-server.example.com Example output Error: error authenticating creds for "quay-server.example.com": error pinging docker registry quay-server.example.com: Get "https://quay-server.example.com/v2/": x509: certificate signed by unknown authority Because Podman does not trust self-signed certificates, you must use the --tls-verify=false option: USD sudo podman login --tls-verify=false quay-server.example.com Example output Login Succeeded! In a subsequent section, you will configure Podman to trust the root Certificate Authority. 6.3.2. Testing the SSL/TLS configuration using a browser Use the following procedure to test your SSL/TLS configuration using a browser. Procedure Navigate to your Red Hat Quay registry endpoint, for example, https://quay-server.example.com . If configured correctly, the browser warns of the potential risk. Proceed to the log in screen. The browser notifies you that the connection is not secure. In the following section, you will configure Podman to trust the root Certificate Authority. 6.4. Configuring Podman to trust the Certificate Authority Podman uses two paths to locate the Certificate Authority (CA) file: /etc/containers/certs.d/ and /etc/docker/certs.d/ . Use the following procedure to configure Podman to trust the CA. Procedure Copy the root CA file to one of /etc/containers/certs.d/ or /etc/docker/certs.d/ . Use the exact path determined by the server hostname, and name the file ca.crt : USD sudo cp rootCA.pem /etc/containers/certs.d/quay-server.example.com/ca.crt Verify that you no longer need to use the --tls-verify=false option when logging in to your Red Hat Quay registry: USD sudo podman login quay-server.example.com Example output Login Succeeded! 6.5. Configuring the system to trust the certificate authority Use the following procedure to configure your system to trust the certificate authority. Procedure Enter the following command to copy the rootCA.pem file to the consolidated system-wide trust store: USD sudo cp rootCA.pem /etc/pki/ca-trust/source/anchors/ Enter the following command to update the system-wide trust store configuration: USD sudo update-ca-trust extract Optional. 
You can use the trust list command to ensure that the Quay server has been configured: USD trust list | grep quay label: quay-server.example.com Now, when you browse to the registry at https://quay-server.example.com , the lock icon shows that the connection is secure. To remove the rootCA.pem file from system-wide trust, delete the file and update the configuration: USD sudo rm /etc/pki/ca-trust/source/anchors/rootCA.pem USD sudo update-ca-trust extract USD trust list | grep quay More information can be found in the RHEL 9 documentation in the chapter Using shared system certificates . | [
"openssl genrsa -out rootCA.key 2048",
"openssl req -x509 -new -nodes -key rootCA.key -sha256 -days 1024 -out rootCA.pem",
"Country Name (2 letter code) [XX]:IE State or Province Name (full name) []:GALWAY Locality Name (eg, city) [Default City]:GALWAY Organization Name (eg, company) [Default Company Ltd]:QUAY Organizational Unit Name (eg, section) []:DOCS Common Name (eg, your name or your server's hostname) []:quay-server.example.com",
"openssl genrsa -out ssl.key 2048",
"openssl req -new -key ssl.key -out ssl.csr",
"Country Name (2 letter code) [XX]:IE State or Province Name (full name) []:GALWAY Locality Name (eg, city) [Default City]:GALWAY Organization Name (eg, company) [Default Company Ltd]:QUAY Organizational Unit Name (eg, section) []:DOCS Common Name (eg, your name or your server's hostname) []:quay-server.example.com Email Address []:",
"[req] req_extensions = v3_req distinguished_name = req_distinguished_name [req_distinguished_name] [ v3_req ] basicConstraints = CA:FALSE keyUsage = nonRepudiation, digitalSignature, keyEncipherment subjectAltName = @alt_names [alt_names] DNS.1 = <quay-server.example.com> IP.1 = 192.168.1.112",
"openssl x509 -req -in ssl.csr -CA rootCA.pem -CAkey rootCA.key -CAcreateserial -out ssl.cert -days 356 -extensions v3_req -extfile openssl.cnf",
"cp ~/ssl.cert ~/ssl.key USDQUAY/config",
"cd USDQUAY/config",
"SERVER_HOSTNAME: quay-server.example.com PREFERRED_URL_SCHEME: https",
"cat rootCA.pem >> ssl.cert",
"sudo podman stop quay",
"sudo podman run -d --rm -p 80:8080 -p 443:8443 --name=quay -v USDQUAY/config:/conf/stack:Z -v USDQUAY/storage:/datastorage:Z registry.redhat.io/quay/quay-rhel8:v3.12.8",
"sudo podman login quay-server.example.com",
"Error: error authenticating creds for \"quay-server.example.com\": error pinging docker registry quay-server.example.com: Get \"https://quay-server.example.com/v2/\": x509: certificate signed by unknown authority",
"sudo podman login --tls-verify=false quay-server.example.com",
"Login Succeeded!",
"sudo cp rootCA.pem /etc/containers/certs.d/quay-server.example.com/ca.crt",
"sudo podman login quay-server.example.com",
"Login Succeeded!",
"sudo cp rootCA.pem /etc/pki/ca-trust/source/anchors/",
"sudo update-ca-trust extract",
"trust list | grep quay label: quay-server.example.com",
"sudo rm /etc/pki/ca-trust/source/anchors/rootCA.pem",
"sudo update-ca-trust extract",
"trust list | grep quay"
] | https://docs.redhat.com/en/documentation/red_hat_quay/3.12/html/proof_of_concept_-_deploying_red_hat_quay/advanced-quay-poc-deployment |
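Before copying the files produced by the procedure above into the Red Hat Quay configuration directory, it can help to confirm that the signed certificate really chains back to the self-signed root CA and carries the expected hostname. The following is an optional sketch using standard openssl subcommands; it assumes the default file names used above (rootCA.pem and ssl.cert).
# Confirm that ssl.cert validates against the root CA; expected output: "ssl.cert: OK"
openssl verify -CAfile rootCA.pem ssl.cert
# Inspect the Subject Alternative Name entries that clients will match against
openssl x509 -in ssl.cert -noout -text | grep -A1 "Subject Alternative Name"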
Providing feedback on Red Hat documentation | Providing feedback on Red Hat documentation We appreciate and prioritize your feedback regarding our documentation. Provide as much detail as possible, so that your request can be quickly addressed. Prerequisites You are logged in to the Red Hat Customer Portal. Procedure To provide feedback, perform the following steps: Click the following link: Create Issue . Describe the issue or enhancement in the Summary text box. Provide details about the issue or requested enhancement in the Description text box. Type your name in the Reporter text box. Click the Create button. This action creates a documentation ticket and routes it to the appropriate documentation team. Thank you for taking the time to provide feedback. | null | https://docs.redhat.com/en/documentation/cost_management_service/1-latest/html/getting_started_with_cost_management/proc-providing-feedback-on-redhat-documentation |
3.2.2. Direct Routing and iptables | 3.2.2. Direct Routing and iptables You may also work around the ARP issue using the direct routing method by creating iptables firewall rules. To configure direct routing using iptables , you must add rules that create a transparent proxy so that a real server will service packets sent to the VIP address, even though the VIP address does not exist on the system. The iptables method is simpler to configure than the arptables_jf method. This method also circumvents the LVS ARP issue entirely, because the virtual IP address(es) only exist on the active LVS director. However, there are performance issues using the iptables method compared to arptables_jf , as there is overhead in forwarding/masquerading every packet. You also cannot reuse ports using the iptables method. For example, it is not possible to run two separate Apache HTTP Server services bound to port 80, because both must bind to INADDR_ANY instead of the virtual IP addresses. To configure direct routing using the iptables method, perform the following steps: On each real server, run the following command for every VIP, port, and protocol (TCP or UDP) combination intended to be serviced for the real server: iptables -t nat -A PREROUTING -p <tcp|udp> -d <vip> --dport <port> -j REDIRECT This command will cause the real servers to process packets destined for the VIP and port that they are given. Save the configuration on each real server: The commands above cause the system to reload the iptables configuration on bootup - before the network is started. | [
"service iptables save chkconfig --level 2345 iptables on"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/virtual_server_administration/s2-lvs-direct-iptables-VSA |
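To make the rule template above concrete, the following sketch shows what a real server handling HTTP and HTTPS for a single virtual IP might run; the address 192.168.0.100 is an example value only, and the save and chkconfig commands are the same ones listed in the procedure.
# Redirect packets addressed to the virtual IP so this real server processes them locally (example VIP)
iptables -t nat -A PREROUTING -p tcp -d 192.168.0.100 --dport 80 -j REDIRECT
iptables -t nat -A PREROUTING -p tcp -d 192.168.0.100 --dport 443 -j REDIRECT
# Persist the rules and load them on boot, before the network is started
service iptables save
chkconfig --level 2345 iptables on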
Chapter 16. Configuring the Squid Caching Proxy Server | Chapter 16. Configuring the Squid Caching Proxy Server Squid is a proxy server that caches content to reduce bandwidth and load web pages more quickly. This chapter describes how to set up Squid as a proxy for the HTTP, HTTPS, and FTP protocols, and how to configure authentication and restrict access. 16.1. Setting up Squid as a Caching Proxy Without Authentication This section describes a basic configuration of Squid as a caching proxy without authentication. The procedure limits access to the proxy based on IP ranges. Prerequisites The procedure assumes that the /etc/squid/squid.conf file is as provided by the squid package. If you edited this file before, remove the file and reinstall the package. Procedure Install the squid package: Edit the /etc/squid/squid.conf file: Adapt the localnet access control lists (ACLs) to match the IP ranges that should be allowed to use the proxy: By default, the /etc/squid/squid.conf file contains the http_access allow localnet rule that allows using the proxy from all IP ranges specified in localnet ACLs. Note that you must specify all localnet ACLs before the http_access allow localnet rule. Important Remove all existing acl localnet entries that do not match your environment. The following ACL exists in the default configuration and defines 443 as a port that uses the HTTPS protocol: If users should be able to use the HTTPS protocol on other ports as well, add an ACL for each of these ports: Update the list of acl Safe_ports rules to configure the ports to which Squid can establish a connection. For example, to allow clients using the proxy to access resources only on port 21 (FTP), 80 (HTTP), and 443 (HTTPS), keep only the following acl Safe_ports statements in the configuration: By default, the configuration contains the http_access deny !Safe_ports rule that denies access to ports that are not defined in Safe_ports ACLs. Configure the cache type, the path to the cache directory, the cache size, and further cache type-specific settings in the cache_dir parameter: With these settings: Squid uses the ufs cache type. Squid stores its cache in the /var/spool/squid/ directory. The cache grows up to 10000 MB. Squid creates 16 level-1 sub-directories in the /var/spool/squid/ directory. Squid creates 256 sub-directories in each level-1 directory. If you do not set a cache_dir directive, Squid stores the cache in memory. If you set a different cache directory than /var/spool/squid/ in the cache_dir parameter: Create the cache directory: Configure the permissions for the cache directory: If you run SELinux in enforcing mode, set the squid_cache_t context for the cache directory: If the semanage utility is not available on your system, install the policycoreutils-python-utils package. Open port 3128 in the firewall: Start the squid service: Enable the squid service to start automatically when the system boots: Verification Steps To verify that the proxy works correctly, download a web page using the curl utility: If curl does not display any error and the index.html file was downloaded to the current directory, the proxy works. | [
"yum install squid",
"acl localnet src 192.0.2.0/24 acl localnet 2001:db8::/32",
"acl SSL_ports port 443",
"acl SSL_ports port port_number",
"acl Safe_ports port 21 acl Safe_ports port 80 acl Safe_ports port 443",
"cache_dir ufs /var/spool/squid 10000 16 256",
"mkdir -p path_to_cache_directory",
"chown squid:squid path_to_cache_directory",
"semanage fcontext -a -t squid_cache_t \" path_to_cache_directory (/.*)?\" restorecon -Rv path_to_cache_directory",
"firewall-cmd --permanent --add-port=3128/tcp firewall-cmd --reload",
"systemctl start squid",
"systemctl enable squid",
"curl -O -L \" https://www.redhat.com/index.html \" -x \" proxy.example.com : 3128 \""
] | https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/networking_guide/configuring-the-squid-caching-proxy-server |
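Beyond the curl check in the verification step above, one optional way to confirm that Squid is actually caching is to request the same plain-HTTP URL through the proxy twice and then inspect the access log. This is a sketch only; it assumes the default log location /var/log/squid/access.log and the proxy address used earlier, and whether the second request is a cache hit depends on the cacheability of the response.
# Fetch the same page twice through the proxy (plain HTTP, so the response can be cached)
curl -s -o /dev/null -x "proxy.example.com:3128" "http://www.example.com/"
curl -s -o /dev/null -x "proxy.example.com:3128" "http://www.example.com/"
# The second entry may show a cache hit such as TCP_HIT or TCP_MEM_HIT instead of TCP_MISS
sudo tail -n 5 /var/log/squid/access.log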
Storage Strategies Guide | Storage Strategies Guide Red Hat Ceph Storage 8 Creating storage strategies for Red Hat Ceph Storage clusters Red Hat Ceph Storage Documentation Team | null | https://docs.redhat.com/en/documentation/red_hat_ceph_storage/8/html/storage_strategies_guide/index |
Deploying and Managing Streams for Apache Kafka on OpenShift | Deploying and Managing Streams for Apache Kafka on OpenShift Red Hat Streams for Apache Kafka 2.7 Deploy and manage Streams for Apache Kafka 2.7 on OpenShift Container Platform | [
"apiVersion: kafka.strimzi.io/v1beta2 kind: CustomResourceDefinition metadata: 1 name: kafkatopics.kafka.strimzi.io labels: app: strimzi spec: 2 group: kafka.strimzi.io versions: v1beta2 scope: Namespaced names: # singular: kafkatopic plural: kafkatopics shortNames: - kt 3 additionalPrinterColumns: 4 # subresources: status: {} 5 validation: 6 openAPIV3Schema: properties: spec: type: object properties: partitions: type: integer minimum: 1 replicas: type: integer minimum: 1 maximum: 32767 #",
"apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaTopic 1 metadata: name: my-topic labels: strimzi.io/cluster: my-cluster 2 spec: 3 partitions: 1 replicas: 1 config: retention.ms: 7200000 segment.bytes: 1073741824 status: conditions: 4 lastTransitionTime: \"2019-08-20T11:37:00.706Z\" status: \"True\" type: Ready observedGeneration: 1 /",
"get k NAME DESIRED KAFKA REPLICAS DESIRED ZK REPLICAS my-cluster 3 3",
"get strimzi NAME DESIRED KAFKA REPLICAS DESIRED ZK REPLICAS kafka.kafka.strimzi.io/my-cluster 3 3 NAME PARTITIONS REPLICATION FACTOR kafkatopic.kafka.strimzi.io/kafka-apps 3 3 NAME AUTHENTICATION AUTHORIZATION kafkauser.kafka.strimzi.io/my-user tls simple",
"get strimzi -o name kafka.kafka.strimzi.io/my-cluster kafkatopic.kafka.strimzi.io/kafka-apps kafkauser.kafka.strimzi.io/my-user",
"delete USD(oc get strimzi -o name) kafka.kafka.strimzi.io \"my-cluster\" deleted kafkatopic.kafka.strimzi.io \"kafka-apps\" deleted kafkauser.kafka.strimzi.io \"my-user\" deleted",
"get kafka my-cluster -o=jsonpath='{.status.listeners[?(@.name==\"tls\")].bootstrapServers}{\"\\n\"}' my-cluster-kafka-bootstrap.myproject.svc:9093",
"apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka metadata: spec: # status: clusterId: XP9FP2P-RByvEy0W4cOEUA 1 conditions: 2 - lastTransitionTime: '2023-01-20T17:56:29.396588Z' status: 'True' type: Ready 3 kafkaMetadataState: KRaft 4 kafkaVersion: 3.7.0 5 kafkaNodePools: 6 - name: broker - name: controller listeners: 7 - addresses: - host: my-cluster-kafka-bootstrap.prm-project.svc port: 9092 bootstrapServers: 'my-cluster-kafka-bootstrap.prm-project.svc:9092' name: plain - addresses: - host: my-cluster-kafka-bootstrap.prm-project.svc port: 9093 bootstrapServers: 'my-cluster-kafka-bootstrap.prm-project.svc:9093' certificates: - | -----BEGIN CERTIFICATE----- -----END CERTIFICATE----- name: tls - addresses: - host: >- 2054284155.us-east-2.elb.amazonaws.com port: 9095 bootstrapServers: >- 2054284155.us-east-2.elb.amazonaws.com:9095 certificates: - | -----BEGIN CERTIFICATE----- -----END CERTIFICATE----- name: external3 - addresses: - host: ip-10-0-172-202.us-east-2.compute.internal port: 31644 bootstrapServers: 'ip-10-0-172-202.us-east-2.compute.internal:31644' certificates: - | -----BEGIN CERTIFICATE----- -----END CERTIFICATE----- name: external4 observedGeneration: 3 8 operatorLastSuccessfulVersion: 2.7 9",
"get kafka <kafka_resource_name> -o jsonpath='{.status}' | jq",
"sed -i 's/namespace: .*/namespace: <my_namespace>/' install/cluster-operator/*RoleBinding*.yaml",
"create secret docker-registry <pull_secret_name> --docker-server=registry.redhat.io --docker-username=<user_name> --docker-password=<password> --docker-email=<email>",
"apiVersion: apps/v1 kind: Deployment metadata: name: strimzi-cluster-operator spec: # template: spec: serviceAccountName: strimzi-cluster-operator containers: # env: - name: STRIMZI_IMAGE_PULL_SECRETS value: \"<pull_secret_name>\"",
"create -f install/strimzi-admin",
"create clusterrolebinding strimzi-admin --clusterrole=strimzi-admin --user= user1 --user= user2",
"sed -i 's/namespace: .*/namespace: my-cluster-operator-namespace/' install/cluster-operator/*RoleBinding*.yaml",
"sed -i '' 's/namespace: .*/namespace: my-cluster-operator-namespace/' install/cluster-operator/*RoleBinding*.yaml",
"create -f install/cluster-operator -n my-cluster-operator-namespace",
"get deployments -n my-cluster-operator-namespace",
"NAME READY UP-TO-DATE AVAILABLE strimzi-cluster-operator 1/1 1 1",
"sed -i 's/namespace: .*/namespace: my-cluster-operator-namespace/' install/cluster-operator/*RoleBinding*.yaml",
"sed -i '' 's/namespace: .*/namespace: my-cluster-operator-namespace/' install/cluster-operator/*RoleBinding*.yaml",
"apiVersion: apps/v1 kind: Deployment spec: # template: spec: serviceAccountName: strimzi-cluster-operator containers: - name: strimzi-cluster-operator image: registry.redhat.io/amq-streams/strimzi-rhel9-operator:2.7.0 imagePullPolicy: IfNotPresent env: - name: STRIMZI_NAMESPACE value: watched-namespace-1,watched-namespace-2,watched-namespace-3",
"create -f install/cluster-operator/020-RoleBinding-strimzi-cluster-operator.yaml -n <watched_namespace> create -f install/cluster-operator/023-RoleBinding-strimzi-cluster-operator.yaml -n <watched_namespace> create -f install/cluster-operator/031-RoleBinding-strimzi-cluster-operator-entity-operator-delegation.yaml -n <watched_namespace>",
"create -f install/cluster-operator -n my-cluster-operator-namespace",
"get deployments -n my-cluster-operator-namespace",
"NAME READY UP-TO-DATE AVAILABLE strimzi-cluster-operator 1/1 1 1",
"sed -i 's/namespace: .*/namespace: my-cluster-operator-namespace/' install/cluster-operator/*RoleBinding*.yaml",
"sed -i '' 's/namespace: .*/namespace: my-cluster-operator-namespace/' install/cluster-operator/*RoleBinding*.yaml",
"apiVersion: apps/v1 kind: Deployment spec: # template: spec: # serviceAccountName: strimzi-cluster-operator containers: - name: strimzi-cluster-operator image: registry.redhat.io/amq-streams/strimzi-rhel9-operator:2.7.0 imagePullPolicy: IfNotPresent env: - name: STRIMZI_NAMESPACE value: \"*\" #",
"create clusterrolebinding strimzi-cluster-operator-namespaced --clusterrole=strimzi-cluster-operator-namespaced --serviceaccount my-cluster-operator-namespace:strimzi-cluster-operator create clusterrolebinding strimzi-cluster-operator-watched --clusterrole=strimzi-cluster-operator-watched --serviceaccount my-cluster-operator-namespace:strimzi-cluster-operator create clusterrolebinding strimzi-cluster-operator-entity-operator-delegation --clusterrole=strimzi-entity-operator --serviceaccount my-cluster-operator-namespace:strimzi-cluster-operator",
"create -f install/cluster-operator -n my-cluster-operator-namespace",
"get deployments -n my-cluster-operator-namespace",
"NAME READY UP-TO-DATE AVAILABLE strimzi-cluster-operator 1/1 1 1",
"apply -f examples/kafka/kraft/kafka-with-dual-role-nodes.yaml",
"apply -f examples/kafka/kraft/kafka.yaml",
"apply -f examples/kafka/kraft/kafka-ephemeral.yaml",
"apply -f examples/kafka/kafka-with-node-pools.yaml",
"get pods -n <my_cluster_operator_namespace>",
"NAME READY STATUS RESTARTS my-cluster-entity-operator 3/3 Running 0 my-cluster-pool-a-0 1/1 Running 0 my-cluster-pool-a-1 1/1 Running 0 my-cluster-pool-a-4 1/1 Running 0",
"apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka metadata: name: my-cluster spec: kafka: version: 3.7.0 # config: # log.message.format.version: \"3.7\" inter.broker.protocol.version: \"3.7\" #",
"apply -f examples/kafka/kafka-ephemeral.yaml",
"apply -f examples/kafka/kafka-persistent.yaml",
"get pods -n <my_cluster_operator_namespace>",
"NAME READY STATUS RESTARTS my-cluster-entity-operator 3/3 Running 0 my-cluster-kafka-0 1/1 Running 0 my-cluster-kafka-1 1/1 Running 0 my-cluster-kafka-2 1/1 Running 0 my-cluster-zookeeper-0 1/1 Running 0 my-cluster-zookeeper-1 1/1 Running 0 my-cluster-zookeeper-2 1/1 Running 0",
"apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka metadata: name: my-cluster spec: # entityOperator: topicOperator: {} userOperator: {}",
"apply -f <kafka_configuration_file>",
"get pods -n <my_cluster_operator_namespace>",
"NAME READY STATUS RESTARTS my-cluster-entity-operator 3/3 Running 0",
"apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka metadata: name: my-cluster spec: # entityOperator: topicOperator: {} userOperator: {}",
"apply -f <kafka_configuration_file>",
"get pods -n <my_cluster_operator_namespace>",
"NAME READY STATUS RESTARTS my-cluster-entity-operator 3/3 Running 0",
"exec -ti my-cluster -zookeeper-0 -- bin/zookeeper-shell.sh localhost:12181 ls /",
"apply -f examples/connect/kafka-connect.yaml",
"get pods -n <my_cluster_operator_namespace>",
"NAME READY STATUS RESTARTS my-connect-cluster-connect-<pod_id> 1/1 Running 0",
"apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaConnect metadata: name: my-connect-cluster spec: 1 # build: output: 2 type: docker image: my-registry.io/my-org/my-connect-cluster:latest pushSecret: my-registry-credentials plugins: 3 - name: connector-1 artifacts: - type: tgz url: <url_to_download_connector_1_artifact> sha512sum: <SHA-512_checksum_of_connector_1_artifact> - name: connector-2 artifacts: - type: jar url: <url_to_download_connector_2_artifact> sha512sum: <SHA-512_checksum_of_connector_2_artifact> #",
"oc apply -f <kafka_connect_configuration_file>",
"FROM registry.redhat.io/amq-streams/kafka-37-rhel9:2.7.0 USER root:root COPY ./ my-plugins / /opt/kafka/plugins/ USER 1001",
"tree ./ my-plugins / ./ my-plugins / ├── debezium-connector-mongodb │ ├── bson-<version>.jar │ ├── CHANGELOG.md │ ├── CONTRIBUTE.md │ ├── COPYRIGHT.txt │ ├── debezium-connector-mongodb-<version>.jar │ ├── debezium-core-<version>.jar │ ├── LICENSE.txt │ ├── mongodb-driver-core-<version>.jar │ ├── README.md │ └── # ├── debezium-connector-mysql │ ├── CHANGELOG.md │ ├── CONTRIBUTE.md │ ├── COPYRIGHT.txt │ ├── debezium-connector-mysql-<version>.jar │ ├── debezium-core-<version>.jar │ ├── LICENSE.txt │ ├── mysql-binlog-connector-java-<version>.jar │ ├── mysql-connector-java-<version>.jar │ ├── README.md │ └── # └── debezium-connector-postgres ├── CHANGELOG.md ├── CONTRIBUTE.md ├── COPYRIGHT.txt ├── debezium-connector-postgres-<version>.jar ├── debezium-core-<version>.jar ├── LICENSE.txt ├── postgresql-<version>.jar ├── protobuf-java-<version>.jar ├── README.md └── #",
"apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaConnect metadata: name: my-connect-cluster spec: 1 # image: my-new-container-image 2 config: 3 #",
"apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaConnect metadata: name: my-connect-cluster annotations: strimzi.io/use-connector-resources: \"true\" spec: #",
"apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaConnector metadata: name: my-source-connector 1 labels: strimzi.io/cluster: my-connect-cluster 2 spec: class: org.apache.kafka.connect.file.FileStreamSourceConnector 3 tasksMax: 2 4 autoRestart: 5 enabled: true config: 6 file: \"/opt/kafka/LICENSE\" 7 topic: my-topic 8 #",
"apply -f examples/connect/source-connector.yaml",
"touch examples/connect/sink-connector.yaml",
"apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaConnector metadata: name: my-sink-connector labels: strimzi.io/cluster: my-connect spec: class: org.apache.kafka.connect.file.FileStreamSinkConnector 1 tasksMax: 2 config: 2 file: \"/tmp/my-file\" 3 topics: my-topic 4",
"apply -f examples/connect/sink-connector.yaml",
"get kctr --selector strimzi.io/cluster=<my_connect_cluster> -o name my-source-connector my-sink-connector",
"exec <my_kafka_cluster>-kafka-0 -i -t -- bin/kafka-console-consumer.sh --bootstrap-server <my_kafka_cluster>-kafka-bootstrap. NAMESPACE .svc:9092 --topic my-topic --from-beginning",
"curl -X POST http://my-connect-cluster-connect-api:8083/connectors -H 'Content-Type: application/json' -d '{ \"name\": \"my-source-connector\", \"config\": { \"connector.class\":\"org.apache.kafka.connect.file.FileStreamSourceConnector\", \"file\": \"/opt/kafka/LICENSE\", \"topic\":\"my-topic\", \"tasksMax\": \"4\", \"type\": \"source\" } }'",
"selector: strimzi.io/cluster: my-connect-cluster 1 strimzi.io/kind: KafkaConnect strimzi.io/name: my-connect-cluster-connect 2 #",
"apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: my-custom-connect-network-policy spec: ingress: - from: - podSelector: 1 matchLabels: app: my-connector-manager ports: - port: 8083 protocol: TCP podSelector: matchLabels: strimzi.io/cluster: my-connect-cluster strimzi.io/kind: KafkaConnect strimzi.io/name: my-connect-cluster-connect policyTypes: - Ingress",
"apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaConnect metadata: name: my-connect-cluster annotations: strimzi.io/use-connector-resources: \"true\" spec: # jvmOptions: javaSystemProperties: - name: org.apache.kafka.disallowed.login.modules value: com.sun.security.auth.module.JndiLoginModule, org.apache.kafka.common.security.kerberos.KerberosLoginModule",
"apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaConnect metadata: name: my-connect-cluster annotations: strimzi.io/use-connector-resources: \"true\" spec: # config: connector.client.config.override.policy: None",
"apply -f examples/mirror-maker/kafka-mirror-maker.yaml",
"apply -f examples/mirror-maker/kafka-mirror-maker-2.yaml",
"get pods -n <my_cluster_operator_namespace>",
"NAME READY STATUS RESTARTS my-mirror-maker-mirror-maker-<pod_id> 1/1 Running 1 my-mm2-cluster-mirrormaker2-<pod_id> 1/1 Running 1",
"apply -f examples/bridge/kafka-bridge.yaml",
"get pods -n <my_cluster_operator_namespace>",
"NAME READY STATUS RESTARTS my-bridge-bridge-<pod_id> 1/1 Running 0",
"get pods -o name pod/kafka-consumer pod/my-bridge-bridge-<pod_id>",
"port-forward pod/my-bridge-bridge-<pod_id> 8080:8080 &",
"selector: strimzi.io/cluster: kafka-bridge-name 1 strimzi.io/kind: KafkaBridge #",
"apiVersion: apps/v1 kind: Deployment metadata: name: strimzi-topic-operator labels: app: strimzi spec: # template: # spec: # containers: - name: strimzi-topic-operator # env: - name: STRIMZI_NAMESPACE 1 valueFrom: fieldRef: fieldPath: metadata.namespace - name: STRIMZI_KAFKA_BOOTSTRAP_SERVERS 2 value: my-kafka-bootstrap-address:9092 - name: STRIMZI_RESOURCE_LABELS 3 value: \"strimzi.io/cluster=my-cluster\" - name: STRIMZI_FULL_RECONCILIATION_INTERVAL_MS 4 value: \"120000\" - name: STRIMZI_LOG_LEVEL 5 value: INFO - name: STRIMZI_TLS_ENABLED 6 value: \"false\" - name: STRIMZI_JAVA_OPTS 7 value: \"-Xmx=512M -Xms=256M\" - name: STRIMZI_JAVA_SYSTEM_PROPERTIES 8 value: \"-Djavax.net.debug=verbose -DpropertyName=value\" - name: STRIMZI_PUBLIC_CA 9 value: \"false\" - name: STRIMZI_TLS_AUTH_ENABLED 10 value: \"false\" - name: STRIMZI_SASL_ENABLED 11 value: \"false\" - name: STRIMZI_SASL_USERNAME 12 value: \"admin\" - name: STRIMZI_SASL_PASSWORD 13 value: \"password\" - name: STRIMZI_SASL_MECHANISM 14 value: \"scram-sha-512\" - name: STRIMZI_SECURITY_PROTOCOL 15 value: \"SSL\" - name: STRIMZI_USE_FINALIZERS value: \"false\" 16",
". env: - name: STRIMZI_TRUSTSTORE_LOCATION 1 value: \"/path/to/truststore.p12\" - name: STRIMZI_TRUSTSTORE_PASSWORD 2 value: \" TRUSTSTORE-PASSWORD \" - name: STRIMZI_KEYSTORE_LOCATION 3 value: \"/path/to/keystore.p12\" - name: STRIMZI_KEYSTORE_PASSWORD 4 value: \" KEYSTORE-PASSWORD \"",
"get deployments",
"NAME READY UP-TO-DATE AVAILABLE strimzi-topic-operator 1/1 1 1",
"apiVersion: apps/v1 kind: Deployment metadata: name: strimzi-topic-operator labels: app: strimzi spec: # template: # spec: # containers: - name: strimzi-topic-operator # env: - name: STRIMZI_NAMESPACE valueFrom: fieldRef: fieldPath: metadata.namespace - name: STRIMZI_KAFKA_BOOTSTRAP_SERVERS value: my-kafka-bootstrap-address:9092 - name: STRIMZI_RESOURCE_LABELS value: \"strimzi.io/cluster=my-cluster\" - name: STRIMZI_ZOOKEEPER_CONNECT 1 value: my-cluster-zookeeper-client:2181 - name: STRIMZI_ZOOKEEPER_SESSION_TIMEOUT_MS 2 value: \"18000\" - name: STRIMZI_TOPIC_METADATA_MAX_ATTEMPTS 3 value: \"6\" - name: STRIMZI_FULL_RECONCILIATION_INTERVAL_MS value: \"120000\" - name: STRIMZI_LOG_LEVEL value: INFO - name: STRIMZI_TLS_ENABLED value: \"false\" - name: STRIMZI_JAVA_OPTS value: \"-Xmx=512M -Xms=256M\" - name: STRIMZI_JAVA_SYSTEM_PROPERTIES value: \"-Djavax.net.debug=verbose -DpropertyName=value\" - name: STRIMZI_PUBLIC_CA value: \"false\" - name: STRIMZI_TLS_AUTH_ENABLED value: \"false\" - name: STRIMZI_SASL_ENABLED value: \"false\" - name: STRIMZI_SASL_USERNAME value: \"admin\" - name: STRIMZI_SASL_PASSWORD value: \"password\" - name: STRIMZI_SASL_MECHANISM value: \"scram-sha-512\" - name: STRIMZI_SECURITY_PROTOCOL value: \"SSL\"",
"apiVersion: apps/v1 kind: Deployment metadata: name: strimzi-user-operator labels: app: strimzi spec: # template: # spec: # containers: - name: strimzi-user-operator # env: - name: STRIMZI_NAMESPACE 1 valueFrom: fieldRef: fieldPath: metadata.namespace - name: STRIMZI_KAFKA_BOOTSTRAP_SERVERS 2 value: my-kafka-bootstrap-address:9092 - name: STRIMZI_CA_CERT_NAME 3 value: my-cluster-clients-ca-cert - name: STRIMZI_CA_KEY_NAME 4 value: my-cluster-clients-ca - name: STRIMZI_LABELS 5 value: \"strimzi.io/cluster=my-cluster\" - name: STRIMZI_FULL_RECONCILIATION_INTERVAL_MS 6 value: \"120000\" - name: STRIMZI_WORK_QUEUE_SIZE 7 value: 10000 - name: STRIMZI_CONTROLLER_THREAD_POOL_SIZE 8 value: 10 - name: STRIMZI_USER_OPERATIONS_THREAD_POOL_SIZE 9 value: 4 - name: STRIMZI_LOG_LEVEL 10 value: INFO - name: STRIMZI_GC_LOG_ENABLED 11 value: \"true\" - name: STRIMZI_CA_VALIDITY 12 value: \"365\" - name: STRIMZI_CA_RENEWAL 13 value: \"30\" - name: STRIMZI_JAVA_OPTS 14 value: \"-Xmx=512M -Xms=256M\" - name: STRIMZI_JAVA_SYSTEM_PROPERTIES 15 value: \"-Djavax.net.debug=verbose -DpropertyName=value\" - name: STRIMZI_SECRET_PREFIX 16 value: \"kafka-\" - name: STRIMZI_ACLS_ADMIN_API_SUPPORTED 17 value: \"true\" - name: STRIMZI_MAINTENANCE_TIME_WINDOWS 18 value: '* * 8-10 * * ?;* * 14-15 * * ?' - name: STRIMZI_KAFKA_ADMIN_CLIENT_CONFIGURATION 19 value: | default.api.timeout.ms=120000 request.timeout.ms=60000",
". env: - name: STRIMZI_CLUSTER_CA_CERT_SECRET_NAME 1 value: my-cluster-cluster-ca-cert - name: STRIMZI_EO_KEY_SECRET_NAME 2 value: my-cluster-entity-operator-certs ...\"",
"create -f install/user-operator",
"get deployments",
"NAME READY UP-TO-DATE AVAILABLE strimzi-user-operator 1/1 1 1",
"env: - name: STRIMZI_FEATURE_GATES value: +FeatureGate1,-FeatureGate2",
"apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaNodePool metadata: name: controller labels: strimzi.io/cluster: my-cluster spec: replicas: 3 roles: - controller storage: type: jbod volumes: - id: 0 type: persistent-claim size: 20Gi deleteClaim: false resources: requests: memory: 64Gi cpu: \"8\" limits: memory: 64Gi cpu: \"12\"",
"annotate kafka my-cluster strimzi.io/kraft=\"migration\" --overwrite",
"apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka metadata: name: my-cluster namespace: my-project annotations: strimzi.io/kraft=\"migration\"",
"get pods -n my-project",
"NAME READY STATUS RESTARTS my-cluster-kafka-0 1/1 Running 0 my-cluster-kafka-1 1/1 Running 0 my-cluster-kafka-2 1/1 Running 0 my-cluster-controller-3 1/1 Running 0 my-cluster-controller-4 1/1 Running 0 my-cluster-controller-5 1/1 Running 0",
"get kafka my-cluster -n my-project -w",
"NAME ... METADATA STATE my-cluster ... Zookeeper my-cluster ... KRaftMigration my-cluster ... KRaftDualWriting my-cluster ... KRaftPostMigration",
"annotate kafka my-cluster strimzi.io/kraft=\"enabled\" --overwrite",
"apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka metadata: name: my-cluster namespace: my-project annotations: strimzi.io/kraft=\"enabled\"",
"get kafka my-cluster -n my-project -w",
"NAME ... METADATA STATE my-cluster ... Zookeeper my-cluster ... KRaftMigration my-cluster ... KRaftDualWriting my-cluster ... KRaftPostMigration my-cluster ... PreKRaft my-cluster ... KRaft",
"annotate kafka my-cluster strimzi.io/kraft=\"rollback\" --overwrite",
"apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka metadata: name: my-cluster namespace: my-project annotations: strimzi.io/kraft=\"rollback\"",
"delete KafkaNodePool controller -n my-project",
"annotate kafka my-cluster strimzi.io/kraft=\"disabled\" --overwrite",
"apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka metadata: name: my-cluster namespace: my-project annotations: strimzi.io/kraft=\"disabled\"",
"apply -f <kafka_configuration_file>",
"examples ├── user 1 ├── topic 2 ├── security 3 │ ├── tls-auth │ ├── scram-sha-512-auth │ └── keycloak-authorization ├── mirror-maker 4 ├── metrics 5 ├── kafka 6 │ └── nodepools 7 ├── cruise-control 8 ├── connect 9 └── bridge 10",
"apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka metadata: name: my-cluster spec: kafka: replicas: 3 1 version: 3.7.0 2 logging: 3 type: inline loggers: kafka.root.logger.level: INFO resources: 4 requests: memory: 64Gi cpu: \"8\" limits: memory: 64Gi cpu: \"12\" readinessProbe: 5 initialDelaySeconds: 15 timeoutSeconds: 5 livenessProbe: initialDelaySeconds: 15 timeoutSeconds: 5 jvmOptions: 6 -Xms: 8192m -Xmx: 8192m image: my-org/my-image:latest 7 listeners: 8 - name: plain 9 port: 9092 10 type: internal 11 tls: false 12 configuration: useServiceDnsDomain: true 13 - name: tls port: 9093 type: internal tls: true authentication: 14 type: tls - name: external1 15 port: 9094 type: route tls: true configuration: brokerCertChainAndKey: 16 secretName: my-secret certificate: my-certificate.crt key: my-key.key authorization: 17 type: simple config: 18 auto.create.topics.enable: \"false\" offsets.topic.replication.factor: 3 transaction.state.log.replication.factor: 3 transaction.state.log.min.isr: 2 default.replication.factor: 3 min.insync.replicas: 2 inter.broker.protocol.version: \"3.7\" storage: 19 type: persistent-claim 20 size: 10000Gi rack: 21 topologyKey: topology.kubernetes.io/zone metricsConfig: 22 type: jmxPrometheusExporter valueFrom: configMapKeyRef: 23 name: my-config-map key: my-key # zookeeper: 24 replicas: 3 25 logging: 26 type: inline loggers: zookeeper.root.logger: INFO resources: requests: memory: 8Gi cpu: \"2\" limits: memory: 8Gi cpu: \"2\" jvmOptions: -Xms: 4096m -Xmx: 4096m storage: type: persistent-claim size: 1000Gi metricsConfig: # entityOperator: 27 tlsSidecar: 28 resources: requests: cpu: 200m memory: 64Mi limits: cpu: 500m memory: 128Mi topicOperator: watchedNamespace: my-topic-namespace reconciliationIntervalSeconds: 60 logging: 29 type: inline loggers: rootLogger.level: INFO resources: requests: memory: 512Mi cpu: \"1\" limits: memory: 512Mi cpu: \"1\" userOperator: watchedNamespace: my-topic-namespace reconciliationIntervalSeconds: 60 logging: 30 type: inline loggers: rootLogger.level: INFO resources: requests: memory: 512Mi cpu: \"1\" limits: memory: 512Mi cpu: \"1\" kafkaExporter: 31 # cruiseControl: 32 #",
"apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka metadata: name: my-cluster spec: kafka: # config: client.quota.callback.class: io.strimzi.kafka.quotas.StaticQuotaCallback 1 client.quota.callback.static.produce: 1000000 2 client.quota.callback.static.fetch: 1000000 3 client.quota.callback.static.storage.soft: 400000000000 4 client.quota.callback.static.storage.hard: 500000000000 5 client.quota.callback.static.storage.check-interval: 5 6",
"apply -f <kafka_configuration_file>",
"annotate pod <cluster_name>-kafka-<index_number> strimzi.io/delete-pod-and-pvc=\"true\"",
"annotate pod <cluster_name>-zookeeper-<index_number> strimzi.io/delete-pod-and-pvc=\"true\"",
"apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaNodePool metadata: name: kraft-dual-role 1 labels: strimzi.io/cluster: my-cluster 2 spec: replicas: 3 3 roles: 4 - controller - broker storage: 5 type: jbod volumes: - id: 0 type: persistent-claim size: 100Gi deleteClaim: false resources: 6 requests: memory: 64Gi cpu: \"8\" limits: memory: 64Gi cpu: \"12\"",
"apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaNodePool metadata: name: pool-a labels: strimzi.io/cluster: my-cluster spec: replicas: 3 roles: - broker 1 storage: type: jbod volumes: - id: 0 type: persistent-claim size: 100Gi deleteClaim: false resources: requests: memory: 64Gi cpu: \"8\" limits: memory: 64Gi cpu: \"12\"",
"apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka metadata: name: my-cluster spec: kafka: config: reserved.broker.max.id: 10000 #",
"annotate kafkanodepool pool-a strimzi.io/next-node-ids=\"[0,1,2,10-20,30]\"",
"annotate kafkanodepool pool-b strimzi.io/remove-node-ids=\"[60-50,9,8,7]\"",
"annotate kafkanodepool pool-a strimzi.io/next-node-ids-",
"annotate kafkanodepool pool-b strimzi.io/remove-node-ids-",
"NAME READY STATUS RESTARTS my-cluster-pool-a-0 1/1 Running 0 my-cluster-pool-a-1 1/1 Running 0 my-cluster-pool-a-2 1/1 Running 0",
"scale kafkanodepool pool-a --replicas=4",
"get pods -n <my_cluster_operator_namespace>",
"NAME READY STATUS RESTARTS my-cluster-pool-a-0 1/1 Running 0 my-cluster-pool-a-1 1/1 Running 0 my-cluster-pool-a-2 1/1 Running 0 my-cluster-pool-a-3 1/1 Running 0",
"apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaRebalance metadata: # spec: mode: add-brokers brokers: [3]",
"NAME READY STATUS RESTARTS my-cluster-pool-a-0 1/1 Running 0 my-cluster-pool-a-1 1/1 Running 0 my-cluster-pool-a-2 1/1 Running 0 my-cluster-pool-a-3 1/1 Running 0",
"apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaRebalance metadata: # spec: mode: remove-brokers brokers: [3]",
"scale kafkanodepool pool-a --replicas=3",
"NAME READY STATUS RESTARTS my-cluster-pool-b-kafka-0 1/1 Running 0 my-cluster-pool-b-kafka-1 1/1 Running 0 my-cluster-pool-b-kafka-2 1/1 Running 0",
"scale kafkanodepool pool-a --replicas=4",
"get pods -n <my_cluster_operator_namespace>",
"NAME READY STATUS RESTARTS my-cluster-pool-a-0 1/1 Running 0 my-cluster-pool-a-1 1/1 Running 0 my-cluster-pool-a-4 1/1 Running 0 my-cluster-pool-a-7 1/1 Running 0 my-cluster-pool-b-2 1/1 Running 0 my-cluster-pool-b-3 1/1 Running 0 my-cluster-pool-b-5 1/1 Running 0 my-cluster-pool-b-6 1/1 Running 0",
"apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaRebalance metadata: # spec: mode: remove-brokers brokers: [6]",
"scale kafkanodepool pool-b --replicas=3",
"NAME READY STATUS RESTARTS my-cluster-pool-b-kafka-2 1/1 Running 0 my-cluster-pool-b-kafka-3 1/1 Running 0 my-cluster-pool-b-kafka-5 1/1 Running 0",
"apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaNodePool metadata: name: pool-a labels: strimzi.io/cluster: my-cluster spec: replicas: 3 roles: - controller - broker storage: type: jbod volumes: - id: 0 type: persistent-claim size: 20Gi deleteClaim: false #",
"NAME READY STATUS RESTARTS my-cluster-pool-a-0 1/1 Running 0 my-cluster-pool-a-1 1/1 Running 0 my-cluster-pool-a-2 1/1 Running 0",
"apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaNodePool metadata: name: pool-b labels: strimzi.io/cluster: my-cluster spec: replicas: 3 roles: - broker storage: type: jbod volumes: - id: 0 type: persistent-claim size: 100Gi deleteClaim: false #",
"get pods -n <my_cluster_operator_namespace>",
"NAME READY STATUS RESTARTS my-cluster-pool-a-0 1/1 Running 0 my-cluster-pool-a-1 1/1 Running 0 my-cluster-pool-a-2 1/1 Running 0 my-cluster-pool-b-3 1/1 Running 0 my-cluster-pool-b-4 1/1 Running 0 my-cluster-pool-b-5 1/1 Running 0",
"apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaRebalance metadata: # spec: mode: remove-brokers brokers: [0, 1, 2]",
"apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaNodePool metadata: name: pool-a labels: strimzi.io/cluster: my-cluster spec: replicas: 3 roles: - controller storage: type: jbod volumes: - id: 0 type: persistent-claim size: 20Gi deleteClaim: false #",
"apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaNodePool metadata: name: pool-a labels: strimzi.io/cluster: my-cluster spec: replicas: 3 roles: - controller storage: type: jbod volumes: - id: 0 type: persistent-claim size: 100Gi deleteClaim: false # --- apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaNodePool metadata: name: pool-b labels: strimzi.io/cluster: my-cluster spec: replicas: 3 roles: - broker storage: type: jbod volumes: - id: 0 type: persistent-claim size: 100Gi deleteClaim: false #",
"NAME READY STATUS RESTARTS my-cluster-pool-a-0 1/1 Running 0 my-cluster-pool-a-1 1/1 Running 0 my-cluster-pool-a-2 1/1 Running 0 my-cluster-pool-b-3 1/1 Running 0 my-cluster-pool-b-4 1/1 Running 0 my-cluster-pool-b-5 1/1 Running 0",
"apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaNodePool metadata: name: pool-a labels: strimzi.io/cluster: my-cluster spec: replicas: 3 roles: - controller - broker storage: type: jbod volumes: - id: 0 type: persistent-claim size: 100Gi deleteClaim: false #",
"get pods -n <my_cluster_operator_namespace>",
"NAME READY STATUS RESTARTS my-cluster-pool-a-0 1/1 Running 0 my-cluster-pool-a-1 1/1 Running 0 my-cluster-pool-a-2 1/1 Running 0 my-cluster-pool-b-3 1/1 Running 0 my-cluster-pool-b-4 1/1 Running 0 my-cluster-pool-b-5 1/1 Running 0",
"apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaRebalance metadata: # spec: mode: remove-brokers brokers: [3, 4, 5]",
"delete kafkanodepool pool-b -n <my_cluster_operator_namespace>",
"apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaNodePool metadata: name: pool-a labels: strimzi.io/cluster: my-cluster spec: replicas: 3 storage: type: jbod volumes: - id: 0 type: persistent-claim size: 500Gi class: gp2-ebs #",
"get pods -n <my_cluster_operator_namespace>",
"NAME READY STATUS RESTARTS my-cluster-pool-a-0 1/1 Running 0 my-cluster-pool-a-1 1/1 Running 0 my-cluster-pool-a-2 1/1 Running 0",
"apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaNodePool metadata: name: pool-b labels: strimzi.io/cluster: my-cluster spec: roles: - broker replicas: 3 storage: type: jbod volumes: - id: 0 type: persistent-claim size: 1Ti class: gp3-ebs #",
"apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaRebalance metadata: # spec: mode: remove-brokers brokers: [0, 1, 2]",
"delete kafkanodepool pool-a",
"apiVersion: storage.k8s.io/v1 kind: StorageClass metadata: name: all-zones provisioner: kubernetes.io/my-storage parameters: type: ssd volumeBindingMode: WaitForFirstConsumer",
"apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaNodePool metadata: name: pool-zone-1 labels: strimzi.io/cluster: my-cluster spec: replicas: 3 storage: type: jbod volumes: - id: 0 type: persistent-claim size: 500Gi class: all-zones template: pod: affinity: nodeAffinity: requiredDuringSchedulingIgnoredDuringExecution: nodeSelectorTerms: - matchExpressions: - key: topology.kubernetes.io/zone operator: In values: - zone-1 #",
"apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaNodePool metadata: name: pool-zone-2 labels: strimzi.io/cluster: my-cluster spec: replicas: 4 storage: type: jbod volumes: - id: 0 type: persistent-claim size: 500Gi class: all-zones template: pod: affinity: nodeAffinity: requiredDuringSchedulingIgnoredDuringExecution: nodeSelectorTerms: - matchExpressions: - key: topology.kubernetes.io/zone operator: In values: - zone-2 #",
"get pods -n <my_cluster_operator_namespace>",
"NAME READY STATUS RESTARTS my-cluster-pool-zone-1-kafka-0 1/1 Running 0 my-cluster-pool-zone-1-kafka-1 1/1 Running 0 my-cluster-pool-zone-1-kafka-2 1/1 Running 0 my-cluster-pool-zone-2-kafka-3 1/1 Running 0 my-cluster-pool-zone-2-kafka-4 1/1 Running 0 my-cluster-pool-zone-2-kafka-5 1/1 Running 0 my-cluster-pool-zone-2-kafka-6 1/1 Running 0",
"apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaNodePool metadata: name: kafka labels: strimzi.io/cluster: my-cluster spec: replicas: 3 roles: - broker storage: type: jbod volumes: - id: 0 type: persistent-claim size: 100Gi deleteClaim: false",
"apply -f <node_pool_configuration_file>",
"apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka metadata: name: my-cluster annotations: strimzi.io/node-pools: enabled spec: kafka: # zookeeper: #",
"apply -f <kafka_configuration_file>",
"apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka metadata: name: my-cluster spec: kafka: # zookeeper: # entityOperator: topicOperator: {} userOperator: {}",
"apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka metadata: name: my-cluster spec: kafka: # zookeeper: # entityOperator: # topicOperator: watchedNamespace: my-topic-namespace reconciliationIntervalSeconds: 60 resources: requests: cpu: \"1\" memory: 500Mi limits: cpu: \"1\" memory: 500Mi #",
"apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka metadata: name: my-cluster spec: kafka: # zookeeper: # entityOperator: # userOperator: watchedNamespace: my-user-namespace reconciliationIntervalSeconds: 60 resources: requests: cpu: \"1\" memory: 500Mi limits: cpu: \"1\" memory: 500Mi #",
"env: - name: STRIMZI_NAMESPACE valueFrom: fieldRef: fieldPath: metadata.namespace",
"env: - name: STRIMZI_OPERATOR_NAMESPACE valueFrom: fieldRef: fieldPath: metadata.namespace",
"env: - name: STRIMZI_OPERATOR_NAMESPACE_LABELS value: label1=value1,label2=value2",
"env: - name: STRIMZI_LABELS_EXCLUSION_PATTERN value: \"^key1.*\"",
"env: - name: STRIMZI_CUSTOM_RESOURCE_SELECTOR value: label1=value1,label2=value2",
"env: - name: STRIMZI_KUBERNETES_VERSION value: | major=1 minor=16 gitVersion=v1.16.2 gitCommit=c97fe5036ef3df2967d086711e6c0c405941e14b gitTreeState=clean buildDate=2019-10-15T19:09:08Z goVersion=go1.12.10 compiler=gc platform=linux/amd64",
"<cluster-name> -kafka-0. <cluster-name> -kafka-brokers. <namespace> .svc. cluster.local",
"# env: # - name: STRIMZI_OPERATOR_NAMESPACE_LABELS value: label1=value1,label2=value2 #",
"# env: # - name: STRIMZI_FULL_RECONCILIATION_INTERVAL_MS value: \"120000\" #",
"annotate <kind_of_custom_resource> <name_of_custom_resource> strimzi.io/pause-reconciliation=\"true\"",
"annotate KafkaConnect my-connect strimzi.io/pause-reconciliation=\"true\"",
"describe <kind_of_custom_resource> <name_of_custom_resource>",
"apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaConnect metadata: annotations: strimzi.io/pause-reconciliation: \"true\" strimzi.io/use-connector-resources: \"true\" creationTimestamp: 2021-03-12T10:47:11Z # spec: # status: conditions: - lastTransitionTime: 2021-03-12T10:47:41.689249Z status: \"True\" type: ReconciliationPaused",
"env: - name: STRIMZI_LEADER_ELECTION_LEASE_NAMESPACE valueFrom: fieldRef: fieldPath: metadata.namespace",
"env: - name: STRIMZI_LEADER_ELECTION_IDENTITY valueFrom: fieldRef: fieldPath: metadata.name",
"apiVersion: apps/v1 kind: Deployment metadata: name: strimzi-cluster-operator labels: app: strimzi spec: replicas: 3",
"spec containers: - name: strimzi-cluster-operator # env: - name: STRIMZI_LEADER_ELECTION_ENABLED value: \"true\" - name: STRIMZI_LEADER_ELECTION_LEASE_NAME value: \"my-strimzi-cluster-operator\" - name: STRIMZI_LEADER_ELECTION_LEASE_NAMESPACE valueFrom: fieldRef: fieldPath: metadata.namespace - name: STRIMZI_LEADER_ELECTION_IDENTITY valueFrom: fieldRef: fieldPath: metadata.name",
"apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: strimzi-cluster-operator-leader-election labels: app: strimzi rules: - apiGroups: - coordination.k8s.io resourceNames: - my-strimzi-cluster-operator",
"apiVersion: rbac.authorization.k8s.io/v1 kind: RoleBinding metadata: name: strimzi-cluster-operator-leader-election labels: app: strimzi subjects: - kind: ServiceAccount name: my-strimzi-cluster-operator namespace: myproject",
"create -f install/cluster-operator -n myproject",
"get deployments -n myproject",
"NAME READY UP-TO-DATE AVAILABLE strimzi-cluster-operator 3/3 3 3",
"apiVersion: apps/v1 kind: Deployment spec: # template: spec: serviceAccountName: strimzi-cluster-operator containers: # env: # - name: \"HTTP_PROXY\" value: \"http://proxy.com\" 1 - name: \"HTTPS_PROXY\" value: \"https://proxy.com\" 2 - name: \"NO_PROXY\" value: \"internal.com, other.domain.com\" 3 #",
"edit deployment strimzi-cluster-operator",
"create -f install/cluster-operator/060-Deployment-strimzi-cluster-operator.yaml",
"apiVersion: apps/v1 kind: Deployment spec: # template: spec: serviceAccountName: strimzi-cluster-operator containers: # env: # - name: \"FIPS_MODE\" value: \"disabled\" 1 #",
"edit deployment strimzi-cluster-operator",
"apply -f install/cluster-operator/060-Deployment-strimzi-cluster-operator.yaml",
"apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaConnect 1 metadata: name: my-connect-cluster annotations: strimzi.io/use-connector-resources: \"true\" 2 spec: replicas: 3 3 authentication: 4 type: tls certificateAndKey: certificate: source.crt key: source.key secretName: my-user-source bootstrapServers: my-cluster-kafka-bootstrap:9092 5 tls: 6 trustedCertificates: - secretName: my-cluster-cluster-cert certificate: ca.crt - secretName: my-cluster-cluster-cert certificate: ca2.crt config: 7 group.id: my-connect-cluster offset.storage.topic: my-connect-cluster-offsets config.storage.topic: my-connect-cluster-configs status.storage.topic: my-connect-cluster-status key.converter: org.apache.kafka.connect.json.JsonConverter value.converter: org.apache.kafka.connect.json.JsonConverter key.converter.schemas.enable: true value.converter.schemas.enable: true config.storage.replication.factor: 3 offset.storage.replication.factor: 3 status.storage.replication.factor: 3 build: 8 output: 9 type: docker image: my-registry.io/my-org/my-connect-cluster:latest pushSecret: my-registry-credentials plugins: 10 - name: connector-1 artifacts: - type: tgz url: <url_to_download_connector_1_artifact> sha512sum: <SHA-512_checksum_of_connector_1_artifact> - name: connector-2 artifacts: - type: jar url: <url_to_download_connector_2_artifact> sha512sum: <SHA-512_checksum_of_connector_2_artifact> externalConfiguration: 11 env: - name: AWS_ACCESS_KEY_ID valueFrom: secretKeyRef: name: aws-creds key: awsAccessKey - name: AWS_SECRET_ACCESS_KEY valueFrom: secretKeyRef: name: aws-creds key: awsSecretAccessKey resources: 12 requests: cpu: \"1\" memory: 2Gi limits: cpu: \"2\" memory: 2Gi logging: 13 type: inline loggers: log4j.rootLogger: INFO readinessProbe: 14 initialDelaySeconds: 15 timeoutSeconds: 5 livenessProbe: initialDelaySeconds: 15 timeoutSeconds: 5 metricsConfig: 15 type: jmxPrometheusExporter valueFrom: configMapKeyRef: name: my-config-map key: my-key jvmOptions: 16 \"-Xmx\": \"1g\" \"-Xms\": \"1g\" image: my-org/my-image:latest 17 rack: topologyKey: topology.kubernetes.io/zone 18 template: 19 pod: affinity: podAntiAffinity: requiredDuringSchedulingIgnoredDuringExecution: - labelSelector: matchExpressions: - key: application operator: In values: - postgresql - mongodb topologyKey: \"kubernetes.io/hostname\" connectContainer: 20 env: - name: OTEL_SERVICE_NAME value: my-otel-service - name: OTEL_EXPORTER_OTLP_ENDPOINT value: \"http://otlp-host:4317\" tracing: type: opentelemetry 21",
"apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaConnect metadata: name: my-connect spec: config: group.id: my-connect-cluster 1 offset.storage.topic: my-connect-cluster-offsets 2 config.storage.topic: my-connect-cluster-configs 3 status.storage.topic: my-connect-cluster-status 4 # #",
"apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaUser metadata: name: my-user labels: strimzi.io/cluster: my-cluster spec: # authorization: type: simple acls: # access to offset.storage.topic - resource: type: topic name: connect-cluster-offsets patternType: literal operations: - Create - Describe - Read - Write host: \"*\" # access to status.storage.topic - resource: type: topic name: connect-cluster-status patternType: literal operations: - Create - Describe - Read - Write host: \"*\" # access to config.storage.topic - resource: type: topic name: connect-cluster-configs patternType: literal operations: - Create - Describe - Read - Write host: \"*\" # cluster group - resource: type: group name: connect-cluster patternType: literal operations: - Read host: \"*\"",
"apply -f KAFKA-USER-CONFIG-FILE",
"get KafkaConnector",
"apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaConnector metadata: name: my-source-connector labels: strimzi.io/cluster: my-connect-cluster spec: class: org.apache.kafka.connect.file.FileStreamSourceConnector tasksMax: 2 config: file: \"/opt/kafka/LICENSE\" topic: my-topic state: stopped #",
"get KafkaConnector",
"annotate KafkaConnector <kafka_connector_name> strimzi.io/restart=\"true\"",
"get KafkaConnector",
"describe KafkaConnector <kafka_connector_name>",
"annotate KafkaConnector <kafka_connector_name> strimzi.io/restart-task=\"0\"",
"apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaMirrorMaker2 metadata: name: my-mirror-maker2 spec: version: 3.7.0 connectCluster: \"my-cluster-target\" clusters: - alias: \"my-cluster-source\" bootstrapServers: my-cluster-source-kafka-bootstrap:9092 - alias: \"my-cluster-target\" bootstrapServers: my-cluster-target-kafka-bootstrap:9092 mirrors: - sourceCluster: \"my-cluster-source\" targetCluster: \"my-cluster-target\" sourceConnector: {}",
"apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaMirrorMaker2 metadata: name: my-mirror-maker2 spec: version: 3.7.0 1 replicas: 3 2 connectCluster: \"my-cluster-target\" 3 clusters: 4 - alias: \"my-cluster-source\" 5 authentication: 6 certificateAndKey: certificate: source.crt key: source.key secretName: my-user-source type: tls bootstrapServers: my-cluster-source-kafka-bootstrap:9092 7 tls: 8 trustedCertificates: - certificate: ca.crt secretName: my-cluster-source-cluster-ca-cert - alias: \"my-cluster-target\" 9 authentication: 10 certificateAndKey: certificate: target.crt key: target.key secretName: my-user-target type: tls bootstrapServers: my-cluster-target-kafka-bootstrap:9092 11 config: 12 config.storage.replication.factor: 1 offset.storage.replication.factor: 1 status.storage.replication.factor: 1 tls: 13 trustedCertificates: - certificate: ca.crt secretName: my-cluster-target-cluster-ca-cert mirrors: 14 - sourceCluster: \"my-cluster-source\" 15 targetCluster: \"my-cluster-target\" 16 sourceConnector: 17 tasksMax: 10 18 autoRestart: 19 enabled: true config replication.factor: 1 20 offset-syncs.topic.replication.factor: 1 21 sync.topic.acls.enabled: \"false\" 22 refresh.topics.interval.seconds: 60 23 replication.policy.class: \"org.apache.kafka.connect.mirror.IdentityReplicationPolicy\" 24 heartbeatConnector: 25 autoRestart: enabled: true config: heartbeats.topic.replication.factor: 1 26 replication.policy.class: \"org.apache.kafka.connect.mirror.IdentityReplicationPolicy\" checkpointConnector: 27 autoRestart: enabled: true config: checkpoints.topic.replication.factor: 1 28 refresh.groups.interval.seconds: 600 29 sync.group.offsets.enabled: true 30 sync.group.offsets.interval.seconds: 60 31 emit.checkpoints.interval.seconds: 60 32 replication.policy.class: \"org.apache.kafka.connect.mirror.IdentityReplicationPolicy\" topicsPattern: \"topic1|topic2|topic3\" 33 groupsPattern: \"group1|group2|group3\" 34 resources: 35 requests: cpu: \"1\" memory: 2Gi limits: cpu: \"2\" memory: 2Gi logging: 36 type: inline loggers: connect.root.logger.level: INFO readinessProbe: 37 initialDelaySeconds: 15 timeoutSeconds: 5 livenessProbe: initialDelaySeconds: 15 timeoutSeconds: 5 jvmOptions: 38 \"-Xmx\": \"1g\" \"-Xms\": \"1g\" image: my-org/my-image:latest 39 rack: topologyKey: topology.kubernetes.io/zone 40 template: 41 pod: affinity: podAntiAffinity: requiredDuringSchedulingIgnoredDuringExecution: - labelSelector: matchExpressions: - key: application operator: In values: - postgresql - mongodb topologyKey: \"kubernetes.io/hostname\" connectContainer: 42 env: - name: OTEL_SERVICE_NAME value: my-otel-service - name: OTEL_EXPORTER_OTLP_ENDPOINT value: \"http://otlp-host:4317\" tracing: type: opentelemetry 43 externalConfiguration: 44 env: - name: AWS_ACCESS_KEY_ID valueFrom: secretKeyRef: name: aws-creds key: awsAccessKey - name: AWS_SECRET_ACCESS_KEY valueFrom: secretKeyRef: name: aws-creds key: awsSecretAccessKey",
"apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaMirrorMaker2 metadata: name: my-mirror-maker2 spec: connectCluster: \"my-cluster-target\" clusters: - alias: \"my-cluster-target\" config: group.id: my-connect-cluster 1 offset.storage.topic: my-connect-cluster-offsets 2 config.storage.topic: my-connect-cluster-configs 3 status.storage.topic: my-connect-cluster-status 4 # #",
"apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaMirrorMaker2 metadata: name: my-mirror-maker2 spec: version: 3.7.0 # mirrors: - sourceCluster: \"my-cluster-source\" targetCluster: \"my-cluster-target\" sourceConnector: tasksMax: 5 config: producer.override.batch.size: 327680 producer.override.linger.ms: 100 producer.request.timeout.ms: 30000 consumer.fetch.max.bytes: 52428800 # checkpointConnector: config: producer.override.request.timeout.ms: 30000 consumer.max.poll.interval.ms: 300000 # heartbeatConnector: config: producer.override.request.timeout.ms: 30000 #",
"apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaMirrorMaker2 metadata: name: my-mirror-maker2 spec: # mirrors: - sourceCluster: \"my-cluster-source\" targetCluster: \"my-cluster-target\" sourceConnector: tasksMax: 10 #",
"apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaMirrorMaker2 metadata: name: my-mirror-maker2 spec: # mirrors: - sourceCluster: \"my-cluster-source\" targetCluster: \"my-cluster-target\" checkpointConnector: tasksMax: 10 #",
"apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka metadata: name: my-source-cluster spec: kafka: version: 3.7.0 replicas: 1 listeners: - name: tls port: 9093 type: internal tls: true authentication: type: tls authorization: type: simple config: offsets.topic.replication.factor: 1 transaction.state.log.replication.factor: 1 transaction.state.log.min.isr: 1 default.replication.factor: 1 min.insync.replicas: 1 inter.broker.protocol.version: \"3.7\" storage: type: jbod volumes: - id: 0 type: persistent-claim size: 100Gi deleteClaim: false zookeeper: replicas: 1 storage: type: persistent-claim size: 100Gi deleteClaim: false entityOperator: topicOperator: {} userOperator: {}",
"apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka metadata: name: my-target-cluster spec: kafka: version: 3.7.0 replicas: 1 listeners: - name: tls port: 9093 type: internal tls: true authentication: type: tls authorization: type: simple config: offsets.topic.replication.factor: 1 transaction.state.log.replication.factor: 1 transaction.state.log.min.isr: 1 default.replication.factor: 1 min.insync.replicas: 1 inter.broker.protocol.version: \"3.7\" storage: type: jbod volumes: - id: 0 type: persistent-claim size: 100Gi deleteClaim: false zookeeper: replicas: 1 storage: type: persistent-claim size: 100Gi deleteClaim: false entityOperator: topicOperator: {} userOperator: {}",
"apply -f <kafka_configuration_file> -n <namespace>",
"apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaUser metadata: name: my-source-user labels: strimzi.io/cluster: my-source-cluster spec: authentication: type: tls authorization: type: simple acls: # MirrorSourceConnector - resource: # Not needed if offset-syncs.topic.location=target type: topic name: mm2-offset-syncs.my-target-cluster.internal operations: - Create - DescribeConfigs - Read - Write - resource: # Needed for every topic which is mirrored type: topic name: \"*\" operations: - DescribeConfigs - Read # MirrorCheckpointConnector - resource: type: cluster operations: - Describe - resource: # Needed for every group for which offsets are synced type: group name: \"*\" operations: - Describe - resource: # Not needed if offset-syncs.topic.location=target type: topic name: mm2-offset-syncs.my-target-cluster.internal operations: - Read",
"apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaUser metadata: name: my-target-user labels: strimzi.io/cluster: my-target-cluster spec: authentication: type: tls authorization: type: simple acls: # cluster group - resource: type: group name: mirrormaker2-cluster operations: - Read # access to config.storage.topic - resource: type: topic name: mirrormaker2-cluster-configs operations: - Create - Describe - DescribeConfigs - Read - Write # access to status.storage.topic - resource: type: topic name: mirrormaker2-cluster-status operations: - Create - Describe - DescribeConfigs - Read - Write # access to offset.storage.topic - resource: type: topic name: mirrormaker2-cluster-offsets operations: - Create - Describe - DescribeConfigs - Read - Write # MirrorSourceConnector - resource: # Needed for every topic which is mirrored type: topic name: \"*\" operations: - Create - Alter - AlterConfigs - Write # MirrorCheckpointConnector - resource: type: cluster operations: - Describe - resource: type: topic name: my-source-cluster.checkpoints.internal operations: - Create - Describe - Read - Write - resource: # Needed for every group for which the offset is synced type: group name: \"*\" operations: - Read - Describe # MirrorHeartbeatConnector - resource: type: topic name: heartbeats operations: - Create - Describe - Write",
"apply -f <kafka_user_configuration_file> -n <namespace>",
"apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaMirrorMaker2 metadata: name: my-mirror-maker-2 spec: version: 3.7.0 replicas: 1 connectCluster: \"my-target-cluster\" clusters: - alias: \"my-source-cluster\" bootstrapServers: my-source-cluster-kafka-bootstrap:9093 tls: 1 trustedCertificates: - secretName: my-source-cluster-cluster-ca-cert certificate: ca.crt authentication: 2 type: tls certificateAndKey: secretName: my-source-user certificate: user.crt key: user.key - alias: \"my-target-cluster\" bootstrapServers: my-target-cluster-kafka-bootstrap:9093 tls: 3 trustedCertificates: - secretName: my-target-cluster-cluster-ca-cert certificate: ca.crt authentication: 4 type: tls certificateAndKey: secretName: my-target-user certificate: user.crt key: user.key config: # -1 means it will use the default replication factor configured in the broker config.storage.replication.factor: -1 offset.storage.replication.factor: -1 status.storage.replication.factor: -1 mirrors: - sourceCluster: \"my-source-cluster\" targetCluster: \"my-target-cluster\" sourceConnector: config: replication.factor: 1 offset-syncs.topic.replication.factor: 1 sync.topic.acls.enabled: \"false\" heartbeatConnector: config: heartbeats.topic.replication.factor: 1 checkpointConnector: config: checkpoints.topic.replication.factor: 1 sync.group.offsets.enabled: \"true\" topicsPattern: \"topic1|topic2|topic3\" groupsPattern: \"group1|group2|group3\"",
"apply -f <mirrormaker2_configuration_file> -n <namespace_of_target_cluster>",
"get KafkaMirrorMaker2",
"apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaMirrorMaker2 metadata: name: my-mirror-maker2 spec: version: 3.7.0 replicas: 3 connectCluster: \"my-cluster-target\" clusters: # mirrors: - sourceCluster: \"my-cluster-source\" targetCluster: \"my-cluster-target\" sourceConnector: tasksMax: 10 autoRestart: enabled: true state: stopped #",
"get KafkaMirrorMaker2",
"describe KafkaMirrorMaker2 <mirrormaker_cluster_name>",
"annotate KafkaMirrorMaker2 <mirrormaker_cluster_name> \"strimzi.io/restart-connector=<mirrormaker_connector_name>\"",
"annotate KafkaMirrorMaker2 my-mirror-maker-2 \"strimzi.io/restart-connector=my-connector\"",
"get KafkaMirrorMaker2",
"describe KafkaMirrorMaker2 <mirrormaker_cluster_name>",
"annotate KafkaMirrorMaker2 <mirrormaker_cluster_name> \"strimzi.io/restart-connector-task=<mirrormaker_connector_name>:<task_id>\"",
"annotate KafkaMirrorMaker2 my-mirror-maker-2 \"strimzi.io/restart-connector-task=my-connector:0\"",
"apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaMirrorMaker metadata: name: my-mirror-maker spec: replicas: 3 1 consumer: bootstrapServers: my-source-cluster-kafka-bootstrap:9092 2 groupId: \"my-group\" 3 numStreams: 2 4 offsetCommitInterval: 120000 5 tls: 6 trustedCertificates: - secretName: my-source-cluster-ca-cert certificate: ca.crt authentication: 7 type: tls certificateAndKey: secretName: my-source-secret certificate: public.crt key: private.key config: 8 max.poll.records: 100 receive.buffer.bytes: 32768 producer: bootstrapServers: my-target-cluster-kafka-bootstrap:9092 abortOnSendFailure: false 9 tls: trustedCertificates: - secretName: my-target-cluster-ca-cert certificate: ca.crt authentication: type: tls certificateAndKey: secretName: my-target-secret certificate: public.crt key: private.key config: compression.type: gzip batch.size: 8192 include: \"my-topic|other-topic\" 10 resources: 11 requests: cpu: \"1\" memory: 2Gi limits: cpu: \"2\" memory: 2Gi logging: 12 type: inline loggers: mirrormaker.root.logger: INFO readinessProbe: 13 initialDelaySeconds: 15 timeoutSeconds: 5 livenessProbe: initialDelaySeconds: 15 timeoutSeconds: 5 metricsConfig: 14 type: jmxPrometheusExporter valueFrom: configMapKeyRef: name: my-config-map key: my-key jvmOptions: 15 \"-Xmx\": \"1g\" \"-Xms\": \"1g\" image: my-org/my-image:latest 16 template: 17 pod: affinity: podAntiAffinity: requiredDuringSchedulingIgnoredDuringExecution: - labelSelector: matchExpressions: - key: application operator: In values: - postgresql - mongodb topologyKey: \"kubernetes.io/hostname\" mirrorMakerContainer: 18 env: - name: OTEL_SERVICE_NAME value: my-otel-service - name: OTEL_EXPORTER_OTLP_ENDPOINT value: \"http://otlp-host:4317\" tracing: 19 type: opentelemetry",
"apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaBridge metadata: name: my-bridge spec: replicas: 3 1 bootstrapServers: <cluster_name> -cluster-kafka-bootstrap:9092 2 tls: 3 trustedCertificates: - secretName: my-cluster-cluster-cert certificate: ca.crt - secretName: my-cluster-cluster-cert certificate: ca2.crt authentication: 4 type: tls certificateAndKey: secretName: my-secret certificate: public.crt key: private.key http: 5 port: 8080 cors: 6 allowedOrigins: \"https://strimzi.io\" allowedMethods: \"GET,POST,PUT,DELETE,OPTIONS,PATCH\" consumer: 7 config: auto.offset.reset: earliest producer: 8 config: delivery.timeout.ms: 300000 resources: 9 requests: cpu: \"1\" memory: 2Gi limits: cpu: \"2\" memory: 2Gi logging: 10 type: inline loggers: logger.bridge.level: INFO # enabling DEBUG just for send operation logger.send.name: \"http.openapi.operation.send\" logger.send.level: DEBUG jvmOptions: 11 \"-Xmx\": \"1g\" \"-Xms\": \"1g\" readinessProbe: 12 initialDelaySeconds: 15 timeoutSeconds: 5 livenessProbe: initialDelaySeconds: 15 timeoutSeconds: 5 image: my-org/my-image:latest 13 template: 14 pod: affinity: podAntiAffinity: requiredDuringSchedulingIgnoredDuringExecution: - labelSelector: matchExpressions: - key: application operator: In values: - postgresql - mongodb topologyKey: \"kubernetes.io/hostname\" bridgeContainer: 15 env: - name: OTEL_SERVICE_NAME value: my-otel-service - name: OTEL_EXPORTER_OTLP_ENDPOINT value: \"http://otlp-host:4317\" tracing: type: opentelemetry 16",
"apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka metadata: name: my-cluster spec: kafka: storage: type: ephemeral # zookeeper: storage: type: ephemeral #",
"/var/lib/kafka/data/kafka-log IDX",
"apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka metadata: name: my-cluster spec: kafka: storage: type: jbod volumes: - id: 0 type: persistent-claim size: 100Gi deleteClaim: false - id: 1 type: persistent-claim size: 100Gi deleteClaim: false - id: 2 type: persistent-claim size: 100Gi deleteClaim: false # zookeeper: storage: type: persistent-claim size: 1000Gi #",
"storage: type: persistent-claim size: 500Gi class: my-storage-class",
"storage: type: persistent-claim size: 1Gi selector: hdd-type: ssd deleteClaim: true",
"apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka metadata: labels: app: my-cluster name: my-cluster namespace: myproject spec: # kafka: replicas: 3 storage: type: jbod volumes: - id: 0 type: persistent-claim size: 100Gi deleteClaim: false class: my-storage-class overrides: - broker: 0 class: my-storage-class-zone-1a - broker: 1 class: my-storage-class-zone-1b - broker: 2 class: my-storage-class-zone-1c # # zookeeper: replicas: 3 storage: deleteClaim: true size: 100Gi type: persistent-claim class: my-storage-class overrides: - broker: 0 class: my-storage-class-zone-1a - broker: 1 class: my-storage-class-zone-1b - broker: 2 class: my-storage-class-zone-1c #",
"/var/lib/kafka/data/kafka-log IDX",
"apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka metadata: name: my-cluster spec: kafka: # storage: type: persistent-claim size: 2000Gi class: my-storage-class # zookeeper: #",
"apply -f <kafka_configuration_file>",
"get pv",
"NAME CAPACITY CLAIM pvc-0ca459ce-... 2000Gi my-project/data-my-cluster-kafka-2 pvc-6e1810be-... 2000Gi my-project/data-my-cluster-kafka-0 pvc-82dc78c9-... 2000Gi my-project/data-my-cluster-kafka-1",
"apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka metadata: name: my-cluster spec: kafka: storage: type: jbod volumes: - id: 0 type: persistent-claim size: 100Gi deleteClaim: false - id: 1 type: persistent-claim size: 100Gi deleteClaim: false #",
"/var/lib/kafka/data- id /kafka-log idx",
"apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka metadata: name: my-cluster spec: kafka: # storage: type: jbod volumes: - id: 0 type: persistent-claim size: 100Gi deleteClaim: false - id: 1 type: persistent-claim size: 100Gi deleteClaim: false - id: 2 type: persistent-claim size: 100Gi deleteClaim: false # zookeeper: #",
"apply -f <kafka_configuration_file>",
"apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka metadata: name: my-cluster spec: kafka: # storage: type: jbod volumes: - id: 0 type: persistent-claim size: 100Gi deleteClaim: false # zookeeper: #",
"apply -f <kafka_configuration_file>",
"apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka metadata: name: my-cluster spec: kafka: tieredStorage: type: custom 1 remoteStorageManager: 2 className: com.example.kafka.tiered.storage.s3.S3RemoteStorageManager classPath: /opt/kafka/plugins/tiered-storage-s3/* config: storage.bucket.name: my-bucket 3 # config: rlmm.config.remote.log.metadata.topic.replication.factor: 1 4 #",
"apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka spec: kafka: # template: pod: affinity: podAntiAffinity: requiredDuringSchedulingIgnoredDuringExecution: - labelSelector: matchExpressions: - key: strimzi.io/name operator: In values: - CLUSTER-NAME -kafka topologyKey: \"kubernetes.io/hostname\" # zookeeper: # template: pod: affinity: podAntiAffinity: requiredDuringSchedulingIgnoredDuringExecution: - labelSelector: matchExpressions: - key: strimzi.io/name operator: In values: - CLUSTER-NAME -zookeeper topologyKey: \"kubernetes.io/hostname\" #",
"apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka spec: kafka: # template: pod: affinity: podAntiAffinity: requiredDuringSchedulingIgnoredDuringExecution: - labelSelector: matchExpressions: - key: strimzi.io/cluster operator: In values: - CLUSTER-NAME topologyKey: \"kubernetes.io/hostname\" # zookeeper: # template: pod: affinity: podAntiAffinity: requiredDuringSchedulingIgnoredDuringExecution: - labelSelector: matchExpressions: - key: strimzi.io/cluster operator: In values: - CLUSTER-NAME topologyKey: \"kubernetes.io/hostname\" #",
"apply -f <kafka_configuration_file>",
"apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka spec: kafka: # template: pod: affinity: podAntiAffinity: requiredDuringSchedulingIgnoredDuringExecution: - labelSelector: matchExpressions: - key: application operator: In values: - postgresql - mongodb topologyKey: \"kubernetes.io/hostname\" # zookeeper: #",
"apply -f <kafka_configuration_file>",
"label node NAME-OF-NODE node-type=fast-network",
"apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka spec: kafka: # template: pod: affinity: nodeAffinity: requiredDuringSchedulingIgnoredDuringExecution: nodeSelectorTerms: - matchExpressions: - key: node-type operator: In values: - fast-network # zookeeper: #",
"apply -f <kafka_configuration_file>",
"adm taint node NAME-OF-NODE dedicated=Kafka:NoSchedule",
"label node NAME-OF-NODE dedicated=Kafka",
"apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka spec: kafka: # template: pod: tolerations: - key: \"dedicated\" operator: \"Equal\" value: \"Kafka\" effect: \"NoSchedule\" affinity: nodeAffinity: requiredDuringSchedulingIgnoredDuringExecution: nodeSelectorTerms: - matchExpressions: - key: dedicated operator: In values: - Kafka # zookeeper: #",
"apply -f <kafka_configuration_file>",
"logging: type: inline loggers: kafka.root.logger.level: INFO",
"logging: type: external valueFrom: configMapKeyRef: name: my-config-map key: my-config-map-key",
"kind: ConfigMap apiVersion: v1 metadata: name: logging-configmap data: log4j.properties: kafka.root.logger.level=\"INFO\"",
"create configmap logging-configmap --from-file=log4j.properties",
"Define the logger kafka.root.logger.level=\"INFO\"",
"logging: type: external valueFrom: configMapKeyRef: name: logging-configmap key: log4j.properties",
"apply -f <kafka_configuration_file>",
"create -f install/cluster-operator/050-ConfigMap-strimzi-cluster-operator.yaml",
"edit configmap strimzi-cluster-operator",
"rootLogger.level=\"INFO\" appender.console.filter.filter1.type=MarkerFilter 1 appender.console.filter.filter1.onMatch=ACCEPT 2 appender.console.filter.filter1.onMismatch=DENY 3 appender.console.filter.filter1.marker=Kafka(my-namespace/my-kafka-cluster) 4",
"appender.console.filter.filter1.type=MarkerFilter appender.console.filter.filter1.onMatch=ACCEPT appender.console.filter.filter1.onMismatch=DENY appender.console.filter.filter1.marker=Kafka(my-namespace/my-kafka-cluster-1) appender.console.filter.filter2.type=MarkerFilter appender.console.filter.filter2.onMatch=ACCEPT appender.console.filter.filter2.onMismatch=DENY appender.console.filter.filter2.marker=Kafka(my-namespace/my-kafka-cluster-2)",
"kind: ConfigMap apiVersion: v1 metadata: name: strimzi-cluster-operator data: log4j2.properties: # appender.console.filter.filter1.type=MarkerFilter appender.console.filter.filter1.onMatch=ACCEPT appender.console.filter.filter1.onMismatch=DENY appender.console.filter.filter1.marker=Kafka(my-namespace/my-kafka-cluster)",
"edit configmap strimzi-cluster-operator",
"create -f install/cluster-operator/050-ConfigMap-strimzi-cluster-operator.yaml",
"kind: ConfigMap apiVersion: v1 metadata: name: logging-configmap data: log4j2.properties: rootLogger.level=\"INFO\" appender.console.filter.filter1.type=MarkerFilter appender.console.filter.filter1.onMatch=ACCEPT appender.console.filter.filter1.onMismatch=DENY appender.console.filter.filter1.marker=KafkaTopic(my-namespace/my-topic)",
"create configmap logging-configmap --from-file=log4j2.properties",
"Define the logger rootLogger.level=\"INFO\" Set the filters appender.console.filter.filter1.type=MarkerFilter appender.console.filter.filter1.onMatch=ACCEPT appender.console.filter.filter1.onMismatch=DENY appender.console.filter.filter1.marker=KafkaTopic(my-namespace/my-topic)",
"spec: # entityOperator: topicOperator: logging: type: external valueFrom: configMapKeyRef: name: logging-configmap key: log4j2.properties",
"create -f install/cluster-operator -n my-cluster-operator-namespace",
"logging: type: external valueFrom: configMapKeyRef: name: my-config-map key: my-config-map-key",
"apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaConnect metadata: name: my-connect spec: # externalConfiguration: env: - name: MY_ENVIRONMENT_VARIABLE valueFrom: configMapKeyRef: name: my-config-map key: my-key",
"apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaConnect metadata: name: my-connect annotations: strimzi.io/use-connector-resources: \"true\" spec: # config: # config.providers: env config.providers.env.class: org.apache.kafka.common.config.provider.EnvVarConfigProvider #",
"apiVersion: v1 kind: ConfigMap metadata: name: my-connector-configuration data: option1: value1 option2: value2",
"apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaConnect metadata: name: my-connect annotations: strimzi.io/use-connector-resources: \"true\" spec: # config: # config.providers: secrets,configmaps 1 config.providers.configmaps.class: io.strimzi.kafka.KubernetesConfigMapConfigProvider 2 config.providers.secrets.class: io.strimzi.kafka.KubernetesSecretConfigProvider 3 #",
"apply -f <kafka_connect_configuration_file>",
"apiVersion: rbac.authorization.k8s.io/v1 kind: Role metadata: name: connector-configuration-role rules: - apiGroups: [\"\"] resources: [\"configmaps\"] resourceNames: [\"my-connector-configuration\"] verbs: [\"get\"]",
"apiVersion: rbac.authorization.k8s.io/v1 kind: RoleBinding metadata: name: connector-configuration-role-binding subjects: - kind: ServiceAccount name: my-connect-connect namespace: my-project roleRef: kind: Role name: connector-configuration-role apiGroup: rbac.authorization.k8s.io",
"apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaConnector metadata: name: my-connector labels: strimzi.io/cluster: my-connect spec: # config: option: USD{configmaps:my-project/my-connector-configuration:option1} #",
"apiVersion: v1 kind: Secret metadata: name: aws-creds type: Opaque data: awsAccessKey: QUtJQVhYWFhYWFhYWFhYWFg= awsSecretAccessKey: Ylhsd1lYTnpkMjl5WkE=",
"apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaConnect metadata: name: my-connect annotations: strimzi.io/use-connector-resources: \"true\" spec: # config: # config.providers: env 1 config.providers.env.class: org.apache.kafka.common.config.provider.EnvVarConfigProvider 2 # externalConfiguration: env: - name: AWS_ACCESS_KEY_ID 3 valueFrom: secretKeyRef: name: aws-creds 4 key: awsAccessKey 5 - name: AWS_SECRET_ACCESS_KEY valueFrom: secretKeyRef: name: aws-creds key: awsSecretAccessKey #",
"apply -f <kafka_connect_configuration_file>",
"apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaConnector metadata: name: my-connector labels: strimzi.io/cluster: my-connect spec: # config: option: USD{env:AWS_ACCESS_KEY_ID} option: USD{env:AWS_SECRET_ACCESS_KEY} #",
"apiVersion: v1 kind: Secret metadata: name: mysecret type: Opaque stringData: connector.properties: |- 1 dbUsername: my-username 2 dbPassword: my-password",
"apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaConnect metadata: name: my-connect spec: # config: config.providers: file 1 config.providers.file.class: org.apache.kafka.common.config.provider.FileConfigProvider 2 # externalConfiguration: volumes: - name: connector-config 3 secret: secretName: mysecret 4",
"apply -f <kafka_connect_configuration_file>",
"apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaConnector metadata: name: my-source-connector labels: strimzi.io/cluster: my-connect-cluster spec: class: io.debezium.connector.mysql.MySqlConnector tasksMax: 2 config: database.hostname: 192.168.99.1 database.port: \"3306\" database.user: \"USD{file:/opt/kafka/external-configuration/connector-config/mysecret:dbUsername}\" database.password: \"USD{file:/opt/kafka/external-configuration/connector-config/mysecret:dbPassword}\" database.server.id: \"184054\" #",
"apiVersion: v1 kind: Secret metadata: name: my-user labels: strimzi.io/kind: KafkaUser strimzi.io/cluster: my-cluster type: Opaque data: ca.crt: <public_key> # Public key of the clients CA user.crt: <user_certificate> # Public key of the user user.key: <user_private_key> # Private key of the user user.p12: <store> # PKCS #12 store for user certificates and keys user.password: <password_for_store> # Protects the PKCS #12 store",
"apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaConnect metadata: name: my-connect spec: # config: config.providers: directory 1 config.providers.directory.class: org.apache.kafka.common.config.provider.DirectoryConfigProvider 2 # externalConfiguration: volumes: 3 - name: cluster-ca 4 secret: secretName: my-cluster-cluster-ca-cert 5 - name: my-user secret: secretName: my-user 6",
"apply -f <kafka_connect_configuration_file>",
"apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaConnector metadata: name: my-source-connector labels: strimzi.io/cluster: my-connect-cluster spec: class: io.debezium.connector.mysql.MySqlConnector tasksMax: 2 config: # database.history.producer.security.protocol: SSL database.history.producer.ssl.truststore.type: PEM database.history.producer.ssl.truststore.certificates: \"USD{directory:/opt/kafka/external-configuration/cluster-ca:ca.crt}\" database.history.producer.ssl.keystore.type: PEM database.history.producer.ssl.keystore.certificate.chain: \"USD{directory:/opt/kafka/external-configuration/my-user:user.crt}\" database.history.producer.ssl.keystore.key: \"USD{directory:/opt/kafka/external-configuration/my-user:user.key}\" #",
"apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka metadata: name: my-cluster labels: app: my-cluster spec: kafka: # template: pod: metadata: labels: mylabel: myvalue #",
"apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka metadata: name: my-cluster spec: kafka: # template: pod: terminationGracePeriodSeconds: 120 # #",
"apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaTopic metadata: name: topic-name-1 labels: strimzi.io/cluster: my-cluster spec: topicName: topic-name-1",
"apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaTopic metadata: name: my-topic-1 1 spec: topicName: My.Topic.1 2 #",
"apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaTopic metadata: name: my-topic---c55e57fe2546a33f9e603caf57165db4072e827e #",
"run kafka-admin -ti --image=registry.redhat.io/amq-streams/kafka-37-rhel9:2.7.0 --rm=true --restart=Never -- ./bin/kafka-topics.sh --bootstrap-server localhost:9092 --topic __strimzi-topic-operator-kstreams-topic-store-changelog --delete && ./bin/kafka-topics.sh --bootstrap-server localhost:9092 --topic __strimzi_store_topic --delete",
"apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaTopic metadata: name: my-topic-1 labels: strimzi.io/cluster: my-cluster spec: partitions: 10 replicas: 2",
"apply -f <topic_config_file>",
"get kafkatopics -o wide -w -n <namespace>",
"NAME CLUSTER PARTITIONS REPLICATION FACTOR READY my-topic-1 my-cluster 10 3 True my-topic-2 my-cluster 10 3 my-topic-3 my-cluster 10 3 True",
"get kafkatopics my-topic-2 -o yaml",
"status: conditions: - lastTransitionTime: \"2022-06-13T10:14:43.351550Z\" message: Number of partitions cannot be decreased reason: PartitionDecreaseException status: \"True\" type: NotReady",
"get kafkatopics my-topic-2 -o wide -w -n <namespace>",
"NAME CLUSTER PARTITIONS REPLICATION FACTOR READY my-topic-2 my-cluster 10 3 True",
"get kafkatopics my-topic-2 -o yaml",
"status: conditions: - lastTransitionTime: '2022-06-13T10:15:03.761084Z' status: 'True' type: Ready",
"apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaTopic metadata: name: my-topic labels: strimzi.io/cluster: my-cluster spec: partitions: 10 1 replicas: 3 2 config: min.insync.replicas: 2 3 #",
"annotate kafkatopic my-topic-1 strimzi.io/managed=\"false\"",
"get kafkatopics my-topic-1 -o yaml",
"apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaTopic metadata: generation: 124 name: my-topic-1 finalizer: strimzi.io/topic-operator labels: strimzi.io/cluster: my-cluster spec: partitions: 10 replicas: 2 status: observedGeneration: 124 1 topicName: my-topic-1 conditions: - type: Ready status: True lastTransitionTime: 20230301T103000Z",
"delete kafkatopic <kafka_topic_name>",
"apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaTopic metadata: name: my-topic-1 labels: strimzi.io/cluster: my-cluster spec: partitions: 10 replicas: 2",
"apply -f <topic_configuration_file>",
"get kafkatopics my-topic-1 -o yaml",
"apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaTopic metadata: generation: 1 name: my-topic-1 labels: strimzi.io/cluster: my-cluster spec: partitions: 10 replicas: 2 status: observedGeneration: 1 1 topicName: my-topic-1 conditions: - type: Ready status: True lastTransitionTime: 20230301T103000Z",
"apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaTopic metadata: generation: 1 name: my-topic-1 finalizers: - strimzi.io/topic-operator labels: strimzi.io/cluster: my-cluster",
"apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaTopic metadata: generation: 1 name: my-topic-1 finalizers: - strimzi.io/topic-operator labels: strimzi.io/cluster: my-cluster deletionTimestamp: 20230301T000000.000",
"delete USD(oc get kt -n <namespace_name> -o name | grep strimzi-store-topic) && oc delete USD(oc get kt -n <namespace_name> -o name | grep strimzi-topic-operator)",
"annotate USD(oc get kt -n <namespace_name> -o name | grep consumer-offsets) strimzi.io/managed=\"false\" && oc annotate USD(oc get kt -n <namespace_name> -o name | grep transaction-state) strimzi.io/managed=\"false\"",
"delete USD(oc get kt -n <namespace_name> -o name | grep consumer-offsets) && oc delete USD(oc get kt -n <namespace_name> -o name | grep transaction-state)",
"apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka metadata: name: my-cluster spec: # entityOperator: topicOperator: {} userOperator: {} template: topicOperatorContainer: env: - name: STRIMZI_USE_FINALIZERS value: \"false\"",
"apiVersion: apps/v1 kind: Deployment metadata: name: strimzi-topic-operator spec: template: spec: containers: - name: STRIMZI_USE_FINALIZERS value: \"false\"",
"get kt -o=json | jq '.items[].metadata.finalizers = null' | oc apply -f -",
"get kt <topic_name> -o=json | jq '.metadata.finalizers = null' | oc apply -f -",
"apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaUser metadata: name: my-user-1 labels: strimzi.io/cluster: my-cluster spec: authentication: type: tls authorization: type: simple acls: # Example consumer Acls for topic my-topic using consumer group my-group - resource: type: topic name: my-topic patternType: literal operations: - Describe - Read host: \"*\" - resource: type: group name: my-group patternType: literal operations: - Read host: \"*\" # Example Producer Acls for topic my-topic - resource: type: topic name: my-topic patternType: literal operations: - Create - Describe - Write host: \"*\"",
"apply -f <user_config_file>",
"get kafkausers -o wide -w -n <namespace>",
"NAME CLUSTER AUTHENTICATION AUTHORIZATION READY my-user-1 my-cluster tls simple True my-user-2 my-cluster tls simple my-user-3 my-cluster tls simple True",
"get kafkausers my-user-2 -o yaml",
"status: conditions: - lastTransitionTime: \"2022-06-10T10:07:37.238065Z\" message: Simple authorization ACL rules are configured but not supported in the Kafka cluster configuration. reason: InvalidResourceException status: \"True\" type: NotReady",
"apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka metadata: name: my-cluster spec: kafka: # authorization: type: simple",
"get kafkausers my-user-2 -o wide -w -n <namespace>",
"NAME CLUSTER AUTHENTICATION AUTHORIZATION READY my-user-2 my-cluster tls simple True",
"get kafkausers my-user-2 -o yaml",
"status: conditions: - lastTransitionTime: \"2022-06-10T10:33:40.166846Z\" status: \"True\" type: Ready",
"run kafka-producer -ti --image=registry.redhat.io/amq-streams/kafka-37-rhel9:2.7.0 --rm=true --restart=Never -- bin/kafka-console-producer.sh --bootstrap-server cluster-name -kafka-bootstrap:9092 --topic my-topic",
"run kafka-consumer -ti --image=registry.redhat.io/amq-streams/kafka-37-rhel9:2.7.0 --rm=true --restart=Never -- bin/kafka-console-consumer.sh --bootstrap-server cluster-name -kafka-bootstrap:9092 --topic my-topic --from-beginning",
"apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka metadata: name: my-cluster spec: kafka: # listeners: - name: plain port: 9092 type: internal tls: false configuration: useServiceDnsDomain: true - name: tls port: 9093 type: internal tls: true authentication: type: tls - name: external1 port: 9094 type: route tls: true configuration: brokerCertChainAndKey: secretName: my-secret certificate: my-certificate.crt key: my-key.key #",
"apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka metadata: name: my-cluster namespace: myproject spec: kafka: # listeners: 1 - name: external1 2 port: 9094 3 type: <listener_type> 4 tls: true 5 authentication: type: tls 6 configuration: 7 # authorization: 8 type: simple superUsers: - super-user-name 9 #",
"apply -f <kafka_configuration_file>",
"get kafka <kafka_cluster_name> -o=jsonpath='{.status.listeners[?(@.name==\" <listener_name> \")].bootstrapServers}{\"\\n\"}'",
"get kafka my-cluster -o=jsonpath='{.status.listeners[?(@.name==\"external\")].bootstrapServers}{\"\\n\"}'",
"apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaUser metadata: name: my-user labels: strimzi.io/cluster: my-cluster 1 spec: authentication: type: tls 2 authorization: type: simple acls: 3 - resource: type: topic name: my-topic patternType: literal operations: - Describe - Read - resource: type: group name: my-group patternType: literal operations: - Read",
"apply -f USER-CONFIG-FILE",
"apiVersion: v1 kind: Secret metadata: name: my-user labels: strimzi.io/kind: KafkaUser strimzi.io/cluster: my-cluster type: Opaque data: ca.crt: <public_key> # Public key of the clients CA user.crt: <user_certificate> # Public key of the user user.key: <user_private_key> # Private key of the user user.p12: <store> # PKCS #12 store for user certificates and keys user.password: <password_for_store> # Protects the PKCS #12 store",
"get secret <cluster_name> -cluster-ca-cert -o jsonpath='{.data.ca\\.crt}' | base64 -d > ca.crt",
"get secret <user_name> -o jsonpath='{.data.user\\.crt}' | base64 -d > user.crt",
"get secret <user_name> -o jsonpath='{.data.user\\.key}' | base64 -d > user.key",
"props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, \" <hostname>:<port> \");",
"props.put(CommonClientConfigs.SECURITY_PROTOCOL_CONFIG, \"SSL\"); props.put(SslConfigs.SSL_TRUSTSTORE_TYPE_CONFIG, \"PEM\"); props.put(SslConfigs.SSL_TRUSTSTORE_CERTIFICATES_CONFIG, \" <ca.crt_file_content> \");",
"props.put(CommonClientConfigs.SECURITY_PROTOCOL_CONFIG, \"SSL\"); props.put(SslConfigs.SSL_KEYSTORE_TYPE_CONFIG, \"PEM\"); props.put(SslConfigs.SSL_KEYSTORE_CERTIFICATE_CHAIN_CONFIG, \" <user.crt_file_content> \"); props.put(SslConfigs.SSL_KEYSTORE_KEY_CONFIG, \" <user.key_file_content> \");",
"props.put(SslConfigs.SSL_KEYSTORE_CERTIFICATE_CHAIN_CONFIG, \"-----BEGIN CERTIFICATE----- \\n <user_certificate_content_line_1> \\n <user_certificate_content_line_n> \\n-----END CERTIFICATE---\"); props.put(SslConfigs.SSL_KEYSTORE_KEY_CONFIG, \"----BEGIN PRIVATE KEY-----\\n <user_key_content_line_1> \\n <user_key_content_line_n> \\n-----END PRIVATE KEY-----\");",
"apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka metadata: labels: app: my-cluster name: my-cluster namespace: myproject spec: kafka: # listeners: - name: external4 port: 9094 type: nodeport tls: true authentication: type: tls # # zookeeper: #",
"apply -f <kafka_configuration_file>",
"NAME TYPE CLUSTER-IP PORT(S) my-cluster-kafka-external4-0 NodePort 172.30.55.13 9094:31789/TCP my-cluster-kafka-external4-1 NodePort 172.30.250.248 9094:30028/TCP my-cluster-kafka-external4-2 NodePort 172.30.115.81 9094:32650/TCP my-cluster-kafka-external4-bootstrap NodePort 172.30.30.23 9094:32650/TCP",
"status: clusterId: Y_RJQDGKRXmNF7fEcWldJQ conditions: - lastTransitionTime: '2023-01-31T14:59:37.113630Z' status: 'True' type: Ready kafkaVersion: 3.7.0 listeners: # - addresses: - host: ip-10-0-224-199.us-west-2.compute.internal port: 32650 bootstrapServers: 'ip-10-0-224-199.us-west-2.compute.internal:32650' certificates: - | -----BEGIN CERTIFICATE----- -----END CERTIFICATE----- name: external4 observedGeneration: 2 operatorLastSuccessfulVersion: 2.7 #",
"get kafka my-cluster -o=jsonpath='{.status.listeners[?(@.name==\"external4\")].bootstrapServers}{\"\\n\"}' ip-10-0-224-199.us-west-2.compute.internal:32650",
"get secret my-cluster-cluster-ca-cert -o jsonpath='{.data.ca\\.crt}' | base64 -d > ca.crt",
"apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka metadata: labels: app: my-cluster name: my-cluster namespace: myproject spec: kafka: # listeners: - name: external3 port: 9094 type: loadbalancer tls: true authentication: type: tls # # zookeeper: #",
"apply -f <kafka_configuration_file>",
"NAME TYPE CLUSTER-IP PORT(S) my-cluster-kafka-external3-0 LoadBalancer 172.30.204.234 9094:30011/TCP my-cluster-kafka-external3-1 LoadBalancer 172.30.164.89 9094:32544/TCP my-cluster-kafka-external3-2 LoadBalancer 172.30.73.151 9094:32504/TCP my-cluster-kafka-external3-bootstrap LoadBalancer 172.30.30.228 9094:30371/TCP NAME EXTERNAL-IP (loadbalancer) my-cluster-kafka-external3-0 a8a519e464b924000b6c0f0a05e19f0d-1132975133.us-west-2.elb.amazonaws.com my-cluster-kafka-external3-1 ab6adc22b556343afb0db5ea05d07347-611832211.us-west-2.elb.amazonaws.com my-cluster-kafka-external3-2 a9173e8ccb1914778aeb17eca98713c0-777597560.us-west-2.elb.amazonaws.com my-cluster-kafka-external3-bootstrap a8d4a6fb363bf447fb6e475fc3040176-36312313.us-west-2.elb.amazonaws.com",
"status: clusterId: Y_RJQDGKRXmNF7fEcWldJQ conditions: - lastTransitionTime: '2023-01-31T14:59:37.113630Z' status: 'True' type: Ready kafkaVersion: 3.7.0 listeners: # - addresses: - host: >- a8d4a6fb363bf447fb6e475fc3040176-36312313.us-west-2.elb.amazonaws.com port: 9094 bootstrapServers: >- a8d4a6fb363bf447fb6e475fc3040176-36312313.us-west-2.elb.amazonaws.com:9094 certificates: - | -----BEGIN CERTIFICATE----- -----END CERTIFICATE----- name: external3 observedGeneration: 2 operatorLastSuccessfulVersion: 2.7 #",
"status: loadBalancer: ingress: - hostname: >- a8d4a6fb363bf447fb6e475fc3040176-36312313.us-west-2.elb.amazonaws.com #",
"get kafka my-cluster -o=jsonpath='{.status.listeners[?(@.name==\"external3\")].bootstrapServers}{\"\\n\"}' a8d4a6fb363bf447fb6e475fc3040176-36312313.us-west-2.elb.amazonaws.com:9094",
"get secret my-cluster-cluster-ca-cert -o jsonpath='{.data.ca\\.crt}' | base64 -d > ca.crt",
"apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka metadata: labels: app: my-cluster name: my-cluster namespace: myproject spec: kafka: # listeners: - name: external1 port: 9094 type: route tls: true 1 authentication: type: tls # # zookeeper: #",
"apply -f <kafka_configuration_file>",
"NAME HOST/PORT SERVICES PORT TERMINATION my-cluster-kafka-external1-0 my-cluster-kafka-external1-0-my-project.router.com my-cluster-kafka-external1-0 9094 passthrough my-cluster-kafka-external1-1 my-cluster-kafka-external1-1-my-project.router.com my-cluster-kafka-external1-1 9094 passthrough my-cluster-kafka-external1-2 my-cluster-kafka-external1-2-my-project.router.com my-cluster-kafka-external1-2 9094 passthrough my-cluster-kafka-external1-bootstrap my-cluster-kafka-external1-bootstrap-my-project.router.com my-cluster-kafka-external1-bootstrap 9094 passthrough",
"status: ingress: - host: >- my-cluster-kafka-external1-bootstrap-my-project.router.com #",
"openssl s_client -connect my-cluster-kafka-external1-0-my-project.router.com:443 -servername my-cluster-kafka-external1-0-my-project.router.com -showcerts",
"Certificate chain 0 s:O = io.strimzi, CN = my-cluster-kafka i:O = io.strimzi, CN = cluster-ca v0",
"get kafka my-cluster -o=jsonpath='{.status.listeners[?(@.name==\"external1\")].bootstrapServers}{\"\\n\"}' my-cluster-kafka-external1-bootstrap-my-project.router.com:443",
"get secret my-cluster-cluster-ca-cert -o jsonpath='{.data.ca\\.crt}' | base64 -d > ca.crt",
"apiVersion: v1 kind: Service metadata: annotations: strimzi.io/discovery: |- [ { \"port\" : 9092, \"tls\" : false, \"protocol\" : \"kafka\", \"auth\" : \"scram-sha-512\" }, { \"port\" : 9093, \"tls\" : true, \"protocol\" : \"kafka\", \"auth\" : \"tls\" } ] labels: strimzi.io/cluster: my-cluster strimzi.io/discovery: \"true\" strimzi.io/kind: Kafka strimzi.io/name: my-cluster-kafka-bootstrap name: my-cluster-kafka-bootstrap spec: #",
"apiVersion: v1 kind: Service metadata: annotations: strimzi.io/discovery: |- [ { \"port\" : 8080, \"tls\" : false, \"auth\" : \"none\", \"protocol\" : \"http\" } ] labels: strimzi.io/cluster: my-bridge strimzi.io/discovery: \"true\" strimzi.io/kind: KafkaBridge strimzi.io/name: my-bridge-bridge-service",
"get service -l strimzi.io/discovery=true",
"apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka metadata: name: my-cluster namespace: myproject spec: kafka: # listeners: - name: plain port: 9092 type: internal tls: true authentication: type: scram-sha-512 - name: tls port: 9093 type: internal tls: true authentication: type: tls - name: external3 port: 9094 type: loadbalancer tls: true authentication: type: tls",
"apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka metadata: name: my-cluster namespace: myproject spec: kafka: # authorization: type: simple superUsers: - CN=client_1 - user_2 - CN=client_3 - CN=client_4,OU=my_ou,O=my_org,L=my_location,ST=my_state,C=US - CN=client_5,OU=my_ou,O=my_org,C=GB - CN=client_6,O=my_org #",
"apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaUser metadata: name: my-user labels: strimzi.io/cluster: my-cluster",
"apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaUser metadata: name: my-user labels: strimzi.io/cluster: my-cluster spec: authentication: type: tls #",
"apiVersion: v1 kind: Secret metadata: name: my-user labels: strimzi.io/kind: KafkaUser strimzi.io/cluster: my-cluster type: Opaque data: ca.crt: <public_key> # Public key of the clients CA user.crt: <user_certificate> # Public key of the user user.key: <user_private_key> # Private key of the user user.p12: <store> # PKCS #12 store for user certificates and keys user.password: <password_for_store> # Protects the PKCS #12 store",
"bootstrap.servers= <kafka_cluster_name> -kafka-bootstrap:9093 1 security.protocol=SSL 2 ssl.truststore.location=/tmp/ca.p12 3 ssl.truststore.password= <truststore_password> 4 ssl.keystore.location=/tmp/user.p12 5 ssl.keystore.password= <keystore_password> 6",
"apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaUser metadata: name: my-user labels: strimzi.io/cluster: my-cluster spec: authentication: type: tls-external #",
"apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaUser metadata: name: my-user labels: strimzi.io/cluster: my-cluster spec: authentication: type: scram-sha-512 #",
"apiVersion: v1 kind: Secret metadata: name: my-user labels: strimzi.io/kind: KafkaUser strimzi.io/cluster: my-cluster type: Opaque data: password: Z2VuZXJhdGVkcGFzc3dvcmQ= 1 sasl.jaas.config: b3JnLmFwYWNoZS5rYWZrYS5jb21tb24uc2VjdXJpdHkuc2NyYW0uU2NyYW1Mb2dpbk1vZHVsZSByZXF1aXJlZCB1c2VybmFtZT0ibXktdXNlciIgcGFzc3dvcmQ9ImdlbmVyYXRlZHBhc3N3b3JkIjsK 2",
"echo \"Z2VuZXJhdGVkcGFzc3dvcmQ=\" | base64 --decode",
"apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaUser metadata: name: my-user labels: strimzi.io/cluster: my-cluster spec: authentication: type: scram-sha-512 password: valueFrom: secretKeyRef: name: my-secret 1 key: my-password 2 #",
"apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaUser metadata: name: my-user labels: strimzi.io/cluster: my-cluster spec: # quotas: producerByteRate: 1048576 1 consumerByteRate: 2097152 2 requestPercentage: 55 3 controllerMutationRate: 10 4",
"apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka spec: kafka: # authorization: 1 type: simple superUsers: 2 - CN=client_1 - user_2 - CN=client_3 listeners: - name: tls port: 9093 type: internal tls: true authentication: type: tls 3 # zookeeper: #",
"apply -f <kafka_configuration_file>",
"apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaUser metadata: name: my-user labels: strimzi.io/cluster: my-cluster spec: authentication: 1 type: tls authorization: type: simple 2 acls: - resource: type: topic name: my-topic patternType: literal operations: - Describe - Read - resource: type: group name: my-group patternType: literal operations: - Read",
"apply -f <user_config_file>",
"apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka spec: kafka: # listeners: - name: tls port: 9093 type: internal tls: true authentication: type: tls networkPolicyPeers: - podSelector: matchLabels: app: kafka-client # zookeeper: #",
"apply -f your-file",
"create secret generic my-secret --from-file= my-listener-key.key --from-file= my-listener-certificate.crt",
"listeners: - name: plain port: 9092 type: internal tls: false - name: external3 port: 9094 type: loadbalancer tls: true configuration: brokerCertChainAndKey: secretName: my-secret certificate: my-listener-certificate.crt key: my-listener-key.key",
"listeners: - name: plain port: 9092 type: internal tls: false - name: tls port: 9093 type: internal tls: true configuration: brokerCertChainAndKey: secretName: my-secret certificate: my-listener-certificate.crt key: my-listener-key.key",
"apply -f kafka.yaml",
"//Kafka brokers *. <cluster-name> -kafka-brokers *. <cluster-name> -kafka-brokers. <namespace> .svc // Bootstrap service <cluster-name> -kafka-bootstrap <cluster-name> -kafka-bootstrap. <namespace> .svc",
"// Kafka brokers <cluster-name> -kafka-0. <cluster-name> -kafka-brokers <cluster-name> -kafka-0. <cluster-name> -kafka-brokers. <namespace> .svc <cluster-name> -kafka-1. <cluster-name> -kafka-brokers <cluster-name> -kafka-1. <cluster-name> -kafka-brokers. <namespace> .svc // Bootstrap service <cluster-name> -kafka-bootstrap <cluster-name> -kafka-bootstrap. <namespace> .svc",
"// Kafka brokers <cluster-name> -kafka- <listener-name> -0 <cluster-name> -kafka- <listener-name> -0. <namespace> .svc <cluster-name> -kafka- <listener-name> -1 <cluster-name> -kafka- <listener-name> -1. <namespace> .svc // Bootstrap service <cluster-name> -kafka- <listener-name> -bootstrap <cluster-name> -kafka- <listener-name> -bootstrap. <namespace> .svc",
"authentication: type: oauth # enableOauthBearer: true",
"authentication: type: oauth # enablePlain: true tokenEndpointUri: https:// OAUTH-SERVER-ADDRESS /auth/realms/external/protocol/openid-connect/token",
"apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka spec: kafka: # listeners: - name: tls port: 9093 type: internal tls: true authentication: type: oauth #",
"listeners: - name: external3 port: 9094 type: loadbalancer tls: true authentication: type: oauth #",
"apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka spec: kafka: # listeners: - name: tls port: 9093 type: internal tls: true authentication: type: oauth validIssuerUri: <https://<auth_server_address>/auth/realms/tls> jwksEndpointUri: <https://<auth_server_address>/auth/realms/tls/protocol/openid-connect/certs> userNameClaim: preferred_username maxSecondsWithoutReauthentication: 3600 tlsTrustedCertificates: - secretName: oauth-server-cert certificate: ca.crt #",
"apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka spec: kafka: listeners: - name: tls port: 9093 type: internal tls: true authentication: type: oauth clientId: kafka-broker clientSecret: secretName: my-cluster-oauth key: clientSecret validIssuerUri: <https://<auth_server_-_address>/auth/realms/tls> introspectionEndpointUri: <https://<auth_server_address>/auth/realms/tls/protocol/openid-connect/token/introspect> userNameClaim: preferred_username maxSecondsWithoutReauthentication: 3600 tlsTrustedCertificates: - secretName: oauth-server-cert certificate: ca.crt",
"edit kafka my-cluster",
"# - name: external3 port: 9094 type: loadbalancer tls: true authentication: type: oauth 1 validIssuerUri: https://<auth_server_address>/auth/realms/external 2 jwksEndpointUri: https://<auth_server_address>/auth/realms/external/protocol/openid-connect/certs 3 userNameClaim: preferred_username 4 maxSecondsWithoutReauthentication: 3600 5 tlsTrustedCertificates: 6 - secretName: oauth-server-cert certificate: ca.crt disableTlsHostnameVerification: true 7 jwksExpirySeconds: 360 8 jwksRefreshSeconds: 300 9 jwksMinRefreshPauseSeconds: 1 10",
"- name: external3 port: 9094 type: loadbalancer tls: true authentication: type: oauth validIssuerUri: https://<auth_server_address>/auth/realms/external introspectionEndpointUri: https://<auth_server_address>/auth/realms/external/protocol/openid-connect/token/introspect 1 clientId: kafka-broker 2 clientSecret: 3 secretName: my-cluster-oauth key: clientSecret userNameClaim: preferred_username 4 maxSecondsWithoutReauthentication: 3600 5",
"authentication: type: oauth # checkIssuer: false 1 checkAudience: true 2 fallbackUserNameClaim: client_id 3 fallbackUserNamePrefix: client-account- 4 validTokenType: bearer 5 userInfoEndpointUri: https://<auth_server_address>/auth/realms/external/protocol/openid-connect/userinfo 6 enableOauthBearer: false 7 enablePlain: true 8 tokenEndpointUri: https://<auth_server_address>/auth/realms/external/protocol/openid-connect/token 9 customClaimCheck: \"@.custom == 'custom-value'\" 10 clientAudience: audience 11 clientScope: scope 12 connectTimeoutSeconds: 60 13 readTimeoutSeconds: 60 14 httpRetries: 2 15 httpRetryPauseMs: 300 16 groupsClaim: \"USD.groups\" 17 groupsClaimDelimiter: \",\" 18 includeAcceptHeader: false 19",
"logs -f USD{POD_NAME} -c USD{CONTAINER_NAME} get pod -w",
"<dependency> <groupId>io.strimzi</groupId> <artifactId>kafka-oauth-client</artifactId> <version>0.15.0.redhat-00007</version> </dependency>",
"security.protocol=SASL_SSL 1 sasl.mechanism=OAUTHBEARER 2 ssl.truststore.location=/tmp/truststore.p12 3 ssl.truststore.password=USDSTOREPASS ssl.truststore.type=PKCS12 sasl.jaas.config=org.apache.kafka.common.security.oauthbearer.OAuthBearerLoginModule required oauth.token.endpoint.uri=\"<token_endpoint_url>\" \\ 4 oauth.client.id=\"<client_id>\" \\ 5 oauth.client.secret=\"<client_secret>\" \\ 6 oauth.ssl.truststore.location=\"/tmp/oauth-truststore.p12\" \\ 7 oauth.ssl.truststore.password=\"USDSTOREPASS\" \\ 8 oauth.ssl.truststore.type=\"PKCS12\" \\ 9 oauth.scope=\"<scope>\" \\ 10 oauth.audience=\"<audience>\" ; 11 sasl.login.callback.handler.class=io.strimzi.kafka.oauth.client.JaasClientOauthLoginCallbackHandler",
"security.protocol=SASL_SSL sasl.mechanism=OAUTHBEARER ssl.truststore.location=/tmp/truststore.p12 ssl.truststore.password=USDSTOREPASS ssl.truststore.type=PKCS12 sasl.jaas.config=org.apache.kafka.common.security.oauthbearer.OAuthBearerLoginModule required oauth.token.endpoint.uri=\"<token_endpoint_url>\" oauth.client.id=\"<client_id>\" \\ 1 oauth.client.secret=\"<client_secret>\" \\ 2 oauth.password.grant.username=\"<username>\" \\ 3 oauth.password.grant.password=\"<password>\" \\ 4 oauth.ssl.truststore.location=\"/tmp/oauth-truststore.p12\" oauth.ssl.truststore.password=\"USDSTOREPASS\" oauth.ssl.truststore.type=\"PKCS12\" oauth.scope=\"<scope>\" oauth.audience=\"<audience>\" ; sasl.login.callback.handler.class=io.strimzi.kafka.oauth.client.JaasClientOauthLoginCallbackHandler",
"security.protocol=SASL_SSL sasl.mechanism=OAUTHBEARER ssl.truststore.location=/tmp/truststore.p12 ssl.truststore.password=USDSTOREPASS ssl.truststore.type=PKCS12 sasl.jaas.config=org.apache.kafka.common.security.oauthbearer.OAuthBearerLoginModule required oauth.token.endpoint.uri=\"<token_endpoint_url>\" oauth.access.token=\"<access_token>\" \\ 1 oauth.ssl.truststore.location=\"/tmp/oauth-truststore.p12\" oauth.ssl.truststore.password=\"USDSTOREPASS\" oauth.ssl.truststore.type=\"PKCS12\" ; sasl.login.callback.handler.class=io.strimzi.kafka.oauth.client.JaasClientOauthLoginCallbackHandler",
"security.protocol=SASL_SSL sasl.mechanism=OAUTHBEARER ssl.truststore.location=/tmp/truststore.p12 ssl.truststore.password=USDSTOREPASS ssl.truststore.type=PKCS12 sasl.jaas.config=org.apache.kafka.common.security.oauthbearer.OAuthBearerLoginModule required oauth.token.endpoint.uri=\"<token_endpoint_url>\" oauth.client.id=\"<client_id>\" \\ 1 oauth.client.secret=\"<client_secret>\" \\ 2 oauth.refresh.token=\"<refresh_token>\" \\ 3 oauth.ssl.truststore.location=\"/tmp/oauth-truststore.p12\" oauth.ssl.truststore.password=\"USDSTOREPASS\" oauth.ssl.truststore.type=\"PKCS12\" ; sasl.login.callback.handler.class=io.strimzi.kafka.oauth.client.JaasClientOauthLoginCallbackHandler",
"Properties props = new Properties(); try (FileReader reader = new FileReader(\"client.properties\", StandardCharsets.UTF_8)) { props.load(reader); }",
"apiVersion: kafka.strimzi.io/v1beta2 kind: Secret metadata: name: my-bridge-oauth type: Opaque data: clientSecret: MGQ1OTRmMzYtZTllZS00MDY2LWI5OGEtMTM5MzM2NjdlZjQw 1",
"apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaBridge metadata: name: my-bridge spec: # authentication: type: oauth 1 tokenEndpointUri: https://<auth-server-address>/auth/realms/master/protocol/openid-connect/token 2 clientId: kafka-bridge clientSecret: secretName: my-bridge-oauth key: clientSecret tlsTrustedCertificates: 3 - secretName: oauth-server-cert certificate: tls.crt",
"spec: # authentication: # disableTlsHostnameVerification: true 1 checkAccessTokenType: false 2 accessTokenIsJwt: false 3 scope: any 4 audience: kafka 5 connectTimeoutSeconds: 60 6 readTimeoutSeconds: 60 7 httpRetries: 2 8 httpRetryPauseMs: 300 9 includeAcceptHeader: false 10",
"apply -f your-file",
"logs -f USD{POD_NAME} -c USD{CONTAINER_NAME} get pod -w",
"edit kafka my-cluster",
"apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka metadata: name: my-cluster spec: kafka: # authorization: type: keycloak 1 tokenEndpointUri: < https://<auth-server-address>/auth/realms/external/protocol/openid-connect/token > 2 clientId: kafka 3 delegateToKafkaAcls: false 4 disableTlsHostnameVerification: false 5 superUsers: 6 - CN=fred - sam - CN=edward tlsTrustedCertificates: 7 - secretName: oauth-server-cert certificate: ca.crt grantsRefreshPeriodSeconds: 60 8 grantsRefreshPoolSize: 5 9 grantsMaxIdleSeconds: 300 10 grantsGcPeriodSeconds: 300 11 grantsAlwaysLatest: false 12 connectTimeoutSeconds: 60 13 readTimeoutSeconds: 60 14 httpRetries: 2 15 enableMetrics: false 16 includeAcceptHeader: false 17 #",
"logs -f USD{POD_NAME} -c kafka get pod -w",
"Topic:my-topic Topic:orders-* Group:orders-* Cluster:*",
"kafka-cluster:my-cluster,Topic:* kafka-cluster:*,Group:b_*",
"bin/kafka-topics.sh --create --topic my-topic --bootstrap-server my-cluster-kafka-bootstrap:9092 --command-config=/tmp/config.properties",
"bin/kafka-topics.sh --list --bootstrap-server my-cluster-kafka-bootstrap:9092 --command-config=/tmp/config.properties",
"bin/kafka-topics.sh --describe --topic my-topic --bootstrap-server my-cluster-kafka-bootstrap:9092 --command-config=/tmp/config.properties",
"bin/kafka-console-producer.sh --topic my-topic --bootstrap-server my-cluster-kafka-bootstrap:9092 --producer.config=/tmp/config.properties",
"Topic:my-topic Group:my-group-*",
"bin/kafka-console-consumer.sh --topic my-topic --group my-group-1 --from-beginning --bootstrap-server my-cluster-kafka-bootstrap:9092 --consumer.config /tmp/config.properties",
"Topic:my-topic Cluster:kafka-cluster",
"bin/kafka-console-producer.sh --topic my-topic --bootstrap-server my-cluster-kafka-bootstrap:9092 --producer.config=/tmp/config.properties --producer-property enable.idempotence=true --request-required-acks -1",
"bin/kafka-consumer-groups.sh --list --bootstrap-server my-cluster-kafka-bootstrap:9092 --command-config=/tmp/config.properties",
"bin/kafka-consumer-groups.sh --describe --group my-group-1 --bootstrap-server my-cluster-kafka-bootstrap:9092 --command-config=/tmp/config.properties",
"bin/kafka-topics.sh --alter --topic my-topic --partitions 2 --bootstrap-server my-cluster-kafka-bootstrap:9092 --command-config=/tmp/config.properties",
"bin/kafka-configs.sh --entity-type brokers --entity-name 0 --describe --all --bootstrap-server my-cluster-kafka-bootstrap:9092 --command-config=/tmp/config.properties",
"bin/kafka-configs --entity-type brokers --entity-name 0 --alter --add-config log.cleaner.threads=2 --bootstrap-server my-cluster-kafka-bootstrap:9092 --command-config=/tmp/config.properties",
"bin/kafka-topics.sh --delete --topic my-topic --bootstrap-server my-cluster-kafka-bootstrap:9092 --command-config=/tmp/config.properties",
"bin/kafka-leader-election.sh --topic my-topic --partition 0 --election-type PREFERRED / --bootstrap-server my-cluster-kafka-bootstrap:9092 --admin.config /tmp/config.properties",
"bin/kafka-reassign-partitions.sh --topics-to-move-json-file /tmp/topics-to-move.json --broker-list \"0,1\" --generate --bootstrap-server my-cluster-kafka-bootstrap:9092 --command-config /tmp/config.properties > /tmp/partition-reassignment.json",
"bin/kafka-reassign-partitions.sh --reassignment-json-file /tmp/partition-reassignment.json --execute --bootstrap-server my-cluster-kafka-bootstrap:9092 --command-config /tmp/config.properties",
"bin/kafka-reassign-partitions.sh --reassignment-json-file /tmp/partition-reassignment.json --verify --bootstrap-server my-cluster-kafka-bootstrap:9092 --command-config /tmp/config.properties",
"NS=sso get ingress keycloak -n USDNS",
"get -n USDNS pod keycloak-0 -o yaml | less",
"SECRET_NAME=credential-keycloak get -n USDNS secret USDSECRET_NAME -o yaml | grep PASSWORD | awk '{print USD2}' | base64 -D",
"Dev Team A can write to topics that start with x_ on any cluster Dev Team B can read from topics that start with x_ on any cluster Dev Team B can update consumer group offsets that start with x_ on any cluster ClusterManager of my-cluster Group has full access to cluster config on my-cluster ClusterManager of my-cluster Group has full access to consumer groups on my-cluster ClusterManager of my-cluster Group has full access to topics on my-cluster",
"SSO_HOST= SSO-HOSTNAME SSO_HOST_PORT=USDSSO_HOST:443 STOREPASS=storepass echo \"Q\" | openssl s_client -showcerts -connect USDSSO_HOST_PORT 2>/dev/null | awk ' /BEGIN CERTIFICATE/,/END CERTIFICATE/ { print USD0 } ' > /tmp/sso.pem",
"split -p \"-----BEGIN CERTIFICATE-----\" sso.pem sso- for f in USD(ls sso-*); do mv USDf USDf.pem; done cp USD(ls sso-* | sort -r | head -n 1) sso-ca.crt",
"create secret generic oauth-server-cert --from-file=/tmp/sso-ca.crt -n USDNS",
"SSO_HOST= SSO-HOSTNAME",
"cat examples/security/keycloak-authorization/kafka-ephemeral-oauth-single-keycloak-authz.yaml | sed -E 's#\\USD{SSO_HOST}'\"#USDSSO_HOST#\" | oc create -n USDNS -f -",
"NS=sso run -ti --restart=Never --image=registry.redhat.io/amq-streams/kafka-37-rhel9:2.7.0 kafka-cli -n USDNS -- /bin/sh",
"attach -ti kafka-cli -n USDNS",
"SSO_HOST= SSO-HOSTNAME SSO_HOST_PORT=USDSSO_HOST:443 STOREPASS=storepass echo \"Q\" | openssl s_client -showcerts -connect USDSSO_HOST_PORT 2>/dev/null | awk ' /BEGIN CERTIFICATE/,/END CERTIFICATE/ { print USD0 } ' > /tmp/sso.pem",
"split -p \"-----BEGIN CERTIFICATE-----\" sso.pem sso- for f in USD(ls sso-*); do mv USDf USDf.pem; done cp USD(ls sso-* | sort -r | head -n 1) sso-ca.crt",
"keytool -keystore /tmp/truststore.p12 -storetype pkcs12 -alias sso -storepass USDSTOREPASS -import -file /tmp/sso-ca.crt -noprompt",
"KAFKA_HOST_PORT=my-cluster-kafka-bootstrap:9093 STOREPASS=storepass echo \"Q\" | openssl s_client -showcerts -connect USDKAFKA_HOST_PORT 2>/dev/null | awk ' /BEGIN CERTIFICATE/,/END CERTIFICATE/ { print USD0 } ' > /tmp/my-cluster-kafka.pem",
"split -p \"-----BEGIN CERTIFICATE-----\" /tmp/my-cluster-kafka.pem kafka- for f in USD(ls kafka-*); do mv USDf USDf.pem; done cp USD(ls kafka-* | sort -r | head -n 1) my-cluster-kafka-ca.crt",
"keytool -keystore /tmp/truststore.p12 -storetype pkcs12 -alias my-cluster-kafka -storepass USDSTOREPASS -import -file /tmp/my-cluster-kafka-ca.crt -noprompt",
"SSO_HOST= SSO-HOSTNAME cat > /tmp/team-a-client.properties << EOF security.protocol=SASL_SSL ssl.truststore.location=/tmp/truststore.p12 ssl.truststore.password=USDSTOREPASS ssl.truststore.type=PKCS12 sasl.mechanism=OAUTHBEARER sasl.jaas.config=org.apache.kafka.common.security.oauthbearer.OAuthBearerLoginModule required oauth.client.id=\"team-a-client\" oauth.client.secret=\"team-a-client-secret\" oauth.ssl.truststore.location=\"/tmp/truststore.p12\" oauth.ssl.truststore.password=\"USDSTOREPASS\" oauth.ssl.truststore.type=\"PKCS12\" oauth.token.endpoint.uri=\"https://USDSSO_HOST/auth/realms/kafka-authz/protocol/openid-connect/token\" ; sasl.login.callback.handler.class=io.strimzi.kafka.oauth.client.JaasClientOauthLoginCallbackHandler EOF",
"cat > /tmp/team-b-client.properties << EOF security.protocol=SASL_SSL ssl.truststore.location=/tmp/truststore.p12 ssl.truststore.password=USDSTOREPASS ssl.truststore.type=PKCS12 sasl.mechanism=OAUTHBEARER sasl.jaas.config=org.apache.kafka.common.security.oauthbearer.OAuthBearerLoginModule required oauth.client.id=\"team-b-client\" oauth.client.secret=\"team-b-client-secret\" oauth.ssl.truststore.location=\"/tmp/truststore.p12\" oauth.ssl.truststore.password=\"USDSTOREPASS\" oauth.ssl.truststore.type=\"PKCS12\" oauth.token.endpoint.uri=\"https://USDSSO_HOST/auth/realms/kafka-authz/protocol/openid-connect/token\" ; sasl.login.callback.handler.class=io.strimzi.kafka.oauth.client.JaasClientOauthLoginCallbackHandler EOF",
"USERNAME=alice PASSWORD=alice-password GRANT_RESPONSE=USD(curl -X POST \"https://USDSSO_HOST/auth/realms/kafka-authz/protocol/openid-connect/token\" -H 'Content-Type: application/x-www-form-urlencoded' -d \"grant_type=password&username=USDUSERNAME&password=USDPASSWORD&client_id=kafka-cli&scope=offline_access\" -s -k) REFRESH_TOKEN=USD(echo USDGRANT_RESPONSE | awk -F \"refresh_token\\\":\\\"\" '{printf USD2}' | awk -F \"\\\"\" '{printf USD1}')",
"cat > /tmp/alice.properties << EOF security.protocol=SASL_SSL ssl.truststore.location=/tmp/truststore.p12 ssl.truststore.password=USDSTOREPASS ssl.truststore.type=PKCS12 sasl.mechanism=OAUTHBEARER sasl.jaas.config=org.apache.kafka.common.security.oauthbearer.OAuthBearerLoginModule required oauth.refresh.token=\"USDREFRESH_TOKEN\" oauth.client.id=\"kafka-cli\" oauth.ssl.truststore.location=\"/tmp/truststore.p12\" oauth.ssl.truststore.password=\"USDSTOREPASS\" oauth.ssl.truststore.type=\"PKCS12\" oauth.token.endpoint.uri=\"https://USDSSO_HOST/auth/realms/kafka-authz/protocol/openid-connect/token\" ; sasl.login.callback.handler.class=io.strimzi.kafka.oauth.client.JaasClientOauthLoginCallbackHandler EOF",
"bin/kafka-console-producer.sh --bootstrap-server my-cluster-kafka-bootstrap:9093 --topic my-topic --producer.config=/tmp/team-a-client.properties First message",
"bin/kafka-console-producer.sh --bootstrap-server my-cluster-kafka-bootstrap:9093 --topic a_messages --producer.config /tmp/team-a-client.properties First message Second message",
"logs my-cluster-kafka-0 -f -n USDNS",
"bin/kafka-console-consumer.sh --bootstrap-server my-cluster-kafka-bootstrap:9093 --topic a_messages --from-beginning --consumer.config /tmp/team-a-client.properties",
"bin/kafka-console-consumer.sh --bootstrap-server my-cluster-kafka-bootstrap:9093 --topic a_messages --from-beginning --consumer.config /tmp/team-a-client.properties --group a_consumer_group_1",
"bin/kafka-topics.sh --bootstrap-server my-cluster-kafka-bootstrap:9093 --command-config /tmp/team-a-client.properties --list",
"bin/kafka-consumer-groups.sh --bootstrap-server my-cluster-kafka-bootstrap:9093 --command-config /tmp/team-a-client.properties --list",
"bin/kafka-configs.sh --bootstrap-server my-cluster-kafka-bootstrap:9093 --command-config /tmp/team-a-client.properties --entity-type brokers --describe --entity-default",
"bin/kafka-console-producer.sh --bootstrap-server my-cluster-kafka-bootstrap:9093 --topic a_messages --producer.config /tmp/team-b-client.properties Message 1",
"bin/kafka-console-producer.sh --bootstrap-server my-cluster-kafka-bootstrap:9093 --topic b_messages --producer.config /tmp/team-b-client.properties Message 1 Message 2 Message 3",
"bin/kafka-console-producer.sh --bootstrap-server my-cluster-kafka-bootstrap:9093 --topic x_messages --producer.config /tmp/team-b-client.properties Message 1",
"bin/kafka-console-producer.sh --bootstrap-server my-cluster-kafka-bootstrap:9093 --topic x_messages --producer.config /tmp/team-a-client.properties Message 1",
"bin/kafka-topics.sh --bootstrap-server my-cluster-kafka-bootstrap:9093 --command-config /tmp/alice.properties --topic x_messages --create --replication-factor 1 --partitions 1",
"bin/kafka-topics.sh --bootstrap-server my-cluster-kafka-bootstrap:9093 --command-config /tmp/alice.properties --list bin/kafka-topics.sh --bootstrap-server my-cluster-kafka-bootstrap:9093 --command-config /tmp/team-a-client.properties --list bin/kafka-topics.sh --bootstrap-server my-cluster-kafka-bootstrap:9093 --command-config /tmp/team-b-client.properties --list",
"bin/kafka-console-producer.sh --bootstrap-server my-cluster-kafka-bootstrap:9093 --topic x_messages --producer.config /tmp/team-a-client.properties Message 1 Message 2 Message 3",
"bin/kafka-console-producer.sh --bootstrap-server my-cluster-kafka-bootstrap:9093 --topic x_messages --producer.config /tmp/team-b-client.properties Message 4 Message 5",
"bin/kafka-console-consumer.sh --bootstrap-server my-cluster-kafka-bootstrap:9093 --topic x_messages --from-beginning --consumer.config /tmp/team-b-client.properties --group x_consumer_group_b",
"bin/kafka-console-consumer.sh --bootstrap-server my-cluster-kafka-bootstrap:9093 --topic x_messages --from-beginning --consumer.config /tmp/team-a-client.properties --group x_consumer_group_a",
"bin/kafka-console-consumer.sh --bootstrap-server my-cluster-kafka-bootstrap:9093 --topic x_messages --from-beginning --consumer.config /tmp/team-a-client.properties --group a_consumer_group_a",
"bin/kafka-console-consumer.sh --bootstrap-server my-cluster-kafka-bootstrap:9093 --topic x_messages --from-beginning --consumer.config /tmp/alice.properties",
"bin/kafka-configs.sh --bootstrap-server my-cluster-kafka-bootstrap:9093 --command-config /tmp/alice.properties --entity-type brokers --describe --entity-default",
"apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka metadata: name: my-cluster spec: kafka: # template: clusterCaCert: metadata: labels: label1: value1 label2: value2 annotations: annotation1: value1 annotation2: value2 #",
"apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka spec: clusterCa: generateSecretOwnerReference: false clientsCa: generateSecretOwnerReference: false",
"Not Before Not After | | |<--------------- validityDays --------------->| <--- renewalDays --->|",
"apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka spec: clusterCa: renewalDays: 30 validityDays: 365 generateCertificateAuthority: true clientsCa: renewalDays: 30 validityDays: 365 generateCertificateAuthority: true",
"annotate secret my-cluster-cluster-ca-cert -n my-project strimzi.io/force-renew=\"true\"",
"annotate secret my-cluster-clients-ca-cert -n my-project strimzi.io/force-renew=\"true\"",
"get secret my-cluster-cluster-ca-cert -n my-project -o=jsonpath='{.data.ca\\.crt}' | base64 -d | openssl x509 -noout -dates",
"get secret my-cluster-clients-ca-cert -n my-project -o=jsonpath='{.data.ca\\.crt}' | base64 -d | openssl x509 -noout -dates",
"delete secret my-cluster-cluster-ca-cert -n my-project",
"delete secret my-cluster-clients-ca-cert -n my-project",
"get secret my-cluster-cluster-ca-cert -n my-project -o=jsonpath='{.data.ca\\.crt}' | base64 -d | openssl x509 -noout -dates",
"get secret my-cluster-clients-ca-cert -n my-project -o=jsonpath='{.data.ca\\.crt}' | base64 -d | openssl x509 -noout -dates",
"get <resource_type> --all-namespaces | grep <kafka_cluster_name>",
"kind: Pod apiVersion: v1 metadata: name: client-pod spec: containers: - name: client-name image: client-name volumeMounts: - name: secret-volume mountPath: /data/p12 env: - name: SECRET_PASSWORD valueFrom: secretKeyRef: name: my-secret key: my-password volumes: - name: secret-volume secret: secretName: my-cluster-cluster-ca-cert",
"kind: Pod apiVersion: v1 metadata: name: client-pod spec: containers: - name: client-name image: client-name volumeMounts: - name: secret-volume mountPath: /data/crt volumes: - name: secret-volume secret: secretName: my-cluster-cluster-ca-cert",
"get secret <cluster_name> -cluster-ca-cert -o jsonpath='{.data.ca\\.p12}' | base64 -d > ca.p12",
"get secret <cluster_name> -cluster-ca-cert -o jsonpath='{.data.ca\\.password}' | base64 -d > ca.password",
"get secret <cluster_name> -cluster-ca-cert -o jsonpath='{.data.ca\\.crt}' | base64 -d > ca.crt",
"openssl pkcs12 -export -in ca.crt -nokeys -out ca.p12 -password pass:<P12_password> -caname ca.crt",
"create secret generic <cluster_name>-clients-ca-cert --from-file=ca.crt=ca.crt",
"create secret generic <cluster_name>-cluster-ca-cert --from-file=ca.crt=ca.crt --from-file=ca.p12=ca.p12 --from-literal=ca.password= P12-PASSWORD",
"create secret generic <ca_key_secret> --from-file=ca.key=ca.key",
"label secret <ca_certificate_secret> strimzi.io/kind=Kafka strimzi.io/cluster=\"<cluster_name>\"",
"label secret <ca_key_secret> strimzi.io/kind=Kafka strimzi.io/cluster=\"<cluster_name>\"",
"annotate secret <ca_certificate_secret> strimzi.io/ca-cert-generation=\"<ca_certificate_generation>\"",
"annotate secret <ca_key_secret> strimzi.io/ca-key-generation=\"<ca_key_generation>\"",
"kind: Kafka version: kafka.strimzi.io/v1beta2 spec: # clusterCa: generateCertificateAuthority: false",
"edit secret <ca_certificate_secret_name>",
"apiVersion: v1 kind: Secret data: ca.crt: LS0tLS1CRUdJTiBDRVJUSUZJQ0F... 1 metadata: annotations: strimzi.io/ca-cert-generation: \"0\" 2 labels: strimzi.io/cluster: my-cluster strimzi.io/kind: Kafka name: my-cluster-cluster-ca-cert # type: Opaque",
"cat <path_to_new_certificate> | base64",
"apiVersion: v1 kind: Secret data: ca.crt: GCa6LS3RTHeKFiFDGBOUDYFAZ0F... 1 metadata: annotations: strimzi.io/ca-cert-generation: \"1\" 2 labels: strimzi.io/cluster: my-cluster strimzi.io/kind: Kafka name: my-cluster-cluster-ca-cert # type: Opaque",
"annotate Kafka <name_of_custom_resource> strimzi.io/pause-reconciliation=\"true\"",
"annotate Kafka my-cluster strimzi.io/pause-reconciliation=\"true\"",
"describe Kafka <name_of_custom_resource>",
"edit Kafka <name_of_custom_resource>",
"apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka spec: clusterCa: generateCertificateAuthority: false 1 clientsCa: generateCertificateAuthority: false 2",
"edit secret <ca_certificate_secret_name>",
"apiVersion: v1 kind: Secret data: ca.crt: LS0tLS1CRUdJTiBDRVJUSUZJQ0F... 1 metadata: annotations: strimzi.io/ca-cert-generation: \"0\" 2 labels: strimzi.io/cluster: my-cluster strimzi.io/kind: Kafka name: my-cluster-cluster-ca-cert # type: Opaque",
"cat <path_to_new_certificate> | base64",
"apiVersion: v1 kind: Secret data: ca.crt: GCa6LS3RTHeKFiFDGBOUDYFAZ0F... 1 ca-2023-01-26T17-32-00Z.crt: LS0tLS1CRUdJTiBDRVJUSUZJQ0F... 2 metadata: annotations: strimzi.io/ca-cert-generation: \"1\" 3 labels: strimzi.io/cluster: my-cluster strimzi.io/kind: Kafka name: my-cluster-cluster-ca-cert # type: Opaque",
"edit secret <ca_key_name>",
"apiVersion: v1 kind: Secret data: ca.key: SA1cKF1GFDzOIiPOIUQBHDNFGDFS... 1 metadata: annotations: strimzi.io/ca-key-generation: \"0\" 2 labels: strimzi.io/cluster: my-cluster strimzi.io/kind: Kafka name: my-cluster-cluster-ca # type: Opaque",
"cat <path_to_new_key> | base64",
"apiVersion: v1 kind: Secret data: ca.key: AB0cKF1GFDzOIiPOIUQWERZJQ0F... 1 metadata: annotations: strimzi.io/ca-key-generation: \"1\" 2 labels: strimzi.io/cluster: my-cluster strimzi.io/kind: Kafka name: my-cluster-cluster-ca # type: Opaque",
"annotate --overwrite Kafka <name_of_custom_resource> strimzi.io/pause-reconciliation=\"false\"",
"annotate Kafka <name_of_custom_resource> strimzi.io/pause-reconciliation-",
"edit secret <ca_certificate_secret_name>",
"apiVersion: v1 kind: Secret data: ca.crt: GCa6LS3RTHeKFiFDGBOUDYFAZ0F metadata: annotations: strimzi.io/ca-cert-generation: \"1\" labels: strimzi.io/cluster: my-cluster strimzi.io/kind: Kafka name: my-cluster-cluster-ca-cert # type: Opaque",
"apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka metadata: name: my-cluster spec: kafka: replicas: 3 # config: # default.replication.factor: 3 min.insync.replicas: 2 #",
"annotate Kafka my-kafka-cluster strimzi.io/skip-broker-scaledown-check=\"true\"",
"annotate Kafka my-kafka-cluster strimzi.io/skip-broker-scaledown-check-",
"RackAwareGoal; ReplicaCapacityGoal; DiskCapacityGoal; NetworkInboundCapacityGoal; NetworkOutboundCapacityGoal; CpuCapacityGoal",
"apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka metadata: name: my-cluster spec: kafka: # zookeeper: # entityOperator: topicOperator: {} userOperator: {} cruiseControl: brokerCapacity: inboundNetwork: 10000KB/s outboundNetwork: 10000KB/s config: # Note that `default.goals` (superset) must also include all `hard.goals` (subset) default.goals: > com.linkedin.kafka.cruisecontrol.analyzer.goals.NetworkInboundCapacityGoal, com.linkedin.kafka.cruisecontrol.analyzer.goals.NetworkOutboundCapacityGoal hard.goals: > com.linkedin.kafka.cruisecontrol.analyzer.goals.NetworkInboundCapacityGoal, com.linkedin.kafka.cruisecontrol.analyzer.goals.NetworkOutboundCapacityGoal #",
"RackAwareGoal; MinTopicLeadersPerBrokerGoal; ReplicaCapacityGoal; DiskCapacityGoal; NetworkInboundCapacityGoal; NetworkOutboundCapacityGoal; CpuCapacityGoal; ReplicaDistributionGoal; PotentialNwOutGoal; DiskUsageDistributionGoal; NetworkInboundUsageDistributionGoal; NetworkOutboundUsageDistributionGoal; CpuUsageDistributionGoal; TopicReplicaDistributionGoal; LeaderReplicaDistributionGoal; LeaderBytesInDistributionGoal; PreferredLeaderElectionGoal",
"apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka metadata: name: my-cluster spec: kafka: # zookeeper: # entityOperator: topicOperator: {} userOperator: {} cruiseControl: brokerCapacity: inboundNetwork: 10000KB/s outboundNetwork: 10000KB/s config: # Note that `default.goals` (superset) must also include all `hard.goals` (subset) default.goals: > com.linkedin.kafka.cruisecontrol.analyzer.goals.RackAwareGoal, com.linkedin.kafka.cruisecontrol.analyzer.goals.ReplicaCapacityGoal, com.linkedin.kafka.cruisecontrol.analyzer.goals.DiskCapacityGoal hard.goals: > com.linkedin.kafka.cruisecontrol.analyzer.goals.RackAwareGoal #",
"KafkaRebalance.spec.goals",
"describe kafkarebalance <kafka_rebalance_resource_name> -n <namespace>",
"get kafkarebalance -o json | jq <jq_query> .",
"Name: my-rebalance Namespace: myproject Labels: strimzi.io/cluster=my-cluster Annotations: API Version: kafka.strimzi.io/v1alpha1 Kind: KafkaRebalance Metadata: Status: Conditions: Last Transition Time: 2022-04-05T14:36:11.900Z Status: ProposalReady Type: State Observed Generation: 1 Optimization Result: Data To Move MB: 0 Excluded Brokers For Leadership: Excluded Brokers For Replica Move: Excluded Topics: Intra Broker Data To Move MB: 12 Monitored Partitions Percentage: 100 Num Intra Broker Replica Movements: 0 Num Leader Movements: 24 Num Replica Movements: 55 On Demand Balancedness Score After: 82.91290759174306 On Demand Balancedness Score Before: 78.01176356230222 Recent Windows: 5 Session Id: a4f833bd-2055-4213-bfdd-ad21f95bf184",
"apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaRebalance metadata: name: my-rebalance labels: strimzi.io/cluster: my-cluster annotations: strimzi.io/rebalance-auto-approval: \"true\" spec: mode: # any mode #",
"describe configmaps <my_rebalance_configmap_name> -n <namespace>",
"get configmaps <my_rebalance_configmap_name> -o json | jq '.[\"data\"][\"brokerLoad.json\"]|fromjson|.'",
"apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka metadata: name: my-cluster spec: # cruiseControl: brokerCapacity: 1 inboundNetwork: 10000KB/s outboundNetwork: 10000KB/s overrides: 2 - brokers: [0] inboundNetwork: 20000KiB/s outboundNetwork: 20000KiB/s - brokers: [1, 2] inboundNetwork: 30000KiB/s outboundNetwork: 30000KiB/s # config: 3 # Note that `default.goals` (superset) must also include all `hard.goals` (subset) default.goals: > 4 com.linkedin.kafka.cruisecontrol.analyzer.goals.RackAwareGoal, com.linkedin.kafka.cruisecontrol.analyzer.goals.ReplicaCapacityGoal, com.linkedin.kafka.cruisecontrol.analyzer.goals.DiskCapacityGoal # hard.goals: > com.linkedin.kafka.cruisecontrol.analyzer.goals.RackAwareGoal # cpu.balance.threshold: 1.1 metadata.max.age.ms: 300000 send.buffer.bytes: 131072 webserver.http.cors.enabled: true 5 webserver.http.cors.origin: \"*\" webserver.http.cors.exposeheaders: \"User-Task-ID,Content-Type\" # resources: 6 requests: cpu: 1 memory: 512Mi limits: cpu: 2 memory: 2Gi logging: 7 type: inline loggers: rootLogger.level: INFO template: 8 pod: metadata: labels: label1: value1 securityContext: runAsUser: 1000001 fsGroup: 0 terminationGracePeriodSeconds: 120 readinessProbe: 9 initialDelaySeconds: 15 timeoutSeconds: 5 livenessProbe: initialDelaySeconds: 15 timeoutSeconds: 5 metricsConfig: 10 type: jmxPrometheusExporter valueFrom: configMapKeyRef: name: cruise-control-metrics key: metrics-config.yml",
"apply -f <kafka_configuration_file>",
"get deployments -n <my_cluster_operator_namespace>",
"NAME READY UP-TO-DATE AVAILABLE my-cluster-cruise-control 1/1 1 1",
"apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaRebalance metadata: name: my-rebalance labels: strimzi.io/cluster: my-cluster spec: {}",
"apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaRebalance metadata: name: my-rebalance labels: strimzi.io/cluster: my-cluster spec: mode: full",
"apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaRebalance metadata: name: my-rebalance labels: strimzi.io/cluster: my-cluster spec: mode: add-brokers brokers: [3, 4] 1",
"apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaRebalance metadata: name: my-rebalance labels: strimzi.io/cluster: my-cluster spec: mode: remove-brokers brokers: [3, 4] 1",
"apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaRebalance metadata: name: my-rebalance labels: strimzi.io/cluster: my-cluster spec: goals: - RackAwareGoal - ReplicaCapacityGoal",
"apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaRebalance metadata: name: my-rebalance labels: strimzi.io/cluster: my-cluster spec: goals: - RackAwareGoal - ReplicaCapacityGoal skipHardGoalCheck: true",
"apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaRebalance metadata: name: my-rebalance labels: strimzi.io/cluster: my-cluster annotations: strimzi.io/rebalance-auto-approval: \"true\" spec: goals: - RackAwareGoal - ReplicaCapacityGoal skipHardGoalCheck: true",
"apply -f <kafka_rebalance_configuration_file>",
"get kafkarebalance -o wide -w -n <namespace>",
"describe kafkarebalance <kafka_rebalance_resource_name>",
"Status: Conditions: Last Transition Time: 2020-05-19T13:50:12.533Z Status: ProposalReady Type: State Observed Generation: 1 Optimization Result: Data To Move MB: 0 Excluded Brokers For Leadership: Excluded Brokers For Replica Move: Excluded Topics: Intra Broker Data To Move MB: 0 Monitored Partitions Percentage: 100 Num Intra Broker Replica Movements: 0 Num Leader Movements: 0 Num Replica Movements: 26 On Demand Balancedness Score After: 81.8666802863978 On Demand Balancedness Score Before: 78.01176356230222 Recent Windows: 1 Session Id: 05539377-ca7b-45ef-b359-e13564f1458c",
"com.linkedin.kafka.cruisecontrol.exception.OptimizationFailureException: [CpuCapacityGoal] Insufficient capacity for cpu (Utilization 615.21, Allowed Capacity 420.00, Threshold: 0.70). Add at least 3 brokers with the same cpu capacity (100.00) as broker-0. Add at least 3 brokers with the same cpu capacity (100.00) as broker-0.",
"annotate kafkarebalance <kafka_rebalance_resource_name> strimzi.io/rebalance=\"refresh\"",
"get kafkarebalance -o wide -w -n <namespace>",
"annotate kafkarebalance <kafka_rebalance_resource_name> strimzi.io/rebalance=\"approve\"",
"get kafkarebalance -o wide -w -n <namespace>",
"annotate kafkarebalance rebalance-cr-name strimzi.io/rebalance=\"stop\"",
"describe kafkarebalance rebalance-cr-name",
"describe kafkarebalance rebalance-cr-name",
"annotate kafkarebalance rebalance-cr-name strimzi.io/rebalance=\"refresh\"",
"describe kafkarebalance rebalance-cr-name",
"run helper-pod -ti --image=registry.redhat.io/amq-streams/kafka-37-rhel9:2.7.0 --rm=true --restart=Never -- bash",
"{ \"version\": 1, 1 \"partitions\": [ 2 { \"topic\": \"example-topic-1\", 3 \"partition\": 0, 4 \"replicas\": [1, 2, 3] 5 }, { \"topic\": \"example-topic-1\", \"partition\": 1, \"replicas\": [2, 3, 4] }, { \"topic\": \"example-topic-2\", \"partition\": 0, \"replicas\": [3, 4, 5] } ] }",
"{ \"version\": 1, \"topics\": [ { \"topic\": \"my-topic\"} ] }",
"{ \"version\": 1, \"partitions\": [ { \"topic\": \"example-topic-1\", \"partition\": 0, \"replicas\": [1, 2, 3] \"log_dirs\": [\"/var/lib/kafka/data-0/kafka-log1\", \"any\", \"/var/lib/kafka/data-1/kafka-log2\"] }, { \"topic\": \"example-topic-1\", \"partition\": 1, \"replicas\": [2, 3, 4] \"log_dirs\": [\"any\", \"/var/lib/kafka/data-2/kafka-log3\", \"/var/lib/kafka/data-3/kafka-log4\"] }, { \"topic\": \"example-topic-2\", \"partition\": 0, \"replicas\": [3, 4, 5] \"log_dirs\": [\"/var/lib/kafka/data-4/kafka-log5\", \"any\", \"/var/lib/kafka/data-5/kafka-log6\"] } ] }",
"apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka metadata: name: my-cluster spec: kafka: # listeners: # - name: tls port: 9093 type: internal tls: true 1 authentication: type: tls 2 #",
"apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaTopic metadata: name: my-topic labels: strimzi.io/cluster: my-cluster spec: partitions: 10 replicas: 3 config: retention.ms: 7200000 segment.bytes: 1073741824 #",
"apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaUser metadata: name: my-user labels: strimzi.io/cluster: my-cluster spec: authentication: 1 type: tls authorization: type: simple 2 acls: # access to the topic - resource: type: topic name: my-topic operations: - Create - Describe - Read - AlterConfigs host: \"*\" # access to the cluster - resource: type: cluster operations: - Alter - AlterConfigs host: \"*\" # #",
"get secret <cluster_name> -cluster-ca-cert -o jsonpath='{.data.ca\\.p12}' | base64 -d > ca.p12",
"get secret <cluster_name> -cluster-ca-cert -o jsonpath='{.data.ca\\.password}' | base64 -d > ca.password",
"run --restart=Never --image=registry.redhat.io/amq-streams/kafka-37-rhel9:2.7.0 <interactive_pod_name> -- /bin/sh -c \"sleep 3600\"",
"cp ca.p12 <interactive_pod_name> :/tmp",
"get secret <kafka_user> -o jsonpath='{.data.user\\.p12}' | base64 -d > user.p12",
"get secret <kafka_user> -o jsonpath='{.data.user\\.password}' | base64 -d > user.password",
"cp user.p12 <interactive_pod_name> :/tmp",
"bootstrap.servers= <kafka_cluster_name> -kafka-bootstrap:9093 1 security.protocol=SSL 2 ssl.truststore.location=/tmp/ca.p12 3 ssl.truststore.password= <truststore_password> 4 ssl.keystore.location=/tmp/user.p12 5 ssl.keystore.password= <keystore_password> 6",
"cp config.properties <interactive_pod_name> :/tmp/config.properties",
"{ \"version\": 1, \"topics\": [ { \"topic\": \"my-topic\"} ] }",
"cp topics.json <interactive_pod_name> :/tmp/topics.json",
"exec -n <namespace> -ti <interactive_pod_name> /bin/bash",
"bin/kafka-reassign-partitions.sh --bootstrap-server my-cluster-kafka-bootstrap:9093 --command-config /tmp/config.properties --topics-to-move-json-file /tmp/topics.json --broker-list 0,1,2,3,4 --generate",
"cp reassignment.json <interactive_pod_name> :/tmp/reassignment.json",
"exec -n <namespace> -ti <interactive_pod_name> /bin/bash",
"bin/kafka-reassign-partitions.sh --bootstrap-server <cluster_name> -kafka-bootstrap:9093 --command-config /tmp/config.properties --reassignment-json-file /tmp/reassignment.json --execute",
"bin/kafka-reassign-partitions.sh --bootstrap-server <cluster_name> -kafka-bootstrap:9093 --command-config /tmp/config.properties --reassignment-json-file /tmp/reassignment.json --throttle 5000000 --execute",
"bin/kafka-reassign-partitions.sh --bootstrap-server <cluster_name> -kafka-bootstrap:9093 --command-config /tmp/config.properties --reassignment-json-file /tmp/reassignment.json --throttle 10000000 --execute",
"bin/kafka-reassign-partitions.sh --bootstrap-server <cluster_name> -kafka-bootstrap:9093 --command-config /tmp/config.properties --reassignment-json-file /tmp/reassignment.json --verify",
"cp reassignment.json <interactive_pod_name> :/tmp/reassignment.json",
"exec -n <namespace> -ti <interactive_pod_name> /bin/bash",
"bin/kafka-reassign-partitions.sh --bootstrap-server <cluster_name> -kafka-bootstrap:9093 --command-config /tmp/config.properties --reassignment-json-file /tmp/reassignment.json --execute",
"bin/kafka-reassign-partitions.sh --bootstrap-server <cluster_name> -kafka-bootstrap:9093 --command-config /tmp/config.properties --reassignment-json-file /tmp/reassignment.json --throttle 5000000 --execute",
"bin/kafka-reassign-partitions.sh --bootstrap-server <cluster_name> -kafka-bootstrap:9093 --command-config /tmp/config.properties --reassignment-json-file /tmp/reassignment.json --throttle 10000000 --execute",
"bin/kafka-reassign-partitions.sh --bootstrap-server <cluster_name> -kafka-bootstrap:9093 --command-config /tmp/config.properties --reassignment-json-file /tmp/reassignment.json --verify",
"exec my-cluster-kafka-0 -c kafka -it -- /bin/bash -c \"ls -l /var/lib/kafka/kafka-log_<n>_ | grep -E '^d' | grep -vE '[a-zA-Z0-9.-]+\\.[a-z0-9]+-deleteUSD'\"",
"{ \"version\": 1, \"topics\": [ { \"topic\": \"my-topic\"} ] }",
"Current partition replica assignment {\"version\":1,\"partitions\":[{\"topic\":\"my-topic\",\"partition\":0,\"replicas\":[3,4,2,0],\"log_dirs\":[\"any\",\"any\",\"any\",\"any\"]},{\"topic\":\"my-topic\",\"partition\":1,\"replicas\":[0,2,3,1],\"log_dirs\":[\"any\",\"any\",\"any\",\"any\"]},{\"topic\":\"my-topic\",\"partition\":2,\"replicas\":[1,3,0,4],\"log_dirs\":[\"any\",\"any\",\"any\",\"any\"]}]} Proposed partition reassignment configuration {\"version\":1,\"partitions\":[{\"topic\":\"my-topic\",\"partition\":0,\"replicas\":[0,1,2,3],\"log_dirs\":[\"any\",\"any\",\"any\",\"any\"]},{\"topic\":\"my-topic\",\"partition\":1,\"replicas\":[1,2,3,4],\"log_dirs\":[\"any\",\"any\",\"any\",\"any\"]},{\"topic\":\"my-topic\",\"partition\":2,\"replicas\":[2,3,4,0],\"log_dirs\":[\"any\",\"any\",\"any\",\"any\"]}]}",
"jq '.partitions[].replicas |= del(.[-1])' reassignment.json > reassignment.json",
"{\"version\":1,\"partitions\":[{\"topic\":\"my-topic\",\"partition\":0,\"replicas\":[0,1,2],\"log_dirs\":[\"any\",\"any\",\"any\",\"any\"]},{\"topic\":\"my-topic\",\"partition\":1,\"replicas\":[1,2,3],\"log_dirs\":[\"any\",\"any\",\"any\",\"any\"]},{\"topic\":\"my-topic\",\"partition\":2,\"replicas\":[2,3,4],\"log_dirs\":[\"any\",\"any\",\"any\",\"any\"]}]}",
"cp reassignment.json <interactive_pod_name> :/tmp/reassignment.json",
"exec -n <namespace> -ti <interactive_pod_name> /bin/bash",
"bin/kafka-reassign-partitions.sh --bootstrap-server <cluster_name> -kafka-bootstrap:9093 --command-config /tmp/config.properties --reassignment-json-file /tmp/reassignment.json --execute",
"bin/kafka-reassign-partitions.sh --bootstrap-server <cluster_name> -kafka-bootstrap:9093 --command-config /tmp/config.properties --reassignment-json-file /tmp/reassignment.json --verify",
"bin/kafka-topics.sh --bootstrap-server <cluster_name> -kafka-bootstrap:9093 --command-config /tmp/config.properties --describe",
"my-topic Partition: 0 Leader: 0 Replicas: 0,1,2 Isr: 0,1,2 my-topic Partition: 1 Leader: 2 Replicas: 1,2,3 Isr: 1,2,3 my-topic Partition: 2 Leader: 3 Replicas: 2,3,4 Isr: 2,3,4",
"apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaTopic metadata: name: my-topic labels: strimzi.io/cluster: my-cluster spec: partitions: 3 replicas: 3",
"metrics ├── grafana-dashboards 1 │ ├── strimzi-cruise-control.json │ ├── strimzi-kafka-bridge.json │ ├── strimzi-kafka-connect.json │ ├── strimzi-kafka-exporter.json │ ├── strimzi-kafka-mirror-maker-2.json │ ├── strimzi-kafka.json │ ├── strimzi-operators.json │ └── strimzi-zookeeper.json ├── grafana-install │ └── grafana.yaml 2 ├── prometheus-additional-properties │ └── prometheus-additional.yaml 3 ├── prometheus-alertmanager-config │ └── alert-manager-config.yaml 4 ├── prometheus-install │ ├── alert-manager.yaml 5 │ ├── prometheus-rules.yaml 6 │ ├── prometheus.yaml 7 │ └── strimzi-pod-monitor.yaml 8 ├── kafka-bridge-metrics.yaml 9 ├── kafka-connect-metrics.yaml 10 ├── kafka-cruise-control-metrics.yaml 11 ├── kafka-metrics.yaml 12 └── kafka-mirror-maker-2-metrics.yaml 13",
"apply -f kafka-metrics.yaml",
"edit kafka <kafka_configuration_file>",
"apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka metadata: name: my-cluster spec: kafka: # metricsConfig: 1 type: jmxPrometheusExporter valueFrom: configMapKeyRef: name: kafka-metrics key: kafka-metrics-config.yml --- kind: ConfigMap 2 apiVersion: v1 metadata: name: kafka-metrics labels: app: strimzi data: kafka-metrics-config.yml: | # metrics configuration",
"apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka metadata: name: my-cluster spec: # kafkaExporter: image: my-registry.io/my-org/my-exporter-cluster:latest 1 groupRegex: \".*\" 2 topicRegex: \".*\" 3 groupExcludeRegex: \"^excluded-.*\" 4 topicExcludeRegex: \"^excluded-.*\" 5 resources: 6 requests: cpu: 200m memory: 64Mi limits: cpu: 500m memory: 128Mi logging: debug 7 enableSaramaLogging: true 8 template: 9 pod: metadata: labels: label1: value1 imagePullSecrets: - name: my-docker-credentials securityContext: runAsUser: 1000001 fsGroup: 0 terminationGracePeriodSeconds: 120 readinessProbe: 10 initialDelaySeconds: 15 timeoutSeconds: 5 livenessProbe: 11 initialDelaySeconds: 15 timeoutSeconds: 5",
"apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaBridge metadata: name: my-bridge spec: # bootstrapServers: my-cluster-kafka:9092 http: # enableMetrics: true #",
"apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka metadata: name: my-cluster namespace: myproject spec: kafka: # listeners: - name: external3 port: 9094 type: loadbalancer tls: true authentication: type: oauth enableMetrics: true configuration: # authorization: type: keycloak enableMetrics: true #",
"get pods -n openshift-user-workload-monitoring",
"NAME READY STATUS RESTARTS AGE prometheus-operator-5cc59f9bc6-kgcq8 1/1 Running 0 25s prometheus-user-workload-0 5/5 Running 1 14s prometheus-user-workload-1 5/5 Running 1 14s thanos-ruler-user-workload-0 3/3 Running 0 14s thanos-ruler-user-workload-1 3/3 Running 0 14s",
"apiVersion: monitoring.coreos.com/v1 kind: PodMonitor metadata: name: cluster-operator-metrics labels: app: strimzi spec: selector: matchLabels: strimzi.io/kind: cluster-operator namespaceSelector: matchNames: - <project-name> 1 podMetricsEndpoints: - path: /metrics port: http",
"apply -f strimzi-pod-monitor.yaml -n MY-PROJECT",
"apply -f prometheus-rules.yaml -n MY-PROJECT",
"create sa grafana-service-account -n my-project",
"apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: name: grafana-cluster-monitoring-binding labels: app: strimzi subjects: - kind: ServiceAccount name: grafana-service-account namespace: my-project roleRef: kind: ClusterRole name: cluster-monitoring-view apiGroup: rbac.authorization.k8s.io",
"apply -f grafana-cluster-monitoring-binding.yaml -n my-project",
"apiVersion: v1 kind: Secret metadata: name: secret-sa annotations: kubernetes.io/service-account.name: \"grafana-service-account\" 1 type: kubernetes.io/service-account-token 2",
"create -f <secret_configuration>.yaml",
"describe sa/grafana-service-account | grep Tokens: describe secret grafana-service-account-token-mmlp9 | grep token:",
"apiVersion: 1 datasources: - name: Prometheus type: prometheus url: https://thanos-querier.openshift-monitoring.svc.cluster.local:9091 access: proxy basicAuth: false withCredentials: false isDefault: true jsonData: timeInterval: 5s tlsSkipVerify: true httpHeaderName1: \"Authorization\" secureJsonData: httpHeaderValue1: \"Bearer USD{ GRAFANA-ACCESS-TOKEN }\" 1 editable: true",
"create configmap grafana-config --from-file=datasource.yaml -n MY-PROJECT",
"apiVersion: apps/v1 kind: Deployment metadata: name: grafana labels: app: strimzi spec: replicas: 1 selector: matchLabels: name: grafana template: metadata: labels: name: grafana spec: serviceAccountName: grafana-service-account containers: - name: grafana image: grafana/grafana:10.4.2 ports: - name: grafana containerPort: 3000 protocol: TCP volumeMounts: - name: grafana-data mountPath: /var/lib/grafana - name: grafana-logs mountPath: /var/log/grafana - name: grafana-config mountPath: /etc/grafana/provisioning/datasources/datasource.yaml readOnly: true subPath: datasource.yaml readinessProbe: httpGet: path: /api/health port: 3000 initialDelaySeconds: 5 periodSeconds: 10 livenessProbe: httpGet: path: /api/health port: 3000 initialDelaySeconds: 15 periodSeconds: 20 volumes: - name: grafana-data emptyDir: {} - name: grafana-logs emptyDir: {} - name: grafana-config configMap: name: grafana-config --- apiVersion: v1 kind: Service metadata: name: grafana labels: app: strimzi spec: ports: - name: grafana port: 3000 targetPort: 3000 protocol: TCP selector: name: grafana type: ClusterIP",
"apply -f <grafana-application> -n <my-project>",
"create route edge <my-grafana-route> --service=grafana --namespace= KAFKA-NAMESPACE",
"get routes NAME HOST/PORT PATH SERVICES MY-GRAFANA-ROUTE MY-GRAFANA-ROUTE-amq-streams.net grafana",
"apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaConnect metadata: name: my-connect-cluster spec: # template: connectContainer: env: - name: OTEL_SERVICE_NAME value: my-otel-service - name: OTEL_EXPORTER_OTLP_ENDPOINT value: \"http://otlp-host:4317\" tracing: type: opentelemetry #",
"apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaMirrorMaker metadata: name: my-mirror-maker spec: # template: mirrorMakerContainer: env: - name: OTEL_SERVICE_NAME value: my-otel-service - name: OTEL_EXPORTER_OTLP_ENDPOINT value: \"http://otlp-host:4317\" tracing: type: opentelemetry #",
"apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaMirrorMaker2 metadata: name: my-mm2-cluster spec: # template: connectContainer: env: - name: OTEL_SERVICE_NAME value: my-otel-service - name: OTEL_EXPORTER_OTLP_ENDPOINT value: \"http://otlp-host:4317\" tracing: type: opentelemetry #",
"apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaBridge metadata: name: my-bridge spec: # template: bridgeContainer: env: - name: OTEL_SERVICE_NAME value: my-otel-service - name: OTEL_EXPORTER_OTLP_ENDPOINT value: \"http://otlp-host:4317\" tracing: type: opentelemetry #",
"apply -f <resource_configuration_file>",
"<dependency> <groupId>io.opentelemetry.semconv</groupId> <artifactId>opentelemetry-semconv</artifactId> <version>1.21.0-alpha</version> </dependency> <dependency> <groupId>io.opentelemetry</groupId> <artifactId>opentelemetry-exporter-otlp</artifactId> <version>1.34.1</version> <exclusions> <exclusion> <groupId>io.opentelemetry</groupId> <artifactId>opentelemetry-exporter-sender-okhttp</artifactId> </exclusion> </exclusions> </dependency> <dependency> <groupId>io.opentelemetry</groupId> <artifactId>opentelemetry-exporter-sender-grpc-managed-channel</artifactId> <version>1.34.1</version> <scope>runtime</scope> </dependency> <dependency> <groupId>io.opentelemetry</groupId> <artifactId>opentelemetry-sdk-extension-autoconfigure</artifactId> <version>1.34.1</version> </dependency> <dependency> <groupId>io.opentelemetry.instrumentation</groupId> <artifactId>opentelemetry-kafka-clients-2.6</artifactId> <version>1.32.0-alpha</version> </dependency> <dependency> <groupId>io.opentelemetry</groupId> <artifactId>opentelemetry-sdk</artifactId> <version>1.34.1</version> </dependency> <dependency> <groupId>io.opentelemetry</groupId> <artifactId>opentelemetry-exporter-sender-jdk</artifactId> <version>1.34.1-alpha</version> <scope>runtime</scope> </dependency> <dependency> <groupId>io.grpc</groupId> <artifactId>grpc-netty-shaded</artifactId> <version>1.61.0</version> </dependency>",
"OpenTelemetry ot = GlobalOpenTelemetry.get();",
"GlobalTracer.register(tracer);",
"// Producer instance Producer < String, String > op = new KafkaProducer < > ( configs, new StringSerializer(), new StringSerializer() ); Producer < String, String > producer = tracing.wrap(op); KafkaTracing tracing = KafkaTracing.create(GlobalOpenTelemetry.get()); producer.send(...); //consumer instance Consumer<String, String> oc = new KafkaConsumer<>( configs, new StringDeserializer(), new StringDeserializer() ); Consumer<String, String> consumer = tracing.wrap(oc); consumer.subscribe(Collections.singleton(\"mytopic\")); ConsumerRecords<Integer, String> records = consumer.poll(1000); ConsumerRecord<Integer, String> record = SpanContext spanContext = TracingKafkaUtils.extractSpanContext(record.headers(), tracer);",
"senderProps.put(ProducerConfig.INTERCEPTOR_CLASSES_CONFIG, TracingProducerInterceptor.class.getName()); KafkaProducer<Integer, String> producer = new KafkaProducer<>(senderProps); producer.send(...);",
"consumerProps.put(ConsumerConfig.INTERCEPTOR_CLASSES_CONFIG, TracingConsumerInterceptor.class.getName()); KafkaConsumer<Integer, String> consumer = new KafkaConsumer<>(consumerProps); consumer.subscribe(Collections.singletonList(\"messages\")); ConsumerRecords<Integer, String> records = consumer.poll(1000); ConsumerRecord<Integer, String> record = SpanContext spanContext = TracingKafkaUtils.extractSpanContext(record.headers(), tracer);",
"private class TracingKafkaClientSupplier extends DefaultKafkaClientSupplier { @Override public Producer<byte[], byte[]> getProducer(Map<String, Object> config) { KafkaTelemetry telemetry = KafkaTelemetry.create(GlobalOpenTelemetry.get()); return telemetry.wrap(super.getProducer(config)); } @Override public Consumer<byte[], byte[]> getConsumer(Map<String, Object> config) { KafkaTelemetry telemetry = KafkaTelemetry.create(GlobalOpenTelemetry.get()); return telemetry.wrap(super.getConsumer(config)); } @Override public Consumer<byte[], byte[]> getRestoreConsumer(Map<String, Object> config) { return this.getConsumer(config); } @Override public Consumer<byte[], byte[]> getGlobalConsumer(Map<String, Object> config) { return this.getConsumer(config); } }",
"KafkaClientSupplier supplier = new TracingKafkaClientSupplier(tracer); KafkaStreams streams = new KafkaStreams(builder.build(), new StreamsConfig(config), supplier); streams.start();",
"props.put(StreamsConfig.PRODUCER_PREFIX + ProducerConfig.INTERCEPTOR_CLASSES_CONFIG, TracingProducerInterceptor.class.getName()); props.put(StreamsConfig.CONSUMER_PREFIX + ConsumerConfig.INTERCEPTOR_CLASSES_CONFIG, TracingConsumerInterceptor.class.getName());",
"io.opentelemetry:opentelemetry-exporter-zipkin",
"apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaMirrorMaker2 metadata: name: my-mm2-cluster spec: # template: connectContainer: env: - name: OTEL_SERVICE_NAME value: my-zipkin-service - name: OTEL_EXPORTER_ZIPKIN_ENDPOINT value: http://zipkin-exporter-host-name:9411/api/v2/spans 1 - name: OTEL_TRACES_EXPORTER value: zipkin 2 tracing: type: opentelemetry #",
"//Defines attribute extraction for a producer private static class ProducerAttribExtractor implements AttributesExtractor < ProducerRecord < ? , ? > , Void > { @Override public void onStart(AttributesBuilder attributes, ProducerRecord < ? , ? > producerRecord) { set(attributes, AttributeKey.stringKey(\"prod_start\"), \"prod1\"); } @Override public void onEnd(AttributesBuilder attributes, ProducerRecord < ? , ? > producerRecord, @Nullable Void unused, @Nullable Throwable error) { set(attributes, AttributeKey.stringKey(\"prod_end\"), \"prod2\"); } } //Defines attribute extraction for a consumer private static class ConsumerAttribExtractor implements AttributesExtractor < ConsumerRecord < ? , ? > , Void > { @Override public void onStart(AttributesBuilder attributes, ConsumerRecord < ? , ? > producerRecord) { set(attributes, AttributeKey.stringKey(\"con_start\"), \"con1\"); } @Override public void onEnd(AttributesBuilder attributes, ConsumerRecord < ? , ? > producerRecord, @Nullable Void unused, @Nullable Throwable error) { set(attributes, AttributeKey.stringKey(\"con_end\"), \"con2\"); } } //Extracts the attributes public static void main(String[] args) throws Exception { Map < String, Object > configs = new HashMap < > (Collections.singletonMap(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, \"localhost:9092\")); System.setProperty(\"otel.traces.exporter\", \"jaeger\"); System.setProperty(\"otel.service.name\", \"myapp1\"); KafkaTracing tracing = KafkaTracing.newBuilder(GlobalOpenTelemetry.get()) .addProducerAttributesExtractors(new ProducerAttribExtractor()) .addConsumerAttributesExtractors(new ConsumerAttribExtractor()) .build();",
"apiVersion: admissionregistration.k8s.io/v1 kind: ValidatingWebhookConfiguration webhooks: - name: strimzi-drain-cleaner.strimzi.io rules: - apiGroups: [\"\"] apiVersions: [\"v1\"] operations: [\"CREATE\"] resources: [\"pods/eviction\"] scope: \"Namespaced\" clientConfig: service: namespace: \"strimzi-drain-cleaner\" name: \"strimzi-drain-cleaner\" path: /drainer port: 443 caBundle: Cg== #",
"apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaTopic metadata: name: my-topic labels: strimzi.io/cluster: my-cluster spec: partitions: 1 replicas: 3 config: # min.insync.replicas: 2 #",
"apiVersion: apps/v1 kind: Deployment spec: # template: spec: serviceAccountName: strimzi-drain-cleaner containers: - name: strimzi-drain-cleaner # env: - name: STRIMZI_DENY_EVICTION value: \"true\" - name: STRIMZI_DRAIN_KAFKA value: \"true\" - name: STRIMZI_DRAIN_ZOOKEEPER value: \"false\" #",
"apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka metadata: name: my-cluster namespace: myproject spec: kafka: template: podDisruptionBudget: maxUnavailable: 0 # zookeeper: template: podDisruptionBudget: maxUnavailable: 0 #",
"apply -f <kafka_configuration_file>",
"apply -f ./install/drain-cleaner/openshift",
"get nodes drain <name-of-node> --delete-emptydir-data --ignore-daemonsets --timeout=6000s --force",
"INFO ... Received eviction webhook for Pod my-cluster-zookeeper-2 in namespace my-project INFO ... Pod my-cluster-zookeeper-2 in namespace my-project will be annotated for restart INFO ... Pod my-cluster-zookeeper-2 in namespace my-project found and annotated for restart INFO ... Received eviction webhook for Pod my-cluster-kafka-0 in namespace my-project INFO ... Pod my-cluster-kafka-0 in namespace my-project will be annotated for restart INFO ... Pod my-cluster-kafka-0 in namespace my-project found and annotated for restart",
"INFO PodOperator:68 - Reconciliation #13(timer) Kafka(my-project/my-cluster): Rolling Pod my-cluster-zookeeper-2 INFO PodOperator:68 - Reconciliation #13(timer) Kafka(my-project/my-cluster): Rolling Pod my-cluster-kafka-0 INFO AbstractOperator:500 - Reconciliation #13(timer) Kafka(my-project/my-cluster): reconciled",
"apiVersion: apps/v1 kind: Deployment metadata: name: strimzi-drain-cleaner labels: app: strimzi-drain-cleaner namespace: strimzi-drain-cleaner spec: # spec: serviceAccountName: strimzi-drain-cleaner containers: - name: strimzi-drain-cleaner # env: - name: STRIMZI_DRAIN_KAFKA value: \"true\" - name: STRIMZI_DRAIN_ZOOKEEPER value: \"true\" - name: STRIMZI_CERTIFICATE_WATCH_ENABLED value: \"true\" - name: STRIMZI_CERTIFICATE_WATCH_NAMESPACE valueFrom: fieldRef: fieldPath: metadata.namespace - name: STRIMZI_CERTIFICATE_WATCH_POD_NAME valueFrom: fieldRef: fieldPath: metadata.name #",
"./report.sh --namespace=<cluster_namespace> --cluster=<cluster_name> --out-dir=<local_output_directory>",
"./report.sh --namespace=my-amq-streams-namespace --cluster=my-kafka-cluster --bridge=my-bridge-component --secrets=all --out-dir=~/reports",
"env: - name: STRIMZI_FEATURE_GATES value: -ControlPlaneListener",
"env: - name: STRIMZI_FEATURE_GATES value: +ControlPlaneListener",
"apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaTopic metadata: name: my-topic labels: strimzi.io/cluster: my-cluster spec: partitions: 1 replicas: 3 config: # min.insync.replicas: 2 #",
"apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka metadata: name: my-cluster namespace: myproject spec: kafka: # template: podDisruptionBudget: maxUnavailable: 0",
"annotate pod my-cluster-pool-a-1 strimzi.io/manual-rolling-update=\"true\"",
"sed -i 's/namespace: .*/namespace: my-cluster-operator-namespace/' install/cluster-operator/*RoleBinding*.yaml",
"sed -i '' 's/namespace: .*/namespace: my-cluster-operator-namespace/' install/cluster-operator/*RoleBinding*.yaml",
"replace -f install/cluster-operator",
"get pods my-cluster-kafka-0 -o jsonpath='{.spec.containers[0].image}'",
"registry.redhat.io/amq-streams/strimzi-kafka-37-rhel9:2.7.0",
"get kafka <kafka_cluster_name> -n <namespace> -o jsonpath='{.status.conditions}'",
"edit kafka <kafka_configuration_file>",
"apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka metadata: name: my-cluster spec: kafka: replicas: 3 metadataVersion: 3.6-IV2 version: 3.6.0 #",
"apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka metadata: name: my-cluster spec: kafka: replicas: 3 metadataVersion: 3.6-IV2 1 version: 3.7.0 2 #",
"get pods my-cluster-kafka-0 -o jsonpath='{.spec.containers[0].image}'",
"apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka metadata: name: my-cluster spec: kafka: replicas: 3 metadataVersion: 3.7-IV2 version: 3.7.0 #",
"edit kafka <kafka_configuration_file>",
"kind: Kafka spec: # kafka: version: 3.6.0 config: log.message.format.version: \"3.6\" inter.broker.protocol.version: \"3.6\" #",
"apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka spec: # kafka: version: 3.7.0 1 config: log.message.format.version: \"3.6\" 2 inter.broker.protocol.version: \"3.6\" 3 #",
"get pods my-cluster-kafka-0 -o jsonpath='{.spec.containers[0].image}'",
"apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka spec: # kafka: version: 3.7.0 config: log.message.format.version: \"3.6\" inter.broker.protocol.version: \"3.7\" #",
"apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka spec: # kafka: version: 3.7.0 config: log.message.format.version: \"3.7\" inter.broker.protocol.version: \"3.7\" #",
"apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka metadata: spec: # status: # kafkaVersion: 3.7.0 operatorLastSuccessfulVersion: 2.7 kafkaMetadataVersion: 3.7",
"sed -i 's/namespace: .*/namespace: my-cluster-operator-namespace/' install/cluster-operator/*RoleBinding*.yaml",
"sed -i '' 's/namespace: .*/namespace: my-cluster-operator-namespace/' install/cluster-operator/*RoleBinding*.yaml",
"replace -f install/cluster-operator",
"get pod my-cluster-kafka-0 -o jsonpath='{.spec.containers[0].image}'",
"edit kafka <kafka_configuration_file>",
"apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka metadata: name: my-cluster spec: kafka: replicas: 3 metadataVersion: 3.6-IV2 1 version: 3.7.0 2 #",
"apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka metadata: name: my-cluster spec: kafka: replicas: 3 metadataVersion: 3.6-IV2 1 version: 3.6.0 2 #",
"apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka spec: # kafka: version: 3.7.0 config: log.message.format.version: \"3.6\" #",
"edit kafka <kafka_configuration_file>",
"apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka metadata: name: my-cluster spec: # kafka: version: 3.7.0 1 config: inter.broker.protocol.version: \"3.6\" 2 log.message.format.version: \"3.6\" #",
"get pods my-cluster-kafka-0 -o jsonpath='{.spec.containers[0].image}'",
"apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka metadata: name: my-cluster spec: # kafka: version: 3.6.0 1 config: inter.broker.protocol.version: \"3.6\" 2 log.message.format.version: \"3.6\" #",
"run kafka-admin -ti --image=registry.redhat.io/amq-streams/kafka-37-rhel9:2.7.0 --rm=true --restart=Never -- ./bin/kafka-topics.sh --bootstrap-server localhost:9092 --topic __strimzi-topic-operator-kstreams-topic-store-changelog --delete && ./bin/kafka-topics.sh --bootstrap-server localhost:9092 --topic __strimzi_store_topic --delete",
"get <resource_type> --all-namespaces | grep <kafka_cluster_name>",
"delete subscription amq-streams -n openshift-operators",
"delete csv amqstreams. <version> -n openshift-operators",
"get crd -l app=strimzi -o name | xargs oc delete",
"get <resource_type> --all-namespaces | grep <kafka_cluster_name>",
"delete -f install/cluster-operator",
"delete <resource_type> <resource_name> -n <namespace>",
"delete secret my-cluster-clients-ca-cert -n my-project",
"-n kafka get events --field-selector reportingController=strimzi.io/cluster-operator",
"LAST SEEN TYPE REASON OBJECT MESSAGE 2m Normal CaCertRenewed pod/strimzi-cluster-kafka-0 CA certificate renewed 58m Normal PodForceRestartOnError pod/strimzi-cluster-kafka-1 Pod needs to be forcibly restarted due to an error 5m47s Normal ManualRollingUpdate pod/strimzi-cluster-kafka-2 Pod was manually annotated to be rolled",
"-n kafka get events --field-selector reportingController=strimzi.io/cluster-operator,reason=PodForceRestartOnError",
"-n kafka get events --field-selector reportingController=strimzi.io/cluster-operator,reason=PodForceRestartOnError -o yaml",
"apiVersion: v1 items: - action: StrimziInitiatedPodRestart apiVersion: v1 eventTime: \"2022-05-13T00:22:34.168086Z\" firstTimestamp: null involvedObject: kind: Pod name: strimzi-cluster-kafka-1 namespace: kafka kind: Event lastTimestamp: null message: Pod needs to be forcibly restarted due to an error metadata: creationTimestamp: \"2022-05-13T00:22:34Z\" generateName: strimzi-event name: strimzi-eventwppk6 namespace: kafka resourceVersion: \"432961\" uid: 29fcdb9e-f2cf-4c95-a165-a5efcd48edfc reason: PodForceRestartOnError reportingController: strimzi.io/cluster-operator reportingInstance: strimzi-cluster-operator-6458cfb4c6-6bpdp source: {} type: Normal kind: List metadata: resourceVersion: \"\" selfLink: \"\"",
"maintenanceTimeWindows: - \"* * 0-1 ? * SUN,MON,TUE,WED,THU *\"",
"apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka metadata: name: my-cluster spec: kafka: # zookeeper: # maintenanceTimeWindows: - \"* * 8-10 * * ?\" - \"* * 14-15 * * ?\"",
"apply -f <kafka_configuration_file>",
"annotate strimzipodset <cluster_name>-kafka strimzi.io/manual-rolling-update=\"true\" annotate strimzipodset <cluster_name>-zookeeper strimzi.io/manual-rolling-update=\"true\" annotate strimzipodset <cluster_name>-connect strimzi.io/manual-rolling-update=\"true\" annotate strimzipodset <cluster_name>-mirrormaker2 strimzi.io/manual-rolling-update=\"true\"",
"apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaTopic metadata: name: my-topic labels: strimzi.io/cluster: my-cluster spec: partitions: 1 replicas: 3 config: # min.insync.replicas: 2 #",
"annotate pod <cluster_name>-kafka-<index_number> strimzi.io/manual-rolling-update=\"true\" annotate pod <cluster_name>-zookeeper-<index_number> strimzi.io/manual-rolling-update=\"true\" annotate pod <cluster_name>-connect-<index_number> strimzi.io/manual-rolling-update=\"true\" annotate pod <cluster_name>-mirrormaker2-<index_number> strimzi.io/manual-rolling-update=\"true\"",
"apiVersion: v1 kind: PersistentVolume spec: # persistentVolumeReclaimPolicy: Retain",
"apiVersion: v1 kind: StorageClass metadata: name: gp2-retain parameters: # reclaimPolicy: Retain",
"apiVersion: v1 kind: PersistentVolume spec: # storageClassName: gp2-retain",
"get pv",
"NAME RECLAIMPOLICY CLAIM pvc-5e9c5c7f-3317-11ea-a650-06e1eadd9a4c ... Retain ... myproject/data-my-cluster-zookeeper-1 pvc-5e9cc72d-3317-11ea-97b0-0aef8816c7ea ... Retain ... myproject/data-my-cluster-zookeeper-0 pvc-5ead43d1-3317-11ea-97b0-0aef8816c7ea ... Retain ... myproject/data-my-cluster-zookeeper-2 pvc-7e1f67f9-3317-11ea-a650-06e1eadd9a4c ... Retain ... myproject/data-0-my-cluster-kafka-0 pvc-7e21042e-3317-11ea-9786-02deaf9aa87e ... Retain ... myproject/data-0-my-cluster-kafka-1 pvc-7e226978-3317-11ea-97b0-0aef8816c7ea ... Retain ... myproject/data-0-my-cluster-kafka-2",
"create namespace myproject",
"apiVersion: v1 kind: PersistentVolumeClaim metadata: name: data-0-my-cluster-kafka-0 spec: accessModes: - ReadWriteOnce resources: requests: storage: 100Gi storageClassName: gp2-retain volumeMode: Filesystem volumeName: pvc-7e1f67f9-3317-11ea-a650-06e1eadd9a4c",
"apiVersion: v1 kind: PersistentVolume metadata: annotations: kubernetes.io/createdby: aws-ebs-dynamic-provisioner pv.kubernetes.io/bound-by-controller: \"yes\" pv.kubernetes.io/provisioned-by: kubernetes.io/aws-ebs creationTimestamp: \"<date>\" finalizers: - kubernetes.io/pv-protection labels: failure-domain.beta.kubernetes.io/region: eu-west-1 failure-domain.beta.kubernetes.io/zone: eu-west-1c name: pvc-7e226978-3317-11ea-97b0-0aef8816c7ea resourceVersion: \"39431\" selfLink: /api/v1/persistentvolumes/pvc-7e226978-3317-11ea-97b0-0aef8816c7ea uid: 7efe6b0d-3317-11ea-a650-06e1eadd9a4c spec: accessModes: - ReadWriteOnce awsElasticBlockStore: fsType: xfs volumeID: aws://eu-west-1c/vol-09db3141656d1c258 capacity: storage: 100Gi claimRef: apiVersion: v1 kind: PersistentVolumeClaim name: data-0-my-cluster-kafka-2 namespace: myproject resourceVersion: \"39113\" uid: 54be1c60-3319-11ea-97b0-0aef8816c7ea nodeAffinity: required: nodeSelectorTerms: - matchExpressions: - key: failure-domain.beta.kubernetes.io/zone operator: In values: - eu-west-1c - key: failure-domain.beta.kubernetes.io/region operator: In values: - eu-west-1 persistentVolumeReclaimPolicy: Retain storageClassName: gp2-retain volumeMode: Filesystem",
"claimRef: apiVersion: v1 kind: PersistentVolumeClaim name: data-0-my-cluster-kafka-2 namespace: myproject resourceVersion: \"39113\" uid: 54be1c60-3319-11ea-97b0-0aef8816c7ea",
"create -f install/cluster-operator -n my-project",
"apply -f kafka.yaml",
"run kafka-admin -ti --image=registry.redhat.io/amq-streams/kafka-37-rhel9:2.7.0 --rm=true --restart=Never -- ./bin/kafka-topics.sh --bootstrap-server localhost:9092 --topic __strimzi-topic-operator-kstreams-topic-store-changelog --delete && ./bin/kafka-topics.sh --bootstrap-server localhost:9092 --topic __strimzi_store_topic --delete",
"apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka metadata: name: my-cluster spec: # entityOperator: topicOperator: {} 1 #",
"get KafkaTopic",
"2018-03-04 17:09:24 WARNING AbstractClusterOperations:290 - Failed to acquire lock for kafka cluster lock::kafka::myproject::my-cluster",
"Caused by: java.security.cert.CertificateException: No subject alternative names matching IP address 168.72.15.231 found at sun.security.util.HostnameChecker.matchIP(HostnameChecker.java:168) at sun.security.util.HostnameChecker.match(HostnameChecker.java:94) at sun.security.ssl.X509TrustManagerImpl.checkIdentity(X509TrustManagerImpl.java:455) at sun.security.ssl.X509TrustManagerImpl.checkIdentity(X509TrustManagerImpl.java:436) at sun.security.ssl.X509TrustManagerImpl.checkTrusted(X509TrustManagerImpl.java:252) at sun.security.ssl.X509TrustManagerImpl.checkServerTrusted(X509TrustManagerImpl.java:136) at sun.security.ssl.ClientHandshaker.serverCertificate(ClientHandshaker.java:1501) ... 17 more",
"ssl.endpoint.identification.algorithm=",
"props.put(\"ssl.endpoint.identification.algorithm\", \"\");",
"com.company=Red_Hat rht.prod_name=Red_Hat_Application_Foundations rht.prod_ver=2024.Q2 rht.comp=AMQ_Streams rht.comp_ver=2.7 rht.subcomp=entity-operator rht.subcomp_t=infrastructure",
"com.company=Red_Hat rht.prod_name=Red_Hat_Application_Foundations rht.prod_ver=2024.Q2 rht.comp=AMQ_Streams rht.comp_ver=2.7 rht.subcomp=kafka-bridge rht.subcomp_t=application",
"dnf install <package_name>",
"dnf install <path_to_download_package>"
] | https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.7/html-single/deploying_and_managing_streams_for_apache_kafka_on_openshift/%7Bsupported-configurations%7D |
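The Kafka CLI steps above repeat one pattern several times: dump the certificate chain that a TLS endpoint presents with openssl s_client, keep the last certificate in the chain (which the procedure treats as the CA certificate), and import it into a PKCS12 truststore with keytool. The helper below is a minimal sketch of that pattern only and is not part of the Red Hat procedure; the function name and the temporary file paths are made up, and it assumes openssl and keytool are available on the PATH.

# Hypothetical helper that wraps the openssl + keytool steps shown above.
# Arguments: <host:port> <alias> <truststore path> <truststore password>
extract_and_trust_ca() {
  endpoint="$1"      # for example my-cluster-kafka-bootstrap:9093
  cert_alias="$2"    # for example my-cluster-kafka
  truststore="$3"    # for example /tmp/truststore.p12
  storepass="$4"
  # Print every certificate in the chain presented by the endpoint
  echo "Q" | openssl s_client -showcerts -connect "$endpoint" 2>/dev/null \
    | awk '/BEGIN CERTIFICATE/,/END CERTIFICATE/ { print $0 }' > "/tmp/${cert_alias}-chain.pem"
  # Keep only the last certificate block, which the procedure above uses as the CA certificate
  awk '/BEGIN CERTIFICATE/{buf=""} {buf=buf $0 ORS} END{printf "%s", buf}' \
    "/tmp/${cert_alias}-chain.pem" > "/tmp/${cert_alias}-ca.crt"
  # Import the CA certificate into the PKCS12 truststore
  keytool -keystore "$truststore" -storetype pkcs12 -alias "$cert_alias" \
    -storepass "$storepass" -import -file "/tmp/${cert_alias}-ca.crt" -noprompt
}
# Example call with the same values used in the procedure above:
# extract_and_trust_ca my-cluster-kafka-bootstrap:9093 my-cluster-kafka /tmp/truststore.p12 storepass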
Chapter 2. Defining the Default Configuration | Chapter 2. Defining the Default Configuration When creating and configuring an Overcloud without an external load balancer, the director configures HAProxy to distribute traffic to multiple OpenStack services. The director provides this configuration in the /etc/haproxy/haproxy.cfg file on each Controller node. The default configuration contains three main parts: global, defaults, and multiple service configurations. The next few sections examine the default parameters from each configuration section. This provides an example of the configuration settings for installing and configuring your external load balancer. Note that these parameters are only a fraction of the total HAProxy parameters. For details about these and other parameters, see the "HAProxy Configuration Manual" located in /usr/share/doc/haproxy-*/configuration.txt on the Controller nodes (or any system where the haproxy package is installed). 2.1. Global Configuration This section defines a set of process-wide parameters. This includes the following: daemon : Run as a background process. user haproxy , group haproxy : Defines the Linux user and group that owns the process. log : Defines the syslog server to use. maxconn : Sets the maximum number of concurrent connections to the process. pidfile : Sets the file to use for the process IDs. 2.2. Defaults Configuration This section defines a default set of parameters for each service. This includes the following: log : Enables logging for the service. The global value means that the logging functions use the log parameters in the global section. mode : Sets the protocol to use. In this case, the default is TCP. retries : Sets the number of retries to perform on a server before reporting a connection failure. timeout : Sets the maximum time to wait for a particular function. For example, timeout http-request sets the maximum time to wait for a complete HTTP request. 2.3. Services Configuration There are multiple service configuration sections in the default file. Each service configuration includes the following: listen : The name of the service listening for requests. bind : The IP address and TCP port number on which the service listens. server : The name of each server providing the service, the server's IP address and listening port, and other information. The ceilometer example shows the HAProxy settings for the ceilometer service. This service identifies the IP addresses and ports on which the ceilometer service is offered (port 8777 on 172.16.20.250 and 172.16.23.250). HAProxy directs the requests made for those addresses to overcloud-controller-0 (172.16.20.150:8777), overcloud-controller-1 (172.16.20.151:8777), or overcloud-controller-2 (172.16.20.152:8777). In addition, the example server parameters enable the following: check : Enables health checks. fall 5 : After five failed health checks, the server is considered dead. inter 2000 : The interval between two consecutive health checks is set to 2000 milliseconds (or 2 seconds). rise 2 : After two successful health checks, a server is considered operational. Each service binds to different addresses, representing different network traffic types. Also, some services contain additional configuration options. The next chapter examines each specific service configuration so that you can replicate these details on your external load balancer; a brief sketch of reproducing one service section on an external HAProxy host follows the command listing below. | [
"global daemon group haproxy log /dev/log local0 maxconn 10000 pidfile /var/run/haproxy.pid user haproxy",
"defaults log global mode tcp retries 3 timeout http-request 10s timeout queue 1m timeout connect 10s timeout client 1m timeout server 1m timeout check 10s",
"listen ceilometer bind 172.16.20.250:8777 bind 172.16.23.250:8777 server overcloud-controller-0 172.16.20.150:8777 check fall 5 inter 2000 rise 2 server overcloud-controller-1 172.16.20.151:8777 check fall 5 inter 2000 rise 2 server overcloud-controller-2 172.16.20.152:8777 check fall 5 inter 2000 rise 2"
] | https://docs.redhat.com/en/documentation/red_hat_openstack_platform/16.0/html/external_load_balancing_for_the_overcloud/defining_the_default_configuration |
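The listen, bind, and server directives shown above are the pieces you reproduce on the external load balancer. As a rough sketch, and assuming the external load balancer is itself an HAProxy host whose configuration is kept in /etc/haproxy/haproxy.conf as on the Controller nodes (the path and port are illustrative, not requirements of this guide), you can verify a replicated configuration before putting it into service:

# Check the copied configuration for syntax errors without starting the proxy.
haproxy -c -f /etc/haproxy/haproxy.conf

# Reload the running daemon so the new frontends take effect.
systemctl reload haproxy

# Confirm that the ceilometer frontend is listening on the expected VIPs and port.
ss -lnt | grep ':8777'

The -c flag only parses and validates the file, so it is a safe pre-flight check on a production load balancer.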
Chapter 9. Logging | Chapter 9. Logging Logging is important in troubleshooting and debugging. By default, logging is turned off. To enable logging, you must set a logging level and provide a delegate function to receive the log messages. 9.1. Setting the log output level The library emits log traces at different levels: Error Warning Information Verbose The lowest log level, Error , traces only error events and produces the fewest log messages. A higher log level includes all the log levels below it and generates a larger volume of log messages. 9.2. Enabling protocol logging The log level Frame is handled differently. Setting trace level Frame enables tracing output for AMQP protocol headers and frames. Tracing at one of the other log levels must be logically ORed with Frame to get normal tracing output and AMQP frame tracing at the same time. For example, the following code writes AMQP frames to the console. Example: Logging delegate Trace.TraceLevel = TraceLevel.Frame; Trace.TraceListener = (f, a) => Console.WriteLine( DateTime.Now.ToString("[hh:mm:ss.fff]") + " " + string.Format(f, a)); | [
"// Enable Error logs only. Trace.TraceLevel = TraceLevel.Error",
"// Enable Verbose logs. This includes logs at all log levels. Trace.TraceLevel = TraceLevel.Verbose",
"// Enable just AMQP frame tracing Trace.TraceLevel = TraceLevel.Frame;",
"// Enable AMQP Frame logs, and Warning and Error logs Trace.TraceLevel = TraceLevel.Frame | TraceLevel.Warning;",
"Trace.TraceLevel = TraceLevel.Frame; Trace.TraceListener = (f, a) => Console.WriteLine( DateTime.Now.ToString(\"[hh:mm:ss.fff]\") + \" \" + string.Format(f, a));"
] | https://docs.redhat.com/en/documentation/red_hat_amq/2021.q3/html/using_the_amq_.net_client/logging |
Cluster administration | Cluster administration OpenShift Dedicated 4 Configuring OpenShift Dedicated clusters Red Hat OpenShift Documentation Team | [
"{ \"Version\": \"2012-10-17\", \"Statement\": [ { \"Effect\": \"Allow\", \"Action\": \"sts:AssumeRole\", \"Resource\": \"*\" } ] }",
"ocm create machine-pool --cluster <cluster_name|cluster_id> \\ 1 --instance-type <instance_type> \\ 2 --replicas <number_of_replicas> \\ 3 --availability-zone <availability_zone> \\ 4 [flags] \\ 5 <machine_pool_id> 6"
] | https://docs.redhat.com/en/documentation/openshift_dedicated/4/html-single/cluster_administration/index |
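The machine pool command listed above uses placeholders for its flags. Purely as an illustration of how those placeholders are filled in (the cluster name, instance type, availability zone, and pool ID below are invented, not values from this guide), a complete invocation might look like this:

ocm create machine-pool --cluster my-cluster --instance-type m5.xlarge --replicas 3 --availability-zone us-east-1a db-workers

This creates a machine pool named db-workers with three m5.xlarge replicas pinned to the given availability zone.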
8.5. Yum Plug-ins | 8.5. Yum Plug-ins Yum provides plug-ins that extend and enhance its operations. Certain plug-ins are installed by default. Yum always informs you which plug-ins, if any, are loaded and active whenever you call any yum command. For example: Note that the plug-in names which follow Loaded plugins are the names you can provide to the --disableplugin= plugin_name option. 8.5.1. Enabling, Configuring, and Disabling Yum Plug-ins To enable Yum plug-ins, ensure that a line beginning with plugins= is present in the [main] section of /etc/yum.conf , and that its value is 1 : You can disable all plug-ins by changing this line to plugins=0 . Important Disabling all plug-ins is not advised because certain plug-ins provide important Yum services. In particular, rhnplugin provides support for RHN Classic , and product-id and subscription-manager plug-ins provide support for the certificate-based Content Delivery Network ( CDN ). Disabling plug-ins globally is provided as a convenience option, and is generally only recommended when diagnosing a potential problem with Yum . Every installed plug-in has its own configuration file in the /etc/yum/pluginconf.d/ directory. You can set plug-in specific options in these files. For example, here is the refresh-packagekit plug-in's refresh-packagekit.conf configuration file: Plug-in configuration files always contain a [main] section (similar to Yum's /etc/yum.conf file) in which there is (or you can place if it is missing) an enabled= option that controls whether the plug-in is enabled when you run yum commands. If you disable all plug-ins by setting plugins=0 in /etc/yum.conf , then all plug-ins are disabled regardless of whether they are enabled in their individual configuration files. If you merely want to disable all Yum plug-ins for a single yum command, use the --noplugins option. If you want to disable one or more Yum plug-ins for a single yum command, add the --disableplugin= plugin_name option to the command. For example, to disable the presto plug-in while updating a system, type: The plug-in names you provide to the --disableplugin= option are the same names listed after the Loaded plugins line in the output of any yum command. You can disable multiple plug-ins by separating their names with commas. In addition, you can match multiple plug-in names or shorten long ones by using glob expressions: | [
"~]# yum info yum Loaded plugins: product-id, refresh-packagekit, subscription-manager [output truncated]",
"plugins=1",
"[main] enabled=1",
"~]# yum update --disableplugin=presto",
"~]# yum update --disableplugin=presto,refresh-pack*"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/deployment_guide/sec-yum_plugins |
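As a small illustration of the per-plug-in configuration files described above (using the refresh-packagekit plug-in from the example; any installed plug-in works the same way), you might inspect and toggle a single plug-in from the shell like this:

# Show whether the plug-in is currently enabled.
cat /etc/yum/pluginconf.d/refresh-packagekit.conf

# Disable the plug-in persistently by flipping its enabled flag.
sed -i 's/^enabled=1/enabled=0/' /etc/yum/pluginconf.d/refresh-packagekit.conf

# Alternatively, disable it for a single transaction only.
yum update --disableplugin=refresh-packagekit

The sed edit is equivalent to opening the file and setting enabled=0 by hand; it leaves the global plugins= setting in /etc/yum.conf untouched.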
Registry | Registry OpenShift Container Platform 4.13 Configuring registries for OpenShift Container Platform Red Hat OpenShift Documentation Team | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.13/html/registry/index |
4.9 Release notes | 4.9 Release notes Red Hat OpenShift Data Foundation 4.9 Release notes for features and enhancements, known issues, and other important release information. Red Hat Storage Documentation Team Abstract The release notes for Red Hat OpenShift Data Foundation 4.9 summarize all new features and enhancements, notable technical changes, and any known bugs upon general availability. | null | https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.9/html/4.9_release_notes/index
Chapter 1. Migrating your IdM environment from RHEL 7 servers to RHEL 8 servers | Chapter 1. Migrating your IdM environment from RHEL 7 servers to RHEL 8 servers To upgrade a RHEL 7 IdM environment to RHEL 8, you must first add new RHEL 8 IdM replicas to your RHEL 7 IdM environment, and then retire the RHEL 7 servers. Warning Performing an in-place upgrade of RHEL 7 IdM servers and IdM server nodes to RHEL 8 is not supported. Migrating directly to RHEL 8 from RHEL 6 or earlier versions is not supported. To properly update your IdM data, you must perform incremental migrations. For example, to migrate a RHEL 6 IdM environment to RHEL 8: Migrate from RHEL 6 servers to RHEL 7 servers. See Migrating Identity Management from Red Hat Enterprise Linux 6 to Version 7 . Migrate from RHEL 7 servers to RHEL 8 servers, as described in this section. Important RHEL 8 supports SPAKE and IdP pre-authentication, but RHEL 7 does not. Having RHEL 8 servers with SPAKE or IdP enabled in a RHEL 7 IdM deployment may lead to problems such as users not being able to log in. Therefore, migrate all servers in an IdM deployment as quickly as possible. For more information, see: https://access.redhat.com/solutions/7053377 https://access.redhat.com/solutions/3529911 This procedure describes how to migrate all Identity Management (IdM) data and configuration from a Red Hat Enterprise Linux (RHEL) 7 server to a RHEL 8 server. You can also use this procedure to migrate from FreeIPA servers on non-RHEL Linux distributions to IdM on RHEL 8 servers. The migration procedure includes: Configuring a RHEL 8 IdM server and adding it as a replica to your current RHEL 7 IdM environment. For details, see Installing the RHEL 8 Replica . Making the RHEL 8 server the certificate authority (CA) renewal server. For details, see Assigning the CA renewal server role to the RHEL 8 IdM server . Stopping the generation of the certificate revocation list (CRL) on the RHEL 7 server and redirecting CRL requests to RHEL 8. For details, see Stopping CRL generation on a RHEL 7 IdM CA server . Starting the generation of the CRL on the RHEL 8 server. For details, see Starting CRL generation on the new RHEL 8 IdM CA server . Stopping and decommissioning the original RHEL 7 CA renewal server. For details, see Stopping and decommissioning the RHEL 7 server . In the following procedures: rhel8.example.com is the RHEL 8 system that will become the new CA renewal server. rhel7.example.com is the original RHEL 7 CA renewal server. To identify which Red Hat Enterprise Linux 7 server is the CA renewal server, run the following command on any IdM server: If your IdM deployment does not use a certificate authority (CA), any IdM server running on RHEL 7 can be rhel7.example.com . Note Complete the steps in the following sections only if your IdM deployment uses an embedded CA: Assigning the CA renewal server role to the RHEL 8 IdM server Stopping CRL generation on a RHEL 7 IdM CA server Starting CRL generation on the new RHEL 8 IdM CA server 1.1. Preparing for migrating IdM from RHEL 7 to RHEL 8 On rhel7.example.com : Upgrade the system to the latest RHEL 7 version. Ensure that the domain level for your domain is set to 1. For more information, see Displaying and Raising the Domain Level in the Linux Domain Identity, Authentication, and Policy Guide for RHEL 7. Update the ipa- * packages to their latest version: Warning When upgrading multiple Identity Management (IdM) servers, wait at least 10 minutes between each upgrade. 
When two or more servers are upgraded simultaneously or with only short intervals between the upgrades, there is not enough time to replicate the post-upgrade data changes throughout the topology, which can result in conflicting replication events. On rhel8.example.com : Install the latest version of Red Hat Enterprise Linux on the system. For more information, see Interactively installing RHEL from installation media . Identify the time server rhel7.example.com is synchronized with: Important In RHEL 8, IdM does not provide its own time server: the installation of IdM on rhel8.example.com does not result in the installation of an NTP server on the host. Therefore, you need to use a separate NTP server, for example ntp.example.com . For more information, see Migrating to chrony and Time service requirements for IdM . While rhel7.example.com can be used in an NTP server role, you will decommission the server as part of the migration process. Therefore, rhel8.example.com needs to be synchronized directly with ntp.example.com instead. You can specify this during the client installation process. Enroll the system as an IdM client into the domain for which the rhel7.example.com IdM server is authoritative. For more information, see Installing an IdM client . When installing the client, specify the time server from the previous step: If you are using a pool of NTP servers, use the --ntp-pool option. If you do not specify an NTP server manually, it will be automatically set from DNS records. This can lead to rhel8.example.com synchronizing with rhel7.example.com . This will cause issues when the RHEL 7 server is decommissioned. If the RHEL 8 system is already properly configured as an NTP client, you can use the --no-ntp option when performing the IdM client installation. Important Do not use single-label domain names, for example .company . Starting with RHEL 8, IdM does not accept single-labeled domain names and the domain name must be composed of one or more subdomains and a top level domain, for example example.com or company.example.com . If the existing domain is single-labeled, it is not possible to perform the migration using these instructions. In these cases, use Migrating an LDAP Server to Identity Management . Prepare the system for IdM server installation. See Preparing the system for IdM server installation . Authorize the system for the installation of an IdM replica. See Authorizing the installation of a replica on an IdM client . Update the ipa- * packages to their latest version: Additional resources Planning your CA services Planning your DNS services and host names Planning a cross-forest trust between IdM and AD Installing packages required for an IdM server . Upgrading from RHEL 7 to RHEL 8 . 1.2. Installing the RHEL 8 replica List which server roles are present in your RHEL 7 environment: Optional: If you want to use the same per-server forwarders for rhel8.example.com that rhel7.example.com is using, view the per-server forwarders for rhel7.example.com : Install the IdM server on rhel8.example.com as a replica of the IdM RHEL 7 server, including all the server roles present on your rhel7.example.com except the NTP server role.
To install the roles from the example above, use these options with the ipa-replica-install command: --setup-ca to set up the Certificate System component --setup-dns and --forwarder to configure an integrated DNS server and set a per-server forwarder to take care of DNS queries that go outside the IdM domain Note Additionally, if your IdM deployment is in a trust relationship with Active Directory (AD), add the --setup-adtrust option to the ipa-replica-install command to configure AD trust capability on rhel8.example.com . To set up an IdM server with the IP address of 192.0.2.1 that uses a per-server forwarder with the IP address of 192.0.2.20: You do not need to specify the RHEL 7 IdM server itself because if DNS is working correctly, rhel8.example.com will find it using DNS autodiscovery. Optional: Add an _ntp._udp service (SRV) record for your external NTP time server to the DNS of the newly-installed IdM server, rhel8.example.com . Doing this is recommended because IdM in RHEL 8 does not provide its own time service. The presence of the SRV record for the time server in IdM DNS ensures that future RHEL 8 replica and client installations are automatically configured to synchronize with the time server used by rhel8.example.com . This is because ipa-client-install looks for the _ntp._udp DNS entry unless --ntp-server or --ntp-pool options are provided on the install command-line interface (CLI). Verification Verify that the IdM services are running on rhel8.example.com : Verify that server roles for rhel8.example.com are the same as for rhel7.example.com except the NTP server role: Optional: Display details about the replication agreement between rhel7.example.com and rhel8.example.com : Optional: If your IdM deployment is in a trust relationship with AD, verify that it is working: Verify the Kerberos configuration Attempt to resolve an AD user on rhel8.example.com : Verify that rhel8.example.com is synchronized with the NTP server: Additional resources DNS configuration priorities Time service requirements for IdM Migrating to chrony 1.3. Assigning the CA renewal server role to the RHEL 8 IdM server Follow this procedure to make the RHEL 8 server the certificate authority (CA) renewal server. Note Follow these steps only if your IdM deployment uses an embedded certificate authority (CA). On rhel8.example.com , configure rhel8.example.com as the new CA renewal server: Configure rhel8.example.com to handle CA subsystem certificate renewal: The output confirms that the update was successful. On rhel8.example.com , enable the certificate updater task: Open the /etc/pki/pki-tomcat/ca/CS.cfg configuration file for editing. Remove the ca.certStatusUpdateInterval entry, or set it to the desired interval in seconds. The default value is 600 . Save and close the /etc/pki/pki-tomcat/ca/CS.cfg configuration file. Restart IdM services: On rhel7.example.com , disable the certificate updater task: Open the /etc/pki/pki-tomcat/ca/CS.cfg configuration file for editing. Change ca.certStatusUpdateInterval to 0 , or add the following entry if it does not exist: Save and close the /etc/pki/pki-tomcat/ca/CS.cfg configuration file. Restart IdM services: 1.4. Stopping CRL generation on a RHEL 7 IdM CA server Note Follow these steps only if your IdM deployment uses an embedded certificate authority (CA). Follow this procedure to stop generating the Certificate Revocation List (CRL) on the rhel7.example.com CA server using the ipa-crlgen-manage command. Prerequisites You must be logged in as root.
Procedure Optional: Check if rhel7.example.com is generating the CRL: Stop generating the CRL on the rhel7.example.com server: Verification Check if the rhel7.example.com server stopped generating the CRL: The rhel7.example.com server stopped generating the CRL. The next step is to enable generating the CRL on rhel8.example.com . 1.5. Starting CRL generation on the new RHEL 8 IdM CA server Note Follow these steps only if your IdM deployment uses an embedded certificate authority (CA). Prerequisites You must be logged in as root on the rhel8.example.com machine. Procedure To start generating the CRL on rhel8.example.com , use the ipa-crlgen-manage enable command: To check if CRL generation is enabled, use the ipa-crlgen-manage status command: 1.6. Stopping and decommissioning the RHEL 7 server Ensure that all data, including the latest changes, have been correctly migrated from rhel7.example.com to rhel8.example.com . For example: Add a new user on rhel7.example.com : Check that the user has been replicated to rhel8.example.com : Ensure that a Distributed Numeric Assignment (DNA) ID range is allocated to rhel8.example.com . Use one of the following methods: Activate the DNA plug-in on rhel8.example.com directly by creating another test user: Assign a specific DNA ID range to rhel8.example.com : On rhel7.example.com , display the IdM ID range: On rhel7.example.com , display the allocated DNA ID ranges: Reduce the DNA ID range allocated to rhel7.example.com so that a section becomes available to rhel8.example.com : Assign the remaining part of the IdM ID range to rhel8.example.com : Stop all IdM services on rhel7.example.com to force domain discovery to the new rhel8.example.com server. After this, the ipa utility will contact the new server through a remote procedure call (RPC). Remove the RHEL 7 server from the topology by executing the removal commands on the RHEL 8 server. For details, see Uninstalling an IdM server . Additional resources Adjusting ID ranges manually
"ipa config-show | grep \"CA renewal\" IPA CA renewal master: rhel7.example.com",
"yum update ipa- *",
"ntpstat synchronised to NTP server ( ntp.example.com ) at stratum 3 time correct to within 42 ms polling server every 1024 s",
"ipa-client-install --mkhomedir --ntp-server ntp.example.com",
"yum update ipa- *",
"ipa server-role-find --status enabled --server rhel7.example.com ---------------------- 3 server roles matched ---------------------- Server name: rhel7.example.com Role name: CA server Role status: enabled Server name: rhel7.example.com Role name: DNS server Role status: enabled Server name: rhel7.example.com Role name: NTP server Role status: enabled [... output truncated ...]",
"ipa dnsserver-show rhel7.example.com ----------------------------- 1 DNS server matched ----------------------------- Server name: rhel7.example.com SOA mname: rhel7.example.com. Forwarders: 192.0.2.20 Forward policy: only -------------------------------------------------- Number of entries returned 1 --------------------------------------------------",
"ipa-replica-install --setup-ca --ip-address 192.0.2.1 --setup-dns --forwarder 192.0.2.20",
"ipactl status Directory Service: RUNNING [... output truncated ...] ipa: INFO: The ipactl command was successful",
"[root@rhel8 ~]USD kinit admin [root@rhel8 ~]USD ipa server-role-find --status enabled --server rhel8.example.com ---------------------- 2 server roles matched ---------------------- Server name: rhel8.example.com Role name: CA server Role status: enabled Server name: rhel8.example.com Role name: DNS server Role status: enabled",
"ipa-csreplica-manage list --verbose rhel8.example.com Directory Manager password: rhel7.example.com last init status: None last init ended: 1970-01-01 00:00:00+00:00 last update status: Error (0) Replica acquired successfully: Incremental update succeeded last update ended: 2019-02-13 13:55:13+00:00",
"id [email protected]",
"chronyc tracking Reference ID : CB00710F ( ntp.example.com ) Stratum : 3 Ref time (UTC) : Tue Nov 16 09:49:17 2021 [... output truncated ...]",
"ipa config-mod --ca-renewal-master-server rhel8.example.com IPA masters: rhel7.example.com, rhel8.example.com IPA CA servers: rhel7.example.com, rhel8.example.com IPA NTP servers: rhel7.example.com, rhel8.example.com IPA CA renewal master: rhel8.example.com",
"[user@rhel8 ~]USD ipactl restart",
"ca.certStatusUpdateInterval=0",
"[user@rhel7 ~]USD ipactl restart",
"ipa-crlgen-manage status CRL generation: enabled Last CRL update: 2019-10-31 12:00:00 Last CRL Number: 6 The ipa-crlgen-manage command was successful",
"ipa-crlgen-manage disable Stopping pki-tomcatd Editing /var/lib/pki/pki-tomcat/conf/ca/CS.cfg Starting pki-tomcatd Editing /etc/httpd/conf.d/ipa-pki-proxy.conf Restarting httpd CRL generation disabled on the local host. Please make sure to configure CRL generation on another master with ipa-crlgen-manage enable. The ipa-crlgen-manage command was successful",
"ipa-crlgen-manage status",
"ipa-crlgen-manage enable Stopping pki-tomcatd Editing /var/lib/pki/pki-tomcat/conf/ca/CS.cfg Starting pki-tomcatd Editing /etc/httpd/conf.d/ipa-pki-proxy.conf Restarting httpd Forcing CRL update CRL generation enabled on the local host. Please make sure to have only a single CRL generation master. The ipa-crlgen-manage command was successful",
"ipa-crlgen-manage status CRL generation: enabled Last CRL update: 2019-10-31 12:10:00 Last CRL Number: 7 The ipa-crlgen-manage command was successful",
"ipa user-add random_user First name: random Last name: user",
"ipa user-find random_user -------------- 1 user matched -------------- User login: random_user First name: random Last name: user",
"ipa user-add another_random_user First name: another Last name: random_user",
"ipa idrange-find ---------------- 3 ranges matched ---------------- Range name: EXAMPLE.COM_id_range First Posix ID of the range: 196600000 Number of IDs in the range: 200000 First RID of the corresponding RID range: 1000 First RID of the secondary RID range: 100000000 Range type: local domain range",
"ipa-replica-manage dnarange-show rhel7.example.com: 196600026-196799999 rhel8.example.com: No range set",
"ipa-replica-manage dnarange-set rhel7.example.com 196600026-196699999",
"ipa-replica-manage dnarange-set rhel8.example.com 196700000-196799999",
"ipactl stop Stopping CA Service Stopping pki-ca: [ OK ] Stopping HTTP Service Stopping httpd: [ OK ] Stopping MEMCACHE Service Stopping ipa_memcached: [ OK ] Stopping DNS Service Stopping named: . [ OK ] Stopping KPASSWD Service Stopping Kerberos 5 Admin Server: [ OK ] Stopping KDC Service Stopping Kerberos 5 KDC: [ OK ] Stopping Directory Service Shutting down dirsrv: EXAMPLE-COM... [ OK ] PKI-IPA... [ OK ]"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/migrating_to_identity_management_on_rhel_8/migrate-7-to-8_migrating |
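The removal commands referenced in section 1.6 are not reproduced in this chapter. As a minimal sketch of that final step, run on rhel8.example.com after the RHEL 7 services have been stopped (see Uninstalling an IdM server for the full procedure, including DNS and certificate cleanup):

kinit admin

# Remove the retired server from the IdM topology.
ipa server-del rhel7.example.com

# Confirm that only the RHEL 8 server remains and that it holds the CA renewal role.
ipa server-find
ipa config-show | grep "CA renewal"

If replication agreements to the stopped server linger, ipa server-del offers options such as --ignore-topology-disconnect and --force, but use them only when you are certain the RHEL 7 server will never come back online.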
Appendix C. Configuring a Host for PCI Passthrough | Appendix C. Configuring a Host for PCI Passthrough Note This is one in a series of topics that show how to set up and configure SR-IOV on Red Hat Virtualization. For more information, see Setting Up and Configuring SR-IOV . Enabling PCI passthrough allows a virtual machine to use a host device as if the device were directly attached to the virtual machine. To enable the PCI passthrough function, you must enable virtualization extensions and the IOMMU function. The following procedure requires you to reboot the host. If the host is attached to the Manager already, ensure you place the host into maintenance mode first. Prerequisites Ensure that the host hardware meets the requirements for PCI device passthrough and assignment. See PCI Device Requirements for more information. Configuring a Host for PCI Passthrough Enable the virtualization extension and IOMMU extension in the BIOS. See Enabling Intel VT-x and AMD-V virtualization hardware extensions in BIOS in the Red Hat Enterprise Linux Virtualization Deployment and Administration Guide for more information. Enable the IOMMU flag in the kernel by selecting the Hostdev Passthrough & SR-IOV check box when adding the host to the Manager or by editing the grub configuration file manually. To enable the IOMMU flag from the Administration Portal, see Adding Standard Hosts to the Red Hat Virtualization Manager and Kernel Settings Explained . To edit the grub configuration file manually, see Enabling IOMMU Manually . For GPU passthrough, you need to run additional configuration steps on both the host and the guest system. See GPU device passthrough: Assigning a host GPU to a single virtual machine in Setting up an NVIDIA GPU for a virtual machine in Red Hat Virtualization for more information. Enabling IOMMU Manually Enable IOMMU by editing the grub configuration file. Note If you are using IBM POWER8 hardware, skip this step as IOMMU is enabled by default. For Intel, boot the machine, and append intel_iommu=on to the end of the GRUB_CMDLINE_LINUX line in the grub configuration file. For AMD, boot the machine, and append amd_iommu=on to the end of the GRUB_CMDLINE_LINUX line in the grub configuration file. Note If intel_iommu=on or amd_iommu=on works, you can try adding iommu=pt or amd_iommu=pt . The pt option only enables IOMMU for devices used in passthrough and provides better host performance. However, the option might not be supported on all hardware. Revert to the previous option if the pt option does not work for your host. If the passthrough fails because the hardware does not support interrupt remapping, you can consider enabling the allow_unsafe_interrupts option if the virtual machines are trusted. The allow_unsafe_interrupts option is not enabled by default because enabling it potentially exposes the host to MSI attacks from virtual machines. To enable the option: Refresh the grub.cfg file and reboot the host for these changes to take effect: To enable SR-IOV and assign dedicated virtual NICs to virtual machines, see https://access.redhat.com/articles/2335291 . A short IOMMU verification example follows this appendix. | [
"vi /etc/default/grub GRUB_CMDLINE_LINUX=\"nofb splash=quiet console=tty0 ... intel_iommu=on",
"vi /etc/default/grub GRUB_CMDLINE_LINUX=\"nofb splash=quiet console=tty0 ... amd_iommu=on",
"vi /etc/modprobe.d options vfio_iommu_type1 allow_unsafe_interrupts=1",
"grub2-mkconfig -o /boot/grub2/grub.cfg",
"reboot"
] | https://docs.redhat.com/en/documentation/red_hat_virtualization/4.3/html/installing_red_hat_virtualization_as_a_standalone_manager_with_remote_databases/configuring_a_host_for_pci_passthrough_sm_remotedb_deploy |
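One way to confirm that the IOMMU actually came up after the reboot is shown below. The exact kernel messages vary by vendor and RHEL release, so treat this as a quick sanity check rather than an authoritative test:

# Look for DMAR (Intel) or AMD-Vi (AMD) initialization messages.
dmesg | grep -i -e DMAR -e IOMMU

# List the IOMMU groups; a populated tree means device isolation is active.
find /sys/kernel/iommu_groups/ -type l

If the second command prints nothing, the kernel booted without an active IOMMU, and the grub change or the BIOS setting should be revisited before attempting passthrough.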
Chapter 8. Red Hat Directory Server 11.3 | Chapter 8. Red Hat Directory Server 11.3 8.1. Highlighted updates and new features This section documents new features and important updates in Directory Server 11.3. Directory Server rebased to version 1.4.3.16 The 389-ds-base packages have been upgraded to upstream version 1.4.3.16, which provides a number of bug fixes and enhancements over the previous version. For a complete list of notable changes, read the upstream release notes before updating: https://www.port389.org/docs/389ds/releases/release-1-4-3-16.html https://www.port389.org/docs/389ds/releases/release-1-4-3-15.html https://www.port389.org/docs/389ds/releases/release-1-4-3-14.html https://www.port389.org/docs/389ds/releases/release-1-4-3-13.html https://www.port389.org/docs/389ds/releases/release-1-4-3-12.html https://www.port389.org/docs/389ds/releases/release-1-4-3-11.html https://www.port389.org/docs/389ds/releases/release-1-4-3-10.html https://www.port389.org/docs/389ds/releases/release-1-4-3-9.html Highlighted updates and new features in the 389-ds-base packages Features in Red Hat Directory Server that are included in the 389-ds-base packages are documented in the Red Hat Enterprise Linux 8.4 Release Notes: Directory Server can now reject internal unindexed searches Directory Server supports setting replication agreement bootstrap credentials The dsidm utility supports renaming and moving entries Directory Server now logs the work and operation time in RESULT entries The default value of nsslapd-nagle has been turned off to increase the throughput 8.2. Bug fixes This section describes bugs fixed in Directory Server 11.3 that have a significant impact on users. The lib389 library no longer fails to delete entries discovered by the Account object Previously, the _protected flag of the Account object in the lib389 Directory Server library was enabled. As a consequence, delete operations failed. This update sets the flag to False . As a result, the library no longer fails if you delete or rename entries discovered by the Account object. Bug fixes in the 389-ds-base packages Bug fixes in Red Hat Directory Server that are included in the 389-ds-base packages are documented in the Red Hat Enterprise Linux 8.4 Release Notes: Creating replication agreements with certificate-based authentication now works as expected 8.3. Known issues This section documents known problems and, if applicable, workarounds in Directory Server 11.3. Directory Server settings that are changed outside the web console's window are not automatically visible Because of the design of the Directory Server module in the Red Hat Enterprise Linux 8 web console, the web console does not automatically display the latest settings if a user changes the configuration outside of the console's window. For example, if you change the configuration using the command line while the web console is open, the new settings are not automatically updated in the web console. This applies also if you change the configuration using the web console on a different computer. To work around the problem, manually refresh the web console in the browser if the configuration has been changed outside the console's window. The Directory Server Web Console does not provide an LDAP browser The web console enables administrators to manage and configure Directory Server 11 instances. However, it does not provide an integrated LDAP browser. To manage users and groups in Directory Server, use the dsidm utility.
To display and modify directory entries, use a third-party LDAP browser or the OpenLDAP client utilities provided by the openldap-clients package. | null | https://docs.redhat.com/en/documentation/red_hat_directory_server/11/html/release_notes/directory-server-11.3 |
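As a brief sketch of the command-line alternatives mentioned above (the instance name, suffix, bind DN, and user ID are placeholders, not values from these release notes):

# List users with the dsidm utility for a local instance.
dsidm instance_name -b "dc=example,dc=com" user list

# Query an entry with the OpenLDAP client utilities from the openldap-clients package.
ldapsearch -H ldap://server.example.com -x -D "cn=Directory Manager" -W -b "dc=example,dc=com" "(uid=demo_user)"

dsidm covers routine user and group management, while ldapsearch (or a third-party LDAP browser) is better suited to inspecting arbitrary entries and attributes.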
Chapter 1. Load Balancer Overview | Chapter 1. Load Balancer Overview The Load Balancer is a set of integrated software components that provide for balancing IP traffic across a set of real servers. It consists of two main technologies to monitor cluster members and cluster services: Keepalived and HAProxy. Keepalived uses Linux virtual server ( LVS ) to perform load balancing and failover tasks on the active and passive routers, while HAProxy performs load balancing and high-availability services to TCP and HTTP applications. 1.1. keepalived The keepalived daemon runs on both the active and passive LVS routers. All routers running keepalived use the Virtual Router Redundancy Protocol (VRRP). The active router sends VRRP advertisements at periodic intervals; if the backup routers fail to receive these advertisements, a new active router is elected. On the active router, keepalived can also perform load balancing tasks for real servers. Keepalived is the controlling process related to LVS routers. At boot time, the daemon is started by the systemctl command, which reads the configuration file /etc/keepalived/keepalived.conf . On the active router, the keepalived daemon starts the LVS service and monitors the health of the services based on the configured topology. Using VRRP, the active router sends periodic advertisements to the backup routers. On the backup routers, the VRRP instance determines the running status of the active router. If the active router fails to advertise after a user-configurable interval, Keepalived initiates failover. During failover, the virtual servers are cleared. The new active router takes control of the virtual IP address ( VIP ), sends out an ARP message, sets up IPVS table entries (virtual servers), begins health checks, and starts sending VRRP advertisements. Keepalived performs failover on layer 4, or the Transport layer, upon which TCP conducts connection-based data transmissions. When a real server fails to reply to a simple TCP connection check within the configured timeout, keepalived detects that the server has failed and removes it from the server pool. | null | https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/load_balancer_administration/ch-lvs-overview-VSA
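A minimal way to exercise the behavior described above on a test pair of LVS routers, assuming keepalived and ipvsadm are installed and /etc/keepalived/keepalived.conf is already in place (the commands are generic and not specific to this guide):

# Start the daemon on both routers and confirm that it parsed its configuration.
systemctl start keepalived
systemctl status keepalived

# Follow VRRP state transitions (MASTER/BACKUP) while you stop the active router.
journalctl -u keepalived -f

# On the active router, inspect the IPVS virtual server table that keepalived programmed.
ipvsadm -L -n

Watching the journal during a forced failover shows the backup router entering the MASTER state and claiming the VIP, which matches the sequence described in this chapter.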
Chapter 3. Reference design specifications | Chapter 3. Reference design specifications 3.1. Telco core and RAN DU reference design specifications The telco core reference design specification (RDS) describes OpenShift Container Platform 4.14 clusters running on commodity hardware that can support large scale telco applications including control plane and some centralized data plane functions. The telco RAN RDS describes the configuration for clusters running on commodity hardware to host 5G workloads in the Radio Access Network (RAN). 3.1.1. Reference design specifications for telco 5G deployments Red Hat and certified partners offer deep technical expertise and support for networking and operational capabilities required to run telco applications on OpenShift Container Platform 4.14 clusters. Red Hat's telco partners require a well-integrated, well-tested, and stable environment that can be replicated at scale for enterprise 5G solutions. The telco core and RAN DU reference design specifications (RDS) outline the recommended solution architecture based on a specific version of OpenShift Container Platform. Each RDS describes a tested and validated platform configuration for telco core and RAN DU use models. The RDS ensures an optimal experience when running your applications by defining the set of critical KPIs for telco 5G core and RAN DU. Following the RDS minimizes high severity escalations and improves application stability. 5G use cases are evolving and your workloads are continually changing. Red Hat is committed to iterating over the telco core and RAN DU RDS to support evolving requirements based on customer and partner feedback. 3.1.2. Reference design scope The telco core and telco RAN reference design specifications (RDS) capture the recommended, tested, and supported configurations to get reliable and repeatable performance for clusters running the telco core and telco RAN profiles. Each RDS includes the released features and supported configurations that are engineered and validated for clusters to run the individual profiles. The configurations provide a baseline OpenShift Container Platform installation that meets feature and KPI targets. Each RDS also describes expected variations for each individual configuration. Validation of each RDS includes many long duration and at-scale tests. Note The validated reference configurations are updated for each major Y-stream release of OpenShift Container Platform. Z-stream patch releases are periodically re-tested against the reference configurations. 3.1.3. Deviations from the reference design Deviating from the validated telco core and telco RAN DU reference design specifications (RDS) can have significant impact beyond the specific component or feature that you change. Deviations require analysis and engineering in the context of the complete solution. Important All deviations from the RDS should be analyzed and documented with clear action tracking information. Due diligence is expected from partners to understand how to bring deviations into line with the reference design. This might require partners to provide additional resources to engage with Red Hat to work towards enabling their use case to achieve a best in class outcome with the platform. This is critical for the supportability of the solution and ensuring alignment across Red Hat and with partners. Deviation from the RDS can have some or all of the following consequences: It can take longer to resolve issues. 
There is a risk of missing project service-level agreements (SLAs), project deadlines, end provider performance requirements, and so on. Unapproved deviations may require escalation at executive levels. Note Red Hat prioritizes the servicing of requests for deviations based on partner engagement priorities. 3.2. Telco RAN DU reference design specification 3.2.1. Telco RAN DU 4.14 reference design overview The Telco RAN distributed unit (DU) 4.14 reference design configures an OpenShift Container Platform 4.14 cluster running on commodity hardware to host telco RAN DU workloads. It captures the recommended, tested, and supported configurations to get reliable and repeatable performance for a cluster running the telco RAN DU profile. 3.2.1.1. OpenShift Container Platform 4.14 features for telco RAN DU The following features that are included in OpenShift Container Platform 4.14 and are leveraged by the telco RAN DU reference design specification (RDS) have been added or updated. Table 3.1. OpenShift Container Platform 4.14 features for the telco RAN DU RDS Feature Description GitOps ZTP independence from managed cluster version You can now use GitOps ZTP to manage clusters that are running different versions of OpenShift Container Platform compared to the version that is running on the hub cluster. You can also have a mix of OpenShift Container Platform versions in the deployed fleet of clusters. Preparing the GitOps ZTP site configuration repository for version independence Using custom CRs alongside the reference CRs in GitOps ZTP You can now use custom CRs alongside the reference configuration CRs provided in the ztp-site-generate container. Adding custom content to the GitOps ZTP pipeline Using custom node labels in the SiteConfig CR with GitOps ZTP You can now use the nodeLabels field in the SiteConfig CR to create custom roles for nodes in managed clusters. Single-node OpenShift SiteConfig CR installation reference Intel Westport Channel e810 NIC as PTP Grandmaster clock (Technology Preview) You can use the Intel Westport Channel E810-XXVDA4T as a GNSS-sourced grandmaster clock. The NIC is automatically configured by the PTP Operator with the E810 hardware plugin. Configuring linuxptp services as a grandmaster clock for dual E810 Westport Channel NICs PTP Operator hardware specific functionality plugin (Technology Preview) A new E810 NIC hardware plugin is now available in the PTP Operator. You can use the E810 plugin to configure the NIC directly. Intel Westport Channel E810 hardware configuration reference PTP events and metrics The PtpConfig reference configuration CRs have been updated. Discovering PTP capable network devices in your cluster Precaching user-specified images You can now precache application workload images before upgrading your applications on single-node OpenShift clusters with Topology Aware Lifecycle Manager. Precaching images for single-node OpenShift deployments Using OpenShift capabilities to further reduce the single-node OpenShift DU footprint Use cluster capabilities to enable or disable optional components before you install the cluster. In OpenShift Container Platform 4.14, the following optional capabilities are available: image-registry , baremetal , marketplace , openshift-samples , Console , Insights , Storage , CSISnapshot , NodeTuning , MachineAPI . The reference configuration includes only those features required for RAN DU. 
Cluster capabilities Set vectord as the default log collector in the DU profile single-node OpenShift clusters that run DU workloads require logging and log forwarding. Cluster logging and log forwarding 3.2.1.2. Deployment architecture overview You deploy the telco RAN DU 4.14 reference configuration to managed clusters from a centrally managed RHACM hub cluster. The reference design specification (RDS) includes configuration of the managed clusters and the hub cluster components. Figure 3.1. Telco RAN DU deployment architecture overview 3.2.2. Telco RAN DU use model overview Use the following information to plan telco RAN DU workloads, cluster resources, and hardware specifications for the hub cluster and managed single-node OpenShift clusters. 3.2.2.1. Telco RAN DU application workloads DU worker nodes must have 3rd Generation Xeon (Ice Lake) 2.20 GHz or better CPUs with firmware tuned for maximum performance. 5G RAN DU user applications and workloads should conform to the following best practices and application limits: Develop cloud-native network functions (CNFs) that conform to the latest version of the CNF best practices guide . Use SR-IOV for high performance networking. Use exec probes sparingly and only when no other suitable options are available. Do not use exec probes if a CNF uses CPU pinning. Use other probe implementations, for example, httpGet or tcpSocket . When you need to use exec probes, limit the exec probe frequency and quantity. The maximum number of exec probes must be kept below 10, and frequency must not be set to less than 10 seconds. Note Startup probes require minimal resources during steady-state operation. The limitation on exec probes applies primarily to liveness and readiness probes. 3.2.2.2. Telco RAN DU representative reference application workload characteristics The representative reference application workload has the following characteristics: Has a maximum of 15 pods and 30 containers for the vRAN application including its management and control functions Uses a maximum of 2 ConfigMap and 4 Secret CRs per pod Uses a maximum of 10 exec probes with a frequency of not less than 10 seconds Incremental application load on the kube-apiserver is less than 10% of the cluster platform usage Note You can extract the CPU load from the platform metrics. For example: query=avg_over_time(pod:container_cpu_usage:sum{namespace="openshift-kube-apiserver"}[30m]) Application logs are not collected by the platform log collector Aggregate traffic on the primary CNI is less than 1 MBps 3.2.2.3. Telco RAN DU worker node cluster resource utilization The maximum number of running pods in the system, inclusive of application workloads and OpenShift Container Platform pods, is 120. Resource utilization OpenShift Container Platform resource utilization varies depending on many factors including application workload characteristics such as: Pod count Type and frequency of probes Messaging rates on primary CNI or secondary CNI with kernel networking API access rate Logging rates Storage IOPS Cluster resource requirements are applicable under the following conditions: The cluster is running the described representative application workload. The cluster is managed with the constraints described in "Telco RAN DU worker node cluster resource utilization". Components noted as optional in the RAN DU use model configuration are not applied.
Important You will need to do additional analysis to determine the impact on resource utilization and ability to meet KPI targets for configurations outside the scope of the Telco RAN DU reference design. You might have to allocate additional resources in the cluster depending on your requirements. Additional resources Telco RAN DU 4.14 validated software components 3.2.2.4. Hub cluster management characteristics Red Hat Advanced Cluster Management (RHACM) is the recommended cluster management solution. Configure it to the following limits on the hub cluster: Configure a maximum of 5 RHACM policies with a compliant evaluation interval of at least 10 minutes. Use a maximum of 10 managed cluster templates in policies. Where possible, use hub-side templating. Disable all RHACM add-ons except for the policy-controller and observability-controller add-ons. Set Observability to the default configuration. Important Configuring optional components or enabling additional features will result in additional resource usage and can reduce overall system performance. For more information, see Reference design deployment components . Table 3.2. OpenShift platform resource utilization under reference application load Metric Limit Notes CPU usage Less than 4000 mc - 2 cores (4 hyperthreads) Platform CPU is pinned to reserved cores, including both hyperthreads in each reserved core. The system is engineered to use 3 CPUs (3000mc) at steady-state to allow for periodic system tasks and spikes. Memory used Less than 16G 3.2.2.5. Telco RAN DU RDS components The following sections describe the various OpenShift Container Platform components and configurations that you use to configure and deploy clusters to run telco RAN DU workloads. Figure 3.2. Telco RAN DU reference design components Note Ensure that components that are not included in the telco RAN DU profile do not affect the CPU resources allocated to workload applications. Important Out of tree drivers are not supported. Additional resources For details of the telco RAN RDS KPI test results, see Telco RAN DU reference design specification KPI test results . This information is only available to customers and partners. 3.2.3. Telco RAN DU 4.14 reference design components The following sections describe the various OpenShift Container Platform components and configurations that you use to configure and deploy clusters to run RAN DU workloads. 3.2.3.1. Host firmware tuning New in this release No reference design updates in this release Description Configure system level performance. See Configuring host firmware for low latency and high performance for recommended settings. If Ironic inspection is enabled, the firmware setting values are available from the per-cluster BareMetalHost CR on the hub cluster. You enable Ironic inspection with a label in the spec.clusters.nodes field in the SiteConfig CR that you use to install the cluster. For example: nodes: - hostName: "example-node1.example.com" ironicInspect: "enabled" Note The telco RAN DU reference SiteConfig does not enable the ironicInspect field by default. Limits and requirements Hyperthreading must be enabled Engineering considerations Tune all settings for maximum performance Note You can tune firmware selections for power savings at the expense of performance as required. 3.2.3.2. Node Tuning Operator New in this release No reference design updates in this release Description You tune the cluster performance by creating a performance profile . 
Settings that you configure with a performance profile include: Selecting the realtime or non-realtime kernel. Allocating cores to a reserved or isolated cpuset . OpenShift Container Platform processes allocated to the management workload partition are pinned to reserved set. Enabling kubelet features (CPU manager, topology manager, and memory manager). Configuring huge pages. Setting additional kernel arguments. Setting per-core power tuning and max CPU frequency. Limits and requirements The Node Tuning Operator uses the PerformanceProfile CR to configure the cluster. You need to configure the following settings in the RAN DU profile PerformanceProfile CR: Select reserved and isolated cores and ensure that you allocate at least 4 hyperthreads (equivalent to 2 cores) on Intel 3rd Generation Xeon (Ice Lake) 2.20 GHz CPUs or better with firmware tuned for maximum performance. Set the reserved cpuset to include both hyperthread siblings for each included core. Unreserved cores are available as allocatable CPU for scheduling workloads. Ensure that hyperthread siblings are not split across reserved and isolated cores. Configure reserved and isolated CPUs to include all threads in all cores based on what you have set as reserved and isolated CPUs. Set core 0 of each NUMA node to be included in the reserved CPU set. Set the huge page size to 1G. Note You should not add additional workloads to the management partition. Only those pods which are part of the OpenShift management platform should be annotated into the management partition. Engineering considerations You should use the RT kernel to meet performance requirements. Note You can use the non-RT kernel if required. The number of huge pages that you configure depends on the application workload requirements. Variation in this parameter is expected and allowed. Variation is expected in the configuration of reserved and isolated CPU sets based on selected hardware and additional components in use on the system. Variation must still meet the specified limits. Hardware without IRQ affinity support impacts isolated CPUs. To ensure that pods with guaranteed whole CPU QoS have full use of the allocated CPU, all hardware in the server must support IRQ affinity. For more information, see About support of IRQ affinity setting . Note In OpenShift Container Platform 4.14, any PerformanceProfile CR configured on the cluster causes the Node Tuning Operator to automatically set all cluster nodes to use cgroup v1. For more information about cgroups, see Configuring Linux cgroup . 3.2.3.3. PTP Operator New in this release PTP grandmaster clock (T-GM) GPS timing with Intel E810-XXV-4T Westport Channel NIC - minimum firmware version 4.30 (Technology Preview) PTP events and metrics for grandmaster (T-GM) are new in OpenShift Container Platform 4.14 (Technology Preview) Description Configure of PTP timing support for cluster nodes. The DU node can run in the following modes: As an ordinary clock synced to a T-GM or boundary clock (T-BC) As dual boundary clocks, one per NIC (high availability is not supported) As grandmaster clock with support for E810 Westport Channel NICs (Technology Preview) Optionally as a boundary clock for radio units (RUs) Optional: subscribe applications to PTP events that happen on the node that the application is running. You subscribe the application to events via HTTP. Limits and requirements High availability is not supported with dual NIC configurations. 
Westport Channel NICs configured as T-GM do not support DPLL with the current ice driver version. GPS offsets are not reported. Use a default offset of less than or equal to 5. DPLL offsets are not reported. Use a default offset of less than or equal to 5. Engineering considerations Configurations are provided for ordinary clock, boundary clock, or grandmaster clock PTP fast event notifications uses ConfigMap CRs to store PTP event subscriptions Use Intel E810-XXV-4T Westport Channel NICs for PTP grandmaster clocks with GPS timing, minimum firmware version 4.40 3.2.3.4. SR-IOV Operator New in this release No reference design updates in this release Description The SR-IOV Operator provisions and configures the SR-IOV CNI and device plugins. Both netdevice (kernel VFs) and vfio (DPDK) devices are supported. Engineering considerations Customer variation on the configuration and number of SriovNetwork and SriovNetworkNodePolicy custom resources (CRs) is expected. IOMMU kernel command line settings are applied with a MachineConfig CR at install time. This ensures that the SriovOperator CR does not cause a reboot of the node when adding them. 3.2.3.5. Logging New in this release Vector is now the recommended log collector. Description Use logging to collect logs from the far edge node for remote analysis. Engineering considerations Handling logs beyond the infrastructure and audit logs, for example, from the application workload requires additional CPU and network bandwidth based on additional logging rate. As of OpenShift Container Platform 4.14, vector is the reference log collector. Note Use of fluentd in the RAN use model is deprecated. 3.2.3.6. SRIOV-FEC Operator New in this release No reference design updates in this release Description SRIOV-FEC Operator is an optional 3rd party Certified Operator supporting FEC accelerator hardware. Limits and requirements Starting with FEC Operator v2.7.0: SecureBoot is supported The vfio driver for the PF requires the usage of vfio-token that is injected into Pods. The VF token can be passed to DPDK by using the EAL parameter --vfio-vf-token . Engineering considerations The SRIOV-FEC Operator uses CPU cores from the isolated CPU set. You can validate FEC readiness as part of the pre-checks for application deployment, for example, by extending the validation policy. 3.2.3.7. Local Storage Operator New in this release No reference design updates in this release Description You can create persistent volumes that can be used as PVC resources by applications with the Local Storage Operator. The number and type of PV resources that you create depends on your requirements. Engineering considerations Create backing storage for PV CRs before creating the PV . This can be a partition, a local volume, LVM volume, or full disk. Refer to the device listing in LocalVolume CRs by the hardware path used to access each device to ensure correct allocation of disks and partitions. Logical names (for example, /dev/sda ) are not guaranteed to be consistent across node reboots. For more information, see the RHEL 9 documentation on device identifiers . 3.2.3.8. LVMS Operator New in this release No reference design updates in this release New in this release Simplified LVMS deviceSelector logic LVM Storage with ext4 and PV resources Note LVMS Operator is an optional component. Description The LVMS Operator provides dynamic provisioning of block and file storage. The LVMS Operator creates logical volumes from local devices that can be used as PVC resources by applications. 
Volume expansion and snapshots are also possible. The following example configuration creates a vg1 volume group that leverages all available disks on the node except the installation disk: StorageLVMCluster.yaml apiVersion: lvm.topolvm.io/v1alpha1 kind: LVMCluster metadata: name: storage-lvmcluster namespace: openshift-storage annotations: ran.openshift.io/ztp-deploy-wave: "10" spec: {} storage: deviceClasses: - name: vg1 thinPoolConfig: name: thin-pool-1 sizePercent: 90 overprovisionRatio: 10 Limits and requirements In single-node OpenShift clusters, persistent storage must be provided by either LVMS or Local Storage, not both. Engineering considerations The LVMS Operator is not the reference storage solution for the DU use case. If you require LVMS Operator for application workloads, the resource use is accounted for against the application cores. Ensure that sufficient disks or partitions are available for storage requirements. 3.2.3.9. Workload partitioning New in this release No reference design updates in this release Description Workload partitioning pins OpenShift platform and Day 2 Operator pods that are part of the DU profile to the reserved cpuset and removes the reserved CPU from node accounting. This leaves all unreserved CPU cores available for user workloads. The method of enabling and configuring workload partitioning changed in OpenShift Container Platform 4.14. 4.14 and later Configure partitions by setting installation parameters: cpuPartitioningMode: AllNodes Configure management partition cores with the reserved CPU set in the PerformanceProfile CR 4.13 and earlier Configure partitions with extra MachineConfiguration CRs applied at install-time Limits and requirements Namespace and Pod CRs must be annotated to allow the pod to be applied to the management partition Pods with CPU limits cannot be allocated to the partition. This is because mutation can change the pod QoS. For more information about the minimum number of CPUs that can be allocated to the management partition, see Node Tuning Operator . Engineering considerations Workload Partitioning pins all management pods to reserved cores. A sufficient number of cores must be allocated to the reserved set to account for operating system, management pods, and expected spikes in CPU use that occur when the workload starts, the node reboots, or other system events happen. 3.2.3.10. Cluster tuning New in this release You can remove the Image Registry Operator by using the cluster capabilities feature. Note You configure cluster capabilities by using the spec.clusters.installConfigOverrides field in the SiteConfig CR that you use to install the cluster. Description The cluster capabilities feature now includes a MachineAPI component which, when excluded, disables the following Operators and their resources in the cluster: openshift/cluster-autoscaler-operator openshift/cluster-control-plane-machine-set-operator openshift/machine-api-operator Limits and requirements Cluster capabilities are not available for installer-provisioned installation methods. You must apply all platform tuning configurations. The following table lists the required platform tuning configurations: Table 3.3. Cluster capabilities configurations Feature Description Remove optional cluster capabilities Reduce the OpenShift Container Platform footprint by disabling optional cluster Operators on single-node OpenShift clusters only. Remove all optional Operators except the Marketplace and Node Tuning Operators. 
Configure cluster monitoring Configure the monitoring stack for reduced footprint by doing the following: Disable the local alertmanager and telemeter components. If you use RHACM observability, the CR must be augmented with appropriate additionalAlertManagerConfigs CRs to forward alerts to the hub cluster. Reduce the Prometheus retention period to 24h. Note The RHACM hub cluster aggregates managed cluster metrics. Disable networking diagnostics Disable networking diagnostics for single-node OpenShift because they are not required. Configure a single Operator Hub catalog source Configure the cluster to use a single catalog source that contains only the Operators required for a RAN DU deployment. Each catalog source increases the CPU use on the cluster. Using a single CatalogSource fits within the platform CPU budget. 3.2.3.11. Machine configuration New in this release Set rcu_normal after node recovery Limits and requirements The CRI-O wipe disable MachineConfig assumes that images on disk are static other than during scheduled maintenance in defined maintenance windows. To ensure the images are static, do not set the pod imagePullPolicy field to Always . Table 3.4. Machine configuration options Feature Description Container runtime Sets the container runtime to crun for all node roles. kubelet config and container mount hiding Reduces the frequency of kubelet housekeeping and eviction monitoring to reduce CPU usage. Create a container mount namespace, visible to kubelet and CRI-O, to reduce system mount scanning resource usage. SCTP Optional configuration (enabled by default) Enables SCTP. SCTP is required by RAN applications but disabled by default in RHCOS. kdump Optional configuration (enabled by default) Enables kdump to capture debug information when a kernel panic occurs. CRI-O wipe disable Disables automatic wiping of the CRI-O image cache after unclean shutdown. SR-IOV-related kernel arguments Includes additional SR-IOV related arguments in the kernel command line. RCU Normal systemd service Sets rcu_normal after the system is fully started. One-shot time sync Runs a one-time system time synchronization job for control plane or worker nodes. 3.2.3.12. Reference design deployment components The following sections describe the various OpenShift Container Platform components and configurations that you use to configure the hub cluster with Red Hat Advanced Cluster Management (RHACM). 3.2.3.12.1. Red Hat Advanced Cluster Management (RHACM) New in this release Additional node labels can be configured during installation. Description RHACM provides Multi Cluster Engine (MCE) installation and ongoing lifecycle management functionality for deployed clusters. You declaratively specify configurations and upgrades with Policy CRs and apply the policies to clusters with the RHACM policy controller as managed by Topology Aware Lifecycle Manager. GitOps Zero Touch Provisioning (ZTP) uses the MCE feature of RHACM Configuration, upgrades, and cluster status are managed with the RHACM policy controller Limits and requirements A single hub cluster supports up to 3500 deployed single-node OpenShift clusters with 5 Policy CRs bound to each cluster. Engineering considerations Cluster specific configuration: managed clusters typically have some number of configuration values that are specific to the individual cluster. These configurations should be managed using RHACM policy hub-side templating with values pulled from ConfigMap CRs based on the cluster name. 
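A minimal sketch of this pattern is shown below. It assumes a hypothetical site-data ConfigMap in the policy namespace and a per-cluster key derived from the managed cluster name; the source file, policy, and key names are illustrative only.
# Fragment of a PolicyGenTemplate sourceFiles entry that uses RHACM hub-side templating.
# The site-data ConfigMap and the key naming scheme are assumptions for illustration.
- fileName: SriovNetworkNodePolicy.yaml
  policyName: "config-policy"
  metadata:
    name: "sriov-nnp-du-fh"
  spec:
    resourceName: du_fh
    numVfs: '{{hub fromConfigMap "" "site-data" (printf "%s-du-fh-numVfs" .ManagedClusterName) | toInt hub}}'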
To save CPU resources on managed clusters, policies that apply static configurations should be unbound from managed clusters after GitOps ZTP installation of the cluster. For more information, see Release a persistent volume. 3.2.3.12.2. Topology Aware Lifecycle Manager (TALM) New in this release Added support for pre-caching additional user-specified images Description Managed updates TALM is an Operator that runs only on the hub cluster for managing how changes (including cluster and Operator upgrades, configuration, and so on) are rolled out to the network. TALM does the following: Progressively applies updates to fleets of clusters in user-configurable batches by using Policy CRs. Adds ztp-done labels or other user-configurable labels on a per-cluster basis Precaching for single-node OpenShift clusters TALM supports optional precaching of OpenShift Container Platform, OLM Operator, and additional user images to single-node OpenShift clusters before initiating an upgrade. A new PreCachingConfig custom resource is available for specifying optional pre-caching configurations. For example: apiVersion: ran.openshift.io/v1alpha1 kind: PreCachingConfig metadata: name: example-config namespace: example-ns spec: additionalImages: - quay.io/foobar/application1@sha256:3d5800990dee7cd4727d3fe238a97e2d2976d3808fc925ada29c559a47e2e - quay.io/foobar/application2@sha256:3d5800123dee7cd4727d3fe238a97e2d2976d3808fc925ada29c559a47adf - quay.io/foobar/applicationN@sha256:4fe1334adfafadsf987123adfffdaf1243340adfafdedga0991234afdadfs spaceRequired: 45 GiB 1 overrides: preCacheImage: quay.io/test_images/pre-cache:latest platformImage: quay.io/openshift-release-dev/ocp-release@sha256:3d5800990dee7cd4727d3fe238a97e2d2976d3808fc925ada29c559a47e2e operatorsIndexes: - registry.example.com:5000/custom-redhat-operators:1.0.0 operatorsPackagesAndChannels: - local-storage-operator: stable - ptp-operator: stable - sriov-network-operator: stable excludePrecachePatterns: 2 - aws - vsphere 1 The configurable space-required parameter allows you to validate storage space before and after pre-caching 2 Configurable filtering allows the exclusion of unused images Backup and restore for single-node OpenShift TALM supports taking a snapshot of the cluster operating system and configuration to a dedicated partition on a local disk. A restore script is provided that returns the cluster to the backed-up state. Limits and requirements TALM supports concurrent cluster deployment in batches of 400. Precaching and backup features are for single-node OpenShift clusters only. Engineering considerations The PreCachingConfig CR is optional and does not need to be created if you just want to precache platform-related (OpenShift and OLM Operator) images. The PreCachingConfig CR must be applied before referencing it in the ClusterGroupUpgrade CR. Create a recovery partition during installation if you opt to use the TALM backup and restore feature. 3.2.3.12.3. GitOps and GitOps ZTP plugins New in this release GA support for inclusion of user-provided CRs in Git for GitOps ZTP deployments GitOps ZTP independence from the deployed cluster version Description GitOps and GitOps ZTP plugins provide a GitOps-based infrastructure for managing cluster deployment and configuration. Cluster definitions and configurations are maintained as a declarative state in Git. ZTP plugins provide support for generating installation CRs from the SiteConfig CR and automatic wrapping of configuration CRs in policies based on PolicyGenTemplate CRs.
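As a minimal sketch of this wrapping, the following hypothetical PolicyGenTemplate binds to clusters labeled group-du-sno and wraps the PtpConfigSlave.yaml reference CR into a policy, overriding only the hardware-specific interface value; the CR names, namespace, and interface are illustrative assumptions.
apiVersion: ran.openshift.io/v1
kind: PolicyGenTemplate
metadata:
  name: "group-du-sno"
  namespace: "ztp-group"          # assumed GitOps ZTP policy namespace
spec:
  bindingRules:
    group-du-sno: ""              # matches the clusterLabels set in the SiteConfig CR
  mcp: "master"
  sourceFiles:
    # Wraps the reference CR in a policy and overrides selected fields
    - fileName: PtpConfigSlave.yaml
      policyName: "config-policy"
      metadata:
        name: "du-ptp-ordinary"
      spec:
        profile:
          - name: "ordinary"
            interface: "ens5f0"   # hardware-specific, illustrative value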
You can deploy and manage multiple versions of OpenShift Container Platform on managed clusters with the baseline reference configuration CRs in a /source-crs subdirectory provided that subdirectory also contains the kustomization.yaml file. You add user-provided CRs to this subdirectory that you use with the predefined CRs that are specified in the PolicyGenTemplate CRs. This allows you to tailor your configurations to suit your specific requirements and provides GitOps ZTP version independence between managed clusters and the hub cluster. For more information, see the following: Preparing the site configuration repository for version independence Adding custom content to the GitOps ZTP pipeline Limits 300 SiteConfig CRs per ArgoCD application. You can use multiple applications to achieve the maximum number of clusters supported by a single hub cluster. Content in the /source-crs folder in Git overrides content provided in the GitOps ZTP plugin container. Git takes precedence in the search path. Add the /source-crs folder in the same directory as the kustomization.yaml file, which includes the PolicyGenTemplate as a generator. Note Alternative locations for the /source-crs directory are not supported in this context. Engineering considerations To avoid confusion or unintentional overwriting of files when updating content, use unique and distinguishable names for user-provided CRs in the /source-crs folder and extra manifests in Git. The SiteConfig CR allows multiple extra-manifest paths. When files with the same name are found in multiple directory paths, the last file found takes precedence. This allows the full set of version specific Day 0 manifests (extra-manifests) to be placed in Git and referenced from the SiteConfig . With this feature, you can deploy multiple OpenShift Container Platform versions to managed clusters simultaneously. The extraManifestPath field of the SiteConfig CR is deprecated from OpenShift Container Platform 4.15 and later. Use the new extraManifests.searchPaths field instead. 3.2.3.12.4. Agent-based installer New in this release No reference design updates in this release Description Agent-based installer (ABI) provides installation capabilities without centralized infrastructure. The installation program creates an ISO image that you mount to the server. When the server boots it installs OpenShift Container Platform and supplied extra manifests. Note You can also use ABI to install OpenShift Container Platform clusters without a hub cluster. An image registry is still required when you use ABI in this manner. Agent-based installer (ABI) is an optional component. Limits and requirements You can supply a limited set of additional manifests at installation time. You must include MachineConfiguration CRs that are required by the RAN DU use case. Engineering considerations ABI provides a baseline OpenShift Container Platform installation. You install Day 2 Operators and the remainder of the RAN DU use case configurations after installation. 3.2.3.13. Additional components 3.2.3.13.1. Bare Metal Event Relay The Bare Metal Event Relay is an optional Operator that runs exclusively on the managed spoke cluster. It relays Redfish hardware events to cluster applications. Note The Bare Metal Event Relay is not included in the RAN DU use model reference configuration and is an optional feature. If you want to use the Bare Metal Event Relay, assign additional CPU resources from the application CPU budget. 3.2.4. 
Telco RAN distributed unit (DU) reference configuration CRs Use the following custom resources (CRs) to configure and deploy OpenShift Container Platform clusters with the telco RAN DU profile. Some of the CRs are optional depending on your requirements. CR fields you can change are annotated in the CR with YAML comments. Note You can extract the complete set of RAN DU CRs from the ztp-site-generate container image. See Preparing the GitOps ZTP site configuration repository for more information. 3.2.4.1. Day 2 Operators reference CRs Table 3.5. Day 2 Operators CRs Component Reference CR Optional New in this release Cluster logging ClusterLogForwarder.yaml No No Cluster logging ClusterLogging.yaml No No Cluster logging ClusterLogNS.yaml No No Cluster logging ClusterLogOperGroup.yaml No No Cluster logging ClusterLogSubscription.yaml No No Local Storage Operator StorageClass.yaml Yes No Local Storage Operator StorageLV.yaml Yes No Local Storage Operator StorageNS.yaml Yes No Local Storage Operator StorageOperGroup.yaml Yes No Local Storage Operator StorageSubscription.yaml Yes No Node Tuning Operator PerformanceProfile.yaml No No Node Tuning Operator TunedPerformancePatch.yaml No No PTP fast event notifications PtpOperatorConfigForEvent.yaml Yes No PTP Operator PtpConfigBoundary.yaml No No PTP Operator PtpConfigGmWpc.yaml No Yes PTP Operator PtpConfigSlave.yaml No No PTP Operator PtpSubscription.yaml No No PTP Operator PtpSubscriptionNS.yaml No No PTP Operator PtpSubscriptionOperGroup.yaml No No SR-IOV FEC Operator AcceleratorsNS.yaml Yes No SR-IOV FEC Operator AcceleratorsOperGroup.yaml Yes No SR-IOV FEC Operator AcceleratorsSubscription.yaml Yes No SR-IOV FEC Operator SriovFecClusterConfig.yaml Yes No SR-IOV Operator SriovNetwork.yaml No No SR-IOV Operator SriovNetworkNodePolicy.yaml No No SR-IOV Operator SriovOperatorConfig.yaml No No SR-IOV Operator SriovSubscription.yaml No No SR-IOV Operator SriovSubscriptionNS.yaml No No SR-IOV Operator SriovSubscriptionOperGroup.yaml No No 3.2.4.2. Cluster tuning reference CRs Table 3.6. Cluster tuning CRs Component Reference CR Optional New in this release Cluster capabilities example-sno.yaml No No Disabling network diagnostics DisableSnoNetworkDiag.yaml No No Monitoring configuration ReduceMonitoringFootprint.yaml No No OperatorHub DefaultCatsrc.yaml No No OperatorHub DisconnectedICSP.yaml No No OperatorHub OperatorHub.yaml No No 3.2.4.3. Machine configuration reference CRs Table 3.7. Machine configuration CRs Component Reference CR Optional New in this release Container runtime (crun) enable-crun-master.yaml No No Container runtime (crun) enable-crun-worker.yaml No No Disabling CRI-O wipe 99-crio-disable-wipe-master.yaml No No Disabling CRI-O wipe 99-crio-disable-wipe-worker.yaml No No Enabling kdump 05-kdump-config-master.yaml No Yes Enabling kdump 05-kdump-config-worker.yaml No Yes Enabling kdump 06-kdump-master.yaml No No Enabling kdump 06-kdump-worker.yaml No No Kubelet configuration and container mount hiding 01-container-mount-ns-and-kubelet-conf-master.yaml No No Kubelet configuration and container mount hiding 01-container-mount-ns-and-kubelet-conf-worker.yaml No No One-shot time sync 99-sync-time-once-master.yaml No Yes One-shot time sync 99-sync-time-once-worker.yaml No Yes SCTP 03-sctp-machine-config-master.yaml No No SCTP 03-sctp-machine-config-worker.yaml No No SR-IOV related kernel arguments 07-sriov-related-kernel-args-master.yaml No Yes 3.2.4.4. 
YAML reference The following is a complete reference for all the custom resources (CRs) that make up the telco RAN DU 4.14 reference configuration. 3.2.4.4.1. Day 2 Operators reference YAML ClusterLogForwarder.yaml apiVersion: "logging.openshift.io/v1" kind: ClusterLogForwarder metadata: name: instance namespace: openshift-logging annotations: {} spec: outputs: USDoutputs pipelines: USDpipelines ClusterLogging.yaml apiVersion: logging.openshift.io/v1 kind: ClusterLogging metadata: name: instance namespace: openshift-logging annotations: {} spec: managementState: "Managed" collection: logs: type: "vector" ClusterLogNS.yaml --- apiVersion: v1 kind: Namespace metadata: name: openshift-logging annotations: workload.openshift.io/allowed: management ClusterLogOperGroup.yaml --- apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: cluster-logging namespace: openshift-logging annotations: {} spec: targetNamespaces: - openshift-logging ClusterLogSubscription.yaml apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: cluster-logging namespace: openshift-logging annotations: {} spec: channel: "stable" name: cluster-logging source: redhat-operators-disconnected sourceNamespace: openshift-marketplace installPlanApproval: Manual status: state: AtLatestKnown StorageClass.yaml apiVersion: storage.k8s.io/v1 kind: StorageClass metadata: annotations: {} name: example-storage-class provisioner: kubernetes.io/no-provisioner reclaimPolicy: Delete StorageLV.yaml apiVersion: "local.storage.openshift.io/v1" kind: "LocalVolume" metadata: name: "local-disks" namespace: "openshift-local-storage" annotations: {} spec: logLevel: Normal managementState: Managed storageClassDevices: # The list of storage classes and associated devicePaths need to be specified like this example: - storageClassName: "example-storage-class" volumeMode: Filesystem fsType: xfs # The below must be adjusted to the hardware. # For stability and reliability, it's recommended to use persistent # naming conventions for devicePaths, such as /dev/disk/by-path. devicePaths: - /dev/disk/by-path/pci-0000:05:00.0-nvme-1 #--- ## How to verify ## 1. Create a PVC # apiVersion: v1 # kind: PersistentVolumeClaim # metadata: # name: local-pvc-name # spec: # accessModes: # - ReadWriteOnce # volumeMode: Filesystem # resources: # requests: # storage: 100Gi # storageClassName: example-storage-class #--- ## 2. Create a pod that mounts it # apiVersion: v1 # kind: Pod # metadata: # labels: # run: busybox # name: busybox # spec: # containers: # - image: quay.io/quay/busybox:latest # name: busybox # resources: {} # command: ["/bin/sh", "-c", "sleep infinity"] # volumeMounts: # - name: local-pvc # mountPath: /data # volumes: # - name: local-pvc # persistentVolumeClaim: # claimName: local-pvc-name # dnsPolicy: ClusterFirst # restartPolicy: Always ## 3. 
Run the pod on the cluster and verify the size and access of the `/data` mount StorageNS.yaml apiVersion: v1 kind: Namespace metadata: name: openshift-local-storage annotations: workload.openshift.io/allowed: management StorageOperGroup.yaml apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: openshift-local-storage namespace: openshift-local-storage annotations: {} spec: targetNamespaces: - openshift-local-storage StorageSubscription.yaml apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: local-storage-operator namespace: openshift-local-storage annotations: {} spec: channel: "stable" name: local-storage-operator source: redhat-operators-disconnected sourceNamespace: openshift-marketplace installPlanApproval: Manual status: state: AtLatestKnown PerformanceProfile.yaml apiVersion: performance.openshift.io/v2 kind: PerformanceProfile metadata: # if you change this name make sure the 'include' line in TunedPerformancePatch.yaml # matches this name: include=openshift-node-performance-USD{PerformanceProfile.metadata.name} # Also in file 'validatorCRs/informDuValidator.yaml': # name: 50-performance-USD{PerformanceProfile.metadata.name} name: openshift-node-performance-profile annotations: ran.openshift.io/reference-configuration: "ran-du.redhat.com" spec: additionalKernelArgs: - "rcupdate.rcu_normal_after_boot=0" - "efi=runtime" - "vfio_pci.enable_sriov=1" - "vfio_pci.disable_idle_d3=1" - "module_blacklist=irdma" cpu: isolated: USDisolated reserved: USDreserved hugepages: defaultHugepagesSize: USDdefaultHugepagesSize pages: - size: USDsize count: USDcount node: USDnode machineConfigPoolSelector: pools.operator.machineconfiguration.openshift.io/USDmcp: "" nodeSelector: node-role.kubernetes.io/USDmcp: "" numa: topologyPolicy: "restricted" # To use the standard (non-realtime) kernel, set enabled to false realTimeKernel: enabled: true workloadHints: # WorkloadHints defines the set of upper level flags for different type of workloads. # See https://github.com/openshift/cluster-node-tuning-operator/blob/master/docs/performanceprofile/performance_profile.md#workloadhints # for detailed descriptions of each item. # The configuration below is set for a low latency, performance mode. realTime: true highPowerConsumption: false perPodPowerManagement: false TunedPerformancePatch.yaml apiVersion: tuned.openshift.io/v1 kind: Tuned metadata: name: performance-patch namespace: openshift-cluster-node-tuning-operator annotations: {} spec: profile: - name: performance-patch # Please note: # - The 'include' line must match the associated PerformanceProfile name, following below pattern # include=openshift-node-performance-USD{PerformanceProfile.metadata.name} # - When using the standard (non-realtime) kernel, remove the kernel.timer_migration override from # the [sysctl] section and remove the entire section if it is empty. 
data: | [main] summary=Configuration changes profile inherited from performance created tuned include=openshift-node-performance-openshift-node-performance-profile [sysctl] kernel.timer_migration=1 [scheduler] group.ice-ptp=0:f:10:*:ice-ptp.* group.ice-gnss=0:f:10:*:ice-gnss.* [service] service.stalld=start,enable service.chronyd=stop,disable recommend: - machineConfigLabels: machineconfiguration.openshift.io/role: "USDmcp" priority: 19 profile: performance-patch PtpOperatorConfigForEvent.yaml apiVersion: ptp.openshift.io/v1 kind: PtpOperatorConfig metadata: name: default namespace: openshift-ptp annotations: {} spec: daemonNodeSelector: node-role.kubernetes.io/USDmcp: "" ptpEventConfig: enableEventPublisher: true transportHost: "http://ptp-event-publisher-service-NODE_NAME.openshift-ptp.svc.cluster.local:9043" PtpConfigBoundary.yaml apiVersion: ptp.openshift.io/v1 kind: PtpConfig metadata: name: boundary namespace: openshift-ptp annotations: {} spec: profile: - name: "boundary" ptp4lOpts: "-2" phc2sysOpts: "-a -r -n 24" ptpSchedulingPolicy: SCHED_FIFO ptpSchedulingPriority: 10 ptpSettings: logReduce: "true" ptp4lConf: | # The interface name is hardware-specific [USDiface_slave] masterOnly 0 [USDiface_master_1] masterOnly 1 [USDiface_master_2] masterOnly 1 [USDiface_master_3] masterOnly 1 [global] # # Default Data Set # twoStepFlag 1 slaveOnly 0 priority1 128 priority2 128 domainNumber 24 #utc_offset 37 clockClass 248 clockAccuracy 0xFE offsetScaledLogVariance 0xFFFF free_running 0 freq_est_interval 1 dscp_event 0 dscp_general 0 dataset_comparison G.8275.x G.8275.defaultDS.localPriority 128 # # Port Data Set # logAnnounceInterval -3 logSyncInterval -4 logMinDelayReqInterval -4 logMinPdelayReqInterval -4 announceReceiptTimeout 3 syncReceiptTimeout 0 delayAsymmetry 0 fault_reset_interval -4 neighborPropDelayThresh 20000000 masterOnly 0 G.8275.portDS.localPriority 128 # # Run time options # assume_two_step 0 logging_level 6 path_trace_enabled 0 follow_up_info 0 hybrid_e2e 0 inhibit_multicast_service 0 net_sync_monitor 0 tc_spanning_tree 0 tx_timestamp_timeout 50 unicast_listen 0 unicast_master_table 0 unicast_req_duration 3600 use_syslog 1 verbose 0 summary_interval 0 kernel_leap 1 check_fup_sync 0 clock_class_threshold 135 # # Servo Options # pi_proportional_const 0.0 pi_integral_const 0.0 pi_proportional_scale 0.0 pi_proportional_exponent -0.3 pi_proportional_norm_max 0.7 pi_integral_scale 0.0 pi_integral_exponent 0.4 pi_integral_norm_max 0.3 step_threshold 2.0 first_step_threshold 0.00002 max_frequency 900000000 clock_servo pi sanity_freq_limit 200000000 ntpshm_segment 0 # # Transport options # transportSpecific 0x0 ptp_dst_mac 01:1B:19:00:00:00 p2p_dst_mac 01:80:C2:00:00:0E udp_ttl 1 udp6_scope 0x0E uds_address /var/run/ptp4l # # Default interface options # clock_type BC network_transport L2 delay_mechanism E2E time_stamping hardware tsproc_mode filter delay_filter moving_median delay_filter_length 10 egressLatency 0 ingressLatency 0 boundary_clock_jbod 0 # # Clock description # productDescription ;; revisionData ;; manufacturerIdentity 00:00:00 userDescription ; timeSource 0xA0 recommend: - profile: "boundary" priority: 4 match: - nodeLabel: "node-role.kubernetes.io/USDmcp" PtpConfigGmWpc.yaml # The grandmaster profile is provided for testing only # It is not installed on production clusters apiVersion: ptp.openshift.io/v1 kind: PtpConfig metadata: name: grandmaster namespace: openshift-ptp annotations: {} spec: profile: - name: "grandmaster" ptp4lOpts: "-2 --summary_interval -4" 
phc2sysOpts: -r -u 0 -m -O -37 -N 8 -R 16 -s USDiface_master -n 24 ptpSchedulingPolicy: SCHED_FIFO ptpSchedulingPriority: 10 ptpSettings: logReduce: "true" plugins: e810: enableDefaultConfig: false settings: LocalMaxHoldoverOffSet: 1500 LocalHoldoverTimeout: 14400 MaxInSpecOffset: 100 pins: USDe810_pins # "USDiface_master": # "U.FL2": "0 2" # "U.FL1": "0 1" # "SMA2": "0 2" # "SMA1": "0 1" ublxCmds: - args: #ubxtool -P 29.20 -z CFG-HW-ANT_CFG_VOLTCTRL,1 - "-P" - "29.20" - "-z" - "CFG-HW-ANT_CFG_VOLTCTRL,1" reportOutput: false - args: #ubxtool -P 29.20 -e GPS - "-P" - "29.20" - "-e" - "GPS" reportOutput: false - args: #ubxtool -P 29.20 -d Galileo - "-P" - "29.20" - "-d" - "Galileo" reportOutput: false - args: #ubxtool -P 29.20 -d GLONASS - "-P" - "29.20" - "-d" - "GLONASS" reportOutput: false - args: #ubxtool -P 29.20 -d BeiDou - "-P" - "29.20" - "-d" - "BeiDou" reportOutput: false - args: #ubxtool -P 29.20 -d SBAS - "-P" - "29.20" - "-d" - "SBAS" reportOutput: false - args: #ubxtool -P 29.20 -t -w 5 -v 1 -e SURVEYIN,600,50000 - "-P" - "29.20" - "-t" - "-w" - "5" - "-v" - "1" - "-e" - "SURVEYIN,600,50000" reportOutput: true - args: #ubxtool -P 29.20 -p MON-HW - "-P" - "29.20" - "-p" - "MON-HW" reportOutput: true - args: #ubxtool -P 29.20 -p CFG-MSG,1,38,300 - "-P" - "29.20" - "-p" - "CFG-MSG,1,38,300" reportOutput: true ts2phcOpts: " " ts2phcConf: | [nmea] ts2phc.master 1 [global] use_syslog 0 verbose 1 logging_level 7 ts2phc.pulsewidth 100000000 #GNSS module s /dev/ttyGNSS* -al use _0 #cat /dev/ttyGNSS_1700_0 to find available serial port #example value of gnss_serialport is /dev/ttyGNSS_1700_0 ts2phc.nmea_serialport USDgnss_serialport leapfile /usr/share/zoneinfo/leap-seconds.list [USDiface_master] ts2phc.extts_polarity rising ts2phc.extts_correction 0 ptp4lConf: | [USDiface_master] masterOnly 1 [USDiface_master_1] masterOnly 1 [USDiface_master_2] masterOnly 1 [USDiface_master_3] masterOnly 1 [global] # # Default Data Set # twoStepFlag 1 priority1 128 priority2 128 domainNumber 24 #utc_offset 37 clockClass 6 clockAccuracy 0x27 offsetScaledLogVariance 0xFFFF free_running 0 freq_est_interval 1 dscp_event 0 dscp_general 0 dataset_comparison G.8275.x G.8275.defaultDS.localPriority 128 # # Port Data Set # logAnnounceInterval -3 logSyncInterval -4 logMinDelayReqInterval -4 logMinPdelayReqInterval 0 announceReceiptTimeout 3 syncReceiptTimeout 0 delayAsymmetry 0 fault_reset_interval -4 neighborPropDelayThresh 20000000 masterOnly 0 G.8275.portDS.localPriority 128 # # Run time options # assume_two_step 0 logging_level 6 path_trace_enabled 0 follow_up_info 0 hybrid_e2e 0 inhibit_multicast_service 0 net_sync_monitor 0 tc_spanning_tree 0 tx_timestamp_timeout 50 unicast_listen 0 unicast_master_table 0 unicast_req_duration 3600 use_syslog 1 verbose 0 summary_interval -4 kernel_leap 1 check_fup_sync 0 clock_class_threshold 7 # # Servo Options # pi_proportional_const 0.0 pi_integral_const 0.0 pi_proportional_scale 0.0 pi_proportional_exponent -0.3 pi_proportional_norm_max 0.7 pi_integral_scale 0.0 pi_integral_exponent 0.4 pi_integral_norm_max 0.3 step_threshold 2.0 first_step_threshold 0.00002 clock_servo pi sanity_freq_limit 200000000 ntpshm_segment 0 # # Transport options # transportSpecific 0x0 ptp_dst_mac 01:1B:19:00:00:00 p2p_dst_mac 01:80:C2:00:00:0E udp_ttl 1 udp6_scope 0x0E uds_address /var/run/ptp4l # # Default interface options # clock_type BC network_transport L2 delay_mechanism E2E time_stamping hardware tsproc_mode filter delay_filter moving_median delay_filter_length 10 egressLatency 0 
ingressLatency 0 boundary_clock_jbod 0 # # Clock description # productDescription ;; revisionData ;; manufacturerIdentity 00:00:00 userDescription ; timeSource 0x20 recommend: - profile: "grandmaster" priority: 4 match: - nodeLabel: "node-role.kubernetes.io/USDmcp" PtpConfigSlave.yaml apiVersion: ptp.openshift.io/v1 kind: PtpConfig metadata: name: ordinary namespace: openshift-ptp annotations: {} spec: profile: - name: "ordinary" # The interface name is hardware-specific interface: USDinterface ptp4lOpts: "-2 -s" phc2sysOpts: "-a -r -n 24" ptpSchedulingPolicy: SCHED_FIFO ptpSchedulingPriority: 10 ptpSettings: logReduce: "true" ptp4lConf: | [global] # # Default Data Set # twoStepFlag 1 slaveOnly 1 priority1 128 priority2 128 domainNumber 24 #utc_offset 37 clockClass 255 clockAccuracy 0xFE offsetScaledLogVariance 0xFFFF free_running 0 freq_est_interval 1 dscp_event 0 dscp_general 0 dataset_comparison G.8275.x G.8275.defaultDS.localPriority 128 # # Port Data Set # logAnnounceInterval -3 logSyncInterval -4 logMinDelayReqInterval -4 logMinPdelayReqInterval -4 announceReceiptTimeout 3 syncReceiptTimeout 0 delayAsymmetry 0 fault_reset_interval -4 neighborPropDelayThresh 20000000 masterOnly 0 G.8275.portDS.localPriority 128 # # Run time options # assume_two_step 0 logging_level 6 path_trace_enabled 0 follow_up_info 0 hybrid_e2e 0 inhibit_multicast_service 0 net_sync_monitor 0 tc_spanning_tree 0 tx_timestamp_timeout 50 unicast_listen 0 unicast_master_table 0 unicast_req_duration 3600 use_syslog 1 verbose 0 summary_interval 0 kernel_leap 1 check_fup_sync 0 clock_class_threshold 7 # # Servo Options # pi_proportional_const 0.0 pi_integral_const 0.0 pi_proportional_scale 0.0 pi_proportional_exponent -0.3 pi_proportional_norm_max 0.7 pi_integral_scale 0.0 pi_integral_exponent 0.4 pi_integral_norm_max 0.3 step_threshold 2.0 first_step_threshold 0.00002 max_frequency 900000000 clock_servo pi sanity_freq_limit 200000000 ntpshm_segment 0 # # Transport options # transportSpecific 0x0 ptp_dst_mac 01:1B:19:00:00:00 p2p_dst_mac 01:80:C2:00:00:0E udp_ttl 1 udp6_scope 0x0E uds_address /var/run/ptp4l # # Default interface options # clock_type OC network_transport L2 delay_mechanism E2E time_stamping hardware tsproc_mode filter delay_filter moving_median delay_filter_length 10 egressLatency 0 ingressLatency 0 boundary_clock_jbod 0 # # Clock description # productDescription ;; revisionData ;; manufacturerIdentity 00:00:00 userDescription ; timeSource 0xA0 recommend: - profile: "ordinary" priority: 4 match: - nodeLabel: "node-role.kubernetes.io/USDmcp" PtpSubscription.yaml --- apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: ptp-operator-subscription namespace: openshift-ptp annotations: {} spec: channel: "stable" name: ptp-operator source: redhat-operators-disconnected sourceNamespace: openshift-marketplace installPlanApproval: Manual status: state: AtLatestKnown PtpSubscriptionNS.yaml --- apiVersion: v1 kind: Namespace metadata: name: openshift-ptp annotations: workload.openshift.io/allowed: management labels: openshift.io/cluster-monitoring: "true" PtpSubscriptionOperGroup.yaml apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: ptp-operators namespace: openshift-ptp annotations: {} spec: targetNamespaces: - openshift-ptp AcceleratorsNS.yaml apiVersion: v1 kind: Namespace metadata: name: vran-acceleration-operators annotations: {} AcceleratorsOperGroup.yaml apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: vran-operators namespace: 
vran-acceleration-operators annotations: {} spec: targetNamespaces: - vran-acceleration-operators AcceleratorsSubscription.yaml apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: sriov-fec-subscription namespace: vran-acceleration-operators annotations: {} spec: channel: stable name: sriov-fec source: certified-operators sourceNamespace: openshift-marketplace installPlanApproval: Manual status: state: AtLatestKnown SriovFecClusterConfig.yaml apiVersion: sriovfec.intel.com/v2 kind: SriovFecClusterConfig metadata: name: config namespace: vran-acceleration-operators annotations: {} spec: drainSkip: USDdrainSkip # true if SNO, false by default priority: 1 nodeSelector: node-role.kubernetes.io/master: "" acceleratorSelector: pciAddress: USDpciAddress physicalFunction: pfDriver: "vfio-pci" vfDriver: "vfio-pci" vfAmount: 16 bbDevConfig: USDbbDevConfig #Recommended configuration for Intel ACC100 (Mount Bryce) FPGA here: https://github.com/smart-edge-open/openshift-operator/blob/main/spec/openshift-sriov-fec-operator.md#sample-cr-for-wireless-fec-acc100 #Recommended configuration for Intel N3000 FPGA here: https://github.com/smart-edge-open/openshift-operator/blob/main/spec/openshift-sriov-fec-operator.md#sample-cr-for-wireless-fec-n3000 SriovNetwork.yaml apiVersion: sriovnetwork.openshift.io/v1 kind: SriovNetwork metadata: name: "" namespace: openshift-sriov-network-operator annotations: {} spec: # resourceName: "" networkNamespace: openshift-sriov-network-operator # vlan: "" # spoofChk: "" # ipam: "" # linkState: "" # maxTxRate: "" # minTxRate: "" # vlanQoS: "" # trust: "" # capabilities: "" SriovNetworkNodePolicy.yaml apiVersion: sriovnetwork.openshift.io/v1 kind: SriovNetworkNodePolicy metadata: name: USDname namespace: openshift-sriov-network-operator annotations: {} spec: # The attributes for Mellanox/Intel based NICs as below. # deviceType: netdevice/vfio-pci # isRdma: true/false deviceType: USDdeviceType isRdma: USDisRdma nicSelector: # The exact physical function name must match the hardware used pfNames: [USDpfNames] nodeSelector: node-role.kubernetes.io/USDmcp: "" numVfs: USDnumVfs priority: USDpriority resourceName: USDresourceName SriovOperatorConfig.yaml apiVersion: sriovnetwork.openshift.io/v1 kind: SriovOperatorConfig metadata: name: default namespace: openshift-sriov-network-operator annotations: {} spec: configDaemonNodeSelector: "node-role.kubernetes.io/USDmcp": "" # Injector and OperatorWebhook pods can be disabled (set to "false") below # to reduce the number of management pods. It is recommended to start with the # webhook and injector pods enabled, and only disable them after verifying the # correctness of user manifests. 
# If the injector is disabled, containers using sr-iov resources must explicitly assign # them in the "requests"/"limits" section of the container spec, for example: # containers: # - name: my-sriov-workload-container # resources: # limits: # openshift.io/<resource_name>: "1" # requests: # openshift.io/<resource_name>: "1" enableInjector: true enableOperatorWebhook: true logLevel: 0 SriovSubscription.yaml apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: sriov-network-operator-subscription namespace: openshift-sriov-network-operator annotations: {} spec: channel: "stable" name: sriov-network-operator source: redhat-operators-disconnected sourceNamespace: openshift-marketplace installPlanApproval: Manual status: state: AtLatestKnown SriovSubscriptionNS.yaml apiVersion: v1 kind: Namespace metadata: name: openshift-sriov-network-operator annotations: workload.openshift.io/allowed: management SriovSubscriptionOperGroup.yaml apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: sriov-network-operators namespace: openshift-sriov-network-operator annotations: {} spec: targetNamespaces: - openshift-sriov-network-operator 3.2.4.4.2. Cluster tuning reference YAML example-sno.yaml # example-node1-bmh-secret & assisted-deployment-pull-secret need to be created under same namespace example-sno --- apiVersion: ran.openshift.io/v1 kind: SiteConfig metadata: name: "example-sno" namespace: "example-sno" spec: baseDomain: "example.com" pullSecretRef: name: "assisted-deployment-pull-secret" clusterImageSetNameRef: "openshift-4.10" sshPublicKey: "ssh-rsa AAAA..." clusters: - clusterName: "example-sno" networkType: "OVNKubernetes" # installConfigOverrides is a generic way of passing install-config # parameters through the siteConfig. The 'capabilities' field configures # the composable openshift feature. In this 'capabilities' setting, we # remove all but the marketplace component from the optional set of # components. # Notes: # - OperatorLifecycleManager is needed for 4.15 and later # - NodeTuning is needed for 4.13 and later, not for 4.12 and earlier installConfigOverrides: | { "capabilities": { "baselineCapabilitySet": "None", "additionalEnabledCapabilities": [ "NodeTuning", "OperatorLifecycleManager" ] } } # It is strongly recommended to include crun manifests as part of the additional install-time manifests for 4.13+. # The crun manifests can be obtained from source-crs/optional-extra-manifest/ and added to the git repo ie.sno-extra-manifest. # extraManifestPath: sno-extra-manifest clusterLabels: # These example cluster labels correspond to the bindingRules in the PolicyGenTemplate examples du-profile: "latest" # These example cluster labels correspond to the bindingRules in the PolicyGenTemplate examples in ../policygentemplates: # ../policygentemplates/common-ranGen.yaml will apply to all clusters with 'common: true' common: true # ../policygentemplates/group-du-sno-ranGen.yaml will apply to all clusters with 'group-du-sno: ""' group-du-sno: "" # ../policygentemplates/example-sno-site.yaml will apply to all clusters with 'sites: "example-sno"' # Normally this should match or contain the cluster name so it only applies to a single cluster sites : "example-sno" clusterNetwork: - cidr: 1001:1::/48 hostPrefix: 64 machineNetwork: - cidr: 1111:2222:3333:4444::/64 serviceNetwork: - 1001:2::/112 additionalNTPSources: - 1111:2222:3333:4444::2 # Initiates the cluster for workload partitioning. 
Setting specific reserved/isolated CPUSets is done via PolicyTemplate # please see Workload Partitioning Feature for a complete guide. cpuPartitioningMode: AllNodes # Optionally; This can be used to override the KlusterletAddonConfig that is created for this cluster: #crTemplates: # KlusterletAddonConfig: "KlusterletAddonConfigOverride.yaml" nodes: - hostName: "example-node1.example.com" role: "master" # Optionally; This can be used to configure desired BIOS setting on a host: #biosConfigRef: # filePath: "example-hw.profile" bmcAddress: "idrac-virtualmedia+https://[1111:2222:3333:4444::bbbb:1]/redfish/v1/Systems/System.Embedded.1" bmcCredentialsName: name: "example-node1-bmh-secret" bootMACAddress: "AA:BB:CC:DD:EE:11" # Use UEFISecureBoot to enable secure boot bootMode: "UEFI" rootDeviceHints: deviceName: "/dev/disk/by-path/pci-0000:01:00.0-scsi-0:2:0:0" # disk partition at `/var/lib/containers` with ignitionConfigOverride. Some values must be updated. See DiskPartitionContainer.md for more details ignitionConfigOverride: | { "ignition": { "version": "3.2.0" }, "storage": { "disks": [ { "device": "/dev/disk/by-path/pci-0000:01:00.0-scsi-0:2:0:0", "partitions": [ { "label": "var-lib-containers", "sizeMiB": 0, "startMiB": 250000 } ], "wipeTable": false } ], "filesystems": [ { "device": "/dev/disk/by-partlabel/var-lib-containers", "format": "xfs", "mountOptions": [ "defaults", "prjquota" ], "path": "/var/lib/containers", "wipeFilesystem": true } ] }, "systemd": { "units": [ { "contents": "# Generated by Butane\n[Unit]\nRequires=systemd-fsck@dev-disk-by\\x2dpartlabel-var\\x2dlib\\x2dcontainers.service\nAfter=systemd-fsck@dev-disk-by\\x2dpartlabel-var\\x2dlib\\x2dcontainers.service\n\n[Mount]\nWhere=/var/lib/containers\nWhat=/dev/disk/by-partlabel/var-lib-containers\nType=xfs\nOptions=defaults,prjquota\n\n[Install]\nRequiredBy=local-fs.target", "enabled": true, "name": "var-lib-containers.mount" } ] } } nodeNetwork: interfaces: - name: eno1 macAddress: "AA:BB:CC:DD:EE:11" config: interfaces: - name: eno1 type: ethernet state: up ipv4: enabled: false ipv6: enabled: true address: # For SNO sites with static IP addresses, the node-specific, # API and Ingress IPs should all be the same and configured on # the interface - ip: 1111:2222:3333:4444::aaaa:1 prefix-length: 64 dns-resolver: config: search: - example.com server: - 1111:2222:3333:4444::2 routes: config: - destination: ::/0 -hop-interface: eno1 -hop-address: 1111:2222:3333:4444::1 table-id: 254 DisableSnoNetworkDiag.yaml apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster annotations: {} spec: disableNetworkDiagnostics: true ReduceMonitoringFootprint.yaml apiVersion: v1 kind: ConfigMap metadata: name: cluster-monitoring-config namespace: openshift-monitoring annotations: {} data: config.yaml: | grafana: enabled: false alertmanagerMain: enabled: false telemeterClient: enabled: false prometheusK8s: retention: 24h DefaultCatsrc.yaml apiVersion: operators.coreos.com/v1alpha1 kind: CatalogSource metadata: name: default-cat-source namespace: openshift-marketplace annotations: target.workload.openshift.io/management: '{"effect": "PreferredDuringScheduling"}' spec: displayName: default-cat-source image: USDimageUrl publisher: Red Hat sourceType: grpc updateStrategy: registryPoll: interval: 1h status: connectionState: lastObservedState: READY DisconnectedICSP.yaml apiVersion: operator.openshift.io/v1alpha1 kind: ImageContentSourcePolicy metadata: name: disconnected-internal-icsp annotations: {} spec: repositoryDigestMirrors: - 
USDmirrors OperatorHub.yaml apiVersion: config.openshift.io/v1 kind: OperatorHub metadata: name: cluster annotations: {} spec: disableAllDefaultSources: true 3.2.4.4.3. Machine configuration reference YAML enable-crun-master.yaml apiVersion: machineconfiguration.openshift.io/v1 kind: ContainerRuntimeConfig metadata: name: enable-crun-master spec: machineConfigPoolSelector: matchLabels: pools.operator.machineconfiguration.openshift.io/master: "" containerRuntimeConfig: defaultRuntime: crun enable-crun-worker.yaml apiVersion: machineconfiguration.openshift.io/v1 kind: ContainerRuntimeConfig metadata: name: enable-crun-worker spec: machineConfigPoolSelector: matchLabels: pools.operator.machineconfiguration.openshift.io/worker: "" containerRuntimeConfig: defaultRuntime: crun 99-crio-disable-wipe-master.yaml apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: master name: 99-crio-disable-wipe-master spec: config: ignition: version: 3.2.0 storage: files: - contents: source: data:text/plain;charset=utf-8;base64,W2NyaW9dCmNsZWFuX3NodXRkb3duX2ZpbGUgPSAiIgo= mode: 420 path: /etc/crio/crio.conf.d/99-crio-disable-wipe.toml 99-crio-disable-wipe-worker.yaml apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: worker name: 99-crio-disable-wipe-worker spec: config: ignition: version: 3.2.0 storage: files: - contents: source: data:text/plain;charset=utf-8;base64,W2NyaW9dCmNsZWFuX3NodXRkb3duX2ZpbGUgPSAiIgo= mode: 420 path: /etc/crio/crio.conf.d/99-crio-disable-wipe.toml 05-kdump-config-master.yaml apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: master name: 05-kdump-config-master spec: config: ignition: version: 3.2.0 systemd: units: - enabled: true name: kdump-remove-ice-module.service contents: | [Unit] Description=Remove ice module when doing kdump Before=kdump.service [Service] Type=oneshot RemainAfterExit=true ExecStart=/usr/local/bin/kdump-remove-ice-module.sh [Install] WantedBy=multi-user.target storage: files: - contents: source: data:text/plain;charset=utf-8;base64,IyEvdXNyL2Jpbi9lbnYgYmFzaAoKIyBUaGlzIHNjcmlwdCByZW1vdmVzIHRoZSBpY2UgbW9kdWxlIGZyb20ga2R1bXAgdG8gcHJldmVudCBrZHVtcCBmYWlsdXJlcyBvbiBjZXJ0YWluIHNlcnZlcnMuCiMgVGhpcyBpcyBhIHRlbXBvcmFyeSB3b3JrYXJvdW5kIGZvciBSSEVMUExBTi0xMzgyMzYgYW5kIGNhbiBiZSByZW1vdmVkIHdoZW4gdGhhdCBpc3N1ZSBpcwojIGZpeGVkLgoKc2V0IC14CgpTRUQ9Ii91c3IvYmluL3NlZCIKR1JFUD0iL3Vzci9iaW4vZ3JlcCIKCiMgb3ZlcnJpZGUgZm9yIHRlc3RpbmcgcHVycG9zZXMKS0RVTVBfQ09ORj0iJHsxOi0vZXRjL3N5c2NvbmZpZy9rZHVtcH0iClJFTU9WRV9JQ0VfU1RSPSJtb2R1bGVfYmxhY2tsaXN0PWljZSIKCiMgZXhpdCBpZiBmaWxlIGRvZXNuJ3QgZXhpc3QKWyAhIC1mICR7S0RVTVBfQ09ORn0gXSAmJiBleGl0IDAKCiMgZXhpdCBpZiBmaWxlIGFscmVhZHkgdXBkYXRlZAoke0dSRVB9IC1GcSAke1JFTU9WRV9JQ0VfU1RSfSAke0tEVU1QX0NPTkZ9ICYmIGV4aXQgMAoKIyBUYXJnZXQgbGluZSBsb29rcyBzb21ldGhpbmcgbGlrZSB0aGlzOgojIEtEVU1QX0NPTU1BTkRMSU5FX0FQUEVORD0iaXJxcG9sbCBucl9jcHVzPTEgLi4uIGhlc3RfZGlzYWJsZSIKIyBVc2Ugc2VkIHRvIG1hdGNoIGV2ZXJ5dGhpbmcgYmV0d2VlbiB0aGUgcXVvdGVzIGFuZCBhcHBlbmQgdGhlIFJFTU9WRV9JQ0VfU1RSIHRvIGl0CiR7U0VEfSAtaSAncy9eS0RVTVBfQ09NTUFORExJTkVfQVBQRU5EPSJbXiJdKi8mICcke1JFTU9WRV9JQ0VfU1RSfScvJyAke0tEVU1QX0NPTkZ9IHx8IGV4aXQgMAo= mode: 448 path: /usr/local/bin/kdump-remove-ice-module.sh 05-kdump-config-worker.yaml apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: worker name: 05-kdump-config-worker spec: 
config: ignition: version: 3.2.0 systemd: units: - enabled: true name: kdump-remove-ice-module.service contents: | [Unit] Description=Remove ice module when doing kdump Before=kdump.service [Service] Type=oneshot RemainAfterExit=true ExecStart=/usr/local/bin/kdump-remove-ice-module.sh [Install] WantedBy=multi-user.target storage: files: - contents: source: data:text/plain;charset=utf-8;base64,IyEvdXNyL2Jpbi9lbnYgYmFzaAoKIyBUaGlzIHNjcmlwdCByZW1vdmVzIHRoZSBpY2UgbW9kdWxlIGZyb20ga2R1bXAgdG8gcHJldmVudCBrZHVtcCBmYWlsdXJlcyBvbiBjZXJ0YWluIHNlcnZlcnMuCiMgVGhpcyBpcyBhIHRlbXBvcmFyeSB3b3JrYXJvdW5kIGZvciBSSEVMUExBTi0xMzgyMzYgYW5kIGNhbiBiZSByZW1vdmVkIHdoZW4gdGhhdCBpc3N1ZSBpcwojIGZpeGVkLgoKc2V0IC14CgpTRUQ9Ii91c3IvYmluL3NlZCIKR1JFUD0iL3Vzci9iaW4vZ3JlcCIKCiMgb3ZlcnJpZGUgZm9yIHRlc3RpbmcgcHVycG9zZXMKS0RVTVBfQ09ORj0iJHsxOi0vZXRjL3N5c2NvbmZpZy9rZHVtcH0iClJFTU9WRV9JQ0VfU1RSPSJtb2R1bGVfYmxhY2tsaXN0PWljZSIKCiMgZXhpdCBpZiBmaWxlIGRvZXNuJ3QgZXhpc3QKWyAhIC1mICR7S0RVTVBfQ09ORn0gXSAmJiBleGl0IDAKCiMgZXhpdCBpZiBmaWxlIGFscmVhZHkgdXBkYXRlZAoke0dSRVB9IC1GcSAke1JFTU9WRV9JQ0VfU1RSfSAke0tEVU1QX0NPTkZ9ICYmIGV4aXQgMAoKIyBUYXJnZXQgbGluZSBsb29rcyBzb21ldGhpbmcgbGlrZSB0aGlzOgojIEtEVU1QX0NPTU1BTkRMSU5FX0FQUEVORD0iaXJxcG9sbCBucl9jcHVzPTEgLi4uIGhlc3RfZGlzYWJsZSIKIyBVc2Ugc2VkIHRvIG1hdGNoIGV2ZXJ5dGhpbmcgYmV0d2VlbiB0aGUgcXVvdGVzIGFuZCBhcHBlbmQgdGhlIFJFTU9WRV9JQ0VfU1RSIHRvIGl0CiR7U0VEfSAtaSAncy9eS0RVTVBfQ09NTUFORExJTkVfQVBQRU5EPSJbXiJdKi8mICcke1JFTU9WRV9JQ0VfU1RSfScvJyAke0tEVU1QX0NPTkZ9IHx8IGV4aXQgMAo= mode: 448 path: /usr/local/bin/kdump-remove-ice-module.sh 06-kdump-master.yaml apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: master name: 06-kdump-enable-master spec: config: ignition: version: 3.2.0 systemd: units: - enabled: true name: kdump.service kernelArguments: - crashkernel=512M 06-kdump-worker.yaml apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: worker name: 06-kdump-enable-worker spec: config: ignition: version: 3.2.0 systemd: units: - enabled: true name: kdump.service kernelArguments: - crashkernel=512M 01-container-mount-ns-and-kubelet-conf-master.yaml apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: master name: container-mount-namespace-and-kubelet-conf-master spec: config: ignition: version: 3.2.0 storage: files: - contents: source: 
data:text/plain;charset=utf-8;base64,IyEvYmluL2Jhc2gKCmRlYnVnKCkgewogIGVjaG8gJEAgPiYyCn0KCnVzYWdlKCkgewogIGVjaG8gVXNhZ2U6ICQoYmFzZW5hbWUgJDApIFVOSVQgW2VudmZpbGUgW3Zhcm5hbWVdXQogIGVjaG8KICBlY2hvIEV4dHJhY3QgdGhlIGNvbnRlbnRzIG9mIHRoZSBmaXJzdCBFeGVjU3RhcnQgc3RhbnphIGZyb20gdGhlIGdpdmVuIHN5c3RlbWQgdW5pdCBhbmQgcmV0dXJuIGl0IHRvIHN0ZG91dAogIGVjaG8KICBlY2hvICJJZiAnZW52ZmlsZScgaXMgcHJvdmlkZWQsIHB1dCBpdCBpbiB0aGVyZSBpbnN0ZWFkLCBhcyBhbiBlbnZpcm9ubWVudCB2YXJpYWJsZSBuYW1lZCAndmFybmFtZSciCiAgZWNobyAiRGVmYXVsdCAndmFybmFtZScgaXMgRVhFQ1NUQVJUIGlmIG5vdCBzcGVjaWZpZWQiCiAgZXhpdCAxCn0KClVOSVQ9JDEKRU5WRklMRT0kMgpWQVJOQU1FPSQzCmlmIFtbIC16ICRVTklUIHx8ICRVTklUID09ICItLWhlbHAiIHx8ICRVTklUID09ICItaCIgXV07IHRoZW4KICB1c2FnZQpmaQpkZWJ1ZyAiRXh0cmFjdGluZyBFeGVjU3RhcnQgZnJvbSAkVU5JVCIKRklMRT0kKHN5c3RlbWN0bCBjYXQgJFVOSVQgfCBoZWFkIC1uIDEpCkZJTEU9JHtGSUxFI1wjIH0KaWYgW1sgISAtZiAkRklMRSBdXTsgdGhlbgogIGRlYnVnICJGYWlsZWQgdG8gZmluZCByb290IGZpbGUgZm9yIHVuaXQgJFVOSVQgKCRGSUxFKSIKICBleGl0CmZpCmRlYnVnICJTZXJ2aWNlIGRlZmluaXRpb24gaXMgaW4gJEZJTEUiCkVYRUNTVEFSVD0kKHNlZCAtbiAtZSAnL15FeGVjU3RhcnQ9LipcXCQvLC9bXlxcXSQvIHsgcy9eRXhlY1N0YXJ0PS8vOyBwIH0nIC1lICcvXkV4ZWNTdGFydD0uKlteXFxdJC8geyBzL15FeGVjU3RhcnQ9Ly87IHAgfScgJEZJTEUpCgppZiBbWyAkRU5WRklMRSBdXTsgdGhlbgogIFZBUk5BTUU9JHtWQVJOQU1FOi1FWEVDU1RBUlR9CiAgZWNobyAiJHtWQVJOQU1FfT0ke0VYRUNTVEFSVH0iID4gJEVOVkZJTEUKZWxzZQogIGVjaG8gJEVYRUNTVEFSVApmaQo= mode: 493 path: /usr/local/bin/extractExecStart - contents: source: data:text/plain;charset=utf-8;base64,IyEvYmluL2Jhc2gKbnNlbnRlciAtLW1vdW50PS9ydW4vY29udGFpbmVyLW1vdW50LW5hbWVzcGFjZS9tbnQgIiRAIgo= mode: 493 path: /usr/local/bin/nsenterCmns systemd: units: - contents: | [Unit] Description=Manages a mount namespace that both kubelet and crio can use to share their container-specific mounts [Service] Type=oneshot RemainAfterExit=yes RuntimeDirectory=container-mount-namespace Environment=RUNTIME_DIRECTORY=%t/container-mount-namespace Environment=BIND_POINT=%t/container-mount-namespace/mnt ExecStartPre=bash -c "findmnt USD{RUNTIME_DIRECTORY} || mount --make-unbindable --bind USD{RUNTIME_DIRECTORY} USD{RUNTIME_DIRECTORY}" ExecStartPre=touch USD{BIND_POINT} ExecStart=unshare --mount=USD{BIND_POINT} --propagation slave mount --make-rshared / ExecStop=umount -R USD{RUNTIME_DIRECTORY} name: container-mount-namespace.service - dropins: - contents: | [Unit] Wants=container-mount-namespace.service After=container-mount-namespace.service [Service] ExecStartPre=/usr/local/bin/extractExecStart %n /%t/%N-execstart.env ORIG_EXECSTART EnvironmentFile=-/%t/%N-execstart.env ExecStart= ExecStart=bash -c "nsenter --mount=%t/container-mount-namespace/mnt \ USD{ORIG_EXECSTART}" name: 90-container-mount-namespace.conf name: crio.service - dropins: - contents: | [Unit] Wants=container-mount-namespace.service After=container-mount-namespace.service [Service] ExecStartPre=/usr/local/bin/extractExecStart %n /%t/%N-execstart.env ORIG_EXECSTART EnvironmentFile=-/%t/%N-execstart.env ExecStart= ExecStart=bash -c "nsenter --mount=%t/container-mount-namespace/mnt \ USD{ORIG_EXECSTART} --housekeeping-interval=30s" name: 90-container-mount-namespace.conf - contents: | [Service] Environment="OPENSHIFT_MAX_HOUSEKEEPING_INTERVAL_DURATION=60s" Environment="OPENSHIFT_EVICTION_MONITORING_PERIOD_DURATION=30s" name: 30-kubelet-interval-tuning.conf name: kubelet.service 01-container-mount-ns-and-kubelet-conf-worker.yaml apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: worker name: 
container-mount-namespace-and-kubelet-conf-worker spec: config: ignition: version: 3.2.0 storage: files: - contents: source: data:text/plain;charset=utf-8;base64,IyEvYmluL2Jhc2gKCmRlYnVnKCkgewogIGVjaG8gJEAgPiYyCn0KCnVzYWdlKCkgewogIGVjaG8gVXNhZ2U6ICQoYmFzZW5hbWUgJDApIFVOSVQgW2VudmZpbGUgW3Zhcm5hbWVdXQogIGVjaG8KICBlY2hvIEV4dHJhY3QgdGhlIGNvbnRlbnRzIG9mIHRoZSBmaXJzdCBFeGVjU3RhcnQgc3RhbnphIGZyb20gdGhlIGdpdmVuIHN5c3RlbWQgdW5pdCBhbmQgcmV0dXJuIGl0IHRvIHN0ZG91dAogIGVjaG8KICBlY2hvICJJZiAnZW52ZmlsZScgaXMgcHJvdmlkZWQsIHB1dCBpdCBpbiB0aGVyZSBpbnN0ZWFkLCBhcyBhbiBlbnZpcm9ubWVudCB2YXJpYWJsZSBuYW1lZCAndmFybmFtZSciCiAgZWNobyAiRGVmYXVsdCAndmFybmFtZScgaXMgRVhFQ1NUQVJUIGlmIG5vdCBzcGVjaWZpZWQiCiAgZXhpdCAxCn0KClVOSVQ9JDEKRU5WRklMRT0kMgpWQVJOQU1FPSQzCmlmIFtbIC16ICRVTklUIHx8ICRVTklUID09ICItLWhlbHAiIHx8ICRVTklUID09ICItaCIgXV07IHRoZW4KICB1c2FnZQpmaQpkZWJ1ZyAiRXh0cmFjdGluZyBFeGVjU3RhcnQgZnJvbSAkVU5JVCIKRklMRT0kKHN5c3RlbWN0bCBjYXQgJFVOSVQgfCBoZWFkIC1uIDEpCkZJTEU9JHtGSUxFI1wjIH0KaWYgW1sgISAtZiAkRklMRSBdXTsgdGhlbgogIGRlYnVnICJGYWlsZWQgdG8gZmluZCByb290IGZpbGUgZm9yIHVuaXQgJFVOSVQgKCRGSUxFKSIKICBleGl0CmZpCmRlYnVnICJTZXJ2aWNlIGRlZmluaXRpb24gaXMgaW4gJEZJTEUiCkVYRUNTVEFSVD0kKHNlZCAtbiAtZSAnL15FeGVjU3RhcnQ9LipcXCQvLC9bXlxcXSQvIHsgcy9eRXhlY1N0YXJ0PS8vOyBwIH0nIC1lICcvXkV4ZWNTdGFydD0uKlteXFxdJC8geyBzL15FeGVjU3RhcnQ9Ly87IHAgfScgJEZJTEUpCgppZiBbWyAkRU5WRklMRSBdXTsgdGhlbgogIFZBUk5BTUU9JHtWQVJOQU1FOi1FWEVDU1RBUlR9CiAgZWNobyAiJHtWQVJOQU1FfT0ke0VYRUNTVEFSVH0iID4gJEVOVkZJTEUKZWxzZQogIGVjaG8gJEVYRUNTVEFSVApmaQo= mode: 493 path: /usr/local/bin/extractExecStart - contents: source: data:text/plain;charset=utf-8;base64,IyEvYmluL2Jhc2gKbnNlbnRlciAtLW1vdW50PS9ydW4vY29udGFpbmVyLW1vdW50LW5hbWVzcGFjZS9tbnQgIiRAIgo= mode: 493 path: /usr/local/bin/nsenterCmns systemd: units: - contents: | [Unit] Description=Manages a mount namespace that both kubelet and crio can use to share their container-specific mounts [Service] Type=oneshot RemainAfterExit=yes RuntimeDirectory=container-mount-namespace Environment=RUNTIME_DIRECTORY=%t/container-mount-namespace Environment=BIND_POINT=%t/container-mount-namespace/mnt ExecStartPre=bash -c "findmnt USD{RUNTIME_DIRECTORY} || mount --make-unbindable --bind USD{RUNTIME_DIRECTORY} USD{RUNTIME_DIRECTORY}" ExecStartPre=touch USD{BIND_POINT} ExecStart=unshare --mount=USD{BIND_POINT} --propagation slave mount --make-rshared / ExecStop=umount -R USD{RUNTIME_DIRECTORY} name: container-mount-namespace.service - dropins: - contents: | [Unit] Wants=container-mount-namespace.service After=container-mount-namespace.service [Service] ExecStartPre=/usr/local/bin/extractExecStart %n /%t/%N-execstart.env ORIG_EXECSTART EnvironmentFile=-/%t/%N-execstart.env ExecStart= ExecStart=bash -c "nsenter --mount=%t/container-mount-namespace/mnt \ USD{ORIG_EXECSTART}" name: 90-container-mount-namespace.conf name: crio.service - dropins: - contents: | [Unit] Wants=container-mount-namespace.service After=container-mount-namespace.service [Service] ExecStartPre=/usr/local/bin/extractExecStart %n /%t/%N-execstart.env ORIG_EXECSTART EnvironmentFile=-/%t/%N-execstart.env ExecStart= ExecStart=bash -c "nsenter --mount=%t/container-mount-namespace/mnt \ USD{ORIG_EXECSTART} --housekeeping-interval=30s" name: 90-container-mount-namespace.conf - contents: | [Service] Environment="OPENSHIFT_MAX_HOUSEKEEPING_INTERVAL_DURATION=60s" Environment="OPENSHIFT_EVICTION_MONITORING_PERIOD_DURATION=30s" name: 30-kubelet-interval-tuning.conf name: kubelet.service 99-sync-time-once-master.yaml apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: 
labels: machineconfiguration.openshift.io/role: master name: 99-sync-time-once-master spec: config: ignition: version: 3.2.0 systemd: units: - contents: | [Unit] Description=Sync time once After=network.service [Service] Type=oneshot TimeoutStartSec=300 ExecStart=/usr/sbin/chronyd -n -f /etc/chrony.conf -q RemainAfterExit=yes [Install] WantedBy=multi-user.target enabled: true name: sync-time-once.service 99-sync-time-once-worker.yaml apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: worker name: 99-sync-time-once-worker spec: config: ignition: version: 3.2.0 systemd: units: - contents: | [Unit] Description=Sync time once After=network.service [Service] Type=oneshot TimeoutStartSec=300 ExecStart=/usr/sbin/chronyd -n -f /etc/chrony.conf -q RemainAfterExit=yes [Install] WantedBy=multi-user.target enabled: true name: sync-time-once.service 03-sctp-machine-config-master.yaml apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: master name: load-sctp-module-master spec: config: ignition: version: 2.2.0 storage: files: - contents: source: data:, verification: {} filesystem: root mode: 420 path: /etc/modprobe.d/sctp-blacklist.conf - contents: source: data:text/plain;charset=utf-8,sctp filesystem: root mode: 420 path: /etc/modules-load.d/sctp-load.conf 03-sctp-machine-config-worker.yaml apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: worker name: load-sctp-module-worker spec: config: ignition: version: 2.2.0 storage: files: - contents: source: data:, verification: {} filesystem: root mode: 420 path: /etc/modprobe.d/sctp-blacklist.conf - contents: source: data:text/plain;charset=utf-8,sctp filesystem: root mode: 420 path: /etc/modules-load.d/sctp-load.conf 07-sriov-related-kernel-args-master.yaml apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: master name: 07-sriov-related-kernel-args-master spec: config: ignition: version: 3.2.0 kernelArguments: - intel_iommu=on - iommu=pt 3.2.5. Telco RAN DU reference configuration software specifications The following information describes the telco RAN DU reference design specification (RDS) validated software versions. 3.2.5.1. Telco RAN DU 4.14 validated software components The Red Hat telco RAN DU 4.14 solution has been validated using the following Red Hat software products for OpenShift Container Platform managed clusters and hub clusters. Table 3.8. Telco RAN DU managed cluster validated software components Component Software version Managed cluster version 4.14 Cluster Logging Operator 5.7 Local Storage Operator 4.14 PTP Operator 4.14 SRIOV Operator 4.14 Node Tuning Operator 4.14 Logging Operator 4.14 SRIOV-FEC Operator 2.7 Table 3.9. Hub cluster validated software components Component Software version Hub cluster version 4.14 GitOps ZTP plugin 4.14 Red Hat Advanced Cluster Management (RHACM) 2.9, 2.10 Red Hat OpenShift GitOps 1.9, 1.10 Topology Aware Lifecycle Manager (TALM) 4.14 3.3. Telco core reference design specification 3.3.1. Telco core 4.14 reference design overview The telco core reference design specification (RDS) configures a OpenShift Container Platform cluster running on commodity hardware to host telco core workloads. 3.3.1.1. 
OpenShift Container Platform 4.14 features for telco core The following features that are included in OpenShift Container Platform 4.14 and are leveraged by the telco core reference design specification (RDS) have been added or updated. Table 3.10. New features for telco core in OpenShift Container Platform 4.14 Feature Description Support for running rootless Data Plane Development Kit (DPDK) workloads with kernel access by using the TAP CNI plugin DPDK applications that inject traffic into the kernel can run in non-privileged pods with the help of the TAP CNI plugin. Using the TAP CNI to run a rootless DPDK workload with kernel access Dynamic use of non-reserved CPUs for OVS With this release, the Open vSwitch (OVS) networking stack can dynamically use non-reserved CPUs. The dynamic use of non-reserved CPUs occurs by default in performance-tuned clusters with a CPU manager policy set to static . The dynamic use of available, non-reserved CPUs maximizes compute resources for OVS and minimizes network latency for workloads during periods of high demand. OVS cannot use isolated CPUs assigned to containers in Guaranteed QoS pods. This separation avoids disruption to critical application workloads. Enabling more control over the C-states for each pod The PerformanceProfile supports perPodPowerManagement which provides more control over the C-states for pods. Now, instead of disabling C-states completely, you can specify a maximum latency in microseconds for C-states. You configure this option in the cpu-c-states.crio.io annotation, which helps to optimize power savings for high-priority applications by enabling some of the shallower C-states instead of disabling them completely. Optional: Power saving configurations Exclude SR-IOV network topology for NUMA-aware scheduling You can exclude advertising Non-Uniform Memory Access (NUMA) nodes for the SR-IOV network to the Topology Manager. By not advertising NUMA nodes for the SR-IOV network, you can permit more flexible SR-IOV network deployments during NUMA-aware pod scheduling. For example, in some scenarios, you want flexibility for how a pod is deployed. By not providing a NUMA node hint to the Topology Manager for the pod's SR-IOV network resource, the Topology Manager can deploy the SR-IOV network resource and the pod CPU and memory resources to different NUMA nodes. In OpenShift Container Platform releases, the Topology Manager attempted to place all resources on the same NUMA node. Exclude the SR-IOV network topology for NUMA-aware scheduling Egress service resource to manage egress traffic for pods behind a load balancer (Technology Preview) With this update, you can use an EgressService custom resource (CR) to manage egress traffic for pods behind a load balancer service. You can use the EgressService CR to manage egress traffic in the following ways: Assign the load balancer service's IP address as the source IP address of egress traffic for pods behind the load balancer service. Configure the egress traffic for pods behind a load balancer to a different network than the default node network. Configuring an egress service 3.3.2. Telco core 4.14 use model overview The Telco core reference design specification (RDS) describes a platform that supports large-scale telco applications including control plane functions such as signaling and aggregation. It also includes some centralized data plane functions, for example, user plane functions (UPF). 
These functions generally require scalability, complex networking support, resilient software-defined storage, and support performance requirements that are less stringent and constrained than far-edge deployments like RAN. Telco core use model architecture The networking prerequisites for telco core functions are diverse and encompass an array of networking attributes and performance benchmarks. IPv6 is mandatory, with dual-stack configurations being prevalent. Certain functions demand maximum throughput and transaction rates, necessitating user plane networking support such as DPDK. Other functions adhere to conventional cloud-native patterns and can use solutions such as OVN-K, kernel networking, and load balancing. Telco core clusters are configured as standard three control plane clusters with worker nodes configured with the stock non real-time (RT) kernel. To support workloads with varying networking and performance requirements, worker nodes are segmented using MachineConfigPool CRs. For example, this is done to separate non-user data plane nodes from high-throughput nodes. To support the required telco operational features, the clusters have a standard set of Operator Lifecycle Manager (OLM) Day 2 Operators installed. 3.3.2.1. Common baseline model The following configurations and use model description are applicable to all telco core use cases. Cluster The cluster conforms to these requirements: High-availability (3+ supervisor nodes) control plane Non-schedulable supervisor nodes Storage Core use cases require persistent storage as provided by external OpenShift Data Foundation. For more information, see the "Storage" subsection in "Reference core design components". Networking Telco core clusters networking conforms to these requirements: Dual stack IPv4/IPv6 Fully disconnected: Clusters do not have access to public networking at any point in their lifecycle. Multiple networks: Segmented networking provides isolation between OAM, signaling, and storage traffic. Cluster network type: OVN-Kubernetes is required for IPv6 support. Core clusters have multiple layers of networking supported by underlying RHCOS, SR-IOV Operator, Load Balancer, and other components detailed in the following "Networking" section. At a high level these layers include: Cluster networking: The cluster network configuration is defined and applied through the installation configuration. Updates to the configuration can be done at day-2 through the NMState Operator. Initial configuration can be used to establish: Host interface configuration A/A Bonding (Link Aggregation Control Protocol (LACP)) Secondary or additional networks: OpenShift CNI is configured through the Network additionalNetworks or NetworkAttachmentDefinition CRs. MACVLAN Application Workload: User plane networking is running in cloud-native network functions (CNFs). Service Mesh Use of Service Mesh by telco CNFs is very common. It is expected that all core clusters will include a Service Mesh implementation. Service Mesh implementation and configuration is outside the scope of this specification. 3.3.2.1.1. Engineering Considerations common use model The following engineering considerations are relevant for the common use model. Worker nodes Worker nodes run on Intel 3rd Generation Xeon (IceLake) processors or newer. 
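The MachineConfigPool based segmentation of worker nodes described above is typically expressed with an additional pool that selects a labeled subset of workers. The following is a minimal sketch; the pool name worker-dpdk and the corresponding node label are illustrative assumptions, not part of the reference configuration.

# Hypothetical pool for high-throughput user data plane worker nodes
apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfigPool
metadata:
  name: worker-dpdk
  labels:
    machineconfiguration.openshift.io/role: worker-dpdk
spec:
  machineConfigSelector:
    matchExpressions:
      # Inherit the base worker MachineConfigs plus any worker-dpdk specific ones
      - key: machineconfiguration.openshift.io/role
        operator: In
        values:
          - worker
          - worker-dpdk
  nodeSelector:
    matchLabels:
      node-role.kubernetes.io/worker-dpdk: ""

Nodes labeled with node-role.kubernetes.io/worker-dpdk then receive any MachineConfig or PerformanceProfile that targets this pool, while the remaining nodes stay on the default worker configuration.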
Alternatively, if using Skylake or earlier processors, the mitigations for silicon security vulnerabilities such as Spectre must be disabled; failure to do so may result in a significant 40 percent decrease in transaction performance. IRQ Balancing is enabled on worker nodes. The PerformanceProfile sets globallyDisableIrqLoadBalancing: false . Guaranteed QoS Pods are annotated to ensure isolation as described in "CPU partitioning and performance tuning" subsection in "Reference core design components" section. All nodes Hyper-Threading is enabled on all nodes CPU architecture is x86_64 only Nodes are running the stock (non-RT) kernel Nodes are not configured for workload partitioning The balance of node configuration between power management and maximum performance varies between MachineConfigPools in the cluster. This configuration is consistent for all nodes within a MachineConfigPool . CPU partitioning CPU partitioning is configured using the PerformanceProfile and applied on a per MachineConfigPool basis. See the "CPU partitioning and performance tuning" subsection in "Reference core design components". 3.3.2.1.2. Application workloads Application workloads running on core clusters might include a mix of high-performance networking CNFs and traditional best-effort or burstable pod workloads. Guaranteed QoS scheduling is available to pods that require exclusive or dedicated use of CPUs due to performance or security requirements. Typically pods hosting high-performance and low-latency-sensitive Cloud Native Functions (CNFs) utilizing user plane networking with DPDK necessitate the exclusive utilization of entire CPUs. This is accomplished through node tuning and guaranteed Quality of Service (QoS) scheduling. For pods that require exclusive use of CPUs, be aware of the potential implications of hyperthreaded systems and configure them to request multiples of 2 CPUs when the entire core (2 hyperthreads) must be allocated to the pod. Pods running network functions that do not require the high throughput and low latency networking are typically scheduled with best-effort or burstable QoS and do not require dedicated or isolated CPU cores. Description of limits CNF applications should conform to the latest version of the Red Hat Best Practices for Kubernetes guide. For a mix of best-effort and burstable QoS pods. Guaranteed QoS pods might be used but require correct configuration of reserved and isolated CPUs in the PerformanceProfile . Guaranteed QoS Pods must include annotations for fully isolating CPUs. Best effort and burstable pods are not guaranteed exclusive use of a CPU. Workloads might be preempted by other workloads, operating system daemons, or kernel tasks. Exec probes should be avoided unless there is no viable alternative. Do not use exec probes if a CNF is using CPU pinning. Other probe implementations, for example httpGet/tcpSocket , should be used. Note Startup probes require minimal resources during steady-state operation. The limitation on exec probes applies primarily to liveness and readiness probes. Signaling workload Signaling workloads typically use SCTP, REST, gRPC, or similar TCP or UDP protocols. The transactions per second (TPS) is in the order of hundreds of thousands using secondary CNI (multus) configured as MACVLAN or SR-IOV. Signaling workloads run in pods with either guaranteed or burstable QoS. 3.3.3. 
Telco core reference design components The following sections describe the various OpenShift Container Platform components and configurations that you use to configure and deploy clusters to run telco core workloads. 3.3.3.1. CPU partitioning and performance tuning New in this release Open vSwitch (OVS) is removed from CPU partitioning. OVS manages its cpuset dynamically to automatically adapt to network traffic needs. Users no longer need to reserve additional CPUs for handling high network throughput on the primary container network interface (CNI). There is no impact on the configuration needed to benefit from this change. Description CPU partitioning allows for the separation of sensitive workloads from generic purposes, auxiliary processes, interrupts, and driver work queues to achieve improved performance and latency. The CPUs allocated to those auxiliary processes are referred to as reserved in the following sections. In hyperthreaded systems, a CPU is one hyperthread. For more information, see Restricting CPUs for infra and application containers . Configure system level performance. For recommended settings, see Configuring host firmware for low latency and high performance . Limits and requirements The operating system needs a certain amount of CPU to perform all the support tasks including kernel networking. A system with just user plane networking applications (DPDK) needs at least one Core (2 hyperthreads when enabled) reserved for the operating system and the infrastructure components. A system with Hyper-Threading enabled must always put all core sibling threads to the same pool of CPUs. The set of reserved and isolated cores must include all CPU cores. Core 0 of each NUMA node must be included in the reserved CPU set. Isolated cores might be impacted by interrupts. The following annotations must be attached to the pod if guaranteed QoS pods require full use of the CPU: When per-pod power management is enabled with PerformanceProfile.workloadHints.perPodPowerManagement the following annotations must also be attached to the pod if guaranteed QoS pods require full use of the CPU: Engineering considerations The minimum reserved capacity ( systemReserved ) required can be found by following the guidance in "Which amount of CPU and memory are recommended to reserve for the system in OCP 4 nodes?" The actual required reserved CPU capacity depends on the cluster configuration and workload attributes. This reserved CPU value must be rounded up to a full core (2 hyper-thread) alignment. Changes to the CPU partitioning will drain and reboot the nodes in the MCP. The reserved CPUs reduce the pod density, as the reserved CPUs are removed from the allocatable capacity of the OpenShift node. The real-time workload hint should be enabled if the workload is real-time capable. Hardware without Interrupt Request (IRQ) affinity support will impact isolated CPUs. To ensure that pods with guaranteed CPU QoS have full use of allocated CPU, all hardware in the server must support IRQ affinity. Additional resources Tuning nodes for low latency with the performance profile Configuring host firmware for low latency and high performance 3.3.3.2. Service Mesh Description Telco core CNFs typically require a service mesh implementation. The specific features and performance required are dependent on the application. The selection of service mesh implementation and configuration is outside the scope of this documentation. 
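The CPU partitioning limits above reference pod annotations for fully isolating CPUs without listing them inline; the sketch below combines them with the whole-core sizing guidance from the application workloads section. It is a minimal, illustrative example: the annotation set is based on the commonly documented CRI-O annotations for low latency tuning and should be verified against the linked tuning documentation, and the runtime class name, image, and sizing values are placeholders.

apiVersion: v1
kind: Pod
metadata:
  name: isolated-dpdk-example                  # illustrative name
  annotations:
    # Assumed annotation set for fully isolating the CPUs of a Guaranteed QoS pod:
    cpu-load-balancing.crio.io: "disable"      # remove the pod CPUs from kernel scheduler load balancing
    cpu-quota.crio.io: "disable"               # disable the CFS quota for the pod CPUs
    irq-load-balancing.crio.io: "disable"      # keep device interrupts off the pod CPUs
spec:
  runtimeClassName: performance-example-profile   # assumption: the RuntimeClass created for the PerformanceProfile
  containers:
    - name: app
      image: registry.example.com/telco/dpdk-app:latest   # placeholder image
      resources:
        requests:
          cpu: "4"              # a multiple of 2 so both sibling threads of each core are allocated
          memory: 4Gi
          hugepages-1Gi: 8Gi
        limits:
          cpu: "4"              # requests == limits gives Guaranteed QoS
          memory: 4Gi
          hugepages-1Gi: 8Gi
      volumeMounts:
        - name: hugepages
          mountPath: /dev/hugepages
  volumes:
    - name: hugepages
      emptyDir:
        medium: HugePages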
The impact of service mesh on cluster resource utilization and performance, including additional latency introduced into pod networking, must be accounted for in the overall solution engineering. Additional resources About OpenShift Service Mesh 3.3.3.3. Networking OpenShift Container Platform networking is an ecosystem of features, plugins, and advanced networking capabilities that extend Kubernetes networking with the advanced networking-related features that your cluster needs to manage its network traffic for one or multiple hybrid clusters. Additional resources Understanding networking 3.3.3.3.1. Cluster Network Operator (CNO) New in this release Not applicable. Description The CNO deploys and manages the cluster network components including the default OVN-Kubernetes network plugin during OpenShift Container Platform cluster installation. It allows configuring primary interface MTU settings, OVN gateway modes to use node routing tables for pod egress, and additional secondary networks such as MACVLAN. In support of network traffic segregation, multiple network interfaces are configured through the CNO. Traffic steering to these interfaces is configured through static routes applied by using the NMState Operator. To ensure that pod traffic is properly routed, OVN-K is configured with the routingViaHost option enabled. This setting uses the kernel routing table and the applied static routes rather than OVN for pod egress traffic. The Whereabouts CNI plugin is used to provide dynamic IPv4 and IPv6 addressing for additional pod network interfaces without the use of a DHCP server. Limits and requirements OVN-Kubernetes is required for IPv6 support. Large MTU cluster support requires connected network equipment to be set to the same or larger value. Engineering considerations Pod egress traffic is handled by kernel routing table with the routingViaHost option. Appropriate static routes must be configured in the host. Additional resources Cluster Network Operator 3.3.3.3.2. Load Balancer New in this release Not applicable. Description MetalLB is a load-balancer implementation for bare metal Kubernetes clusters using standard routing protocols. It enables a Kubernetes service to get an external IP address which is also added to the host network for the cluster. Some use cases might require features not available in MetalLB, for example stateful load balancing. Where necessary, you can use an external third party load balancer. Selection and configuration of an external load balancer is outside the scope of this specification. When an external third party load balancer is used, the integration effort must include enough analysis to ensure all performance and resource utilization requirements are met. Limits and requirements Stateful load balancing is not supported by MetalLB. An alternate load balancer implementation must be used if this is a requirement for workload CNFs. The networking infrastructure must ensure that the external IP address is routable from clients to the host network for the cluster. Engineering considerations MetalLB is used in BGP mode only for core use case models. For core use models, MetalLB is supported with only the OVN-Kubernetes network provider used in local gateway mode. See routingViaHost in the "Cluster Network Operator" section. BGP configuration in MetalLB varies depending on the requirements of the network and peers. 
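With routingViaHost enabled as described in the Cluster Network Operator section above, pod egress follows the host routing table, so the static routes that steer traffic onto the secondary interfaces are typically declared through the NMState Operator. A minimal sketch follows; the interface name, destination prefix, and next hop are illustrative assumptions.

apiVersion: nmstate.io/v1
kind: NodeNetworkConfigurationPolicy
metadata:
  name: worker-static-routes-example         # illustrative name
spec:
  nodeSelector:
    node-role.kubernetes.io/worker: ""
  desiredState:
    routes:
      config:
        # Example: steer traffic for a signaling subnet out of a dedicated VLAN interface
        - destination: 198.51.100.0/24       # example destination prefix
          next-hop-address: 192.0.2.1        # example next hop on the secondary network
          next-hop-interface: bond1.100      # assumed VLAN subinterface on an LACP bond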
Address pools can be configured as needed, allowing variation in addresses, aggregation length, auto assignment, and other relevant parameters. The values of parameters in the Bi-Directional Forwarding Detection (BFD) profile should remain close to the defaults. Shorter values might lead to false negatives and impact performance. Additional resources About MetalLB and the MetalLB Operator 3.3.3.3.3. SR-IOV New in this release Not applicable Description SR-IOV enables physical network interfaces (PFs) to be divided into multiple virtual functions (VFs). VFs can then be assigned to multiple pods to achieve higher throughput performance while keeping the pods isolated. The SR-IOV Network Operator provisions and manages SR-IOV CNI, network device plugin, and other components of the SR-IOV stack. Limits and requirements The network interface controllers supported are listed in OCP supported SR-IOV devices SR-IOV and IOMMU enablement in BIOS: The SR-IOV Network Operator automatically enables IOMMU on the kernel command line. SR-IOV VFs do not receive link state updates from PF. If link down detection is needed, it must be done at the protocol level. Engineering considerations SR-IOV interfaces in vfio mode are typically used to enable additional secondary networks for applications that require high throughput or low latency. Additional resources About Single Root I/O Virtualization (SR-IOV) hardware networks 3.3.3.3.4. NMState Operator New in this release Not applicable Description The NMState Operator provides a Kubernetes API for performing network configurations across the cluster's nodes. It enables network interface configurations, static IPs and DNS, VLANs, trunks, bonding, static routes, MTU, and enabling promiscuous mode on the secondary interfaces. The cluster nodes periodically report on the state of each node's network interfaces to the API server. Limits and requirements Not applicable Engineering considerations The initial networking configuration is applied using NMStateConfig content in the installation CRs. The NMState Operator is used only when needed for network updates. When SR-IOV virtual functions are used for host networking, the NMState Operator using NodeNetworkConfigurationPolicy is used to configure those VF interfaces, for example, VLANs and the MTU. Additional resources About the Kubernetes NMState Operator 3.3.3.4. Logging New in this release Not applicable Description The ClusterLogging Operator enables collection and shipping of logs off the node for remote archival and analysis. The reference configuration ships audit and infrastructure logs to a remote archive by using Kafka. Limits and requirements Not applicable Engineering considerations The impact of cluster CPU use is based on the number or size of logs generated and the amount of log filtering configured. The reference configuration does not include shipping of application logs. Inclusion of application logs in the configuration requires evaluation of the application logging rate and sufficient additional CPU resources allocated to the reserved set. Additional resources About logging 3.3.3.5. Power Management New in this release You can specify a maximum C-state latency for a low latency pod when using per-pod power management. Previously, C-states could only be disabled completely on a per-pod basis. Description The Performance Profile can be used to configure a cluster in a high power, low power, or mixed (per-pod power management) mode.
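The mixed mode noted above is selected through the PerformanceProfile workload hints and is then consumed per pod through annotations. The sketch below is illustrative: the profile name, CPU ranges, and node selector are placeholders, and the value syntax used with the cpu-c-states.crio.io annotation to cap C-state exit latency is an assumption to confirm against the power saving documentation referenced in this section.

# PerformanceProfile sketch: enable per-pod power management (mixed mode)
apiVersion: performance.openshift.io/v2
kind: PerformanceProfile
metadata:
  name: example-core-profile          # illustrative name
spec:
  cpu:
    reserved: 0-1,32-33               # illustrative CPU ranges
    isolated: 2-31,34-63
  nodeSelector:
    node-role.kubernetes.io/worker: ""
  workloadHints:
    realTime: false
    highPowerConsumption: false
    perPodPowerManagement: true
---
# Pod sketch: a latency-critical pod that limits how deep its CPUs may sleep
apiVersion: v1
kind: Pod
metadata:
  name: latency-critical-example      # illustrative name
  annotations:
    cpu-c-states.crio.io: "max_latency:10us"   # assumed value format; "disable" turns C-states off entirely
    cpu-freq-governor.crio.io: "performance"   # assumed companion annotation for high-priority pods
spec:
  runtimeClassName: performance-example-core-profile   # assumption: RuntimeClass created for the profile
  containers:
    - name: app
      image: registry.example.com/telco/app:latest      # placeholder image
      resources:
        requests:
          cpu: "2"
          memory: 1Gi
        limits:
          cpu: "2"
          memory: 1Gi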
The choice of power mode depends on the characteristics of the workloads running on the cluster particularly how sensitive they are to latency. Limits and requirements Power configuration relies on appropriate BIOS configuration, for example, enabling C-states and P-states. Configuration varies between hardware vendors. Engineering considerations Latency: To ensure that latency-sensitive workloads meet their requirements, you will need either a high-power configuration or a per-pod power management configuration. Per-pod power management is only available for Guaranteed QoS Pods with dedicated pinned CPUs. Additional resources Configuring power saving for nodes that run colocated high and low priority workloads 3.3.3.6. Storage Overview Cloud native storage services can be provided by multiple solutions including OpenShift Data Foundation from Red Hat or third parties. OpenShift Data Foundation is a Ceph based software-defined storage solution for containers. It provides block storage, file system storage, and on-premises object storage, which can be dynamically provisioned for both persistent and non-persistent data requirements. Telco core applications require persistent storage. Note All storage data may not be encrypted in flight. To reduce risk, isolate the storage network from other cluster networks. The storage network must not be reachable, or routable, from other cluster networks. Only nodes directly attached to the storage network should be allowed to gain access to it. 3.3.3.6.1. OpenShift Data Foundation New in this release Not applicable Description Red Hat OpenShift Data Foundation is a software-defined storage service for containers. For Telco core clusters, storage support is provided by OpenShift Data Foundation storage services running externally to the application workload cluster. OpenShift Data Foundation supports separation of storage traffic using secondary CNI networks. Limits and requirements In an IPv4/IPv6 dual-stack networking environment, OpenShift Data Foundation uses IPv4 addressing. For more information, see Support OpenShift dual stack with ODF using IPv4 . Engineering considerations OpenShift Data Foundation network traffic should be isolated from other traffic on a dedicated network, for example, by using VLAN isolation. 3.3.3.6.2. Other Storage Other storage solutions can be used to provide persistent storage for core clusters. The configuration and integration of these solutions is outside the scope of the telco core RDS. Integration of the storage solution into the core cluster must include correct sizing and performance analysis to ensure the storage meets overall performance and resource utilization requirements. Additional resources Red Hat OpenShift Data Foundation 3.3.3.7. Monitoring New in this release Not applicable Description The Cluster Monitoring Operator (CMO) is included by default on all OpenShift clusters and provides monitoring (metrics, dashboards, and alerting) for the platform components and optionally user projects as well. Configuration of the monitoring operator allows for customization, including: Default retention period Custom alert rules The default handling of pod CPU and memory metrics is based on upstream Kubernetes cAdvisor and makes a tradeoff that prefers handling of stale data over metric accuracy. This leads to spiky data that will create false triggers of alerts over user-specified thresholds. 
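Monitoring customization, including the retention period and the dedicated service monitor opt-in discussed next, is applied through the cluster-monitoring-config ConfigMap. The following is a trimmed sketch of the reference monitoring-config-cm.yaml that appears in full in the YAML reference later in this section; the retention value shown matches the reference CR and can be adjusted.

apiVersion: v1
kind: ConfigMap
metadata:
  name: cluster-monitoring-config
  namespace: openshift-monitoring
data:
  config.yaml: |
    k8sPrometheusAdapter:
      # Opt in to the dedicated service monitor for accurate pod CPU and memory metrics
      dedicatedServiceMonitors:
        enabled: true
    prometheusK8s:
      # Retention is a tradeoff between historical data and CPU/storage use
      retention: 15d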
OpenShift supports an opt-in dedicated service monitor feature creating an additional set of pod CPU and memory metrics that do not suffer from the spiky behavior. For additional information, see this solution guide . In addition to default configuration, the following metrics are expected to be configured for telco core clusters: Pod CPU and memory metrics and alerts for user workloads Limits and requirements Monitoring configuration must enable the dedicated service monitor feature for accurate representation of pod metrics Engineering considerations The Prometheus retention period is specified by the user. The value used is a tradeoff between operational requirements for maintaining historical data on the cluster against CPU and storage resources. Longer retention periods increase the need for storage and require additional CPU to manage the indexing of data. Additional resources About OpenShift Container Platform monitoring 3.3.3.8. Scheduling New in this release NUMA-aware scheduling with the NUMA Resources Operator is now generally available in OpenShift Container Platform 4.14. With this release, you can exclude advertising the Non-Uniform Memory Access (NUMA) node for the SR-IOV network to the Topology Manager. By not advertising the NUMA node for the SR-IOV network, you can permit more flexible SR-IOV network deployments during NUMA-aware pod scheduling. To exclude advertising the NUMA node for the SR-IOV network resource to the Topology Manager, set the value excludeTopology to true in the SriovNetworkNodePolicy CR. For more information, see Exclude the SR-IOV network topology for NUMA-aware scheduling . Description The scheduler is a cluster-wide component responsible for selecting the right node for a given workload. It is a core part of the platform and does not require any specific configuration in the common deployment scenarios. However, there are few specific use cases described in the following section. Limits and requirements The default scheduler does not understand the NUMA locality of workloads. It only knows about the sum of all free resources on a worker node. This might cause workloads to be rejected when scheduled to a node with Topology manager policy set to single-numa-node or restricted . For example, consider a pod requesting 6 CPUs and being scheduled to an empty node that has 4 CPUs per NUMA node. The total allocatable capacity of the node is 8 CPUs and the scheduler will place the pod there. The node local admission will fail, however, as there are only 4 CPUs available in each of the NUMA nodes. All clusters with multi-NUMA nodes are required to use the NUMA Resources Operator . The machineConfigPoolSelector of the NUMA Resources Operator must select all nodes where NUMA aligned scheduling is needed. All machine config pools must have consistent hardware configuration for example all nodes are expected to have the same NUMA zone count. Engineering considerations Pods might require annotations for correct scheduling and isolation. For more information on annotations, see the "CPU Partitioning and performance tuning" section. Additional resources See Controlling pod placement using the scheduler Scheduling NUMA-aware workloads 3.3.3.9. Installation New in this release, Description Telco core clusters can be installed using the Agent Based Installer (ABI). This method allows users to install OpenShift Container Platform on bare metal servers without requiring additional servers or VMs for managing the installation. 
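The excludeTopology option described in the scheduling section above is set directly in the SriovNetworkNodePolicy CR. The following minimal sketch shows it in context; the policy name, resource name, device selector, and VF count are illustrative assumptions.

apiVersion: sriovnetwork.openshift.io/v1
kind: SriovNetworkNodePolicy
metadata:
  name: sriov-nnp-example                       # illustrative name
  namespace: openshift-sriov-network-operator
spec:
  resourceName: exampleResource                 # placeholder resource name
  deviceType: netdevice
  numVfs: 8                                     # placeholder VF count
  nicSelector:
    pfNames:
      - ens8f0np0                               # placeholder PF name
  nodeSelector:
    node-role.kubernetes.io/worker: ""
  # Do not advertise this resource's NUMA locality to the Topology Manager
  excludeTopology: true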
The ABI installer can be run on any system, for example a laptop, to generate an ISO installation image. This ISO is used as the installation media for the cluster supervisor nodes. Progress can be monitored using the ABI tool from any system with network connectivity to the supervisor node's API interfaces. Installation from declarative CRs Does not require additional servers to support installation Supports install in disconnected environment Limits and requirements Disconnected installation requires a reachable registry with all required content mirrored. Engineering considerations Networking configuration should be applied as NMState configuration during installation in preference to day-2 configuration by using the NMState Operator. Additional resources Installing an OpenShift Container Platform cluster with the Agent-based Installer 3.3.3.10. Security New in this release DPDK applications that need to inject traffic to the kernel can run in non-privileged pods with the help of the TAP CNI plugin. Furthermore, in this 4.14 release the ability to create a MAC-VLAN, IP-VLAN, and VLAN subinterface based on a master interface in a container namespace is generally available. Description Telco operators are security conscious and require clusters to be hardened against multiple attack vectors. Within OpenShift Container Platform, there is no single component or feature responsible for securing a cluster. This section provides details of security-oriented features and configuration for the use models covered in this specification. SecurityContextConstraints : All workload pods should be run with restricted-v2 or restricted SCC. Seccomp : All pods should be run with the RuntimeDefault (or stronger) seccomp profile. Rootless DPDK pods : Many user-plane networking (DPDK) CNFs require pods to run with root privileges. With this feature, a conformant DPDK pod can be run without requiring root privileges. Storage : The storage network should be isolated and non-routable to other cluster networks. See the "Storage" section for additional details. Limits and requirements Rootless DPDK pods require the following additional configuration steps: Configure the TAP plugin with the container_t SELinux context. Enable the container_use_devices SELinux boolean on the hosts. Engineering considerations For rootless DPDK pod support, the SELinux boolean container_use_devices must be enabled on the host for the TAP device to be created. This introduces a security risk that is acceptable for short to mid-term use. Other solutions will be explored. Additional resources Managing security context constraints 3.3.3.11. Scalability New in this release Not applicable Description Clusters will scale to the sizing listed in the limits and requirements section. Scaling of workloads is described in the use model section. Limits and requirements Cluster scales to at least 120 nodes Engineering considerations Not applicable 3.3.3.12. Additional configuration 3.3.3.12.1. Disconnected environment Description Telco core clusters are expected to be installed in networks without direct access to the internet. All container images needed to install, configure, and operate the cluster must be available in a disconnected registry. This includes OpenShift Container Platform images, day-2 Operator Lifecycle Manager (OLM) Operator images, and application workload images.
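The SCC and seccomp requirements listed in the security section above translate into an explicit pod level security context. The following is a minimal sketch of a workload pod that is compatible with the restricted-v2 SCC and the RuntimeDefault seccomp profile; the name and image are placeholders.

apiVersion: v1
kind: Pod
metadata:
  name: restricted-workload-example              # illustrative name
spec:
  securityContext:
    runAsNonRoot: true
    seccompProfile:
      type: RuntimeDefault                       # RuntimeDefault (or stronger) seccomp profile
  containers:
    - name: app
      image: registry.example.com/telco/app:latest   # placeholder image
      securityContext:
        allowPrivilegeEscalation: false
        capabilities:
          drop:
            - ALL
      resources:
        requests:
          cpu: 100m
          memory: 128Mi
        limits:
          cpu: 100m
          memory: 128Mi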
The use of a disconnected environment provides multiple benefits, for example: Limiting access to the cluster for security Curated content: The registry is populated based on curated and approved updates for the clusters Limits and requirements A unique name is required for all custom CatalogSources. Do not reuse the default catalog names. A valid time source must be configured as part of cluster installation. Engineering considerations Not applicable 3.3.3.12.2. Kernel New in this release Not applicable Description The user can install the following kernel modules by using MachineConfig to provide extended kernel functionality to CNFs: sctp ip_gre ip6_tables ip6t_REJECT ip6table_filter ip6table_mangle iptable_filter iptable_mangle iptable_nat xt_multiport xt_owner xt_REDIRECT xt_statistic xt_TCPMSS Limits and requirements Use of functionality available through these kernel modules must be analyzed by the user to determine the impact on CPU load, system performance, and ability to sustain KPI. Note Out of tree drivers are not supported. Engineering considerations Not applicable 3.3.4. Telco core 4.14 reference configuration CRs Use the following custom resources (CRs) to configure and deploy OpenShift Container Platform clusters with the telco core profile. Use the CRs to form the common baseline used in all the specific use models unless otherwise indicated. 3.3.4.1. Resource Tuning reference CRs Table 3.11. Resource Tuning CRs Component Reference CR Optional New in this release System reserved capacity control-plane-system-reserved.yaml Yes No System reserved capacity pid-limits-cr.yaml Yes No 3.3.4.2. Storage reference CRs Table 3.12. Storage CRs Component Reference CR Optional New in this release External ODF configuration 01-rook-ceph-external-cluster-details.secret.yaml No Yes External ODF configuration 02-ocs-external-storagecluster.yaml No No External ODF configuration odfNS.yaml No No External ODF configuration odfOperGroup.yaml No No 3.3.4.3. Networking reference CRs Table 3.13. Networking CRs Component Reference CR Optional New in this release Baseline Network.yaml No No Baseline networkAttachmentDefinition.yaml Yes Yes Load balancer addr-pool.yaml No No Load balancer bfd-profile.yaml No No Load balancer bgp-advr.yaml No No Load balancer bgp-peer.yaml No No Load balancer metallb.yaml No No Load balancer metallbNS.yaml Yes No Load balancer metallbOperGroup.yaml Yes No Load balancer metallbSubscription.yaml No No Multus - Tap CNI for rootless DPDK pod mc_rootless_pods_selinux.yaml No No SR-IOV Network Operator sriovNetwork.yaml Yes No SR-IOV Network Operator sriovNetworkNodePolicy.yaml No Yes SR-IOV Network Operator SriovOperatorConfig.yaml No Yes SR-IOV Network Operator SriovSubscription.yaml No No SR-IOV Network Operator SriovSubscriptionNS.yaml No No SR-IOV Network Operator SriovSubscriptionOperGroup.yaml No No 3.3.4.4. Scheduling reference CRs Table 3.14. Scheduling CRs Component Reference CR Optional New in this release NUMA-aware scheduler nrop.yaml No No NUMA-aware scheduler NROPSubscription.yaml No No NUMA-aware scheduler NROPSubscriptionNS.yaml No No NUMA-aware scheduler NROPSubscriptionOperGroup.yaml No No NUMA-aware scheduler sched.yaml No No 3.3.4.5. Other reference CRs Table 3.15. 
Other CRs Component Reference CR Optional New in this release Additional kernel modules control-plane-load-kernel-modules.yaml Yes No Additional kernel modules sctp_module_mc.yaml Yes No Additional kernel modules worker-load-kernel-modules.yaml Yes No Cluster logging ClusterLogForwarder.yaml No No Cluster logging ClusterLogging.yaml No No Cluster logging ClusterLogNS.yaml No No Cluster logging ClusterLogOperGroup.yaml No No Cluster logging ClusterLogSubscription.yaml No Yes Disconnected configuration catalog-source.yaml No No Disconnected configuration icsp.yaml No No Disconnected configuration operator-hub.yaml No No Monitoring and observability monitoring-config-cm.yaml Yes No Power management PerformanceProfile.yaml No No 3.3.4.6. YAML reference 3.3.4.6.1. Resource tuning reference YAML control-plane-system-reserved.yaml # optional # count: 1 apiVersion: machineconfiguration.openshift.io/v1 kind: KubeletConfig metadata: name: autosizing-master spec: autoSizingReserved: true machineConfigPoolSelector: matchLabels: pools.operator.machineconfiguration.openshift.io/master: "" pid-limits-cr.yaml # optional # count: 1 apiVersion: machineconfiguration.openshift.io/v1 kind: ContainerRuntimeConfig metadata: name: 99-change-pidslimit-custom spec: machineConfigPoolSelector: matchLabels: # Set to appropriate MCP pools.operator.machineconfiguration.openshift.io/master: "" containerRuntimeConfig: pidsLimit: USDpidsLimit # Example: #pidsLimit: 4096 3.3.4.6.2. Storage reference YAML 01-rook-ceph-external-cluster-details.secret.yaml # required # count: 1 --- apiVersion: v1 kind: Secret metadata: name: rook-ceph-external-cluster-details namespace: openshift-storage type: Opaque data: # encoded content has been made generic external_cluster_details: eyJuYW1lIjoicm9vay1jZXBoLW1vbi1lbmRwb2ludHMiLCJraW5kIjoiQ29uZmlnTWFwIiwiZGF0YSI6eyJkYXRhIjoiY2VwaHVzYTE9MS4yLjMuNDo2Nzg5IiwibWF4TW9uSWQiOiIwIiwibWFwcGluZyI6Int9In19LHsibmFtZSI6InJvb2stY2VwaC1tb24iLCJraW5kIjoiU2VjcmV0IiwiZGF0YSI6eyJhZG1pbi1zZWNyZXQiOiJhZG1pbi1zZWNyZXQiLCJmc2lkIjoiMTExMTExMTEtMTExMS0xMTExLTExMTEtMTExMTExMTExMTExIiwibW9uLXNlY3JldCI6Im1vbi1zZWNyZXQifX0seyJuYW1lIjoicm9vay1jZXBoLW9wZXJhdG9yLWNyZWRzIiwia2luZCI6IlNlY3JldCIsImRhdGEiOnsidXNlcklEIjoiY2xpZW50LmhlYWx0aGNoZWNrZXIiLCJ1c2VyS2V5IjoiYzJWamNtVjAifX0seyJuYW1lIjoibW9uaXRvcmluZy1lbmRwb2ludCIsImtpbmQiOiJDZXBoQ2x1c3RlciIsImRhdGEiOnsiTW9uaXRvcmluZ0VuZHBvaW50IjoiMS4yLjMuNCwxLjIuMy4zLDEuMi4zLjIiLCJNb25pdG9yaW5nUG9ydCI6IjkyODMifX0seyJuYW1lIjoiY2VwaC1yYmQiLCJraW5kIjoiU3RvcmFnZUNsYXNzIiwiZGF0YSI6eyJwb29sIjoib2RmX3Bvb2wifX0seyJuYW1lIjoicm9vay1jc2ktcmJkLW5vZGUiLCJraW5kIjoiU2VjcmV0IiwiZGF0YSI6eyJ1c2VySUQiOiJjc2ktcmJkLW5vZGUiLCJ1c2VyS2V5IjoiIn19LHsibmFtZSI6InJvb2stY3NpLXJiZC1wcm92aXNpb25lciIsImtpbmQiOiJTZWNyZXQiLCJkYXRhIjp7InVzZXJJRCI6ImNzaS1yYmQtcHJvdmlzaW9uZXIiLCJ1c2VyS2V5IjoiYzJWamNtVjAifX0seyJuYW1lIjoicm9vay1jc2ktY2VwaGZzLXByb3Zpc2lvbmVyIiwia2luZCI6IlNlY3JldCIsImRhdGEiOnsiYWRtaW5JRCI6ImNzaS1jZXBoZnMtcHJvdmlzaW9uZXIiLCJhZG1pbktleSI6IiJ9fSx7Im5hbWUiOiJyb29rLWNzaS1jZXBoZnMtbm9kZSIsImtpbmQiOiJTZWNyZXQiLCJkYXRhIjp7ImFkbWluSUQiOiJjc2ktY2VwaGZzLW5vZGUiLCJhZG1pbktleSI6ImMyVmpjbVYwIn19LHsibmFtZSI6ImNlcGhmcyIsImtpbmQiOiJTdG9yYWdlQ2xhc3MiLCJkYXRhIjp7ImZzTmFtZSI6ImNlcGhmcyIsInBvb2wiOiJtYW5pbGFfZGF0YSJ9fQ== 02-ocs-external-storagecluster.yaml # required # count: 1 --- apiVersion: ocs.openshift.io/v1 kind: StorageCluster metadata: name: ocs-external-storagecluster namespace: openshift-storage spec: externalStorage: enable: true labelSelector: {} odfNS.yaml # required: yes # count: 1 --- apiVersion: v1 kind: Namespace 
metadata: name: openshift-storage annotations: workload.openshift.io/allowed: management labels: openshift.io/cluster-monitoring: "true" odfOperGroup.yaml # required: yes # count: 1 --- apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: openshift-storage-operatorgroup namespace: openshift-storage spec: targetNamespaces: - openshift-storage 3.3.4.6.3. Networking reference YAML Network.yaml # required # count: 1 apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec: defaultNetwork: ovnKubernetesConfig: gatewayConfig: routingViaHost: true # additional networks are optional and may alternatively be specified using NetworkAttachmentDefinition CRs additionalNetworks: [USDadditionalNetworks] # eg #- name: add-net-1 # namespace: app-ns-1 # rawCNIConfig: '{ "cniVersion": "0.3.1", "name": "add-net-1", "plugins": [{"type": "macvlan", "master": "bond1", "ipam": {}}] }' # type: Raw #- name: add-net-2 # namespace: app-ns-1 # rawCNIConfig: '{ "cniVersion": "0.4.0", "name": "add-net-2", "plugins": [ {"type": "macvlan", "master": "bond1", "mode": "private" },{ "type": "tuning", "name": "tuning-arp" }] }' # type: Raw networkAttachmentDefinition.yaml # optional # copies: 0-N apiVersion: "k8s.cni.cncf.io/v1" kind: NetworkAttachmentDefinition metadata: name: USDname namespace: USDns spec: nodeSelector: kubernetes.io/hostname: USDnodeName config: USDconfig #eg #config: '{ # "cniVersion": "0.3.1", # "name": "external-169", # "type": "vlan", # "master": "ens8f0", # "mode": "bridge", # "vlanid": 169, # "ipam": { # "type": "static", # } #}' addr-pool.yaml # required # count: 1-N apiVersion: metallb.io/v1beta1 kind: IPAddressPool metadata: name: USDname # eg addresspool3 namespace: metallb-system annotations: metallb.universe.tf/address-pool: USDname # eg addresspool3 spec: ############## # Expected variation in this configuration addresses: [USDpools] #- 3.3.3.0/24 autoAssign: true ############## bfd-profile.yaml # required # count: 1-N apiVersion: metallb.io/v1beta1 kind: BFDProfile metadata: name: bfdprofile namespace: metallb-system spec: ################ # These values may vary. Recommended values are included as default receiveInterval: 150 # default 300ms transmitInterval: 150 # default 300ms #echoInterval: 300 # default 50ms detectMultiplier: 10 # default 3 echoMode: true passiveMode: true minimumTtl: 5 # default 254 # ################ bgp-advr.yaml # required # count: 1-N apiVersion: metallb.io/v1beta1 kind: BGPAdvertisement metadata: name: USDname # eg bgpadvertisement-1 namespace: metallb-system spec: ipAddressPools: [USDpool] # eg: # - addresspool3 peers: [USDpeers] # eg: # - peer-one communities: [USDcommunities] # Note correlation with address pool. 
# eg: # - 65535:65282 aggregationLength: 32 aggregationLengthV6: 128 localPref: 100 bgp-peer.yaml # required # count: 1-N apiVersion: metallb.io/v1beta1 kind: BGPPeer metadata: name: USDname namespace: metallb-system spec: peerAddress: USDip # eg 192.168.1.2 peerASN: USDpeerasn # eg 64501 myASN: USDmyasn # eg 64500 routerID: USDid # eg 10.10.10.10 bfdProfile: bfdprofile metallb.yaml # required # count: 1 apiVersion: metallb.io/v1beta1 kind: MetalLB metadata: name: metallb namespace: metallb-system spec: nodeSelector: node-role.kubernetes.io/worker: "" metallbNS.yaml # required: yes # count: 1 --- apiVersion: v1 kind: Namespace metadata: name: metallb-system annotations: workload.openshift.io/allowed: management labels: openshift.io/cluster-monitoring: "true" metallbOperGroup.yaml # required: yes # count: 1 --- apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: metallb-operator namespace: metallb-system metallbSubscription.yaml # required: yes # count: 1 --- apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: metallb-operator-sub namespace: metallb-system spec: channel: stable name: metallb-operator source: redhat-operators-disconnected sourceNamespace: openshift-marketplace installPlanApproval: Automatic mc_rootless_pods_selinux.yaml apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: worker name: 99-worker-setsebool spec: config: ignition: version: 3.2.0 systemd: units: - contents: | [Unit] Description=Set SELinux boolean for tap cni plugin Before=kubelet.service [Service] Type=oneshot ExecStart=/sbin/setsebool container_use_devices=on RemainAfterExit=true [Install] WantedBy=multi-user.target graphical.target enabled: true name: setsebool.service sriovNetwork.yaml # optional (though expected for all) # count: 0-N apiVersion: sriovnetwork.openshift.io/v1 kind: SriovNetwork metadata: name: USDname # eg sriov-network-abcd namespace: openshift-sriov-network-operator spec: capabilities: "USDcapabilities" # eg '{"mac": true, "ips": true}' ipam: "USDipam" # eg '{ "type": "host-local", "subnet": "10.3.38.0/24" }' networkNamespace: USDnns # eg cni-test resourceName: USDresource # eg resourceTest sriovNetworkNodePolicy.yaml # optional (though expected in all deployments) # count: 0-N apiVersion: sriovnetwork.openshift.io/v1 kind: SriovNetworkNodePolicy metadata: name: USDname namespace: openshift-sriov-network-operator spec: {} # USDspec # eg #deviceType: netdevice #nicSelector: # deviceID: "1593" # pfNames: # - ens8f0np0#0-9 # rootDevices: # - 0000:d8:00.0 # vendor: "8086" #nodeSelector: # kubernetes.io/hostname: host.sample.lab #numVfs: 20 #priority: 99 #excludeTopology: true #resourceName: resourceNameABCD SriovOperatorConfig.yaml # required # count: 1 --- apiVersion: sriovnetwork.openshift.io/v1 kind: SriovOperatorConfig metadata: name: default namespace: openshift-sriov-network-operator spec: configDaemonNodeSelector: node-role.kubernetes.io/worker: "" enableInjector: true enableOperatorWebhook: true SriovSubscription.yaml # required: yes # count: 1 apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: sriov-network-operator-subscription namespace: openshift-sriov-network-operator spec: channel: "stable" name: sriov-network-operator source: redhat-operators-disconnected sourceNamespace: openshift-marketplace installPlanApproval: Automatic SriovSubscriptionNS.yaml # required: yes # count: 1 apiVersion: v1 kind: Namespace metadata: name: 
openshift-sriov-network-operator annotations: workload.openshift.io/allowed: management SriovSubscriptionOperGroup.yaml # required: yes # count: 1 apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: sriov-network-operators namespace: openshift-sriov-network-operator spec: targetNamespaces: - openshift-sriov-network-operator 3.3.4.6.4. Scheduling reference YAML nrop.yaml # Optional # count: 1 apiVersion: nodetopology.openshift.io/v1 kind: NUMAResourcesOperator metadata: name: numaresourcesoperator spec: nodeGroups: - config: # Periodic is the default setting infoRefreshMode: Periodic machineConfigPoolSelector: matchLabels: # This label must match the pool(s) you want to run NUMA-aligned workloads pools.operator.machineconfiguration.openshift.io/worker: "" NROPSubscription.yaml # required # count: 1 apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: numaresources-operator namespace: openshift-numaresources spec: channel: "4.14" name: numaresources-operator source: redhat-operators-disconnected sourceNamespace: openshift-marketplace NROPSubscriptionNS.yaml # required: yes # count: 1 apiVersion: v1 kind: Namespace metadata: name: openshift-numaresources annotations: workload.openshift.io/allowed: management NROPSubscriptionOperGroup.yaml # required: yes # count: 1 apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: numaresources-operator namespace: openshift-numaresources spec: targetNamespaces: - openshift-numaresources sched.yaml # Optional # count: 1 apiVersion: nodetopology.openshift.io/v1 kind: NUMAResourcesScheduler metadata: name: numaresourcesscheduler spec: #cacheResyncPeriod: "0" # Image spec should be the latest for the release imageSpec: "registry.redhat.io/openshift4/noderesourcetopology-scheduler-rhel9:v4.14.0" #logLevel: "Trace" schedulerName: topo-aware-scheduler 3.3.4.6.5. 
Other reference YAML control-plane-load-kernel-modules.yaml # optional # count: 1 apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: master name: 40-load-kernel-modules-control-plane spec: config: # Release info found in https://github.com/coreos/butane/releases ignition: version: 3.2.0 storage: files: - contents: source: data:, mode: 420 overwrite: true path: /etc/modprobe.d/kernel-blacklist.conf - contents: source: data:text/plain;charset=utf-8;base64,aXBfZ3JlCmlwNl90YWJsZXMKaXA2dF9SRUpFQ1QKaXA2dGFibGVfZmlsdGVyCmlwNnRhYmxlX21hbmdsZQppcHRhYmxlX2ZpbHRlcgppcHRhYmxlX21hbmdsZQppcHRhYmxlX25hdAp4dF9tdWx0aXBvcnQKeHRfb3duZXIKeHRfUkVESVJFQ1QKeHRfc3RhdGlzdGljCnh0X1RDUE1TUwp4dF91MzI= mode: 420 overwrite: true path: /etc/modules-load.d/kernel-load.conf sctp_module_mc.yaml # optional # count: 1 apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: worker name: load-sctp-module spec: config: ignition: version: 2.2.0 storage: files: - contents: source: data:, verification: {} filesystem: root mode: 420 path: /etc/modprobe.d/sctp-blacklist.conf - contents: source: data:text/plain;charset=utf-8;base64,c2N0cA== filesystem: root mode: 420 path: /etc/modules-load.d/sctp-load.conf worker-load-kernel-modules.yaml # optional # count: 1 apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: worker name: 40-load-kernel-modules-worker spec: config: # Release info found in https://github.com/coreos/butane/releases ignition: version: 3.2.0 storage: files: - contents: source: data:, mode: 420 overwrite: true path: /etc/modprobe.d/kernel-blacklist.conf - contents: source: data:text/plain;charset=utf-8;base64,aXBfZ3JlCmlwNl90YWJsZXMKaXA2dF9SRUpFQ1QKaXA2dGFibGVfZmlsdGVyCmlwNnRhYmxlX21hbmdsZQppcHRhYmxlX2ZpbHRlcgppcHRhYmxlX21hbmdsZQppcHRhYmxlX25hdAp4dF9tdWx0aXBvcnQKeHRfb3duZXIKeHRfUkVESVJFQ1QKeHRfc3RhdGlzdGljCnh0X1RDUE1TUwp4dF91MzI= mode: 420 overwrite: true path: /etc/modules-load.d/kernel-load.conf ClusterLogForwarder.yaml # required # count: 1 apiVersion: logging.openshift.io/v1 kind: ClusterLogForwarder metadata: name: instance namespace: openshift-logging spec: outputs: - type: "kafka" name: kafka-open url: tcp://10.11.12.13:9092/test pipelines: - inputRefs: - infrastructure #- application - audit labels: label1: test1 label2: test2 label3: test3 label4: test4 label5: test5 name: all-to-default outputRefs: - kafka-open ClusterLogging.yaml # required # count: 1 apiVersion: logging.openshift.io/v1 kind: ClusterLogging metadata: name: instance namespace: openshift-logging spec: collection: type: vector managementState: Managed ClusterLogNS.yaml --- apiVersion: v1 kind: Namespace metadata: name: openshift-logging annotations: workload.openshift.io/allowed: management ClusterLogOperGroup.yaml --- apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: cluster-logging namespace: openshift-logging spec: targetNamespaces: - openshift-logging ClusterLogSubscription.yaml apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: cluster-logging namespace: openshift-logging spec: channel: "stable" name: cluster-logging source: redhat-operators-disconnected sourceNamespace: openshift-marketplace installPlanApproval: Automatic catalog-source.yaml # required # count: 1..N apiVersion: operators.coreos.com/v1alpha1 kind: CatalogSource metadata: name: redhat-operators-disconnected 
namespace: openshift-marketplace spec: displayName: Red Hat Disconnected Operators Catalog image: USDimageUrl publisher: Red Hat sourceType: grpc # updateStrategy: # registryPoll: # interval: 1h #status: # connectionState: # lastObservedState: READY icsp.yaml # required # count: 1 apiVersion: operator.openshift.io/v1alpha1 kind: ImageContentSourcePolicy metadata: name: disconnected-internal-icsp spec: repositoryDigestMirrors: [] # - USDmirrors operator-hub.yaml # required # count: 1 apiVersion: config.openshift.io/v1 kind: OperatorHub metadata: name: cluster spec: disableAllDefaultSources: true monitoring-config-cm.yaml # optional # count: 1 --- apiVersion: v1 kind: ConfigMap metadata: name: cluster-monitoring-config namespace: openshift-monitoring data: config.yaml: | k8sPrometheusAdapter: dedicatedServiceMonitors: enabled: true prometheusK8s: retention: 15d volumeClaimTemplate: spec: storageClassName: ocs-external-storagecluster-ceph-rbd resources: requests: storage: 100Gi alertmanagerMain: volumeClaimTemplate: spec: storageClassName: ocs-external-storagecluster-ceph-rbd resources: requests: storage: 20Gi PerformanceProfile.yaml # required # count: 1 apiVersion: performance.openshift.io/v2 kind: PerformanceProfile metadata: name: USDname annotations: # Some pods want the kernel stack to ignore IPv6 router Advertisement. kubeletconfig.experimental: | {"allowedUnsafeSysctls":["net.ipv6.conf.all.accept_ra"]} spec: cpu: # node0 CPUs: 0-17,36-53 # node1 CPUs: 18-34,54-71 # siblings: (0,36), (1,37)... # we want to reserve the first Core of each NUMA socket # # no CPU left behind! all-cpus == isolated + reserved isolated: USDisolated # eg 1-17,19-35,37-53,55-71 reserved: USDreserved # eg 0,18,36,54 # Guaranteed QoS pods will disable IRQ balancing for cores allocated to the pod. # default value of globallyDisableIrqLoadBalancing is false globallyDisableIrqLoadBalancing: false hugepages: defaultHugepagesSize: 1G pages: # 32GB per numa node - count: USDcount # eg 64 size: 1G machineConfigPoolSelector: # For SNO: machineconfiguration.openshift.io/role: 'master' pools.operator.machineconfiguration.openshift.io/worker: '' nodeSelector: # For SNO: node-role.kubernetes.io/master: "" node-role.kubernetes.io/worker: "" workloadHints: realTime: false highPowerConsumption: false perPodPowerManagement: true realTimeKernel: enabled: false numa: # All guaranteed QoS containers get resources from a single NUMA node topologyPolicy: "single-numa-node" net: userLevelNetworking: false | [
"query=avg_over_time(pod:container_cpu_usage:sum{namespace=\"openshift-kube-apiserver\"}[30m])",
"nodes: - hostName: \"example-node1.example.com\" ironicInspect: \"enabled\"",
"apiVersion: lvm.topolvm.io/v1alpha1 kind: LVMCluster metadata: name: storage-lvmcluster namespace: openshift-storage annotations: ran.openshift.io/ztp-deploy-wave: \"10\" spec: {} storage: deviceClasses: - name: vg1 thinPoolConfig: name: thin-pool-1 sizePercent: 90 overprovisionRatio: 10",
"cpuPartitioningMode: AllNodes",
"apiVersion: ran.openshift.io/v1alpha1 kind: PreCachingConfig metadata: name: example-config namespace: example-ns spec: additionalImages: - quay.io/foobar/application1@sha256:3d5800990dee7cd4727d3fe238a97e2d2976d3808fc925ada29c559a47e2e - quay.io/foobar/application2@sha256:3d5800123dee7cd4727d3fe238a97e2d2976d3808fc925ada29c559a47adf - quay.io/foobar/applicationN@sha256:4fe1334adfafadsf987123adfffdaf1243340adfafdedga0991234afdadfs spaceRequired: 45 GiB 1 overrides: preCacheImage: quay.io/test_images/pre-cache:latest platformImage: quay.io/openshift-release-dev/ocp-release@sha256:3d5800990dee7cd4727d3fe238a97e2d2976d3808fc925ada29c559a47e2e operatorsIndexes: - registry.example.com:5000/custom-redhat-operators:1.0.0 operatorsPackagesAndChannels: - local-storage-operator: stable - ptp-operator: stable - sriov-network-operator: stable excludePrecachePatterns: 2 - aws - vsphere",
"apiVersion: \"logging.openshift.io/v1\" kind: ClusterLogForwarder metadata: name: instance namespace: openshift-logging annotations: {} spec: outputs: USDoutputs pipelines: USDpipelines",
"apiVersion: logging.openshift.io/v1 kind: ClusterLogging metadata: name: instance namespace: openshift-logging annotations: {} spec: managementState: \"Managed\" collection: logs: type: \"vector\"",
"--- apiVersion: v1 kind: Namespace metadata: name: openshift-logging annotations: workload.openshift.io/allowed: management",
"--- apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: cluster-logging namespace: openshift-logging annotations: {} spec: targetNamespaces: - openshift-logging",
"apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: cluster-logging namespace: openshift-logging annotations: {} spec: channel: \"stable\" name: cluster-logging source: redhat-operators-disconnected sourceNamespace: openshift-marketplace installPlanApproval: Manual status: state: AtLatestKnown",
"apiVersion: storage.k8s.io/v1 kind: StorageClass metadata: annotations: {} name: example-storage-class provisioner: kubernetes.io/no-provisioner reclaimPolicy: Delete",
"apiVersion: \"local.storage.openshift.io/v1\" kind: \"LocalVolume\" metadata: name: \"local-disks\" namespace: \"openshift-local-storage\" annotations: {} spec: logLevel: Normal managementState: Managed storageClassDevices: # The list of storage classes and associated devicePaths need to be specified like this example: - storageClassName: \"example-storage-class\" volumeMode: Filesystem fsType: xfs # The below must be adjusted to the hardware. # For stability and reliability, it's recommended to use persistent # naming conventions for devicePaths, such as /dev/disk/by-path. devicePaths: - /dev/disk/by-path/pci-0000:05:00.0-nvme-1 #--- ## How to verify ## 1. Create a PVC apiVersion: v1 kind: PersistentVolumeClaim metadata: name: local-pvc-name spec: accessModes: - ReadWriteOnce volumeMode: Filesystem resources: requests: storage: 100Gi storageClassName: example-storage-class #--- ## 2. Create a pod that mounts it apiVersion: v1 kind: Pod metadata: labels: run: busybox name: busybox spec: containers: - image: quay.io/quay/busybox:latest name: busybox resources: {} command: [\"/bin/sh\", \"-c\", \"sleep infinity\"] volumeMounts: - name: local-pvc mountPath: /data volumes: - name: local-pvc persistentVolumeClaim: claimName: local-pvc-name dnsPolicy: ClusterFirst restartPolicy: Always ## 3. Run the pod on the cluster and verify the size and access of the `/data` mount",
"apiVersion: v1 kind: Namespace metadata: name: openshift-local-storage annotations: workload.openshift.io/allowed: management",
"apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: openshift-local-storage namespace: openshift-local-storage annotations: {} spec: targetNamespaces: - openshift-local-storage",
"apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: local-storage-operator namespace: openshift-local-storage annotations: {} spec: channel: \"stable\" name: local-storage-operator source: redhat-operators-disconnected sourceNamespace: openshift-marketplace installPlanApproval: Manual status: state: AtLatestKnown",
"apiVersion: performance.openshift.io/v2 kind: PerformanceProfile metadata: # if you change this name make sure the 'include' line in TunedPerformancePatch.yaml # matches this name: include=openshift-node-performance-USD{PerformanceProfile.metadata.name} # Also in file 'validatorCRs/informDuValidator.yaml': # name: 50-performance-USD{PerformanceProfile.metadata.name} name: openshift-node-performance-profile annotations: ran.openshift.io/reference-configuration: \"ran-du.redhat.com\" spec: additionalKernelArgs: - \"rcupdate.rcu_normal_after_boot=0\" - \"efi=runtime\" - \"vfio_pci.enable_sriov=1\" - \"vfio_pci.disable_idle_d3=1\" - \"module_blacklist=irdma\" cpu: isolated: USDisolated reserved: USDreserved hugepages: defaultHugepagesSize: USDdefaultHugepagesSize pages: - size: USDsize count: USDcount node: USDnode machineConfigPoolSelector: pools.operator.machineconfiguration.openshift.io/USDmcp: \"\" nodeSelector: node-role.kubernetes.io/USDmcp: \"\" numa: topologyPolicy: \"restricted\" # To use the standard (non-realtime) kernel, set enabled to false realTimeKernel: enabled: true workloadHints: # WorkloadHints defines the set of upper level flags for different type of workloads. # See https://github.com/openshift/cluster-node-tuning-operator/blob/master/docs/performanceprofile/performance_profile.md#workloadhints # for detailed descriptions of each item. # The configuration below is set for a low latency, performance mode. realTime: true highPowerConsumption: false perPodPowerManagement: false",
"apiVersion: tuned.openshift.io/v1 kind: Tuned metadata: name: performance-patch namespace: openshift-cluster-node-tuning-operator annotations: {} spec: profile: - name: performance-patch # Please note: # - The 'include' line must match the associated PerformanceProfile name, following below pattern # include=openshift-node-performance-USD{PerformanceProfile.metadata.name} # - When using the standard (non-realtime) kernel, remove the kernel.timer_migration override from # the [sysctl] section and remove the entire section if it is empty. data: | [main] summary=Configuration changes profile inherited from performance created tuned include=openshift-node-performance-openshift-node-performance-profile [sysctl] kernel.timer_migration=1 [scheduler] group.ice-ptp=0:f:10:*:ice-ptp.* group.ice-gnss=0:f:10:*:ice-gnss.* [service] service.stalld=start,enable service.chronyd=stop,disable recommend: - machineConfigLabels: machineconfiguration.openshift.io/role: \"USDmcp\" priority: 19 profile: performance-patch",
"apiVersion: ptp.openshift.io/v1 kind: PtpOperatorConfig metadata: name: default namespace: openshift-ptp annotations: {} spec: daemonNodeSelector: node-role.kubernetes.io/USDmcp: \"\" ptpEventConfig: enableEventPublisher: true transportHost: \"http://ptp-event-publisher-service-NODE_NAME.openshift-ptp.svc.cluster.local:9043\"",
"apiVersion: ptp.openshift.io/v1 kind: PtpConfig metadata: name: boundary namespace: openshift-ptp annotations: {} spec: profile: - name: \"boundary\" ptp4lOpts: \"-2\" phc2sysOpts: \"-a -r -n 24\" ptpSchedulingPolicy: SCHED_FIFO ptpSchedulingPriority: 10 ptpSettings: logReduce: \"true\" ptp4lConf: | # The interface name is hardware-specific [USDiface_slave] masterOnly 0 [USDiface_master_1] masterOnly 1 [USDiface_master_2] masterOnly 1 [USDiface_master_3] masterOnly 1 [global] # # Default Data Set # twoStepFlag 1 slaveOnly 0 priority1 128 priority2 128 domainNumber 24 #utc_offset 37 clockClass 248 clockAccuracy 0xFE offsetScaledLogVariance 0xFFFF free_running 0 freq_est_interval 1 dscp_event 0 dscp_general 0 dataset_comparison G.8275.x G.8275.defaultDS.localPriority 128 # # Port Data Set # logAnnounceInterval -3 logSyncInterval -4 logMinDelayReqInterval -4 logMinPdelayReqInterval -4 announceReceiptTimeout 3 syncReceiptTimeout 0 delayAsymmetry 0 fault_reset_interval -4 neighborPropDelayThresh 20000000 masterOnly 0 G.8275.portDS.localPriority 128 # # Run time options # assume_two_step 0 logging_level 6 path_trace_enabled 0 follow_up_info 0 hybrid_e2e 0 inhibit_multicast_service 0 net_sync_monitor 0 tc_spanning_tree 0 tx_timestamp_timeout 50 unicast_listen 0 unicast_master_table 0 unicast_req_duration 3600 use_syslog 1 verbose 0 summary_interval 0 kernel_leap 1 check_fup_sync 0 clock_class_threshold 135 # # Servo Options # pi_proportional_const 0.0 pi_integral_const 0.0 pi_proportional_scale 0.0 pi_proportional_exponent -0.3 pi_proportional_norm_max 0.7 pi_integral_scale 0.0 pi_integral_exponent 0.4 pi_integral_norm_max 0.3 step_threshold 2.0 first_step_threshold 0.00002 max_frequency 900000000 clock_servo pi sanity_freq_limit 200000000 ntpshm_segment 0 # # Transport options # transportSpecific 0x0 ptp_dst_mac 01:1B:19:00:00:00 p2p_dst_mac 01:80:C2:00:00:0E udp_ttl 1 udp6_scope 0x0E uds_address /var/run/ptp4l # # Default interface options # clock_type BC network_transport L2 delay_mechanism E2E time_stamping hardware tsproc_mode filter delay_filter moving_median delay_filter_length 10 egressLatency 0 ingressLatency 0 boundary_clock_jbod 0 # # Clock description # productDescription ;; revisionData ;; manufacturerIdentity 00:00:00 userDescription ; timeSource 0xA0 recommend: - profile: \"boundary\" priority: 4 match: - nodeLabel: \"node-role.kubernetes.io/USDmcp\"",
"The grandmaster profile is provided for testing only It is not installed on production clusters apiVersion: ptp.openshift.io/v1 kind: PtpConfig metadata: name: grandmaster namespace: openshift-ptp annotations: {} spec: profile: - name: \"grandmaster\" ptp4lOpts: \"-2 --summary_interval -4\" phc2sysOpts: -r -u 0 -m -O -37 -N 8 -R 16 -s USDiface_master -n 24 ptpSchedulingPolicy: SCHED_FIFO ptpSchedulingPriority: 10 ptpSettings: logReduce: \"true\" plugins: e810: enableDefaultConfig: false settings: LocalMaxHoldoverOffSet: 1500 LocalHoldoverTimeout: 14400 MaxInSpecOffset: 100 pins: USDe810_pins # \"USDiface_master\": # \"U.FL2\": \"0 2\" # \"U.FL1\": \"0 1\" # \"SMA2\": \"0 2\" # \"SMA1\": \"0 1\" ublxCmds: - args: #ubxtool -P 29.20 -z CFG-HW-ANT_CFG_VOLTCTRL,1 - \"-P\" - \"29.20\" - \"-z\" - \"CFG-HW-ANT_CFG_VOLTCTRL,1\" reportOutput: false - args: #ubxtool -P 29.20 -e GPS - \"-P\" - \"29.20\" - \"-e\" - \"GPS\" reportOutput: false - args: #ubxtool -P 29.20 -d Galileo - \"-P\" - \"29.20\" - \"-d\" - \"Galileo\" reportOutput: false - args: #ubxtool -P 29.20 -d GLONASS - \"-P\" - \"29.20\" - \"-d\" - \"GLONASS\" reportOutput: false - args: #ubxtool -P 29.20 -d BeiDou - \"-P\" - \"29.20\" - \"-d\" - \"BeiDou\" reportOutput: false - args: #ubxtool -P 29.20 -d SBAS - \"-P\" - \"29.20\" - \"-d\" - \"SBAS\" reportOutput: false - args: #ubxtool -P 29.20 -t -w 5 -v 1 -e SURVEYIN,600,50000 - \"-P\" - \"29.20\" - \"-t\" - \"-w\" - \"5\" - \"-v\" - \"1\" - \"-e\" - \"SURVEYIN,600,50000\" reportOutput: true - args: #ubxtool -P 29.20 -p MON-HW - \"-P\" - \"29.20\" - \"-p\" - \"MON-HW\" reportOutput: true - args: #ubxtool -P 29.20 -p CFG-MSG,1,38,300 - \"-P\" - \"29.20\" - \"-p\" - \"CFG-MSG,1,38,300\" reportOutput: true ts2phcOpts: \" \" ts2phcConf: | [nmea] ts2phc.master 1 [global] use_syslog 0 verbose 1 logging_level 7 ts2phc.pulsewidth 100000000 #GNSS module s /dev/ttyGNSS* -al use _0 #cat /dev/ttyGNSS_1700_0 to find available serial port #example value of gnss_serialport is /dev/ttyGNSS_1700_0 ts2phc.nmea_serialport USDgnss_serialport leapfile /usr/share/zoneinfo/leap-seconds.list [USDiface_master] ts2phc.extts_polarity rising ts2phc.extts_correction 0 ptp4lConf: | [USDiface_master] masterOnly 1 [USDiface_master_1] masterOnly 1 [USDiface_master_2] masterOnly 1 [USDiface_master_3] masterOnly 1 [global] # # Default Data Set # twoStepFlag 1 priority1 128 priority2 128 domainNumber 24 #utc_offset 37 clockClass 6 clockAccuracy 0x27 offsetScaledLogVariance 0xFFFF free_running 0 freq_est_interval 1 dscp_event 0 dscp_general 0 dataset_comparison G.8275.x G.8275.defaultDS.localPriority 128 # # Port Data Set # logAnnounceInterval -3 logSyncInterval -4 logMinDelayReqInterval -4 logMinPdelayReqInterval 0 announceReceiptTimeout 3 syncReceiptTimeout 0 delayAsymmetry 0 fault_reset_interval -4 neighborPropDelayThresh 20000000 masterOnly 0 G.8275.portDS.localPriority 128 # # Run time options # assume_two_step 0 logging_level 6 path_trace_enabled 0 follow_up_info 0 hybrid_e2e 0 inhibit_multicast_service 0 net_sync_monitor 0 tc_spanning_tree 0 tx_timestamp_timeout 50 unicast_listen 0 unicast_master_table 0 unicast_req_duration 3600 use_syslog 1 verbose 0 summary_interval -4 kernel_leap 1 check_fup_sync 0 clock_class_threshold 7 # # Servo Options # pi_proportional_const 0.0 pi_integral_const 0.0 pi_proportional_scale 0.0 pi_proportional_exponent -0.3 pi_proportional_norm_max 0.7 pi_integral_scale 0.0 pi_integral_exponent 0.4 pi_integral_norm_max 0.3 step_threshold 2.0 first_step_threshold 0.00002 clock_servo pi 
sanity_freq_limit 200000000 ntpshm_segment 0 # # Transport options # transportSpecific 0x0 ptp_dst_mac 01:1B:19:00:00:00 p2p_dst_mac 01:80:C2:00:00:0E udp_ttl 1 udp6_scope 0x0E uds_address /var/run/ptp4l # # Default interface options # clock_type BC network_transport L2 delay_mechanism E2E time_stamping hardware tsproc_mode filter delay_filter moving_median delay_filter_length 10 egressLatency 0 ingressLatency 0 boundary_clock_jbod 0 # # Clock description # productDescription ;; revisionData ;; manufacturerIdentity 00:00:00 userDescription ; timeSource 0x20 recommend: - profile: \"grandmaster\" priority: 4 match: - nodeLabel: \"node-role.kubernetes.io/USDmcp\"",
"apiVersion: ptp.openshift.io/v1 kind: PtpConfig metadata: name: ordinary namespace: openshift-ptp annotations: {} spec: profile: - name: \"ordinary\" # The interface name is hardware-specific interface: USDinterface ptp4lOpts: \"-2 -s\" phc2sysOpts: \"-a -r -n 24\" ptpSchedulingPolicy: SCHED_FIFO ptpSchedulingPriority: 10 ptpSettings: logReduce: \"true\" ptp4lConf: | [global] # # Default Data Set # twoStepFlag 1 slaveOnly 1 priority1 128 priority2 128 domainNumber 24 #utc_offset 37 clockClass 255 clockAccuracy 0xFE offsetScaledLogVariance 0xFFFF free_running 0 freq_est_interval 1 dscp_event 0 dscp_general 0 dataset_comparison G.8275.x G.8275.defaultDS.localPriority 128 # # Port Data Set # logAnnounceInterval -3 logSyncInterval -4 logMinDelayReqInterval -4 logMinPdelayReqInterval -4 announceReceiptTimeout 3 syncReceiptTimeout 0 delayAsymmetry 0 fault_reset_interval -4 neighborPropDelayThresh 20000000 masterOnly 0 G.8275.portDS.localPriority 128 # # Run time options # assume_two_step 0 logging_level 6 path_trace_enabled 0 follow_up_info 0 hybrid_e2e 0 inhibit_multicast_service 0 net_sync_monitor 0 tc_spanning_tree 0 tx_timestamp_timeout 50 unicast_listen 0 unicast_master_table 0 unicast_req_duration 3600 use_syslog 1 verbose 0 summary_interval 0 kernel_leap 1 check_fup_sync 0 clock_class_threshold 7 # # Servo Options # pi_proportional_const 0.0 pi_integral_const 0.0 pi_proportional_scale 0.0 pi_proportional_exponent -0.3 pi_proportional_norm_max 0.7 pi_integral_scale 0.0 pi_integral_exponent 0.4 pi_integral_norm_max 0.3 step_threshold 2.0 first_step_threshold 0.00002 max_frequency 900000000 clock_servo pi sanity_freq_limit 200000000 ntpshm_segment 0 # # Transport options # transportSpecific 0x0 ptp_dst_mac 01:1B:19:00:00:00 p2p_dst_mac 01:80:C2:00:00:0E udp_ttl 1 udp6_scope 0x0E uds_address /var/run/ptp4l # # Default interface options # clock_type OC network_transport L2 delay_mechanism E2E time_stamping hardware tsproc_mode filter delay_filter moving_median delay_filter_length 10 egressLatency 0 ingressLatency 0 boundary_clock_jbod 0 # # Clock description # productDescription ;; revisionData ;; manufacturerIdentity 00:00:00 userDescription ; timeSource 0xA0 recommend: - profile: \"ordinary\" priority: 4 match: - nodeLabel: \"node-role.kubernetes.io/USDmcp\"",
"--- apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: ptp-operator-subscription namespace: openshift-ptp annotations: {} spec: channel: \"stable\" name: ptp-operator source: redhat-operators-disconnected sourceNamespace: openshift-marketplace installPlanApproval: Manual status: state: AtLatestKnown",
"--- apiVersion: v1 kind: Namespace metadata: name: openshift-ptp annotations: workload.openshift.io/allowed: management labels: openshift.io/cluster-monitoring: \"true\"",
"apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: ptp-operators namespace: openshift-ptp annotations: {} spec: targetNamespaces: - openshift-ptp",
"apiVersion: v1 kind: Namespace metadata: name: vran-acceleration-operators annotations: {}",
"apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: vran-operators namespace: vran-acceleration-operators annotations: {} spec: targetNamespaces: - vran-acceleration-operators",
"apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: sriov-fec-subscription namespace: vran-acceleration-operators annotations: {} spec: channel: stable name: sriov-fec source: certified-operators sourceNamespace: openshift-marketplace installPlanApproval: Manual status: state: AtLatestKnown",
"apiVersion: sriovfec.intel.com/v2 kind: SriovFecClusterConfig metadata: name: config namespace: vran-acceleration-operators annotations: {} spec: drainSkip: USDdrainSkip # true if SNO, false by default priority: 1 nodeSelector: node-role.kubernetes.io/master: \"\" acceleratorSelector: pciAddress: USDpciAddress physicalFunction: pfDriver: \"vfio-pci\" vfDriver: \"vfio-pci\" vfAmount: 16 bbDevConfig: USDbbDevConfig #Recommended configuration for Intel ACC100 (Mount Bryce) FPGA here: https://github.com/smart-edge-open/openshift-operator/blob/main/spec/openshift-sriov-fec-operator.md#sample-cr-for-wireless-fec-acc100 #Recommended configuration for Intel N3000 FPGA here: https://github.com/smart-edge-open/openshift-operator/blob/main/spec/openshift-sriov-fec-operator.md#sample-cr-for-wireless-fec-n3000",
"apiVersion: sriovnetwork.openshift.io/v1 kind: SriovNetwork metadata: name: \"\" namespace: openshift-sriov-network-operator annotations: {} spec: # resourceName: \"\" networkNamespace: openshift-sriov-network-operator vlan: \"\" spoofChk: \"\" ipam: \"\" linkState: \"\" maxTxRate: \"\" minTxRate: \"\" vlanQoS: \"\" trust: \"\" capabilities: \"\"",
"apiVersion: sriovnetwork.openshift.io/v1 kind: SriovNetworkNodePolicy metadata: name: USDname namespace: openshift-sriov-network-operator annotations: {} spec: # The attributes for Mellanox/Intel based NICs as below. # deviceType: netdevice/vfio-pci # isRdma: true/false deviceType: USDdeviceType isRdma: USDisRdma nicSelector: # The exact physical function name must match the hardware used pfNames: [USDpfNames] nodeSelector: node-role.kubernetes.io/USDmcp: \"\" numVfs: USDnumVfs priority: USDpriority resourceName: USDresourceName",
"apiVersion: sriovnetwork.openshift.io/v1 kind: SriovOperatorConfig metadata: name: default namespace: openshift-sriov-network-operator annotations: {} spec: configDaemonNodeSelector: \"node-role.kubernetes.io/USDmcp\": \"\" # Injector and OperatorWebhook pods can be disabled (set to \"false\") below # to reduce the number of management pods. It is recommended to start with the # webhook and injector pods enabled, and only disable them after verifying the # correctness of user manifests. # If the injector is disabled, containers using sr-iov resources must explicitly assign # them in the \"requests\"/\"limits\" section of the container spec, for example: # containers: # - name: my-sriov-workload-container # resources: # limits: # openshift.io/<resource_name>: \"1\" # requests: # openshift.io/<resource_name>: \"1\" enableInjector: true enableOperatorWebhook: true logLevel: 0",
"apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: sriov-network-operator-subscription namespace: openshift-sriov-network-operator annotations: {} spec: channel: \"stable\" name: sriov-network-operator source: redhat-operators-disconnected sourceNamespace: openshift-marketplace installPlanApproval: Manual status: state: AtLatestKnown",
"apiVersion: v1 kind: Namespace metadata: name: openshift-sriov-network-operator annotations: workload.openshift.io/allowed: management",
"apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: sriov-network-operators namespace: openshift-sriov-network-operator annotations: {} spec: targetNamespaces: - openshift-sriov-network-operator",
"example-node1-bmh-secret & assisted-deployment-pull-secret need to be created under same namespace example-sno --- apiVersion: ran.openshift.io/v1 kind: SiteConfig metadata: name: \"example-sno\" namespace: \"example-sno\" spec: baseDomain: \"example.com\" pullSecretRef: name: \"assisted-deployment-pull-secret\" clusterImageSetNameRef: \"openshift-4.10\" sshPublicKey: \"ssh-rsa AAAA...\" clusters: - clusterName: \"example-sno\" networkType: \"OVNKubernetes\" # installConfigOverrides is a generic way of passing install-config # parameters through the siteConfig. The 'capabilities' field configures # the composable openshift feature. In this 'capabilities' setting, we # remove all but the marketplace component from the optional set of # components. # Notes: # - OperatorLifecycleManager is needed for 4.15 and later # - NodeTuning is needed for 4.13 and later, not for 4.12 and earlier installConfigOverrides: | { \"capabilities\": { \"baselineCapabilitySet\": \"None\", \"additionalEnabledCapabilities\": [ \"NodeTuning\", \"OperatorLifecycleManager\" ] } } # It is strongly recommended to include crun manifests as part of the additional install-time manifests for 4.13+. # The crun manifests can be obtained from source-crs/optional-extra-manifest/ and added to the git repo ie.sno-extra-manifest. # extraManifestPath: sno-extra-manifest clusterLabels: # These example cluster labels correspond to the bindingRules in the PolicyGenTemplate examples du-profile: \"latest\" # These example cluster labels correspond to the bindingRules in the PolicyGenTemplate examples in ../policygentemplates: # ../policygentemplates/common-ranGen.yaml will apply to all clusters with 'common: true' common: true # ../policygentemplates/group-du-sno-ranGen.yaml will apply to all clusters with 'group-du-sno: \"\"' group-du-sno: \"\" # ../policygentemplates/example-sno-site.yaml will apply to all clusters with 'sites: \"example-sno\"' # Normally this should match or contain the cluster name so it only applies to a single cluster sites : \"example-sno\" clusterNetwork: - cidr: 1001:1::/48 hostPrefix: 64 machineNetwork: - cidr: 1111:2222:3333:4444::/64 serviceNetwork: - 1001:2::/112 additionalNTPSources: - 1111:2222:3333:4444::2 # Initiates the cluster for workload partitioning. Setting specific reserved/isolated CPUSets is done via PolicyTemplate # please see Workload Partitioning Feature for a complete guide. cpuPartitioningMode: AllNodes # Optionally; This can be used to override the KlusterletAddonConfig that is created for this cluster: #crTemplates: # KlusterletAddonConfig: \"KlusterletAddonConfigOverride.yaml\" nodes: - hostName: \"example-node1.example.com\" role: \"master\" # Optionally; This can be used to configure desired BIOS setting on a host: #biosConfigRef: # filePath: \"example-hw.profile\" bmcAddress: \"idrac-virtualmedia+https://[1111:2222:3333:4444::bbbb:1]/redfish/v1/Systems/System.Embedded.1\" bmcCredentialsName: name: \"example-node1-bmh-secret\" bootMACAddress: \"AA:BB:CC:DD:EE:11\" # Use UEFISecureBoot to enable secure boot bootMode: \"UEFI\" rootDeviceHints: deviceName: \"/dev/disk/by-path/pci-0000:01:00.0-scsi-0:2:0:0\" # disk partition at `/var/lib/containers` with ignitionConfigOverride. Some values must be updated. 
See DiskPartitionContainer.md for more details ignitionConfigOverride: | { \"ignition\": { \"version\": \"3.2.0\" }, \"storage\": { \"disks\": [ { \"device\": \"/dev/disk/by-path/pci-0000:01:00.0-scsi-0:2:0:0\", \"partitions\": [ { \"label\": \"var-lib-containers\", \"sizeMiB\": 0, \"startMiB\": 250000 } ], \"wipeTable\": false } ], \"filesystems\": [ { \"device\": \"/dev/disk/by-partlabel/var-lib-containers\", \"format\": \"xfs\", \"mountOptions\": [ \"defaults\", \"prjquota\" ], \"path\": \"/var/lib/containers\", \"wipeFilesystem\": true } ] }, \"systemd\": { \"units\": [ { \"contents\": \"# Generated by Butane\\n[Unit]\\nRequires=systemd-fsck@dev-disk-by\\\\x2dpartlabel-var\\\\x2dlib\\\\x2dcontainers.service\\nAfter=systemd-fsck@dev-disk-by\\\\x2dpartlabel-var\\\\x2dlib\\\\x2dcontainers.service\\n\\n[Mount]\\nWhere=/var/lib/containers\\nWhat=/dev/disk/by-partlabel/var-lib-containers\\nType=xfs\\nOptions=defaults,prjquota\\n\\n[Install]\\nRequiredBy=local-fs.target\", \"enabled\": true, \"name\": \"var-lib-containers.mount\" } ] } } nodeNetwork: interfaces: - name: eno1 macAddress: \"AA:BB:CC:DD:EE:11\" config: interfaces: - name: eno1 type: ethernet state: up ipv4: enabled: false ipv6: enabled: true address: # For SNO sites with static IP addresses, the node-specific, # API and Ingress IPs should all be the same and configured on # the interface - ip: 1111:2222:3333:4444::aaaa:1 prefix-length: 64 dns-resolver: config: search: - example.com server: - 1111:2222:3333:4444::2 routes: config: - destination: ::/0 next-hop-interface: eno1 next-hop-address: 1111:2222:3333:4444::1 table-id: 254",
"apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster annotations: {} spec: disableNetworkDiagnostics: true",
"apiVersion: v1 kind: ConfigMap metadata: name: cluster-monitoring-config namespace: openshift-monitoring annotations: {} data: config.yaml: | grafana: enabled: false alertmanagerMain: enabled: false telemeterClient: enabled: false prometheusK8s: retention: 24h",
"apiVersion: operators.coreos.com/v1alpha1 kind: CatalogSource metadata: name: default-cat-source namespace: openshift-marketplace annotations: target.workload.openshift.io/management: '{\"effect\": \"PreferredDuringScheduling\"}' spec: displayName: default-cat-source image: USDimageUrl publisher: Red Hat sourceType: grpc updateStrategy: registryPoll: interval: 1h status: connectionState: lastObservedState: READY",
"apiVersion: operator.openshift.io/v1alpha1 kind: ImageContentSourcePolicy metadata: name: disconnected-internal-icsp annotations: {} spec: repositoryDigestMirrors: - USDmirrors",
"apiVersion: config.openshift.io/v1 kind: OperatorHub metadata: name: cluster annotations: {} spec: disableAllDefaultSources: true",
"apiVersion: machineconfiguration.openshift.io/v1 kind: ContainerRuntimeConfig metadata: name: enable-crun-master spec: machineConfigPoolSelector: matchLabels: pools.operator.machineconfiguration.openshift.io/master: \"\" containerRuntimeConfig: defaultRuntime: crun",
"apiVersion: machineconfiguration.openshift.io/v1 kind: ContainerRuntimeConfig metadata: name: enable-crun-worker spec: machineConfigPoolSelector: matchLabels: pools.operator.machineconfiguration.openshift.io/worker: \"\" containerRuntimeConfig: defaultRuntime: crun",
"apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: master name: 99-crio-disable-wipe-master spec: config: ignition: version: 3.2.0 storage: files: - contents: source: data:text/plain;charset=utf-8;base64,W2NyaW9dCmNsZWFuX3NodXRkb3duX2ZpbGUgPSAiIgo= mode: 420 path: /etc/crio/crio.conf.d/99-crio-disable-wipe.toml",
"apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: worker name: 99-crio-disable-wipe-worker spec: config: ignition: version: 3.2.0 storage: files: - contents: source: data:text/plain;charset=utf-8;base64,W2NyaW9dCmNsZWFuX3NodXRkb3duX2ZpbGUgPSAiIgo= mode: 420 path: /etc/crio/crio.conf.d/99-crio-disable-wipe.toml",
"apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: master name: 05-kdump-config-master spec: config: ignition: version: 3.2.0 systemd: units: - enabled: true name: kdump-remove-ice-module.service contents: | [Unit] Description=Remove ice module when doing kdump Before=kdump.service [Service] Type=oneshot RemainAfterExit=true ExecStart=/usr/local/bin/kdump-remove-ice-module.sh [Install] WantedBy=multi-user.target storage: files: - contents: source: data:text/plain;charset=utf-8;base64,IyEvdXNyL2Jpbi9lbnYgYmFzaAoKIyBUaGlzIHNjcmlwdCByZW1vdmVzIHRoZSBpY2UgbW9kdWxlIGZyb20ga2R1bXAgdG8gcHJldmVudCBrZHVtcCBmYWlsdXJlcyBvbiBjZXJ0YWluIHNlcnZlcnMuCiMgVGhpcyBpcyBhIHRlbXBvcmFyeSB3b3JrYXJvdW5kIGZvciBSSEVMUExBTi0xMzgyMzYgYW5kIGNhbiBiZSByZW1vdmVkIHdoZW4gdGhhdCBpc3N1ZSBpcwojIGZpeGVkLgoKc2V0IC14CgpTRUQ9Ii91c3IvYmluL3NlZCIKR1JFUD0iL3Vzci9iaW4vZ3JlcCIKCiMgb3ZlcnJpZGUgZm9yIHRlc3RpbmcgcHVycG9zZXMKS0RVTVBfQ09ORj0iJHsxOi0vZXRjL3N5c2NvbmZpZy9rZHVtcH0iClJFTU9WRV9JQ0VfU1RSPSJtb2R1bGVfYmxhY2tsaXN0PWljZSIKCiMgZXhpdCBpZiBmaWxlIGRvZXNuJ3QgZXhpc3QKWyAhIC1mICR7S0RVTVBfQ09ORn0gXSAmJiBleGl0IDAKCiMgZXhpdCBpZiBmaWxlIGFscmVhZHkgdXBkYXRlZAoke0dSRVB9IC1GcSAke1JFTU9WRV9JQ0VfU1RSfSAke0tEVU1QX0NPTkZ9ICYmIGV4aXQgMAoKIyBUYXJnZXQgbGluZSBsb29rcyBzb21ldGhpbmcgbGlrZSB0aGlzOgojIEtEVU1QX0NPTU1BTkRMSU5FX0FQUEVORD0iaXJxcG9sbCBucl9jcHVzPTEgLi4uIGhlc3RfZGlzYWJsZSIKIyBVc2Ugc2VkIHRvIG1hdGNoIGV2ZXJ5dGhpbmcgYmV0d2VlbiB0aGUgcXVvdGVzIGFuZCBhcHBlbmQgdGhlIFJFTU9WRV9JQ0VfU1RSIHRvIGl0CiR7U0VEfSAtaSAncy9eS0RVTVBfQ09NTUFORExJTkVfQVBQRU5EPSJbXiJdKi8mICcke1JFTU9WRV9JQ0VfU1RSfScvJyAke0tEVU1QX0NPTkZ9IHx8IGV4aXQgMAo= mode: 448 path: /usr/local/bin/kdump-remove-ice-module.sh",
"apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: worker name: 05-kdump-config-worker spec: config: ignition: version: 3.2.0 systemd: units: - enabled: true name: kdump-remove-ice-module.service contents: | [Unit] Description=Remove ice module when doing kdump Before=kdump.service [Service] Type=oneshot RemainAfterExit=true ExecStart=/usr/local/bin/kdump-remove-ice-module.sh [Install] WantedBy=multi-user.target storage: files: - contents: source: data:text/plain;charset=utf-8;base64,IyEvdXNyL2Jpbi9lbnYgYmFzaAoKIyBUaGlzIHNjcmlwdCByZW1vdmVzIHRoZSBpY2UgbW9kdWxlIGZyb20ga2R1bXAgdG8gcHJldmVudCBrZHVtcCBmYWlsdXJlcyBvbiBjZXJ0YWluIHNlcnZlcnMuCiMgVGhpcyBpcyBhIHRlbXBvcmFyeSB3b3JrYXJvdW5kIGZvciBSSEVMUExBTi0xMzgyMzYgYW5kIGNhbiBiZSByZW1vdmVkIHdoZW4gdGhhdCBpc3N1ZSBpcwojIGZpeGVkLgoKc2V0IC14CgpTRUQ9Ii91c3IvYmluL3NlZCIKR1JFUD0iL3Vzci9iaW4vZ3JlcCIKCiMgb3ZlcnJpZGUgZm9yIHRlc3RpbmcgcHVycG9zZXMKS0RVTVBfQ09ORj0iJHsxOi0vZXRjL3N5c2NvbmZpZy9rZHVtcH0iClJFTU9WRV9JQ0VfU1RSPSJtb2R1bGVfYmxhY2tsaXN0PWljZSIKCiMgZXhpdCBpZiBmaWxlIGRvZXNuJ3QgZXhpc3QKWyAhIC1mICR7S0RVTVBfQ09ORn0gXSAmJiBleGl0IDAKCiMgZXhpdCBpZiBmaWxlIGFscmVhZHkgdXBkYXRlZAoke0dSRVB9IC1GcSAke1JFTU9WRV9JQ0VfU1RSfSAke0tEVU1QX0NPTkZ9ICYmIGV4aXQgMAoKIyBUYXJnZXQgbGluZSBsb29rcyBzb21ldGhpbmcgbGlrZSB0aGlzOgojIEtEVU1QX0NPTU1BTkRMSU5FX0FQUEVORD0iaXJxcG9sbCBucl9jcHVzPTEgLi4uIGhlc3RfZGlzYWJsZSIKIyBVc2Ugc2VkIHRvIG1hdGNoIGV2ZXJ5dGhpbmcgYmV0d2VlbiB0aGUgcXVvdGVzIGFuZCBhcHBlbmQgdGhlIFJFTU9WRV9JQ0VfU1RSIHRvIGl0CiR7U0VEfSAtaSAncy9eS0RVTVBfQ09NTUFORExJTkVfQVBQRU5EPSJbXiJdKi8mICcke1JFTU9WRV9JQ0VfU1RSfScvJyAke0tEVU1QX0NPTkZ9IHx8IGV4aXQgMAo= mode: 448 path: /usr/local/bin/kdump-remove-ice-module.sh",
"apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: master name: 06-kdump-enable-master spec: config: ignition: version: 3.2.0 systemd: units: - enabled: true name: kdump.service kernelArguments: - crashkernel=512M",
"apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: worker name: 06-kdump-enable-worker spec: config: ignition: version: 3.2.0 systemd: units: - enabled: true name: kdump.service kernelArguments: - crashkernel=512M",
"apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: master name: container-mount-namespace-and-kubelet-conf-master spec: config: ignition: version: 3.2.0 storage: files: - contents: source: data:text/plain;charset=utf-8;base64,IyEvYmluL2Jhc2gKCmRlYnVnKCkgewogIGVjaG8gJEAgPiYyCn0KCnVzYWdlKCkgewogIGVjaG8gVXNhZ2U6ICQoYmFzZW5hbWUgJDApIFVOSVQgW2VudmZpbGUgW3Zhcm5hbWVdXQogIGVjaG8KICBlY2hvIEV4dHJhY3QgdGhlIGNvbnRlbnRzIG9mIHRoZSBmaXJzdCBFeGVjU3RhcnQgc3RhbnphIGZyb20gdGhlIGdpdmVuIHN5c3RlbWQgdW5pdCBhbmQgcmV0dXJuIGl0IHRvIHN0ZG91dAogIGVjaG8KICBlY2hvICJJZiAnZW52ZmlsZScgaXMgcHJvdmlkZWQsIHB1dCBpdCBpbiB0aGVyZSBpbnN0ZWFkLCBhcyBhbiBlbnZpcm9ubWVudCB2YXJpYWJsZSBuYW1lZCAndmFybmFtZSciCiAgZWNobyAiRGVmYXVsdCAndmFybmFtZScgaXMgRVhFQ1NUQVJUIGlmIG5vdCBzcGVjaWZpZWQiCiAgZXhpdCAxCn0KClVOSVQ9JDEKRU5WRklMRT0kMgpWQVJOQU1FPSQzCmlmIFtbIC16ICRVTklUIHx8ICRVTklUID09ICItLWhlbHAiIHx8ICRVTklUID09ICItaCIgXV07IHRoZW4KICB1c2FnZQpmaQpkZWJ1ZyAiRXh0cmFjdGluZyBFeGVjU3RhcnQgZnJvbSAkVU5JVCIKRklMRT0kKHN5c3RlbWN0bCBjYXQgJFVOSVQgfCBoZWFkIC1uIDEpCkZJTEU9JHtGSUxFI1wjIH0KaWYgW1sgISAtZiAkRklMRSBdXTsgdGhlbgogIGRlYnVnICJGYWlsZWQgdG8gZmluZCByb290IGZpbGUgZm9yIHVuaXQgJFVOSVQgKCRGSUxFKSIKICBleGl0CmZpCmRlYnVnICJTZXJ2aWNlIGRlZmluaXRpb24gaXMgaW4gJEZJTEUiCkVYRUNTVEFSVD0kKHNlZCAtbiAtZSAnL15FeGVjU3RhcnQ9LipcXCQvLC9bXlxcXSQvIHsgcy9eRXhlY1N0YXJ0PS8vOyBwIH0nIC1lICcvXkV4ZWNTdGFydD0uKlteXFxdJC8geyBzL15FeGVjU3RhcnQ9Ly87IHAgfScgJEZJTEUpCgppZiBbWyAkRU5WRklMRSBdXTsgdGhlbgogIFZBUk5BTUU9JHtWQVJOQU1FOi1FWEVDU1RBUlR9CiAgZWNobyAiJHtWQVJOQU1FfT0ke0VYRUNTVEFSVH0iID4gJEVOVkZJTEUKZWxzZQogIGVjaG8gJEVYRUNTVEFSVApmaQo= mode: 493 path: /usr/local/bin/extractExecStart - contents: source: data:text/plain;charset=utf-8;base64,IyEvYmluL2Jhc2gKbnNlbnRlciAtLW1vdW50PS9ydW4vY29udGFpbmVyLW1vdW50LW5hbWVzcGFjZS9tbnQgIiRAIgo= mode: 493 path: /usr/local/bin/nsenterCmns systemd: units: - contents: | [Unit] Description=Manages a mount namespace that both kubelet and crio can use to share their container-specific mounts [Service] Type=oneshot RemainAfterExit=yes RuntimeDirectory=container-mount-namespace Environment=RUNTIME_DIRECTORY=%t/container-mount-namespace Environment=BIND_POINT=%t/container-mount-namespace/mnt ExecStartPre=bash -c \"findmnt USD{RUNTIME_DIRECTORY} || mount --make-unbindable --bind USD{RUNTIME_DIRECTORY} USD{RUNTIME_DIRECTORY}\" ExecStartPre=touch USD{BIND_POINT} ExecStart=unshare --mount=USD{BIND_POINT} --propagation slave mount --make-rshared / ExecStop=umount -R USD{RUNTIME_DIRECTORY} name: container-mount-namespace.service - dropins: - contents: | [Unit] Wants=container-mount-namespace.service After=container-mount-namespace.service [Service] ExecStartPre=/usr/local/bin/extractExecStart %n /%t/%N-execstart.env ORIG_EXECSTART EnvironmentFile=-/%t/%N-execstart.env ExecStart= ExecStart=bash -c \"nsenter --mount=%t/container-mount-namespace/mnt USD{ORIG_EXECSTART}\" name: 90-container-mount-namespace.conf name: crio.service - dropins: - contents: | [Unit] Wants=container-mount-namespace.service After=container-mount-namespace.service [Service] ExecStartPre=/usr/local/bin/extractExecStart %n /%t/%N-execstart.env ORIG_EXECSTART EnvironmentFile=-/%t/%N-execstart.env ExecStart= ExecStart=bash -c \"nsenter --mount=%t/container-mount-namespace/mnt USD{ORIG_EXECSTART} --housekeeping-interval=30s\" name: 90-container-mount-namespace.conf - contents: | [Service] Environment=\"OPENSHIFT_MAX_HOUSEKEEPING_INTERVAL_DURATION=60s\" Environment=\"OPENSHIFT_EVICTION_MONITORING_PERIOD_DURATION=30s\" name: 
30-kubelet-interval-tuning.conf name: kubelet.service",
"apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: worker name: container-mount-namespace-and-kubelet-conf-worker spec: config: ignition: version: 3.2.0 storage: files: - contents: source: data:text/plain;charset=utf-8;base64,IyEvYmluL2Jhc2gKCmRlYnVnKCkgewogIGVjaG8gJEAgPiYyCn0KCnVzYWdlKCkgewogIGVjaG8gVXNhZ2U6ICQoYmFzZW5hbWUgJDApIFVOSVQgW2VudmZpbGUgW3Zhcm5hbWVdXQogIGVjaG8KICBlY2hvIEV4dHJhY3QgdGhlIGNvbnRlbnRzIG9mIHRoZSBmaXJzdCBFeGVjU3RhcnQgc3RhbnphIGZyb20gdGhlIGdpdmVuIHN5c3RlbWQgdW5pdCBhbmQgcmV0dXJuIGl0IHRvIHN0ZG91dAogIGVjaG8KICBlY2hvICJJZiAnZW52ZmlsZScgaXMgcHJvdmlkZWQsIHB1dCBpdCBpbiB0aGVyZSBpbnN0ZWFkLCBhcyBhbiBlbnZpcm9ubWVudCB2YXJpYWJsZSBuYW1lZCAndmFybmFtZSciCiAgZWNobyAiRGVmYXVsdCAndmFybmFtZScgaXMgRVhFQ1NUQVJUIGlmIG5vdCBzcGVjaWZpZWQiCiAgZXhpdCAxCn0KClVOSVQ9JDEKRU5WRklMRT0kMgpWQVJOQU1FPSQzCmlmIFtbIC16ICRVTklUIHx8ICRVTklUID09ICItLWhlbHAiIHx8ICRVTklUID09ICItaCIgXV07IHRoZW4KICB1c2FnZQpmaQpkZWJ1ZyAiRXh0cmFjdGluZyBFeGVjU3RhcnQgZnJvbSAkVU5JVCIKRklMRT0kKHN5c3RlbWN0bCBjYXQgJFVOSVQgfCBoZWFkIC1uIDEpCkZJTEU9JHtGSUxFI1wjIH0KaWYgW1sgISAtZiAkRklMRSBdXTsgdGhlbgogIGRlYnVnICJGYWlsZWQgdG8gZmluZCByb290IGZpbGUgZm9yIHVuaXQgJFVOSVQgKCRGSUxFKSIKICBleGl0CmZpCmRlYnVnICJTZXJ2aWNlIGRlZmluaXRpb24gaXMgaW4gJEZJTEUiCkVYRUNTVEFSVD0kKHNlZCAtbiAtZSAnL15FeGVjU3RhcnQ9LipcXCQvLC9bXlxcXSQvIHsgcy9eRXhlY1N0YXJ0PS8vOyBwIH0nIC1lICcvXkV4ZWNTdGFydD0uKlteXFxdJC8geyBzL15FeGVjU3RhcnQ9Ly87IHAgfScgJEZJTEUpCgppZiBbWyAkRU5WRklMRSBdXTsgdGhlbgogIFZBUk5BTUU9JHtWQVJOQU1FOi1FWEVDU1RBUlR9CiAgZWNobyAiJHtWQVJOQU1FfT0ke0VYRUNTVEFSVH0iID4gJEVOVkZJTEUKZWxzZQogIGVjaG8gJEVYRUNTVEFSVApmaQo= mode: 493 path: /usr/local/bin/extractExecStart - contents: source: data:text/plain;charset=utf-8;base64,IyEvYmluL2Jhc2gKbnNlbnRlciAtLW1vdW50PS9ydW4vY29udGFpbmVyLW1vdW50LW5hbWVzcGFjZS9tbnQgIiRAIgo= mode: 493 path: /usr/local/bin/nsenterCmns systemd: units: - contents: | [Unit] Description=Manages a mount namespace that both kubelet and crio can use to share their container-specific mounts [Service] Type=oneshot RemainAfterExit=yes RuntimeDirectory=container-mount-namespace Environment=RUNTIME_DIRECTORY=%t/container-mount-namespace Environment=BIND_POINT=%t/container-mount-namespace/mnt ExecStartPre=bash -c \"findmnt USD{RUNTIME_DIRECTORY} || mount --make-unbindable --bind USD{RUNTIME_DIRECTORY} USD{RUNTIME_DIRECTORY}\" ExecStartPre=touch USD{BIND_POINT} ExecStart=unshare --mount=USD{BIND_POINT} --propagation slave mount --make-rshared / ExecStop=umount -R USD{RUNTIME_DIRECTORY} name: container-mount-namespace.service - dropins: - contents: | [Unit] Wants=container-mount-namespace.service After=container-mount-namespace.service [Service] ExecStartPre=/usr/local/bin/extractExecStart %n /%t/%N-execstart.env ORIG_EXECSTART EnvironmentFile=-/%t/%N-execstart.env ExecStart= ExecStart=bash -c \"nsenter --mount=%t/container-mount-namespace/mnt USD{ORIG_EXECSTART}\" name: 90-container-mount-namespace.conf name: crio.service - dropins: - contents: | [Unit] Wants=container-mount-namespace.service After=container-mount-namespace.service [Service] ExecStartPre=/usr/local/bin/extractExecStart %n /%t/%N-execstart.env ORIG_EXECSTART EnvironmentFile=-/%t/%N-execstart.env ExecStart= ExecStart=bash -c \"nsenter --mount=%t/container-mount-namespace/mnt USD{ORIG_EXECSTART} --housekeeping-interval=30s\" name: 90-container-mount-namespace.conf - contents: | [Service] Environment=\"OPENSHIFT_MAX_HOUSEKEEPING_INTERVAL_DURATION=60s\" Environment=\"OPENSHIFT_EVICTION_MONITORING_PERIOD_DURATION=30s\" name: 
30-kubelet-interval-tuning.conf name: kubelet.service",
"apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: master name: 99-sync-time-once-master spec: config: ignition: version: 3.2.0 systemd: units: - contents: | [Unit] Description=Sync time once After=network.service [Service] Type=oneshot TimeoutStartSec=300 ExecStart=/usr/sbin/chronyd -n -f /etc/chrony.conf -q RemainAfterExit=yes [Install] WantedBy=multi-user.target enabled: true name: sync-time-once.service",
"apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: worker name: 99-sync-time-once-worker spec: config: ignition: version: 3.2.0 systemd: units: - contents: | [Unit] Description=Sync time once After=network.service [Service] Type=oneshot TimeoutStartSec=300 ExecStart=/usr/sbin/chronyd -n -f /etc/chrony.conf -q RemainAfterExit=yes [Install] WantedBy=multi-user.target enabled: true name: sync-time-once.service",
"apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: master name: load-sctp-module-master spec: config: ignition: version: 2.2.0 storage: files: - contents: source: data:, verification: {} filesystem: root mode: 420 path: /etc/modprobe.d/sctp-blacklist.conf - contents: source: data:text/plain;charset=utf-8,sctp filesystem: root mode: 420 path: /etc/modules-load.d/sctp-load.conf",
"apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: worker name: load-sctp-module-worker spec: config: ignition: version: 2.2.0 storage: files: - contents: source: data:, verification: {} filesystem: root mode: 420 path: /etc/modprobe.d/sctp-blacklist.conf - contents: source: data:text/plain;charset=utf-8,sctp filesystem: root mode: 420 path: /etc/modules-load.d/sctp-load.conf",
"apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: master name: 07-sriov-related-kernel-args-master spec: config: ignition: version: 3.2.0 kernelArguments: - intel_iommu=on - iommu=pt",
"cpu-load-balancing.crio.io: \"disable\" cpu-quota.crio.io: \"disable\" irq-load-balancing.crio.io: \"disable\"",
"cpu-c-states.crio.io: \"disable\" cpu-freq-governor.crio.io: \"performance\"",
"optional count: 1 apiVersion: machineconfiguration.openshift.io/v1 kind: KubeletConfig metadata: name: autosizing-master spec: autoSizingReserved: true machineConfigPoolSelector: matchLabels: pools.operator.machineconfiguration.openshift.io/master: \"\"",
"optional count: 1 apiVersion: machineconfiguration.openshift.io/v1 kind: ContainerRuntimeConfig metadata: name: 99-change-pidslimit-custom spec: machineConfigPoolSelector: matchLabels: # Set to appropriate MCP pools.operator.machineconfiguration.openshift.io/master: \"\" containerRuntimeConfig: pidsLimit: USDpidsLimit # Example: #pidsLimit: 4096",
"required count: 1 --- apiVersion: v1 kind: Secret metadata: name: rook-ceph-external-cluster-details namespace: openshift-storage type: Opaque data: # encoded content has been made generic external_cluster_details: eyJuYW1lIjoicm9vay1jZXBoLW1vbi1lbmRwb2ludHMiLCJraW5kIjoiQ29uZmlnTWFwIiwiZGF0YSI6eyJkYXRhIjoiY2VwaHVzYTE9MS4yLjMuNDo2Nzg5IiwibWF4TW9uSWQiOiIwIiwibWFwcGluZyI6Int9In19LHsibmFtZSI6InJvb2stY2VwaC1tb24iLCJraW5kIjoiU2VjcmV0IiwiZGF0YSI6eyJhZG1pbi1zZWNyZXQiOiJhZG1pbi1zZWNyZXQiLCJmc2lkIjoiMTExMTExMTEtMTExMS0xMTExLTExMTEtMTExMTExMTExMTExIiwibW9uLXNlY3JldCI6Im1vbi1zZWNyZXQifX0seyJuYW1lIjoicm9vay1jZXBoLW9wZXJhdG9yLWNyZWRzIiwia2luZCI6IlNlY3JldCIsImRhdGEiOnsidXNlcklEIjoiY2xpZW50LmhlYWx0aGNoZWNrZXIiLCJ1c2VyS2V5IjoiYzJWamNtVjAifX0seyJuYW1lIjoibW9uaXRvcmluZy1lbmRwb2ludCIsImtpbmQiOiJDZXBoQ2x1c3RlciIsImRhdGEiOnsiTW9uaXRvcmluZ0VuZHBvaW50IjoiMS4yLjMuNCwxLjIuMy4zLDEuMi4zLjIiLCJNb25pdG9yaW5nUG9ydCI6IjkyODMifX0seyJuYW1lIjoiY2VwaC1yYmQiLCJraW5kIjoiU3RvcmFnZUNsYXNzIiwiZGF0YSI6eyJwb29sIjoib2RmX3Bvb2wifX0seyJuYW1lIjoicm9vay1jc2ktcmJkLW5vZGUiLCJraW5kIjoiU2VjcmV0IiwiZGF0YSI6eyJ1c2VySUQiOiJjc2ktcmJkLW5vZGUiLCJ1c2VyS2V5IjoiIn19LHsibmFtZSI6InJvb2stY3NpLXJiZC1wcm92aXNpb25lciIsImtpbmQiOiJTZWNyZXQiLCJkYXRhIjp7InVzZXJJRCI6ImNzaS1yYmQtcHJvdmlzaW9uZXIiLCJ1c2VyS2V5IjoiYzJWamNtVjAifX0seyJuYW1lIjoicm9vay1jc2ktY2VwaGZzLXByb3Zpc2lvbmVyIiwia2luZCI6IlNlY3JldCIsImRhdGEiOnsiYWRtaW5JRCI6ImNzaS1jZXBoZnMtcHJvdmlzaW9uZXIiLCJhZG1pbktleSI6IiJ9fSx7Im5hbWUiOiJyb29rLWNzaS1jZXBoZnMtbm9kZSIsImtpbmQiOiJTZWNyZXQiLCJkYXRhIjp7ImFkbWluSUQiOiJjc2ktY2VwaGZzLW5vZGUiLCJhZG1pbktleSI6ImMyVmpjbVYwIn19LHsibmFtZSI6ImNlcGhmcyIsImtpbmQiOiJTdG9yYWdlQ2xhc3MiLCJkYXRhIjp7ImZzTmFtZSI6ImNlcGhmcyIsInBvb2wiOiJtYW5pbGFfZGF0YSJ9fQ==",
"required count: 1 --- apiVersion: ocs.openshift.io/v1 kind: StorageCluster metadata: name: ocs-external-storagecluster namespace: openshift-storage spec: externalStorage: enable: true labelSelector: {}",
"required: yes count: 1 --- apiVersion: v1 kind: Namespace metadata: name: openshift-storage annotations: workload.openshift.io/allowed: management labels: openshift.io/cluster-monitoring: \"true\"",
"required: yes count: 1 --- apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: openshift-storage-operatorgroup namespace: openshift-storage spec: targetNamespaces: - openshift-storage",
"required count: 1 apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec: defaultNetwork: ovnKubernetesConfig: gatewayConfig: routingViaHost: true # additional networks are optional and may alternatively be specified using NetworkAttachmentDefinition CRs additionalNetworks: [USDadditionalNetworks] # eg #- name: add-net-1 # namespace: app-ns-1 # rawCNIConfig: '{ \"cniVersion\": \"0.3.1\", \"name\": \"add-net-1\", \"plugins\": [{\"type\": \"macvlan\", \"master\": \"bond1\", \"ipam\": {}}] }' # type: Raw #- name: add-net-2 # namespace: app-ns-1 # rawCNIConfig: '{ \"cniVersion\": \"0.4.0\", \"name\": \"add-net-2\", \"plugins\": [ {\"type\": \"macvlan\", \"master\": \"bond1\", \"mode\": \"private\" },{ \"type\": \"tuning\", \"name\": \"tuning-arp\" }] }' # type: Raw",
"optional copies: 0-N apiVersion: \"k8s.cni.cncf.io/v1\" kind: NetworkAttachmentDefinition metadata: name: USDname namespace: USDns spec: nodeSelector: kubernetes.io/hostname: USDnodeName config: USDconfig #eg #config: '{ # \"cniVersion\": \"0.3.1\", # \"name\": \"external-169\", # \"type\": \"vlan\", # \"master\": \"ens8f0\", # \"mode\": \"bridge\", # \"vlanid\": 169, # \"ipam\": { # \"type\": \"static\", # } #}'",
"required count: 1-N apiVersion: metallb.io/v1beta1 kind: IPAddressPool metadata: name: USDname # eg addresspool3 namespace: metallb-system annotations: metallb.universe.tf/address-pool: USDname # eg addresspool3 spec: ############## # Expected variation in this configuration addresses: [USDpools] #- 3.3.3.0/24 autoAssign: true ##############",
"required count: 1-N apiVersion: metallb.io/v1beta1 kind: BFDProfile metadata: name: bfdprofile namespace: metallb-system spec: ################ # These values may vary. Recommended values are included as default receiveInterval: 150 # default 300ms transmitInterval: 150 # default 300ms #echoInterval: 300 # default 50ms detectMultiplier: 10 # default 3 echoMode: true passiveMode: true minimumTtl: 5 # default 254 # ################",
"required count: 1-N apiVersion: metallb.io/v1beta1 kind: BGPAdvertisement metadata: name: USDname # eg bgpadvertisement-1 namespace: metallb-system spec: ipAddressPools: [USDpool] # eg: # - addresspool3 peers: [USDpeers] # eg: # - peer-one communities: [USDcommunities] # Note correlation with address pool. # eg: # - 65535:65282 aggregationLength: 32 aggregationLengthV6: 128 localPref: 100",
"required count: 1-N apiVersion: metallb.io/v1beta1 kind: BGPPeer metadata: name: USDname namespace: metallb-system spec: peerAddress: USDip # eg 192.168.1.2 peerASN: USDpeerasn # eg 64501 myASN: USDmyasn # eg 64500 routerID: USDid # eg 10.10.10.10 bfdProfile: bfdprofile",
"required count: 1 apiVersion: metallb.io/v1beta1 kind: MetalLB metadata: name: metallb namespace: metallb-system spec: nodeSelector: node-role.kubernetes.io/worker: \"\"",
"required: yes count: 1 --- apiVersion: v1 kind: Namespace metadata: name: metallb-system annotations: workload.openshift.io/allowed: management labels: openshift.io/cluster-monitoring: \"true\"",
"required: yes count: 1 --- apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: metallb-operator namespace: metallb-system",
"required: yes count: 1 --- apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: metallb-operator-sub namespace: metallb-system spec: channel: stable name: metallb-operator source: redhat-operators-disconnected sourceNamespace: openshift-marketplace installPlanApproval: Automatic",
"apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: worker name: 99-worker-setsebool spec: config: ignition: version: 3.2.0 systemd: units: - contents: | [Unit] Description=Set SELinux boolean for tap cni plugin Before=kubelet.service [Service] Type=oneshot ExecStart=/sbin/setsebool container_use_devices=on RemainAfterExit=true [Install] WantedBy=multi-user.target graphical.target enabled: true name: setsebool.service",
"optional (though expected for all) count: 0-N apiVersion: sriovnetwork.openshift.io/v1 kind: SriovNetwork metadata: name: USDname # eg sriov-network-abcd namespace: openshift-sriov-network-operator spec: capabilities: \"USDcapabilities\" # eg '{\"mac\": true, \"ips\": true}' ipam: \"USDipam\" # eg '{ \"type\": \"host-local\", \"subnet\": \"10.3.38.0/24\" }' networkNamespace: USDnns # eg cni-test resourceName: USDresource # eg resourceTest",
"optional (though expected in all deployments) count: 0-N apiVersion: sriovnetwork.openshift.io/v1 kind: SriovNetworkNodePolicy metadata: name: USDname namespace: openshift-sriov-network-operator spec: {} # USDspec eg #deviceType: netdevice #nicSelector: deviceID: \"1593\" pfNames: - ens8f0np0#0-9 rootDevices: - 0000:d8:00.0 vendor: \"8086\" #nodeSelector: kubernetes.io/hostname: host.sample.lab #numVfs: 20 #priority: 99 #excludeTopology: true #resourceName: resourceNameABCD",
"required count: 1 --- apiVersion: sriovnetwork.openshift.io/v1 kind: SriovOperatorConfig metadata: name: default namespace: openshift-sriov-network-operator spec: configDaemonNodeSelector: node-role.kubernetes.io/worker: \"\" enableInjector: true enableOperatorWebhook: true",
"required: yes count: 1 apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: sriov-network-operator-subscription namespace: openshift-sriov-network-operator spec: channel: \"stable\" name: sriov-network-operator source: redhat-operators-disconnected sourceNamespace: openshift-marketplace installPlanApproval: Automatic",
"required: yes count: 1 apiVersion: v1 kind: Namespace metadata: name: openshift-sriov-network-operator annotations: workload.openshift.io/allowed: management",
"required: yes count: 1 apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: sriov-network-operators namespace: openshift-sriov-network-operator spec: targetNamespaces: - openshift-sriov-network-operator",
"Optional count: 1 apiVersion: nodetopology.openshift.io/v1 kind: NUMAResourcesOperator metadata: name: numaresourcesoperator spec: nodeGroups: - config: # Periodic is the default setting infoRefreshMode: Periodic machineConfigPoolSelector: matchLabels: # This label must match the pool(s) you want to run NUMA-aligned workloads pools.operator.machineconfiguration.openshift.io/worker: \"\"",
"required count: 1 apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: numaresources-operator namespace: openshift-numaresources spec: channel: \"4.14\" name: numaresources-operator source: redhat-operators-disconnected sourceNamespace: openshift-marketplace",
"required: yes count: 1 apiVersion: v1 kind: Namespace metadata: name: openshift-numaresources annotations: workload.openshift.io/allowed: management",
"required: yes count: 1 apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: numaresources-operator namespace: openshift-numaresources spec: targetNamespaces: - openshift-numaresources",
"Optional count: 1 apiVersion: nodetopology.openshift.io/v1 kind: NUMAResourcesScheduler metadata: name: numaresourcesscheduler spec: #cacheResyncPeriod: \"0\" # Image spec should be the latest for the release imageSpec: \"registry.redhat.io/openshift4/noderesourcetopology-scheduler-rhel9:v4.14.0\" #logLevel: \"Trace\" schedulerName: topo-aware-scheduler",
"optional count: 1 apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: master name: 40-load-kernel-modules-control-plane spec: config: # Release info found in https://github.com/coreos/butane/releases ignition: version: 3.2.0 storage: files: - contents: source: data:, mode: 420 overwrite: true path: /etc/modprobe.d/kernel-blacklist.conf - contents: source: data:text/plain;charset=utf-8;base64,aXBfZ3JlCmlwNl90YWJsZXMKaXA2dF9SRUpFQ1QKaXA2dGFibGVfZmlsdGVyCmlwNnRhYmxlX21hbmdsZQppcHRhYmxlX2ZpbHRlcgppcHRhYmxlX21hbmdsZQppcHRhYmxlX25hdAp4dF9tdWx0aXBvcnQKeHRfb3duZXIKeHRfUkVESVJFQ1QKeHRfc3RhdGlzdGljCnh0X1RDUE1TUwp4dF91MzI= mode: 420 overwrite: true path: /etc/modules-load.d/kernel-load.conf",
"optional count: 1 apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: worker name: load-sctp-module spec: config: ignition: version: 2.2.0 storage: files: - contents: source: data:, verification: {} filesystem: root mode: 420 path: /etc/modprobe.d/sctp-blacklist.conf - contents: source: data:text/plain;charset=utf-8;base64,c2N0cA== filesystem: root mode: 420 path: /etc/modules-load.d/sctp-load.conf",
"optional count: 1 apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: worker name: 40-load-kernel-modules-worker spec: config: # Release info found in https://github.com/coreos/butane/releases ignition: version: 3.2.0 storage: files: - contents: source: data:, mode: 420 overwrite: true path: /etc/modprobe.d/kernel-blacklist.conf - contents: source: data:text/plain;charset=utf-8;base64,aXBfZ3JlCmlwNl90YWJsZXMKaXA2dF9SRUpFQ1QKaXA2dGFibGVfZmlsdGVyCmlwNnRhYmxlX21hbmdsZQppcHRhYmxlX2ZpbHRlcgppcHRhYmxlX21hbmdsZQppcHRhYmxlX25hdAp4dF9tdWx0aXBvcnQKeHRfb3duZXIKeHRfUkVESVJFQ1QKeHRfc3RhdGlzdGljCnh0X1RDUE1TUwp4dF91MzI= mode: 420 overwrite: true path: /etc/modules-load.d/kernel-load.conf",
"required count: 1 apiVersion: logging.openshift.io/v1 kind: ClusterLogForwarder metadata: name: instance namespace: openshift-logging spec: outputs: - type: \"kafka\" name: kafka-open url: tcp://10.11.12.13:9092/test pipelines: - inputRefs: - infrastructure #- application - audit labels: label1: test1 label2: test2 label3: test3 label4: test4 label5: test5 name: all-to-default outputRefs: - kafka-open",
"required count: 1 apiVersion: logging.openshift.io/v1 kind: ClusterLogging metadata: name: instance namespace: openshift-logging spec: collection: type: vector managementState: Managed",
"--- apiVersion: v1 kind: Namespace metadata: name: openshift-logging annotations: workload.openshift.io/allowed: management",
"--- apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: cluster-logging namespace: openshift-logging spec: targetNamespaces: - openshift-logging",
"apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: cluster-logging namespace: openshift-logging spec: channel: \"stable\" name: cluster-logging source: redhat-operators-disconnected sourceNamespace: openshift-marketplace installPlanApproval: Automatic",
"required count: 1..N apiVersion: operators.coreos.com/v1alpha1 kind: CatalogSource metadata: name: redhat-operators-disconnected namespace: openshift-marketplace spec: displayName: Red Hat Disconnected Operators Catalog image: USDimageUrl publisher: Red Hat sourceType: grpc updateStrategy: registryPoll: interval: 1h #status: connectionState: lastObservedState: READY",
"required count: 1 apiVersion: operator.openshift.io/v1alpha1 kind: ImageContentSourcePolicy metadata: name: disconnected-internal-icsp spec: repositoryDigestMirrors: [] - USDmirrors",
"required count: 1 apiVersion: config.openshift.io/v1 kind: OperatorHub metadata: name: cluster spec: disableAllDefaultSources: true",
"optional count: 1 --- apiVersion: v1 kind: ConfigMap metadata: name: cluster-monitoring-config namespace: openshift-monitoring data: config.yaml: | k8sPrometheusAdapter: dedicatedServiceMonitors: enabled: true prometheusK8s: retention: 15d volumeClaimTemplate: spec: storageClassName: ocs-external-storagecluster-ceph-rbd resources: requests: storage: 100Gi alertmanagerMain: volumeClaimTemplate: spec: storageClassName: ocs-external-storagecluster-ceph-rbd resources: requests: storage: 20Gi",
"required count: 1 apiVersion: performance.openshift.io/v2 kind: PerformanceProfile metadata: name: USDname annotations: # Some pods want the kernel stack to ignore IPv6 router Advertisement. kubeletconfig.experimental: | {\"allowedUnsafeSysctls\":[\"net.ipv6.conf.all.accept_ra\"]} spec: cpu: # node0 CPUs: 0-17,36-53 # node1 CPUs: 18-34,54-71 # siblings: (0,36), (1,37) # we want to reserve the first Core of each NUMA socket # # no CPU left behind! all-cpus == isolated + reserved isolated: USDisolated # eg 1-17,19-35,37-53,55-71 reserved: USDreserved # eg 0,18,36,54 # Guaranteed QoS pods will disable IRQ balancing for cores allocated to the pod. # default value of globallyDisableIrqLoadBalancing is false globallyDisableIrqLoadBalancing: false hugepages: defaultHugepagesSize: 1G pages: # 32GB per numa node - count: USDcount # eg 64 size: 1G machineConfigPoolSelector: # For SNO: machineconfiguration.openshift.io/role: 'master' pools.operator.machineconfiguration.openshift.io/worker: '' nodeSelector: # For SNO: node-role.kubernetes.io/master: \"\" node-role.kubernetes.io/worker: \"\" workloadHints: realTime: false highPowerConsumption: false perPodPowerManagement: true realTimeKernel: enabled: false numa: # All guaranteed QoS containers get resources from a single NUMA node topologyPolicy: \"single-numa-node\" net: userLevelNetworking: false"
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.14/html/scalability_and_performance/reference-design-specifications |
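The source CRs listed above are site-agnostic templates: values such as the machine config pool label, interface names, IP addresses, and image URLs are placeholders that must be replaced with site-specific values before the manifests are applied, and in practice they are rendered and applied through policy tooling (for example, the PolicyGenTemplate examples referenced in the SiteConfig above) rather than by hand. As a minimal hand-applied sketch only, assuming you have saved the Tuned performance-patch CR to a local file named performance-patch.yaml and already substituted the placeholder values for your cluster:
$ oc apply -f performance-patch.yaml
$ oc get tuned performance-patch -n openshift-cluster-node-tuning-operator
The file name and the manual substitution step are illustrative assumptions; in a managed-cluster deployment these CRs are distributed through policies instead.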
Chapter 6. Supported components | Chapter 6. Supported components For a full list of component versions supported in this release of Red Hat JBoss Core Services, see the Core Services Apache HTTP Server Component Details page. Before you access the Component Details page, ensure that you have an active Red Hat subscription and are logged in to the Red Hat Customer Portal. | null | https://docs.redhat.com/en/documentation/red_hat_jboss_core_services/2.4.57/html/red_hat_jboss_core_services_apache_http_server_2.4.57_service_pack_4_release_notes/supported_components
Chapter 8. Upgrading the Migration Toolkit for Containers | Chapter 8. Upgrading the Migration Toolkit for Containers You can upgrade the Migration Toolkit for Containers (MTC) on OpenShift Container Platform 4.11 by using Operator Lifecycle Manager. You can upgrade MTC on OpenShift Container Platform 3 by reinstalling the legacy Migration Toolkit for Containers Operator. Important If you are upgrading from MTC version 1.3, you must perform an additional procedure to update the MigPlan custom resource (CR). 8.1. Upgrading the Migration Toolkit for Containers on OpenShift Container Platform 4.11 You can upgrade the Migration Toolkit for Containers (MTC) on OpenShift Container Platform 4.11 by using the Operator Lifecycle Manager. Important When upgrading the MTC by using the Operator Lifecycle Manager, you must use a supported migration path. Migration paths Migrating from OpenShift Container Platform 3 to OpenShift Container Platform 4 requires a legacy MTC Operator and MTC 1.7.x. Migrating from MTC 1.7.x to MTC 1.8.x is not supported. You must use MTC 1.7.x to migrate anything with a source of OpenShift Container Platform 4.9 or earlier. MTC 1.7.x must be used on both source and destination. MTC 1.8.x only supports migrations from OpenShift Container Platform 4.10 or later to OpenShift Container Platform 4.10 or later. For migrations only involving cluster versions 4.10 and later, either 1.7.x or 1.8.x may be used. However, the same MTC version must be used on both source and destination. Migration from source MTC 1.7.x to destination MTC 1.8.x is unsupported. Migration from source MTC 1.8.x to destination MTC 1.7.x is unsupported. Migration from source MTC 1.7.x to destination MTC 1.7.x is supported. Migration from source MTC 1.8.x to destination MTC 1.8.x is supported. Prerequisites You must be logged in as a user with cluster-admin privileges. Procedure In the OpenShift Container Platform console, navigate to Operators → Installed Operators. Operators that have a pending upgrade display an Upgrade available status. Click Migration Toolkit for Containers Operator. Click the Subscription tab. Any upgrades requiring approval are displayed next to Upgrade Status. For example, it might display 1 requires approval. Click 1 requires approval, then click Preview Install Plan. Review the resources that are listed as available for upgrade and click Approve. Navigate back to the Operators → Installed Operators page to monitor the progress of the upgrade. When complete, the status changes to Succeeded and Up to date. Click Workloads → Pods to verify that the MTC pods are running. 8.2. Upgrading the Migration Toolkit for Containers on OpenShift Container Platform 3 You can upgrade Migration Toolkit for Containers (MTC) on OpenShift Container Platform 3 by manually installing the legacy Migration Toolkit for Containers Operator. Prerequisites You must be logged in as a user with cluster-admin privileges. You must have access to registry.redhat.io. You must have podman installed.
Procedure Log in to registry.redhat.io with your Red Hat Customer Portal credentials by entering the following command: $ podman login registry.redhat.io Download the operator.yml file by entering the following command: $ podman cp $(podman create registry.redhat.io/rhmtc/openshift-migration-legacy-rhel8-operator:v1.8):/operator.yml ./ Replace the Migration Toolkit for Containers Operator by entering the following command: $ oc replace --force -f operator.yml Scale the migration-operator deployment to 0 to stop the deployment by entering the following command: $ oc scale -n openshift-migration --replicas=0 deployment/migration-operator Scale the migration-operator deployment to 1 to start the deployment and apply the changes by entering the following command: $ oc scale -n openshift-migration --replicas=1 deployment/migration-operator Verify that the migration-operator was upgraded by entering the following command: $ oc -o yaml -n openshift-migration get deployment/migration-operator | grep image: | awk -F ":" '{ print $NF }' Download the controller.yml file by entering the following command: $ podman cp $(podman create registry.redhat.io/rhmtc/openshift-migration-legacy-rhel8-operator:v1.8):/controller.yml ./ Create the migration-controller object by entering the following command: $ oc create -f controller.yml If you have previously added the OpenShift Container Platform 3 cluster to the MTC web console, you must update the service account token in the web console because the upgrade process deletes and restores the openshift-migration namespace: Obtain the service account token by entering the following command: $ oc sa get-token migration-controller -n openshift-migration In the MTC web console, click Clusters. Click the Options menu next to the cluster and select Edit. Enter the new service account token in the Service account token field. Click Update cluster and then click Close. Verify that the MTC pods are running by entering the following command: $ oc get pods -n openshift-migration 8.3. Upgrading MTC 1.3 to 1.8 If you are upgrading Migration Toolkit for Containers (MTC) version 1.3.x to 1.8, you must update the MigPlan custom resource (CR) manifest on the cluster on which the MigrationController pod is running. Because the indirectImageMigration and indirectVolumeMigration parameters do not exist in MTC 1.3, their default value in version 1.4 is false, which means that direct image migration and direct volume migration are enabled. Because the direct migration requirements are not fulfilled, the migration plan cannot reach a Ready state unless these parameter values are changed to true. Important Migrating from OpenShift Container Platform 3 to OpenShift Container Platform 4 requires a legacy MTC Operator and MTC 1.7.x. Upgrading MTC 1.7.x to 1.8.x requires manually updating the OADP channel from stable-1.0 to stable-1.2 in order to successfully complete the upgrade from 1.7.x to 1.8.x. Prerequisites You must be logged in as a user with cluster-admin privileges. Procedure Log in to the cluster on which the MigrationController pod is running. Get the MigPlan CR manifest: $ oc get migplan <migplan> -o yaml -n openshift-migration Update the following parameter values and save the file as migplan.yaml: ...
spec: indirectImageMigration: true indirectVolumeMigration: true Replace the MigPlan CR manifest to apply the changes: $ oc replace -f migplan.yaml -n openshift-migration Get the updated MigPlan CR manifest to verify the changes: $ oc get migplan <migplan> -o yaml -n openshift-migration | [
"podman login registry.redhat.io",
"podman cp USD(podman create registry.redhat.io/rhmtc/openshift-migration-legacy-rhel8-operator:v1.8):/operator.yml ./",
"oc replace --force -f operator.yml",
"oc scale -n openshift-migration --replicas=0 deployment/migration-operator",
"oc scale -n openshift-migration --replicas=1 deployment/migration-operator",
"oc -o yaml -n openshift-migration get deployment/migration-operator | grep image: | awk -F \":\" '{ print USDNF }'",
"podman cp USD(podman create registry.redhat.io/rhmtc/openshift-migration-legacy-rhel8-operator:v1.8):/controller.yml ./",
"oc create -f controller.yml",
"oc sa get-token migration-controller -n openshift-migration",
"oc get pods -n openshift-migration",
"oc get migplan <migplan> -o yaml -n openshift-migration",
"spec: indirectImageMigration: true indirectVolumeMigration: true",
"oc replace -f migplan.yaml -n openshift-migration",
"oc get migplan <migplan> -o yaml -n openshift-migration"
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.11/html/migrating_from_version_3_to_4/upgrading-3-4 |
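Section 8.3 notes that upgrading MTC 1.7.x to 1.8.x requires moving the OADP channel from stable-1.0 to stable-1.2. One way to do this from the CLI is to patch the OADP Subscription; the following is a minimal sketch, and the Subscription name (redhat-oadp-operator) and namespace (openshift-adp) are assumptions that you should confirm against your cluster first:
$ oc get subscription -A | grep -i oadp
$ oc patch subscription redhat-oadp-operator -n openshift-adp --type merge -p '{"spec":{"channel":"stable-1.2"}}'
If the Subscription uses manual install plan approval, approve the resulting install plan before retrying the MTC upgrade.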
Chapter 5. Frequently asked questions about Red Hat Ansible Certified Content | Chapter 5. Frequently asked questions about Red Hat Ansible Certified Content The following is a list of Frequently Asked Questions for the Red Hat Ansible Automation Platform Certification Program. If you have any questions regarding the following items, email [email protected] . 5.1. Why certify Ansible collections? The Ansible certification program enables a shared statement of support for Red Hat Ansible Certified Content between Red Hat and the ecosystem partner. An end customer, experiencing trouble with Ansible and certified partner content, can open a support ticket, for example, a request for information, or a problem with Red Hat, and expect the ticket to be resolved by Red Hat and the ecosystem partner. Red Hat offers go-to-market benefits for Certified Partners to grow market awareness, demand generation and collaborative selling. Red Hat Ansible Certified Content Collections are distributed through Ansible automation hub (subscription required), a centralized repository for jointly supported Ansible Content. As a certified partner, publishing collections to Ansible automation hub provides end customers the power to manage how trusted automation content is used in their production environment with a well-known support life cycle. For more information about getting started with certifying a solution, see Red Hat Partner Connect . 5.2. How do I get a collection certified? Refer to Red Hat Partner Connect for the Ansible certification policy guide to understand how to certify your collection. 5.3. What's the difference between Ansible Galaxy and Ansible automation hub? Collections published to Ansible Galaxy are the latest content published by the Ansible community and have no joint support claims associated. Ansible Galaxy is the recommended frontend directory for the Ansible community accessing all content. Collections published to Ansible automation hub are targeted for joint customers of Red Hat and selected partners. Customers need an Ansible subscription to access and download collections on Ansible automation hub. A certified collection means that Red Hat and partners have a strategic relationship in place and are ready to support joint customers, and may have had additional testing and validation done against them. 5.4. How do I request a namespace on Ansible Galaxy? After you request a namespace through an Ansible Galaxy GitHub issue, send an email to [email protected] You must provide us with the GitHub username that you used to sign up on Ansible Galaxy, and you must have logged in at least once for the system to validate. When users are added as administrators of the namespace, then additional administrators can be added by the self-serve process. 5.5. Are there any restrictions for Ansible Galaxy namespace naming? Collection namespaces must follow python module name convention. This means collections should have short, all lowercase names. You can use underscores in the collection name if it improves readability. 5.6. Are there any recommendations for collection naming? A general suggestion is to create a collection with company_name.product format. This way multiple products may have different collections under the company namespace. 5.7. How do I get a namespace on Ansible automation hub? By default namespaces used on Ansible Galaxy are also used on Ansible automation hub by the Ansible partner team. For any queries and clarifications contact [email protected] . 5.8. 
How do I run sanity tests on my collection? Ansible sanity tests are made up of scripts and tools used to perform static code analysis. The primary purpose of these tests is to enforce Ansible coding standards and requirements. Ansible collections must be in a specific path, such as the following example: {...}/ansible_collections/{namespace}/{collection}/ Ensure that your collection is in that specific path, and that you have three directories: An empty directory named ansible_collections A directory for the namespace A directory for the collection itself 5.9. Does Ansible Galaxy house the source code for my collection? No, Ansible Galaxy does not house the source for the collections. The actual collection source must be housed outside of Ansible Galaxy, for example, in GitHub. Ansible Galaxy contains the collection build tarball to publish the collection. You can include the link to the source for community users in the galaxy.yml file contained in the collection. This shows users where they should go if they want to contribute to the collection or even file issues against it. 5.10. Does Red Hat officially support collections downloaded and installed from Ansible Galaxy No, collections downloaded from Galaxy do not have any support claims associated and are 100% community supported. Users and contributors of any such collection must contact the collection developers directly. 5.11. How does the joint support agreement on certified collections work? If a customer raises an issue with the Red Hat support team about a certified collection, Red Hat support assesses the issue and checks whether the problem exists within Ansible or Ansible usage. They also check whether the issue is with a certified collection. If there is a problem with the certified collection, support teams transfer the issue to the vendor owner of the certified collection through an agreed upon tool such as TSANet. 5.12. Can I create and certify a collection containing only Ansible Roles? You can create and certify collections that contain only roles. Current testing requirements are focused on collections containing modules, and additional resources are currently in progress for testing collections only containing roles. Please contact [email protected] for more information. | [
"{...}/ansible_collections/{namespace}/{collection}/"
] | https://docs.redhat.com/en/documentation/red_hat_ansible_automation_platform/2.3/html/managing_red_hat_certified_and_ansible_galaxy_collections_in_automation_hub/assembly-faq |
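A minimal shell sketch of the sanity-test workflow described in section 5.8 above; the namespace and collection names (my_namespace, my_collection) and the working directory are illustrative assumptions, not values taken from the FAQ.

# Assumed layout: the collection must sit under .../ansible_collections/<namespace>/<collection>/
mkdir -p ~/work/ansible_collections/my_namespace/my_collection
cd ~/work/ansible_collections/my_namespace/my_collection
# Run the Ansible sanity tests from the collection root; --docker runs them in the
# default test container so the local Python environment does not interfere.
ansible-test sanity --docker default -v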
Chapter 3. Common deployment patterns | Chapter 3. Common deployment patterns Red Hat AMQ 7 can be set up in a large variety of topologies. The following are some of the common deployment patterns you can implement using AMQ components. 3.1. Central broker The central broker pattern is relatively easy to set up and maintain. It is also relatively robust. Routes are typically local, because the broker and its clients are always within one network hop of each other, no matter how many nodes are added. This pattern is also known as hub and spoke , with the central broker as the hub and the clients the spokes. Figure 3.1. Central broker pattern The only critical element is the central broker node. The focus of your maintenance efforts is on keeping this broker available to its clients. 3.2. Routed messaging When routing messages to remote destinations, the broker stores them in a local queue before forwarding them to their destination. However, sometimes an application requires sending request and response messages in real time, and having the broker store and forward messages is too costly. With AMQ you can use a router in place of a broker to avoid such costs. Unlike a broker, a router does not store messages before forwarding them to a destination. Instead, it works as a lightweight conduit and directly connects two endpoints. Figure 3.2. Brokerless routed messaging pattern 3.3. Highly available brokers To ensure brokers are available for their clients, deploy a highly available (HA) master-slave pair to create a backup group. You might, for example, deploy two master-slave groups on two nodes. Such a deployment would provide a backup for each active broker, as seen in the following diagram. Figure 3.3. Master-slave pair Under normal operating conditions one master broker is active on each node, which can be either a physical server or a virtual machine. If one node fails, the slave on the other node takes over. The result is two active brokers residing on the same healthy node. By deploying master-slave pairs, you can scale out an entire network of such backup groups. Larger deployments of this type are useful for distributing the message processing load across many brokers. The broker network in the following diagram consists of eight master-slave groups distributed over eight nodes. Figure 3.4. Master-slave network 3.4. Router pair behind a load balancer Deploying two routers behind a load balancer provides high availability, resiliency, and increased scalability for a single-datacenter deployment. Endpoints make their connections to a known URL, supported by the load balancer. Next, the load balancer spreads the incoming connections among the routers so that the connection and messaging load is distributed. If one of the routers fails, the endpoints connected to it will reconnect to the remaining active router. Figure 3.5. Router pair behind a load balancer For even greater scalability, you can use a larger number of routers, three or four for example. Each router connects directly to all of the others. 3.5. Router pair in a DMZ In this deployment architecture, the router network is providing a layer of protection and isolation between the clients in the outside world and the brokers backing an enterprise application. Figure 3.6. Router pair in a DMZ Important notes about the DMZ topology: Security for the connections within the deployment is separate from the security used for external clients.
For example, your deployment might use a private Certificate Authority (CA) for internal security, issuing x.509 certificates to each router and broker for authentication, although external users might use a different, public CA. Inter-router connections between the enterprise and the DMZ are always established from the enterprise to the DMZ for security. Therefore, no connections are permitted from the outside into the enterprise. The AMQP protocol enables bi-directional communication after a connection is established, however. 3.6. Router pairs in different data centers You can use a more complex topology in a deployment of AMQ components that spans multiple locations. You can, for example, deploy a pair of load-balanced routers in each of four locations. You might include two backbone routers in the center to provide redundant connectivity between all locations. The following diagram is an example deployment spanning multiple locations. Figure 3.7. Multiple interconnected routers Revised on 2020-12-03 08:48:46 UTC | null | https://docs.redhat.com/en/documentation/red_hat_amq/2020.q4/html/introducing_red_hat_amq_7/common_deployment_patterns |
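To make the direction of the DMZ inter-router links concrete, the following is a hedged AMQ Interconnect configuration sketch for an enterprise-side router; the router id, host name, and port are assumptions, and a real deployment would also attach the SSL profiles and certificates described above.

# /etc/qpid-dispatch/qdrouterd.conf (sketch, enterprise-side router)
router {
    mode: interior
    id: Enterprise.Router.A
}
# Outbound connector only: the enterprise router dials out to the DMZ router,
# so no inbound connection from the DMZ into the enterprise is required.
connector {
    host: dmz-router.example.com
    port: 55672
    role: inter-router
}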
function::local_clock_ms | function::local_clock_ms Name function::local_clock_ms - Number of milliseconds on the local cpu's clock Synopsis Arguments None Description This function returns the number of milliseconds on the local cpu's clock. The value is always monotonic when compared on the same cpu, but may drift between cpus (within about a jiffy). | [
"local_clock_ms:long()"
] | https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/systemtap_tapset_reference/api-local-clock-ms |
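A hedged usage sketch for the function documented above; the begin probe simply prints the value once and exits.

# Print the local CPU clock in milliseconds once, then exit (sketch).
stap -e 'probe begin { printf("cpu%d clock: %d ms\n", cpu(), local_clock_ms()); exit() }'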
Chapter 1. RBAC APIs | Chapter 1. RBAC APIs 1.1. ClusterRoleBinding [rbac.authorization.k8s.io/v1] Description ClusterRoleBinding references a ClusterRole, but does not contain it. It can reference a ClusterRole in the global namespace, and adds who information via Subject. Type object 1.2. ClusterRole [rbac.authorization.k8s.io/v1] Description ClusterRole is a cluster level, logical grouping of PolicyRules that can be referenced as a unit by a RoleBinding or ClusterRoleBinding. Type object 1.3. RoleBinding [rbac.authorization.k8s.io/v1] Description RoleBinding references a role, but does not contain it. It can reference a Role in the same namespace or a ClusterRole in the global namespace. It adds who information via Subjects and namespace information by which namespace it exists in. RoleBindings in a given namespace only have effect in that namespace. Type object 1.4. Role [rbac.authorization.k8s.io/v1] Description Role is a namespaced, logical grouping of PolicyRules that can be referenced as a unit by a RoleBinding. Type object | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.15/html/rbac_apis/rbac-apis
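A brief, hedged oc sketch of how the objects listed above are typically created; the role names, user name (alice), and namespace (my-project) are illustrative assumptions.

# Namespaced Role plus RoleBinding: allow a user to read pods in one project (sketch).
oc create role pod-reader --verb=get,list,watch --resource=pods -n my-project
oc create rolebinding read-pods --role=pod-reader --user=alice -n my-project
# Cluster-scoped equivalents use ClusterRole and ClusterRoleBinding.
oc create clusterrole node-reader --verb=get,list --resource=nodes
oc create clusterrolebinding read-nodes --clusterrole=node-reader --user=alice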
Making open source more inclusive | Making open source more inclusive Red Hat is committed to replacing problematic language in our code, documentation, and web properties. We are beginning with these four terms: master, slave, blacklist, and whitelist. Because of the enormity of this endeavor, these changes will be implemented gradually over several upcoming releases. For more details, see our CTO Chris Wright's message . | null | https://docs.redhat.com/en/documentation/red_hat_amq/2021.q2/html/using_amq_streams_on_openshift/making-open-source-more-inclusive |
Chapter 1. Support policy for Red Hat build of OpenJDK | Chapter 1. Support policy for Red Hat build of OpenJDK Red Hat will support select major versions of Red Hat build of OpenJDK in its products. For consistency, these are the same versions that Oracle designates as long-term support (LTS) for the Oracle JDK. A major version of Red Hat build of OpenJDK will be supported for a minimum of six years from the time that version is first introduced. For more information, see the OpenJDK Life Cycle and Support Policy . Note RHEL 6 reached the end of life in November 2020. Because of this, RHEL 6 is no longer a supported configuration for Red Hat build of OpenJDK. | null | https://docs.redhat.com/en/documentation/red_hat_build_of_openjdk/11/html/release_notes_for_red_hat_build_of_openjdk_11.0.20/rn-openjdk-support-policy
5.9. Multipath Command Options | 5.9. Multipath Command Options Table 5.1, "Useful multipath Command Options" describes some options of the multipath command that you may find useful. Table 5.1. Useful multipath Command Options Option Description -l Display the current multipath configuration gathered from sysfs and the device mapper. -ll Display the current multipath configuration gathered from sysfs , the device mapper, and all other available components on the system. -f device Remove the named multipath device. -F Remove all unused multipath devices. -w device Remove the wwid of the specified device from the wwids file. -W Reset the wwids file to include only the current multipath devices. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/dm_multipath/multipath_options |
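A short shell sketch exercising the options in Table 5.1; the device name mpathb is an assumed example.

# Display the current multipath configuration from sysfs and the device mapper.
multipath -ll
# Remove one named multipath device, then flush all unused multipath devices.
multipath -f mpathb
multipath -F
# Reset the wwids file so it contains only the multipath devices currently present.
multipath -W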
4.10. Configuring Automated Unlocking of Encrypted Volumes using Policy-Based Decryption | 4.10. Configuring Automated Unlocking of Encrypted Volumes using Policy-Based Decryption The Policy-Based Decryption (PBD) is a collection of technologies that enable unlocking encrypted root and secondary volumes of hard drives on physical and virtual machines using different methods like a user password, a Trusted Platform Module (TPM) device, a PKCS#11 device connected to a system, for example, a smart card, or with the help of a special network server. The PBD as a technology allows combining different unlocking methods into a policy creating an ability to unlock the same volume in different ways. The current implementation of the PBD in Red Hat Enterprise Linux consists of the Clevis framework and plugins called pins. Each pin provides a separate unlocking capability. For now, the only two pins available are the ones that allow volumes to be unlocked with TPM or with a network server. The Network Bound Disc Encryption (NBDE) is a subcategory of the PBD technologies that allows binding the encrypted volumes to a special network server. The current implementation of the NBDE includes Clevis pin for Tang server and the Tang server itself. 4.10.1. Network-Bound Disk Encryption The Network-Bound Disk Encryption (NBDE) allows the user to encrypt root volumes of hard drives on physical and virtual machines without requiring to manually enter a password when systems are restarted. In Red Hat Enterprise Linux 7, NBDE is implemented through the following components and technologies: Figure 4.2. The Network-Bound Disk Encryption using Clevis and Tang Tang is a server for binding data to network presence. It makes a system containing your data available when the system is bound to a certain secure network. Tang is stateless and does not require TLS or authentication. Unlike escrow-based solutions, where the server stores all encryption keys and has knowledge of every key ever used, Tang never interacts with any client keys, so it never gains any identifying information from the client. Clevis is a pluggable framework for automated decryption. In NBDE, Clevis provides automated unlocking of LUKS volumes. The clevis package provides the client side of the feature. A Clevis pin is a plug-in into the Clevis framework. One of such pins is a plug-in that implements interactions with the NBDE server - Tang. Clevis and Tang are generic client and server components that provide network-bound encryption. In Red Hat Enterprise Linux 7, they are used in conjunction with LUKS to encrypt and decrypt root and non-root storage volumes to accomplish Network-Bound Disk Encryption. Both client- and server-side components use the Jose library to perform encryption and decryption operations. When you begin provisioning NBDE, the Clevis pin for Tang server gets a list of the Tang server's advertised asymmetric keys. Alternatively, since the keys are asymmetric, a list of Tang's public keys can be distributed out of band so that clients can operate without access to the Tang server. This mode is called offline provisioning . The Clevis pin for Tang uses one of the public keys to generate a unique, cryptographically-strong encryption key. Once the data is encrypted using this key, the key is discarded. The Clevis client should store the state produced by this provisioning operation in a convenient location. This process of encrypting data is the provisioning step . 
The provisioning state for NBDE is stored in the LUKS header leveraging the luksmeta package. When the client is ready to access its data, it loads the metadata produced in the provisioning step and it responds to recover the encryption key. This process is the recovery step . In NBDE, Clevis binds a LUKS volume using a pin so that it can be automatically unlocked. After successful completion of the binding process, the disk can be unlocked using the provided Dracut unlocker. All LUKS-encrypted devices, such as those with the /tmp , /var , and /usr/local/ directories, that contain a file system requiring to start before the network connection is established are considered to be root volumes . Additionally, all mount points that are used by services run before the network is up, such as /var/log/ , var/log/audit/ , or /opt , also require to be mounted early after switching to a root device. You can also identify a root volume by not having the _netdev option in the /etc/fstab file. 4.10.2. Installing an Encryption Client - Clevis To install the Clevis pluggable framework and its pins on a machine with an encrypted volume (client), enter the following command as root : To decrypt data, use the clevis decrypt command and provide the cipher text (JWE): For more information, see the built-in CLI help: 4.10.3. Deploying a Tang Server with SELinux in Enforcing Mode Red Hat Enterprise Linux 7.7 and newer provides the tangd_port_t SELinux type, and a Tang server can be deployed as a confined service in SELinux enforcing mode. Prerequisites The policycoreutils-python-utils package and its dependencies are installed. Procedure To install the tang package and its dependencies, enter the following command as root : Pick an unoccupied port, for example, 7500/tcp , and allow the tangd service to bind to that port: Note that a port can be used only by one service at a time, and thus an attempt to use an already occupied port implies the ValueError: Port already defined error message. Open the port in the firewall: Enable the tangd service using systemd: Create an override file: In the following editor screen, which opens an empty override.conf file located in the /etc/systemd/system/tangd.socket.d/ directory, change the default port for the Tang server from 80 to the previously picked number by adding the following lines: Save the file and exit the editor. Reload the changed configuration and start the tangd service: Check that your configuration is working: Start the tangd service: Because tangd uses the systemd socket activation mechanism, the server starts as soon as the first connection comes in. A new set of cryptographic keys is automatically generated at the first start. To perform cryptographic operations such as manual key generation, use the jose utility. Enter the jose -h command or see the jose(1) man pages for more information. Example 4.4. Rotating Tang Keys It is important to periodically rotate your keys. The precise interval at which you should rotate them depends upon your application, key sizes, and institutional policy. For some common recommendations, see the Cryptographic Key Length Recommendation page. To rotate keys, start with the generation of new keys in the key database directory, typically /var/db/tang . For example, you can create new signature and exchange keys with the following commands: Rename the old keys to have a leading . to hide them from advertisement. 
Note that the file names in the following example differs from real and unique file names in the key database directory. Tang immediately picks up all changes. No restart is required. At this point, new client bindings pick up the new keys and old clients can continue to utilize the old keys. When you are sure that all old clients use the new keys, you can remove the old keys. Warning Be aware that removing the old keys while clients are still using them can result in data loss. 4.10.3.1. Deploying High-Availability Systems Tang provides two methods for building a high-availability deployment: Client Redundancy (Recommended) Clients should be configured with the ability to bind to multiple Tang servers. In this setup, each Tang server has its own keys and clients are able to decrypt by contacting a subset of these servers. Clevis already supports this workflow through its sss plug-in. For more information about this setup, see the following man pages: tang(8) , section High Availability clevis(1) , section Shamir's Secret Sharing clevis-encrypt-sss(1) Red Hat recommends this method for a high-availability deployment. Key Sharing For redundancy purposes, more than one instance of Tang can be deployed. To set up a second or any subsequent instance, install the tang packages and copy the key directory to the new host using rsync over SSH. Note that Red Hat does not recommend this method because sharing keys increases the risk of key compromise and requires additional automation infrastructure. 4.10.4. Deploying an Encryption Client for an NBDE system with Tang Prerequisites The Clevis framework is installed. See Section 4.10.2, "Installing an Encryption Client - Clevis" A Tang server or its downloaded advertisement is available. See Section 4.10.3, "Deploying a Tang Server with SELinux in Enforcing Mode" Procedure To bind a Clevis encryption client to a Tang server, use the clevis encrypt tang sub-command: Change the http://tang.srv URL in the example to match the URL of the server where tang is installed. The JWE output file contains your encrypted cipher text. This cipher text is read from the PLAINTEXT input file. To decrypt data, use the clevis decrypt command and provide the cipher text (JWE): For more information, see the clevis-encrypt-tang(1) man page or use the built-in CLI help: 4.10.5. Deploying an Encryption Client with a TPM 2.0 Policy On systems with the 64-bit Intel or 64-bit AMD architecture, to deploy a client that encrypts using a Trusted Platform Module 2.0 (TPM 2.0) chip, use the clevis encrypt tpm2 sub-command with the only argument in form of the JSON configuration object: To choose a different hierarchy, hash, and key algorithms, specify configuration properties, for example: To decrypt the data, provide the ciphertext (JWE): The pin also supports sealing data to a Platform Configuration Registers (PCR) state. That way the data can only be unsealed if the PCRs hashes values match the policy used when sealing. For example, to seal the data to the PCR with index 0 and 1 for the SHA1 bank: For more information and the list of possible configuration properties, see the clevis-encrypt-tpm2(1) man page. 4.10.6. Configuring Manual Enrollment of Root Volumes To automatically unlock an existing LUKS-encrypted root volume, install the clevis-luks subpackage and bind the volume to a Tang server using the clevis luks bind command: This command performs four steps: Creates a new key with the same entropy as the LUKS master key. Encrypts the new key with Clevis. 
Stores the Clevis JWE object in the LUKS header with LUKSMeta. Enables the new key for use with LUKS. This disk can now be unlocked with your existing password as well as with the Clevis policy. For more information, see the clevis-luks-bind(1) man page. Note The binding procedure assumes that there is at least one free LUKS password slot. The clevis luks bind command takes one of the slots. To verify that the Clevis JWE object is successfully placed in a LUKS header, use the luksmeta show command: To enable the early boot system to process the disk binding, enter the following commands on an already installed system: Important To use NBDE for clients with static IP configuration (without DHCP), pass your network configuration to the dracut tool manually, for example: Alternatively, create a .conf file in the /etc/dracut.conf.d/ directory with the static network information. For example: Regenerate the initial RAM disk image: See the dracut.cmdline(7) man page for more information. 4.10.7. Configuring Automated Enrollment Using Kickstart Clevis can integrate with Kickstart to provide a fully automated enrollment process. Instruct Kickstart to partition the disk such that LUKS encryption has enabled for all mount points, other than /boot , with a temporary password. The password is temporary for this step of the enrollment process. Note that OSPP-complaint systems require a more complex configuration, for example: Install the related Clevis packages by listing them in the %packages section: Call clevis luks bind to perform binding in the %post section. Afterward, remove the temporary password: In the above example, note that we specify the thumbprint that we trust on the Tang server as part of our binding configuration, enabling binding to be completely non-interactive. You can use an analogous procedure when using a TPM 2.0 policy instead of a Tang server. For more information on Kickstart installations, see the Red Hat Enterprise Linux 7 Installation Guide . For information on Linux Unified Key Setup-on-disk-format (LUKS), see Section 4.9.1, "Using LUKS Disk Encryption" . 4.10.8. Configuring Automated Unlocking of Removable Storage Devices To automatically unlock a LUKS-encrypted removable storage device, such as a USB drive, install the clevis-udisks2 package: Reboot the system, and then perform the binding step using the clevis luks bind command as described in Section 4.10.6, "Configuring Manual Enrollment of Root Volumes" , for example: The LUKS-encrypted removable device can be now unlocked automatically in your GNOME desktop session. The device bound to a Clevis policy can be also unlocked by the clevis luks unlock command: You can use an analogous procedure when using a TPM 2.0 policy instead of a Tang server. 4.10.9. Configuring Automated Unlocking of Non-root Volumes at Boot Time To use NBDE to also unlock LUKS-encrypted non-root volumes, perform the following steps: Install the clevis-systemd package: Enable the Clevis unlocker service: Perform the binding step using the clevis luks bind command as described in Section 4.10.6, "Configuring Manual Enrollment of Root Volumes" . To set up the encrypted block device during system boot, add the corresponding line with the _netdev option to the /etc/crypttab configuration file. See the crypttab(5) man page for more information. Add the volume to the list of accessible filesystems in the /etc/fstab file. Use the _netdev option in this configuration file, too. See the fstab(5) man page for more information. 4.10.10. 
Deploying Virtual Machines in a NBDE Network The clevis luks bind command does not change the LUKS master key. This implies that if you create a LUKS-encrypted image for use in a virtual machine or cloud environment, all the instances that run this image will share a master key. This is extremely insecure and should be avoided at all times. This is not a limitation of Clevis but a design principle of LUKS. If you wish to have encrypted root volumes in a cloud, you need to make sure that you perform the installation process (usually using Kickstart) for each instance of Red Hat Enterprise Linux in a cloud as well. The images cannot be shared without also sharing a LUKS master key. If you intend to deploy automated unlocking in a virtualized environment, Red Hat strongly recommends that you use systems such as lorax or virt-install together with a Kickstart file (see Section 4.10.7, "Configuring Automated Enrollment Using Kickstart" ) or another automated provisioning tool to ensure that each encrypted VM has a unique master key. 4.10.11. Building Automatically-enrollable VM Images for Cloud Environments using NBDE Deploying automatically-enrollable encrypted images in a cloud environment can provide a unique set of challenges. Like other virtualization environments, it is recommended to reduce the number of instances started from a single image to avoid sharing the LUKS master key. Therefore, the best practice is to create customized images that are not shared in any public repository and that provide a base for the deployment of a limited amount of instances. The exact number of instances to create should be defined by deployment's security policies and based on the risk tolerance associated with the LUKS master key attack vector. To build LUKS-enabled automated deployments, systems such as Lorax or virt-install together with a Kickstart file should be used to ensure master key uniqueness during the image building process. Cloud environments enable two Tang server deployment options which we consider here. First, the Tang server can be deployed within the cloud environment itself. Second, the Tang server can be deployed outside of the cloud on independent infrastructure with a VPN link between the two infrastructures. Deploying Tang natively in the cloud does allow for easy deployment. However, given that it shares infrastructure with the data persistence layer of ciphertext of other systems, it may be possible for both the Tang server's private key and the Clevis metadata to be stored on the same physical disk. Access to this physical disk permits a full compromise of the ciphertext data. Important For this reason, Red Hat strongly recommends maintaining a physical separation between the location where the data is stored and the system where Tang is running. This separation between the cloud and the Tang server ensures that the Tang server's private key cannot be accidentally combined with the Clevis metadata. It also provides local control of the Tang server if the cloud infrastructure is at risk. 4.10.12. Additional Resources The How to set up Network Bound Disk Encryption with multiple LUKS devices (Clevis+Tang unlocking) Knowledgebase article. For more information, see the following man pages: tang(8) clevis(1) jose(1) clevis-luks-unlockers(1) tang-nagios(1) | [
"~]# yum install clevis",
"~]USD clevis decrypt < JWE > PLAINTEXT",
"~]USD clevis Usage: clevis COMMAND [OPTIONS] clevis decrypt Decrypts using the policy defined at encryption time clevis encrypt http Encrypts using a REST HTTP escrow server policy clevis encrypt sss Encrypts using a Shamir's Secret Sharing policy clevis encrypt tang Encrypts using a Tang binding server policy clevis encrypt tpm2 Encrypts using a TPM2.0 chip binding policy ~]USD clevis decrypt Usage: clevis decrypt < JWE > PLAINTEXT Decrypts using the policy defined at encryption time ~]USD clevis encrypt tang Usage: clevis encrypt tang CONFIG < PLAINTEXT > JWE Encrypts using a Tang binding server policy This command uses the following configuration properties: url: <string> The base URL of the Tang server (REQUIRED) thp: <string> The thumbprint of a trusted signing key adv: <string> A filename containing a trusted advertisement adv: <object> A trusted advertisement (raw JSON) Obtaining the thumbprint of a trusted signing key is easy. If you have access to the Tang server's database directory, simply do: USD jose jwk thp -i USDDBDIR/USDSIG.jwk Alternatively, if you have certainty that your network connection is not compromised (not likely), you can download the advertisement yourself using: USD curl -f USDURL/adv > adv.jws",
"~]# yum install tang",
"~]# semanage port -a -t tangd_port_t -p tcp 7500",
"~]# firewall-cmd --add-port= 7500/tcp ~]# firewall-cmd --runtime-to-permanent",
"~]# systemctl enable tangd.socket Created symlink from /etc/systemd/system/multi-user.target.wants/tangd.socket to /usr/lib/systemd/system/tangd.socket.",
"~]# systemctl edit tangd.socket",
"[Socket] ListenStream= ListenStream= 7500",
"~]# systemctl daemon-reload",
"~]# systemctl show tangd.socket -p Listen Listen=[::]:7500 (Stream)",
"~]# systemctl start tangd.socket",
"~]# DB=/var/db/tang ~]# jose jwk gen -i '{\"alg\":\"ES512\"}' -o USDDB/new_sig.jwk ~]# jose jwk gen -i '{\"alg\":\"ECMR\"}' -o USDDB/new_exc.jwk",
"~]# mv USDDB/old_sig.jwk USDDB/.old_sig.jwk ~]# mv USDDB/old_exc.jwk USDDB/.old_exc.jwk",
"~]USD clevis encrypt tang '{\"url\":\" http://tang.srv \"}' < PLAINTEXT > JWE The advertisement contains the following signing keys: _OsIk0T-E2l6qjfdDiwVmidoZjA Do you wish to trust these keys? [ynYN] y",
"~]USD clevis decrypt < JWE > PLAINTEXT",
"~]USD clevis Usage: clevis COMMAND [OPTIONS] clevis decrypt Decrypts using the policy defined at encryption time clevis encrypt http Encrypts using a REST HTTP escrow server policy clevis encrypt sss Encrypts using a Shamir's Secret Sharing policy clevis encrypt tang Encrypts using a Tang binding server policy clevis luks bind Binds a LUKSv1 device using the specified policy clevis luks unlock Unlocks a LUKSv1 volume ~]USD clevis decrypt Usage: clevis decrypt < JWE > PLAINTEXT Decrypts using the policy defined at encryption time ~]USD clevis encrypt tang Usage: clevis encrypt tang CONFIG < PLAINTEXT > JWE Encrypts using a Tang binding server policy This command uses the following configuration properties: url: <string> The base URL of the Tang server (REQUIRED) thp: <string> The thumbprint of a trusted signing key adv: <string> A filename containing a trusted advertisement adv: <object> A trusted advertisement (raw JSON) Obtaining the thumbprint of a trusted signing key is easy. If you have access to the Tang server's database directory, simply do: USD jose jwk thp -i USDDBDIR/USDSIG.jwk Alternatively, if you have certainty that your network connection is not compromised (not likely), you can download the advertisement yourself using: USD curl -f USDURL/adv > adv.jws",
"~]USD clevis encrypt tpm2 '{}' < PLAINTEXT > JWE",
"~]USD clevis encrypt tpm2 '{\"hash\":\"sha1\",\"key\":\"rsa\"}' < PLAINTEXT > JWE",
"~]USD clevis decrypt < JWE > PLAINTEXT",
"~]USD clevis encrypt tpm2 '{\"pcr_bank\":\"sha1\",\"pcr_ids\":\"0,1\"}' < PLAINTEXT > JWE",
"~]# yum install clevis-luks",
"~]# clevis luks bind -d /dev/sda tang '{\"url\":\" http://tang.srv \"}' The advertisement contains the following signing keys: _OsIk0T-E2l6qjfdDiwVmidoZjA Do you wish to trust these keys? [ynYN] y You are about to initialize a LUKS device for metadata storage. Attempting to initialize it may result in data loss if data was already written into the LUKS header gap in a different format. A backup is advised before initialization is performed. Do you wish to initialize /dev/sda? [yn] y Enter existing LUKS password:",
"~]# luksmeta show -d /dev/sda 0 active empty 1 active cb6e8904-81ff-40da-a84a-07ab9ab5715e 2 inactive empty 3 inactive empty 4 inactive empty 5 inactive empty 6 inactive empty 7 inactive empty",
"~]# yum install clevis-dracut ~]# dracut -f --regenerate-all",
"~]# dracut -f --regenerate-all --kernel-cmdline \"ip= 192.0.2.10 netmask= 255.255.255.0 gateway= 192.0.2.1 nameserver= 192.0.2.45 \"",
"~]# cat /etc/dracut.conf.d/static_ip.conf kernel_cmdline=\"ip=10.0.0.103 netmask=255.255.252.0 gateway=10.0.0.1 nameserver=10.0.0.1\"",
"~]# dracut -f --regenerate-all",
"part /boot --fstype=\"xfs\" --ondisk=vda --size=256 part / --fstype=\"xfs\" --ondisk=vda --grow --encrypted --passphrase=temppass",
"part /boot --fstype=\"xfs\" --ondisk=vda --size=256 part / --fstype=\"xfs\" --ondisk=vda --size=2048 --encrypted --passphrase=temppass part /var --fstype=\"xfs\" --ondisk=vda --size=1024 --encrypted --passphrase=temppass part /tmp --fstype=\"xfs\" --ondisk=vda --size=1024 --encrypted --passphrase=temppass part /home --fstype=\"xfs\" --ondisk=vda --size=2048 --grow --encrypted --passphrase=temppass part /var/log --fstype=\"xfs\" --ondisk=vda --size=1024 --encrypted --passphrase=temppass part /var/log/audit --fstype=\"xfs\" --ondisk=vda --size=1024 --encrypted --passphrase=temppass",
"%packages clevis-dracut %end",
"%post clevis luks bind -f -k- -d /dev/vda2 tang '{\"url\":\"http://tang.srv\",\"thp\":\"_OsIk0T-E2l6qjfdDiwVmidoZjA\"}' \\ <<< \"temppass\" cryptsetup luksRemoveKey /dev/vda2 <<< \"temppass\" %end",
"~]# yum install clevis-udisks2",
"~]# clevis luks bind -d /dev/sdb1 tang '{\"url\":\" http://tang.srv \"}'",
"~]# clevis luks unlock -d /dev/sdb1",
"~]# yum install clevis-systemd",
"~]# systemctl enable clevis-luks-askpass.path Created symlink from /etc/systemd/system/remote-fs.target.wants/clevis-luks-askpass.path to /usr/lib/systemd/system/clevis-luks-askpass.path."
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/security_guide/sec-Policy-Based_Decryption |
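Section 4.10.9 above describes, but does not show, the /etc/crypttab and /etc/fstab entries for a Clevis-bound non-root volume; the following hedged sketch uses an assumed volume name, UUID placeholder, mount point, and file system.

# /etc/crypttab (sketch) -- name, device, key file, options; _netdev defers unlocking until the network is up.
luks-data  UUID=<uuid-of-the-LUKS-partition>  none  _netdev
# /etc/fstab (sketch) -- mount the unlocked device, again marked as network-dependent.
/dev/mapper/luks-data  /data  xfs  defaults,_netdev  0 0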
Chapter 4. ResourceAccessReview [authorization.openshift.io/v1] | Chapter 4. ResourceAccessReview [authorization.openshift.io/v1] Description ResourceAccessReview is a means to request a list of which users and groups are authorized to perform the action specified by spec Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). Type object Required namespace verb resourceAPIGroup resourceAPIVersion resource resourceName path isNonResourceURL 4.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources content RawExtension Content is the actual content of the request for create and update isNonResourceURL boolean IsNonResourceURL is true if this is a request for a non-resource URL (outside of the resource hierarchy) kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds namespace string Namespace is the namespace of the action being requested. Currently, there is no distinction between no namespace and all namespaces path string Path is the path of a non resource URL resource string Resource is one of the existing resource types resourceAPIGroup string Group is the API group of the resource Serialized as resourceAPIGroup to avoid confusion with the 'groups' field when inlined resourceAPIVersion string Version is the API version of the resource Serialized as resourceAPIVersion to avoid confusion with TypeMeta.apiVersion and ObjectMeta.resourceVersion when inlined resourceName string ResourceName is the name of the resource being requested for a "get" or deleted for a "delete" verb string Verb is one of: get, list, watch, create, update, delete 4.2. API endpoints The following API endpoints are available: /apis/authorization.openshift.io/v1/resourceaccessreviews POST : create a ResourceAccessReview 4.2.1. /apis/authorization.openshift.io/v1/resourceaccessreviews Table 4.1. Global query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. 
The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. pretty string If 'true', then the output is pretty printed. HTTP method POST Description create a ResourceAccessReview Table 4.2. Body parameters Parameter Type Description body ResourceAccessReview schema Table 4.3. HTTP responses HTTP code Reponse body 200 - OK ResourceAccessReview schema 201 - Created ResourceAccessReview schema 202 - Accepted ResourceAccessReview schema 401 - Unauthorized Empty | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.14/html/authorization_apis/resourceaccessreview-authorization-openshift-io-v1 |
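A hedged sketch of exercising the endpoint documented above; the namespace and resource values are assumptions. The oc policy who-can command is the usual CLI convenience for the same question, and the raw review object may also be posted with oc create.

# Ask which users and groups can perform an action in a project (sketch).
oc policy who-can get pods -n my-project
# Roughly equivalent raw review object posted to the endpoint shown above (assumed field values).
oc create -f - -o yaml <<'EOF'
apiVersion: authorization.openshift.io/v1
kind: ResourceAccessReview
namespace: my-project
verb: get
resource: pods
resourceAPIGroup: ""
resourceAPIVersion: v1
resourceName: ""
path: ""
isNonResourceURL: false
EOF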
3.5. Displaying Status | 3.5. Displaying Status You can display the status of the cluster and the cluster resources with the following command. If you do not specify a commands parameter, this command displays all information about the cluster and the resources. To display the status of only particular cluster components, specify resources , groups , cluster , nodes , or pcsd . | [
"pcs status commands"
] | https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/high_availability_add-on_reference/s1-pcsstatus-haar |
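A few hedged invocations of the command described above, limiting the output to individual components.

# Full cluster and resource status.
pcs status
# Status of particular components only.
pcs status resources
pcs status nodes
pcs status pcsd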
Chapter 13. Authentication and Interoperability | Chapter 13. Authentication and Interoperability Improved Identity Management Cross-Realm Trusts to Active Directory The following improvements have been implemented in cross-realm trusts to Active Directory feature of Red Hat Enterprise Linux: Multiple Active Directory domains are supported in the trusted forest; Access of users belonging to separate Active Directory domains in the trusted forest can be selectively disabled and enabled per-domain level; Manually defined POSIX identifiers for users and groups from trusted Active Directory domains can be used instead of automatically assigned identifiers; Active Directory users and groups coming from the trusted domains can be exported to legacy POSIX systems through LDAP compatibility tree; For Active Directory users exported through LDAP compatibility tree, authentication can be performed against Identity Management LDAP server. As a result, both Identity Management and trusted Active Directory users are accessible to legacy POSIX systems in a unified way. Support of POSIX User and Group IDs In Active Directory Identity Management implementation of cross-realm trusts to Active Directory supports existing POSIX user and group ID attributes in Active Directory. When explicit mappings are not defined on the Active Directory side, algorithmic mapping based on the user or group Security Identifier (SID) is applied. Use of AD and LDAP sudo Providers The AD provider is a back end used to connect to an Active Directory server. In Red Hat Enterprise Linux 7, using the AD sudo provider together with the LDAP provider is supported as a Technology Preview. To enable the AD sudo provider, add the sudo_provider=ad setting in the domain section of the sssd.conf file. Support of CA-Less Installations IPA supports installing without an embedded Certificate Authority with user-provided SSL certificates for the HTTP servers and Directory Servers. The administrator is responsible for issuing and rotating services and hosts certificates manually. FreeIPA GUI Improvements Red Hat Enterprise Linux 7 brings a number of improvements to FreeIPA graphical interface, from which the most notable are the following: All dialog windows can be confirmed by the Enter key even when the appropriate button or the dialog window does not have the focus; Loading of web UI is significantly faster because of compression of web UI assets and RPC communication; Drop-down lists can be controlled by keyboard. Reclaiming IDs of Deleted Replicas User and group ID ranges that belong to deleted replicas can be transferred to a suitable replica if one exists. This prevents potential exhaustion of the ID space. Additionally, ID ranges can be managed manually with the ipa-replica-manage tool. Re-Enrolling Clients Using Existing Keytab Files A host that has been recreated and does not have its host entry disabled or removed can be re-enrolled using a previously backed up keytab file. This ensures easy re-enrolling of the IPA client system after the user rebuilds it. Prompt for DNS During server interactive installation, the user is asked whether to install the DNS component. Previously, the DNS feature was installed only when the --setup-dns option was passed to the installer, leading to users not being aware of the feature. Enhanced SSHFP DNS Records DNS support in Identity Management was extended with support for the RFC 6954 standard. 
This allows users to publish Elliptic Curve Digital Signature Algorithm (ECDSA) keys and SHA-256 hashes in SSH fingerprint (SSHFP) records. Filtering Groups by Type New flags, --posix , --nonposix , --external , can be used to filter groups by type: POSIX group is a group with the posixGroup object class; Non-POSIX group is a group which is not POSIX or external, which means the group does not have the posixGroup or ipaExternalGroup object class; External group is a group with the ipaExternalGroup class. Improved Integration with the External Provisioning Systems External provisioning systems often require extra data to correctly process hosts. A new free-form text field, class has been added to the host entries. This field can be used in automatic membership rules. CRL and OCSP DNS Name in Certificate Profiles A round-robin DNS name for the IPA Certificate Authority (CA) now points to all active IPA CA masters. The name is used for CRL and OCSP URIs in the IPA certificate profile. When any of the IPA CA masters is removed or unavailable, it does not affect the ability to check revocation status of any of the certificates issued by the IPA CA. Certificates Search The cert-find command no longer restricts users to searching certificates only by their serial number, but now also by: serial number range; subject name; validity period; revocation status; and issue date. Marking Kerberos Service as Trusted for Delegation of User Keys Individual Identity Management services can be marked to Identity Management tools as trusted for delegation. By checking the ok_as_delegate flag, Microsoft Windows clients can determine whether the user credentials can be forwarded or delegated to a specific server or not. Samba 4.1.0 Red Hat Enterprise Linux 7 includes samba packages upgraded to the latest upstream version, which introduce several bug fixes and enhancements, the most notable of which is support for the SMB3 protocol in the server and client tools. Additionally, SMB3 transport enables encrypted transport connections to Windows servers that support SMB3, as well as Samba servers. Also, Samba 4.1.0 adds support for server-side copy operations. Clients making use of server-side copy support, such as the latest Windows releases, should experience considerable performance improvements for file copy operations. Note that using the Linux kernel CIFS module with SMB protocol 3.1.1 is currently experimental and the functionality is unavailable in kernels provided by Red Hat. Warning The updated samba packages remove several already deprecated configuration options. The most important are the server roles security = share and security = server . Also the web configuration tool SWAT has been completely removed. More details can be found in the Samba 4.0 and 4.1 release notes: https://www.samba.org/samba/history/samba-4.0.0.html https://www.samba.org/samba/history/samba-4.1.0.html Note that several tdb files have been updated. This means that all tdb files are upgraded as soon as you start the new version of the smbd daemon. You cannot downgrade to an older Samba version unless you have backups of the tdb files. For more information about these changes, refer to the Release Notes for Samba 4.0 and 4.1 mentioned above. | null | https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/7.0_release_notes/chap-red_hat_enterprise_linux-7.0_release_notes-authentication_and_interoperability |
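Two hedged sketches for items described above: an sssd.conf fragment that enables the AD sudo provider (the domain name example.com is an assumption, and the rest of the domain section is left as already configured), and the new group-filtering flags.

# /etc/sssd/sssd.conf fragment (sketch) -- enable the AD sudo provider for one domain.
[domain/example.com]
# ... existing provider settings ...
sudo_provider = ad

# Filter Identity Management groups by type using the new flags.
ipa group-find --posix
ipa group-find --nonposix
ipa group-find --external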