Chapter 7. Conclusion
Chapter 7. Conclusion Congratulations. In this tutorial, you learned how to incorporate data science, artificial intelligence, and machine learning into an OpenShift development workflow. You used an example fraud detection model and completed the following tasks: Explored a pre-trained fraud detection model by using a Jupyter notebook. Deployed the model by using OpenShift AI model serving. Refined and trained the model by using automated pipelines. Learned how to train the model by using Ray, a distributed computing framework.
https://docs.redhat.com/en/documentation/red_hat_openshift_ai_self-managed/2.18/html/openshift_ai_tutorial_-_fraud_detection_example/conclusion-tutorial
Chapter 5. Gathering data about your cluster
Chapter 5. Gathering data about your cluster

When opening a support case, it is helpful to provide debugging information about your cluster to Red Hat Support. It is recommended to provide:

Data gathered using the oc adm must-gather command
The unique cluster ID

5.1. About the must-gather tool

The oc adm must-gather CLI command collects the information from your cluster that is most likely needed for debugging issues, including:

Resource definitions
Service logs

By default, the oc adm must-gather command uses the default plugin image and writes into ./must-gather.local . Alternatively, you can collect specific information by running the command with the appropriate arguments as described in the following sections:

To collect data related to one or more specific features, use the --image argument with an image, as listed in a following section. For example:

$ oc adm must-gather --image=registry.redhat.io/container-native-virtualization/cnv-must-gather-rhel8:v4.12.0

To collect the audit logs, use the -- /usr/bin/gather_audit_logs argument, as described in a following section. For example:

$ oc adm must-gather -- /usr/bin/gather_audit_logs

Note: Audit logs are not collected as part of the default set of information to reduce the size of the files.

When you run oc adm must-gather , a new pod with a random name is created in a new project on the cluster. The data is collected on that pod and saved in a new directory that starts with must-gather.local . This directory is created in the current working directory. For example:

NAMESPACE                     NAME                READY   STATUS    RESTARTS   AGE
...
openshift-must-gather-5drcj   must-gather-bklx4   2/2     Running   0          72s
openshift-must-gather-5drcj   must-gather-s8sdh   2/2     Running   0          72s
...

5.1.1. Gathering data about your cluster for Red Hat Support

You can gather debugging information about your cluster by using the oc adm must-gather CLI command.

Prerequisites

Access to the cluster as a user with the cluster-admin role.
The OpenShift Container Platform CLI ( oc ) installed.

Procedure

Navigate to the directory where you want to store the must-gather data.

Note: If your cluster is in a disconnected environment, you must take additional steps. If your mirror registry has a trusted CA, you must first add the trusted CA to the cluster. For all clusters in disconnected environments, you must import the default must-gather image as an image stream:

$ oc import-image is/must-gather -n openshift

Run the oc adm must-gather command:

$ oc adm must-gather

Important: If you are in a disconnected environment, use the --image flag as part of must-gather and point to the payload image.

Note: Because this command picks a random control plane node by default, the pod might be scheduled to a control plane node that is in the NotReady and SchedulingDisabled state.

If this command fails, for example, if you cannot schedule a pod on your cluster, then use the oc adm inspect command to gather information for particular resources.

Note: Contact Red Hat Support for the recommended resources to gather.

Create a compressed file from the must-gather directory that was just created in your working directory. For example, on a computer that uses a Linux operating system, run the following command:

$ tar cvaf must-gather.tar.gz must-gather.local.5421342344627712289/ 1

1 Make sure to replace must-gather.local.5421342344627712289/ with the actual directory name.

Attach the compressed file to your support case on the Customer Support page of the Red Hat Customer Portal.

5.1.2.
Gathering data about specific features You can gather debugging information about specific features by using the oc adm must-gather CLI command with the --image or --image-stream argument. The must-gather tool supports multiple images, so you can gather data about more than one feature by running a single command. Table 5.1. Supported must-gather images Image Purpose registry.redhat.io/container-native-virtualization/cnv-must-gather-rhel8:v4.12.16 Data collection for OpenShift Virtualization. registry.redhat.io/openshift-serverless-1/svls-must-gather-rhel8 Data collection for OpenShift Serverless. registry.redhat.io/openshift-service-mesh/istio-must-gather-rhel8:<installed_version_service_mesh> Data collection for Red Hat OpenShift Service Mesh. registry.redhat.io/rhmtc/openshift-migration-must-gather-rhel8:v<installed_version_migration_toolkit> Data collection for the Migration Toolkit for Containers. registry.redhat.io/odf4/ocs-must-gather-rhel8:v<installed_version_ODF> Data collection for Red Hat OpenShift Data Foundation. registry.redhat.io/openshift-logging/cluster-logging-rhel9-operator:v<installed_version_logging> Data collection for logging. quay.io/netobserv/must-gather Data collection for the Network Observability Operator. registry.redhat.io/openshift4/ose-csi-driver-shared-resource-mustgather-rhel8 Data collection for OpenShift Shared Resource CSI Driver. registry.redhat.io/openshift4/ose-local-storage-mustgather-rhel8:v<installed_version_LSO> Data collection for Local Storage Operator. registry.redhat.io/openshift-sandboxed-containers/osc-must-gather-rhel8:v<installed_version_sandboxed_containers> Data collection for OpenShift sandboxed containers. registry.redhat.io/workload-availability/self-node-remediation-must-gather-rhel8:v<installed-version-SNR> Data collection for the Self Node Remediation (SNR) Operator and the Node Health Check (NHC) Operator. registry.redhat.io/openshift4/ptp-must-gather-rhel8:v<installed-version-ptp> Data collection for the PTP Operator. registry.redhat.io/workload-availability/node-maintenance-must-gather-rhel8:v<installed-version-NMO> Data collection for the Node Maintenance Operator (NMO). quay.io/openshift-pipeline/must-gather Data collection for Red Hat OpenShift Pipelines registry.redhat.io/openshift-gitops-1/must-gather-rhel8:v<installed_version_GitOps> Data collection for Red Hat OpenShift GitOps. registry.redhat.io/lvms4/lvms-must-gather-rhel8:v<installed_version_LVMS> Data collection for the LVM Operator. registry.redhat.io/compliance/openshift-compliance-must-gather-rhel8:<digest-version> Data collection for the Compliance Operator. registry.redhat.io/rhacm2/acm-must-gather-rhel9:v<ACM_version> Data collection for Red Hat Advanced Cluster Management (RHACM) 2.10 and later. registry.redhat.io/rhacm2/acm-must-gather-rhel8:v<ACM_version> Data collection for RHACM 2.9 and earlier. <registry_name:port_number>/rhacm2/acm-must-gather-rhel9:v<ACM_version> Data collection for RHACM 2.10 and later in a disconnected environment. <registry_name:port_number>/rhacm2/acm-must-gather-rhel8:v<ACM_version> Data collection for RHACM 2.9 and earlier in a disconnected environment. Note To determine the latest version for an OpenShift Container Platform component's image, see the Red Hat OpenShift Container Platform Life Cycle Policy web page on the Red Hat Customer Portal. Prerequisites Access to the cluster as a user with the cluster-admin role. The OpenShift Container Platform CLI ( oc ) installed. 
Procedure Navigate to the directory where you want to store the must-gather data. Run the oc adm must-gather command with one or more --image or --image-stream arguments. Note To collect the default must-gather data in addition to specific feature data, add the --image-stream=openshift/must-gather argument. For information on gathering data about the Custom Metrics Autoscaler, see the Additional resources section that follows. For example, the following command gathers both the default cluster data and information specific to OpenShift Virtualization: USD oc adm must-gather \ --image-stream=openshift/must-gather \ 1 --image=registry.redhat.io/container-native-virtualization/cnv-must-gather-rhel8:v4.12.16 2 1 The default OpenShift Container Platform must-gather image 2 The must-gather image for OpenShift Virtualization You can use the must-gather tool with additional arguments to gather data that is specifically related to OpenShift Logging and the Red Hat OpenShift Logging Operator in your cluster. For OpenShift Logging, run the following command: USD oc adm must-gather --image=USD(oc -n openshift-logging get deployment.apps/cluster-logging-operator \ -o jsonpath='{.spec.template.spec.containers[?(@.name == "cluster-logging-operator")].image}') Example 5.1. Example must-gather output for OpenShift Logging β”œβ”€β”€ cluster-logging β”‚ β”œβ”€β”€ clo β”‚ β”‚ β”œβ”€β”€ cluster-logging-operator-74dd5994f-6ttgt β”‚ β”‚ β”œβ”€β”€ clusterlogforwarder_cr β”‚ β”‚ β”œβ”€β”€ cr β”‚ β”‚ β”œβ”€β”€ csv β”‚ β”‚ β”œβ”€β”€ deployment β”‚ β”‚ └── logforwarding_cr β”‚ β”œβ”€β”€ collector β”‚ β”‚ β”œβ”€β”€ fluentd-2tr64 β”‚ β”œβ”€β”€ eo β”‚ β”‚ β”œβ”€β”€ csv β”‚ β”‚ β”œβ”€β”€ deployment β”‚ β”‚ └── elasticsearch-operator-7dc7d97b9d-jb4r4 β”‚ β”œβ”€β”€ es β”‚ β”‚ β”œβ”€β”€ cluster-elasticsearch β”‚ β”‚ β”‚ β”œβ”€β”€ aliases β”‚ β”‚ β”‚ β”œβ”€β”€ health β”‚ β”‚ β”‚ β”œβ”€β”€ indices β”‚ β”‚ β”‚ β”œβ”€β”€ latest_documents.json β”‚ β”‚ β”‚ β”œβ”€β”€ nodes β”‚ β”‚ β”‚ β”œβ”€β”€ nodes_stats.json β”‚ β”‚ β”‚ └── thread_pool β”‚ β”‚ β”œβ”€β”€ cr β”‚ β”‚ β”œβ”€β”€ elasticsearch-cdm-lp8l38m0-1-794d6dd989-4jxms β”‚ β”‚ └── logs β”‚ β”‚ β”œβ”€β”€ elasticsearch-cdm-lp8l38m0-1-794d6dd989-4jxms β”‚ β”œβ”€β”€ install β”‚ β”‚ β”œβ”€β”€ co_logs β”‚ β”‚ β”œβ”€β”€ install_plan β”‚ β”‚ β”œβ”€β”€ olmo_logs β”‚ β”‚ └── subscription β”‚ └── kibana β”‚ β”œβ”€β”€ cr β”‚ β”œβ”€β”€ kibana-9d69668d4-2rkvz β”œβ”€β”€ cluster-scoped-resources β”‚ └── core β”‚ β”œβ”€β”€ nodes β”‚ β”‚ β”œβ”€β”€ ip-10-0-146-180.eu-west-1.compute.internal.yaml β”‚ └── persistentvolumes β”‚ β”œβ”€β”€ pvc-0a8d65d9-54aa-4c44-9ecc-33d9381e41c1.yaml β”œβ”€β”€ event-filter.html β”œβ”€β”€ gather-debug.log └── namespaces β”œβ”€β”€ openshift-logging β”‚ β”œβ”€β”€ apps β”‚ β”‚ β”œβ”€β”€ daemonsets.yaml β”‚ β”‚ β”œβ”€β”€ deployments.yaml β”‚ β”‚ β”œβ”€β”€ replicasets.yaml β”‚ β”‚ └── statefulsets.yaml β”‚ β”œβ”€β”€ batch β”‚ β”‚ β”œβ”€β”€ cronjobs.yaml β”‚ β”‚ └── jobs.yaml β”‚ β”œβ”€β”€ core β”‚ β”‚ β”œβ”€β”€ configmaps.yaml β”‚ β”‚ β”œβ”€β”€ endpoints.yaml β”‚ β”‚ β”œβ”€β”€ events β”‚ β”‚ β”‚ β”œβ”€β”€ elasticsearch-im-app-1596020400-gm6nl.1626341a296c16a1.yaml β”‚ β”‚ β”‚ β”œβ”€β”€ elasticsearch-im-audit-1596020400-9l9n4.1626341a2af81bbd.yaml β”‚ β”‚ β”‚ β”œβ”€β”€ elasticsearch-im-infra-1596020400-v98tk.1626341a2d821069.yaml β”‚ β”‚ β”‚ β”œβ”€β”€ elasticsearch-im-app-1596020400-cc5vc.1626341a3019b238.yaml β”‚ β”‚ β”‚ β”œβ”€β”€ elasticsearch-im-audit-1596020400-s8d5s.1626341a31f7b315.yaml β”‚ β”‚ β”‚ β”œβ”€β”€ 
elasticsearch-im-infra-1596020400-7mgv8.1626341a35ea59ed.yaml β”‚ β”‚ β”œβ”€β”€ events.yaml β”‚ β”‚ β”œβ”€β”€ persistentvolumeclaims.yaml β”‚ β”‚ β”œβ”€β”€ pods.yaml β”‚ β”‚ β”œβ”€β”€ replicationcontrollers.yaml β”‚ β”‚ β”œβ”€β”€ secrets.yaml β”‚ β”‚ └── services.yaml β”‚ β”œβ”€β”€ openshift-logging.yaml β”‚ β”œβ”€β”€ pods β”‚ β”‚ β”œβ”€β”€ cluster-logging-operator-74dd5994f-6ttgt β”‚ β”‚ β”‚ β”œβ”€β”€ cluster-logging-operator β”‚ β”‚ β”‚ β”‚ └── cluster-logging-operator β”‚ β”‚ β”‚ β”‚ └── logs β”‚ β”‚ β”‚ β”‚ β”œβ”€β”€ current.log β”‚ β”‚ β”‚ β”‚ β”œβ”€β”€ .insecure.log β”‚ β”‚ β”‚ β”‚ └── .log β”‚ β”‚ β”‚ └── cluster-logging-operator-74dd5994f-6ttgt.yaml β”‚ β”‚ β”œβ”€β”€ cluster-logging-operator-registry-6df49d7d4-mxxff β”‚ β”‚ β”‚ β”œβ”€β”€ cluster-logging-operator-registry β”‚ β”‚ β”‚ β”‚ └── cluster-logging-operator-registry β”‚ β”‚ β”‚ β”‚ └── logs β”‚ β”‚ β”‚ β”‚ β”œβ”€β”€ current.log β”‚ β”‚ β”‚ β”‚ β”œβ”€β”€ .insecure.log β”‚ β”‚ β”‚ β”‚ └── .log β”‚ β”‚ β”‚ β”œβ”€β”€ cluster-logging-operator-registry-6df49d7d4-mxxff.yaml β”‚ β”‚ β”‚ └── mutate-csv-and-generate-sqlite-db β”‚ β”‚ β”‚ └── mutate-csv-and-generate-sqlite-db β”‚ β”‚ β”‚ └── logs β”‚ β”‚ β”‚ β”œβ”€β”€ current.log β”‚ β”‚ β”‚ β”œβ”€β”€ .insecure.log β”‚ β”‚ β”‚ └── .log β”‚ β”‚ β”œβ”€β”€ elasticsearch-cdm-lp8l38m0-1-794d6dd989-4jxms β”‚ β”‚ β”œβ”€β”€ elasticsearch-im-app-1596030300-bpgcx β”‚ β”‚ β”‚ β”œβ”€β”€ elasticsearch-im-app-1596030300-bpgcx.yaml β”‚ β”‚ β”‚ └── indexmanagement β”‚ β”‚ β”‚ └── indexmanagement β”‚ β”‚ β”‚ └── logs β”‚ β”‚ β”‚ β”œβ”€β”€ current.log β”‚ β”‚ β”‚ β”œβ”€β”€ .insecure.log β”‚ β”‚ β”‚ └── .log β”‚ β”‚ β”œβ”€β”€ fluentd-2tr64 β”‚ β”‚ β”‚ β”œβ”€β”€ fluentd β”‚ β”‚ β”‚ β”‚ └── fluentd β”‚ β”‚ β”‚ β”‚ └── logs β”‚ β”‚ β”‚ β”‚ β”œβ”€β”€ current.log β”‚ β”‚ β”‚ β”‚ β”œβ”€β”€ .insecure.log β”‚ β”‚ β”‚ β”‚ └── .log β”‚ β”‚ β”‚ β”œβ”€β”€ fluentd-2tr64.yaml β”‚ β”‚ β”‚ └── fluentd-init β”‚ β”‚ β”‚ └── fluentd-init β”‚ β”‚ β”‚ └── logs β”‚ β”‚ β”‚ β”œβ”€β”€ current.log β”‚ β”‚ β”‚ β”œβ”€β”€ .insecure.log β”‚ β”‚ β”‚ └── .log β”‚ β”‚ β”œβ”€β”€ kibana-9d69668d4-2rkvz β”‚ β”‚ β”‚ β”œβ”€β”€ kibana β”‚ β”‚ β”‚ β”‚ └── kibana β”‚ β”‚ β”‚ β”‚ └── logs β”‚ β”‚ β”‚ β”‚ β”œβ”€β”€ current.log β”‚ β”‚ β”‚ β”‚ β”œβ”€β”€ .insecure.log β”‚ β”‚ β”‚ β”‚ └── .log β”‚ β”‚ β”‚ β”œβ”€β”€ kibana-9d69668d4-2rkvz.yaml β”‚ β”‚ β”‚ └── kibana-proxy β”‚ β”‚ β”‚ └── kibana-proxy β”‚ β”‚ β”‚ └── logs β”‚ β”‚ β”‚ β”œβ”€β”€ current.log β”‚ β”‚ β”‚ β”œβ”€β”€ .insecure.log β”‚ β”‚ β”‚ └── .log β”‚ └── route.openshift.io β”‚ └── routes.yaml └── openshift-operators-redhat β”œβ”€β”€ ... Run the oc adm must-gather command with one or more --image or --image-stream arguments. For example, the following command gathers both the default cluster data and information specific to KubeVirt: USD oc adm must-gather \ --image-stream=openshift/must-gather \ 1 --image=quay.io/kubevirt/must-gather 2 1 The default OpenShift Container Platform must-gather image 2 The must-gather image for KubeVirt Create a compressed file from the must-gather directory that was just created in your working directory. For example, on a computer that uses a Linux operating system, run the following command: USD tar cvaf must-gather.tar.gz must-gather.local.5421342344627712289/ 1 1 Make sure to replace must-gather-local.5421342344627712289/ with the actual directory name. Attach the compressed file to your support case on the the Customer Support page of the Red Hat Customer Portal. 5.2. Additional resources Gathering debugging data for the Custom Metrics Autoscaler. 
Red Hat OpenShift Container Platform Life Cycle Policy

5.2.1. Gathering audit logs

You can gather audit logs, which are a security-relevant chronological set of records documenting the sequence of activities that have affected the system by individual users, administrators, or other components of the system. You can gather audit logs for:

etcd server
Kubernetes API server
OpenShift OAuth API server
OpenShift API server

Procedure

Run the oc adm must-gather command with -- /usr/bin/gather_audit_logs :

$ oc adm must-gather -- /usr/bin/gather_audit_logs

Create a compressed file from the must-gather directory that was just created in your working directory. For example, on a computer that uses a Linux operating system, run the following command:

$ tar cvaf must-gather.tar.gz must-gather.local.472290403699006248 1

1 Replace must-gather.local.472290403699006248 with the actual directory name.

Attach the compressed file to your support case on the Customer Support page of the Red Hat Customer Portal.

5.2.2. Gathering network logs

You can gather network logs on all nodes in a cluster.

Procedure

Run the oc adm must-gather command with -- gather_network_logs :

$ oc adm must-gather -- gather_network_logs

Create a compressed file from the must-gather directory that was just created in your working directory. For example, on a computer that uses a Linux operating system, run the following command:

$ tar cvaf must-gather.tar.gz must-gather.local.472290403699006248 1

1 Replace must-gather.local.472290403699006248 with the actual directory name.

Attach the compressed file to your support case on the Customer Support page of the Red Hat Customer Portal.

5.3. Obtaining your cluster ID

When providing information to Red Hat Support, it is helpful to provide the unique identifier for your cluster. You can have your cluster ID autofilled by using the OpenShift Container Platform web console. You can also manually obtain your cluster ID by using the web console or the OpenShift CLI ( oc ).

Prerequisites

Access to the cluster as a user with the cluster-admin role.
Access to the web console or the OpenShift CLI ( oc ) installed.

Procedure

To open a support case and have your cluster ID autofilled using the web console: From the toolbar, navigate to (?) Help β†’ Open Support Case . The Cluster ID value is autofilled.

To manually obtain your cluster ID using the web console: Navigate to Home β†’ Overview . The value is available in the Cluster ID field of the Details section.

To obtain your cluster ID using the OpenShift CLI ( oc ), run the following command:

$ oc get clusterversion -o jsonpath='{.items[].spec.clusterID}{"\n"}'

5.4. About sosreport

sosreport is a tool that collects configuration details, system information, and diagnostic data from Red Hat Enterprise Linux (RHEL) and Red Hat Enterprise Linux CoreOS (RHCOS) systems. sosreport provides a standardized way to collect diagnostic information relating to a node, which can then be provided to Red Hat Support for issue diagnosis.

In some support interactions, Red Hat Support may ask you to collect a sosreport archive for a specific OpenShift Container Platform node. For example, it might sometimes be necessary to review system logs or other node-specific data that is not included within the output of oc adm must-gather .

5.5. Generating a sosreport archive for an OpenShift Container Platform cluster node

The recommended way to generate a sosreport for an OpenShift Container Platform 4.12 cluster node is through a debug pod.
Prerequisites You have access to the cluster as a user with the cluster-admin role. You have SSH access to your hosts. You have installed the OpenShift CLI ( oc ). You have a Red Hat standard or premium Subscription. You have a Red Hat Customer Portal account. You have an existing Red Hat Support case ID. Procedure Obtain a list of cluster nodes: USD oc get nodes Enter into a debug session on the target node. This step instantiates a debug pod called <node_name>-debug : USD oc debug node/my-cluster-node To enter into a debug session on the target node that is tainted with the NoExecute effect, add a toleration to a dummy namespace, and start the debug pod in the dummy namespace: USD oc new-project dummy USD oc patch namespace dummy --type=merge -p '{"metadata": {"annotations": { "scheduler.alpha.kubernetes.io/defaultTolerations": "[{\"operator\": \"Exists\"}]"}}}' USD oc debug node/my-cluster-node Set /host as the root directory within the debug shell. The debug pod mounts the host's root file system in /host within the pod. By changing the root directory to /host , you can run binaries contained in the host's executable paths: # chroot /host Note OpenShift Container Platform 4.12 cluster nodes running Red Hat Enterprise Linux CoreOS (RHCOS) are immutable and rely on Operators to apply cluster changes. Accessing cluster nodes by using SSH is not recommended. However, if the OpenShift Container Platform API is not available, or the kubelet is not properly functioning on the target node, oc operations will be impacted. In such situations, it is possible to access nodes using ssh core@<node>.<cluster_name>.<base_domain> instead. Start a toolbox container, which includes the required binaries and plugins to run sosreport : # toolbox Note If an existing toolbox pod is already running, the toolbox command outputs 'toolbox-' already exists. Trying to start... . Remove the running toolbox container with podman rm toolbox- and spawn a new toolbox container, to avoid issues with sosreport plugins. Collect a sosreport archive. Run the sosreport command and enable the crio.all and crio.logs CRI-O container engine sosreport plugins: # sos report -k crio.all=on -k crio.logs=on -k podman.all=on -k podman.logs=on 1 1 -k enables you to define sosreport plugin parameters outside of the defaults. Press Enter when prompted, to continue. Provide the Red Hat Support case ID. sosreport adds the ID to the archive's file name. The sosreport output provides the archive's location and checksum. The following sample output references support case ID 01234567 : Your sosreport has been generated and saved in: /host/var/tmp/sosreport-my-cluster-node-01234567-2020-05-28-eyjknxt.tar.xz 1 The checksum is: 382ffc167510fd71b4f12a4f40b97a4e 1 The sosreport archive's file path is outside of the chroot environment because the toolbox container mounts the host's root directory at /host . Provide the sosreport archive to Red Hat Support for analysis, using one of the following methods. Upload the file to an existing Red Hat support case directly from an OpenShift Container Platform cluster. From within the toolbox container, run redhat-support-tool to attach the archive directly to an existing Red Hat support case. This example uses support case ID 01234567 : # redhat-support-tool addattachment -c 01234567 /host/var/tmp/my-sosreport.tar.xz 1 1 The toolbox container mounts the host's root directory at /host . 
Reference the absolute path from the toolbox container's root directory, including /host/ , when specifying files to upload through the redhat-support-tool command. Upload the file to an existing Red Hat support case. Concatenate the sosreport archive by running the oc debug node/<node_name> command and redirect the output to a file. This command assumes you have exited the oc debug session: USD oc debug node/my-cluster-node -- bash -c 'cat /host/var/tmp/sosreport-my-cluster-node-01234567-2020-05-28-eyjknxt.tar.xz' > /tmp/sosreport-my-cluster-node-01234567-2020-05-28-eyjknxt.tar.xz 1 1 The debug container mounts the host's root directory at /host . Reference the absolute path from the debug container's root directory, including /host , when specifying target files for concatenation. Note OpenShift Container Platform 4.12 cluster nodes running Red Hat Enterprise Linux CoreOS (RHCOS) are immutable and rely on Operators to apply cluster changes. Transferring a sosreport archive from a cluster node by using scp is not recommended. However, if the OpenShift Container Platform API is not available, or the kubelet is not properly functioning on the target node, oc operations will be impacted. In such situations, it is possible to copy a sosreport archive from a node by running scp core@<node>.<cluster_name>.<base_domain>:<file_path> <local_path> . Navigate to an existing support case within the Customer Support page of the Red Hat Customer Portal. Select Attach files and follow the prompts to upload the file. 5.6. Querying bootstrap node journal logs If you experience bootstrap-related issues, you can gather bootkube.service journald unit logs and container logs from the bootstrap node. Prerequisites You have SSH access to your bootstrap node. You have the fully qualified domain name of the bootstrap node. Procedure Query bootkube.service journald unit logs from a bootstrap node during OpenShift Container Platform installation. Replace <bootstrap_fqdn> with the bootstrap node's fully qualified domain name: USD ssh core@<bootstrap_fqdn> journalctl -b -f -u bootkube.service Note The bootkube.service log on the bootstrap node outputs etcd connection refused errors, indicating that the bootstrap server is unable to connect to etcd on control plane nodes. After etcd has started on each control plane node and the nodes have joined the cluster, the errors should stop. Collect logs from the bootstrap node containers using podman on the bootstrap node. Replace <bootstrap_fqdn> with the bootstrap node's fully qualified domain name: USD ssh core@<bootstrap_fqdn> 'for pod in USD(sudo podman ps -a -q); do sudo podman logs USDpod; done' 5.7. Querying cluster node journal logs You can gather journald unit logs and other logs within /var/log on individual cluster nodes. Prerequisites You have access to the cluster as a user with the cluster-admin role. Your API service is still functional. You have installed the OpenShift CLI ( oc ). You have SSH access to your hosts. Procedure Query kubelet journald unit logs from OpenShift Container Platform cluster nodes. The following example queries control plane nodes only: USD oc adm node-logs --role=master -u kubelet 1 1 Replace kubelet as appropriate to query other unit logs. Collect logs from specific subdirectories under /var/log/ on cluster nodes. Retrieve a list of logs contained within a /var/log/ subdirectory. 
The following example lists files in /var/log/openshift-apiserver/ on all control plane nodes: USD oc adm node-logs --role=master --path=openshift-apiserver Inspect a specific log within a /var/log/ subdirectory. The following example outputs /var/log/openshift-apiserver/audit.log contents from all control plane nodes: USD oc adm node-logs --role=master --path=openshift-apiserver/audit.log If the API is not functional, review the logs on each node using SSH instead. The following example tails /var/log/openshift-apiserver/audit.log : USD ssh core@<master-node>.<cluster_name>.<base_domain> sudo tail -f /var/log/openshift-apiserver/audit.log Note OpenShift Container Platform 4.12 cluster nodes running Red Hat Enterprise Linux CoreOS (RHCOS) are immutable and rely on Operators to apply cluster changes. Accessing cluster nodes by using SSH is not recommended. Before attempting to collect diagnostic data over SSH, review whether the data collected by running oc adm must gather and other oc commands is sufficient instead. However, if the OpenShift Container Platform API is not available, or the kubelet is not properly functioning on the target node, oc operations will be impacted. In such situations, it is possible to access nodes using ssh core@<node>.<cluster_name>.<base_domain> . 5.8. Network trace methods Collecting network traces, in the form of packet capture records, can assist Red Hat Support with troubleshooting network issues. OpenShift Container Platform supports two ways of performing a network trace. Review the following table and choose the method that meets your needs. Table 5.2. Supported methods of collecting a network trace Method Benefits and capabilities Collecting a host network trace You perform a packet capture for a duration that you specify on one or more nodes at the same time. The packet capture files are transferred from nodes to the client machine when the specified duration is met. You can troubleshoot why a specific action triggers network communication issues. Run the packet capture, perform the action that triggers the issue, and use the logs to diagnose the issue. Collecting a network trace from an OpenShift Container Platform node or container You perform a packet capture on one node or one container. You run the tcpdump command interactively, so you can control the duration of the packet capture. You can start the packet capture manually, trigger the network communication issue, and then stop the packet capture manually. This method uses the cat command and shell redirection to copy the packet capture data from the node or container to the client machine. 5.9. Collecting a host network trace Sometimes, troubleshooting a network-related issue is simplified by tracing network communication and capturing packets on multiple nodes at the same time. You can use a combination of the oc adm must-gather command and the registry.redhat.io/openshift4/network-tools-rhel8 container image to gather packet captures from nodes. Analyzing packet captures can help you troubleshoot network communication issues. The oc adm must-gather command is used to run the tcpdump command in pods on specific nodes. The tcpdump command records the packet captures in the pods. When the tcpdump command exits, the oc adm must-gather command transfers the files with the packet captures from the pods to your client machine. Tip The sample command in the following procedure demonstrates performing a packet capture with the tcpdump command. 
However, you can run any command in the container image that is specified in the --image argument to gather troubleshooting information from multiple nodes at the same time. Prerequisites You have access to the cluster as a user with the cluster-admin role. You have installed the OpenShift CLI ( oc ). Procedure Run a packet capture from the host network on some nodes by running the following command: USD oc adm must-gather \ --dest-dir /tmp/captures \ <.> --source-dir '/tmp/tcpdump/' \ <.> --image registry.redhat.io/openshift4/network-tools-rhel8:latest \ <.> --node-selector 'node-role.kubernetes.io/worker' \ <.> --host-network=true \ <.> --timeout 30s \ <.> -- \ tcpdump -i any \ <.> -w /tmp/tcpdump/%Y-%m-%dT%H:%M:%S.pcap -W 1 -G 300 <.> The --dest-dir argument specifies that oc adm must-gather stores the packet captures in directories that are relative to /tmp/captures on the client machine. You can specify any writable directory. <.> When tcpdump is run in the debug pod that oc adm must-gather starts, the --source-dir argument specifies that the packet captures are temporarily stored in the /tmp/tcpdump directory on the pod. <.> The --image argument specifies a container image that includes the tcpdump command. <.> The --node-selector argument and example value specifies to perform the packet captures on the worker nodes. As an alternative, you can specify the --node-name argument instead to run the packet capture on a single node. If you omit both the --node-selector and the --node-name argument, the packet captures are performed on all nodes. <.> The --host-network=true argument is required so that the packet captures are performed on the network interfaces of the node. <.> The --timeout argument and value specify to run the debug pod for 30 seconds. If you do not specify the --timeout argument and a duration, the debug pod runs for 10 minutes. <.> The -i any argument for the tcpdump command specifies to capture packets on all network interfaces. As an alternative, you can specify a network interface name. Perform the action, such as accessing a web application, that triggers the network communication issue while the network trace captures packets. Review the packet capture files that oc adm must-gather transferred from the pods to your client machine: tmp/captures β”œβ”€β”€ event-filter.html β”œβ”€β”€ ip-10-0-192-217-ec2-internal 1 β”‚ └── registry-redhat-io-openshift4-network-tools-rhel8-sha256-bca... β”‚ └── 2022-01-13T19:31:31.pcap β”œβ”€β”€ ip-10-0-201-178-ec2-internal 2 β”‚ └── registry-redhat-io-openshift4-network-tools-rhel8-sha256-bca... β”‚ └── 2022-01-13T19:31:30.pcap β”œβ”€β”€ ip-... └── timestamp 1 2 The packet captures are stored in directories that identify the hostname, container, and file name. If you did not specify the --node-selector argument, then the directory level for the hostname is not present. 5.10. Collecting a network trace from an OpenShift Container Platform node or container When investigating potential network-related OpenShift Container Platform issues, Red Hat Support might request a network packet trace from a specific OpenShift Container Platform cluster node or from a specific container. The recommended method to capture a network trace in OpenShift Container Platform is through a debug pod. Prerequisites You have access to the cluster as a user with the cluster-admin role. You have installed the OpenShift CLI ( oc ). You have a Red Hat standard or premium Subscription. You have a Red Hat Customer Portal account. 
You have an existing Red Hat Support case ID. You have SSH access to your hosts. Procedure Obtain a list of cluster nodes: USD oc get nodes Enter into a debug session on the target node. This step instantiates a debug pod called <node_name>-debug : USD oc debug node/my-cluster-node Set /host as the root directory within the debug shell. The debug pod mounts the host's root file system in /host within the pod. By changing the root directory to /host , you can run binaries contained in the host's executable paths: # chroot /host Note OpenShift Container Platform 4.12 cluster nodes running Red Hat Enterprise Linux CoreOS (RHCOS) are immutable and rely on Operators to apply cluster changes. Accessing cluster nodes by using SSH is not recommended. However, if the OpenShift Container Platform API is not available, or the kubelet is not properly functioning on the target node, oc operations will be impacted. In such situations, it is possible to access nodes using ssh core@<node>.<cluster_name>.<base_domain> instead. From within the chroot environment console, obtain the node's interface names: # ip ad Start a toolbox container, which includes the required binaries and plugins to run sosreport : # toolbox Note If an existing toolbox pod is already running, the toolbox command outputs 'toolbox-' already exists. Trying to start... . To avoid tcpdump issues, remove the running toolbox container with podman rm toolbox- and spawn a new toolbox container. Initiate a tcpdump session on the cluster node and redirect output to a capture file. This example uses ens5 as the interface name: USD tcpdump -nn -s 0 -i ens5 -w /host/var/tmp/my-cluster-node_USD(date +%d_%m_%Y-%H_%M_%S-%Z).pcap 1 1 The tcpdump capture file's path is outside of the chroot environment because the toolbox container mounts the host's root directory at /host . If a tcpdump capture is required for a specific container on the node, follow these steps. Determine the target container ID. The chroot host command precedes the crictl command in this step because the toolbox container mounts the host's root directory at /host : # chroot /host crictl ps Determine the container's process ID. In this example, the container ID is a7fe32346b120 : # chroot /host crictl inspect --output yaml a7fe32346b120 | grep 'pid' | awk '{print USD2}' Initiate a tcpdump session on the container and redirect output to a capture file. This example uses 49628 as the container's process ID and ens5 as the interface name. The nsenter command enters the namespace of a target process and runs a command in its namespace. because the target process in this example is a container's process ID, the tcpdump command is run in the container's namespace from the host: # nsenter -n -t 49628 -- tcpdump -nn -i ens5 -w /host/var/tmp/my-cluster-node-my-container_USD(date +%d_%m_%Y-%H_%M_%S-%Z).pcap 1 1 The tcpdump capture file's path is outside of the chroot environment because the toolbox container mounts the host's root directory at /host . Provide the tcpdump capture file to Red Hat Support for analysis, using one of the following methods. Upload the file to an existing Red Hat support case directly from an OpenShift Container Platform cluster. From within the toolbox container, run redhat-support-tool to attach the file directly to an existing Red Hat Support case. This example uses support case ID 01234567 : # redhat-support-tool addattachment -c 01234567 /host/var/tmp/my-tcpdump-capture-file.pcap 1 1 The toolbox container mounts the host's root directory at /host . 
Reference the absolute path from the toolbox container's root directory, including /host/ , when specifying files to upload through the redhat-support-tool command. Upload the file to an existing Red Hat support case. Concatenate the sosreport archive by running the oc debug node/<node_name> command and redirect the output to a file. This command assumes you have exited the oc debug session: USD oc debug node/my-cluster-node -- bash -c 'cat /host/var/tmp/my-tcpdump-capture-file.pcap' > /tmp/my-tcpdump-capture-file.pcap 1 1 The debug container mounts the host's root directory at /host . Reference the absolute path from the debug container's root directory, including /host , when specifying target files for concatenation. Note OpenShift Container Platform 4.12 cluster nodes running Red Hat Enterprise Linux CoreOS (RHCOS) are immutable and rely on Operators to apply cluster changes. Transferring a tcpdump capture file from a cluster node by using scp is not recommended. However, if the OpenShift Container Platform API is not available, or the kubelet is not properly functioning on the target node, oc operations will be impacted. In such situations, it is possible to copy a tcpdump capture file from a node by running scp core@<node>.<cluster_name>.<base_domain>:<file_path> <local_path> . Navigate to an existing support case within the Customer Support page of the Red Hat Customer Portal. Select Attach files and follow the prompts to upload the file. 5.11. Providing diagnostic data to Red Hat Support When investigating OpenShift Container Platform issues, Red Hat Support might ask you to upload diagnostic data to a support case. Files can be uploaded to a support case through the Red Hat Customer Portal, or from an OpenShift Container Platform cluster directly by using the redhat-support-tool command. Prerequisites You have access to the cluster as a user with the cluster-admin role. You have SSH access to your hosts. You have installed the OpenShift CLI ( oc ). You have a Red Hat standard or premium Subscription. You have a Red Hat Customer Portal account. You have an existing Red Hat Support case ID. Procedure Upload diagnostic data to an existing Red Hat support case through the Red Hat Customer Portal. Concatenate a diagnostic file contained on an OpenShift Container Platform node by using the oc debug node/<node_name> command and redirect the output to a file. The following example copies /host/var/tmp/my-diagnostic-data.tar.gz from a debug container to /var/tmp/my-diagnostic-data.tar.gz : USD oc debug node/my-cluster-node -- bash -c 'cat /host/var/tmp/my-diagnostic-data.tar.gz' > /var/tmp/my-diagnostic-data.tar.gz 1 1 The debug container mounts the host's root directory at /host . Reference the absolute path from the debug container's root directory, including /host , when specifying target files for concatenation. Note OpenShift Container Platform 4.12 cluster nodes running Red Hat Enterprise Linux CoreOS (RHCOS) are immutable and rely on Operators to apply cluster changes. Transferring files from a cluster node by using scp is not recommended. However, if the OpenShift Container Platform API is not available, or the kubelet is not properly functioning on the target node, oc operations will be impacted. In such situations, it is possible to copy diagnostic files from a node by running scp core@<node>.<cluster_name>.<base_domain>:<file_path> <local_path> . Navigate to an existing support case within the Customer Support page of the Red Hat Customer Portal. 
Select Attach files and follow the prompts to upload the file. Upload diagnostic data to an existing Red Hat support case directly from an OpenShift Container Platform cluster. Obtain a list of cluster nodes: USD oc get nodes Enter into a debug session on the target node. This step instantiates a debug pod called <node_name>-debug : USD oc debug node/my-cluster-node Set /host as the root directory within the debug shell. The debug pod mounts the host's root file system in /host within the pod. By changing the root directory to /host , you can run binaries contained in the host's executable paths: # chroot /host Note OpenShift Container Platform 4.12 cluster nodes running Red Hat Enterprise Linux CoreOS (RHCOS) are immutable and rely on Operators to apply cluster changes. Accessing cluster nodes by using SSH is not recommended. However, if the OpenShift Container Platform API is not available, or the kubelet is not properly functioning on the target node, oc operations will be impacted. In such situations, it is possible to access nodes using ssh core@<node>.<cluster_name>.<base_domain> instead. Start a toolbox container, which includes the required binaries to run redhat-support-tool : # toolbox Note If an existing toolbox pod is already running, the toolbox command outputs 'toolbox-' already exists. Trying to start... . Remove the running toolbox container with podman rm toolbox- and spawn a new toolbox container, to avoid issues. Run redhat-support-tool to attach a file from the debug pod directly to an existing Red Hat Support case. This example uses support case ID '01234567' and example file path /host/var/tmp/my-diagnostic-data.tar.gz : # redhat-support-tool addattachment -c 01234567 /host/var/tmp/my-diagnostic-data.tar.gz 1 1 The toolbox container mounts the host's root directory at /host . Reference the absolute path from the toolbox container's root directory, including /host/ , when specifying files to upload through the redhat-support-tool command. 5.12. About toolbox toolbox is a tool that starts a container on a Red Hat Enterprise Linux CoreOS (RHCOS) system. The tool is primarily used to start a container that includes the required binaries and plugins that are needed to run commands such as sosreport and redhat-support-tool . The primary purpose for a toolbox container is to gather diagnostic information and to provide it to Red Hat Support. However, if additional diagnostic tools are required, you can add RPM packages or run an image that is an alternative to the standard support tools image. Installing packages to a toolbox container By default, running the toolbox command starts a container with the registry.redhat.io/rhel8/support-tools:latest image. This image contains the most frequently used support tools. If you need to collect node-specific data that requires a support tool that is not part of the image, you can install additional packages. Prerequisites You have accessed a node with the oc debug node/<node_name> command. Procedure Set /host as the root directory within the debug shell. The debug pod mounts the host's root file system in /host within the pod. By changing the root directory to /host , you can run binaries contained in the host's executable paths: # chroot /host Start the toolbox container: # toolbox Install the additional package, such as wget : # dnf install -y <package_name> Starting an alternative image with toolbox By default, running the toolbox command starts a container with the registry.redhat.io/rhel8/support-tools:latest image. 
You can start an alternative image by creating a .toolboxrc file and specifying the image to run. Prerequisites You have accessed a node with the oc debug node/<node_name> command. Procedure Set /host as the root directory within the debug shell. The debug pod mounts the host's root file system in /host within the pod. By changing the root directory to /host , you can run binaries contained in the host's executable paths: # chroot /host Create a .toolboxrc file in the home directory for the root user ID: # vi ~/.toolboxrc REGISTRY=quay.io 1 IMAGE=fedora/fedora:33-x86_64 2 TOOLBOX_NAME=toolbox-fedora-33 3 1 Optional: Specify an alternative container registry. 2 Specify an alternative image to start. 3 Optional: Specify an alternative name for the toolbox container. Start a toolbox container with the alternative image: # toolbox Note If an existing toolbox pod is already running, the toolbox command outputs 'toolbox-' already exists. Trying to start... . Remove the running toolbox container with podman rm toolbox- and spawn a new toolbox container, to avoid issues with sosreport plugins.
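The individual steps in this chapter fit together into a single workflow: open a debug pod on the target node, enter the host, start a toolbox container, collect the data, and attach it to the support case. The following is a minimal sketch of that flow for a sosreport collection, reusing the example node name my-cluster-node and support case ID 01234567 from the sections above; <sosreport_archive> is a placeholder for the archive name that sos report prints when it finishes.

List the cluster nodes and start a debug pod on the target node:

$ oc get nodes
$ oc debug node/my-cluster-node

Inside the debug pod, enter the host and start the toolbox container:

# chroot /host
# toolbox

Collect the sosreport archive and attach it to the support case:

# sos report -k crio.all=on -k crio.logs=on -k podman.all=on -k podman.logs=on
# redhat-support-tool addattachment -c 01234567 /host/var/tmp/<sosreport_archive>.tar.xz

If the archive cannot be uploaded directly from the cluster, exit the debug session, copy the file to your workstation with oc debug node/<node_name> -- bash -c 'cat /host/var/tmp/<sosreport_archive>.tar.xz' > /tmp/<sosreport_archive>.tar.xz , and attach it through the Customer Support page of the Red Hat Customer Portal instead.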
[ "oc adm must-gather --image=registry.redhat.io/container-native-virtualization/cnv-must-gather-rhel8:v4.12.0", "oc adm must-gather -- /usr/bin/gather_audit_logs", "NAMESPACE NAME READY STATUS RESTARTS AGE openshift-must-gather-5drcj must-gather-bklx4 2/2 Running 0 72s openshift-must-gather-5drcj must-gather-s8sdh 2/2 Running 0 72s", "oc import-image is/must-gather -n openshift", "oc adm must-gather", "tar cvaf must-gather.tar.gz must-gather.local.5421342344627712289/ 1", "oc adm must-gather --image-stream=openshift/must-gather \\ 1 --image=registry.redhat.io/container-native-virtualization/cnv-must-gather-rhel8:v4.12.16 2", "oc adm must-gather --image=USD(oc -n openshift-logging get deployment.apps/cluster-logging-operator -o jsonpath='{.spec.template.spec.containers[?(@.name == \"cluster-logging-operator\")].image}')", "β”œβ”€β”€ cluster-logging β”‚ β”œβ”€β”€ clo β”‚ β”‚ β”œβ”€β”€ cluster-logging-operator-74dd5994f-6ttgt β”‚ β”‚ β”œβ”€β”€ clusterlogforwarder_cr β”‚ β”‚ β”œβ”€β”€ cr β”‚ β”‚ β”œβ”€β”€ csv β”‚ β”‚ β”œβ”€β”€ deployment β”‚ β”‚ └── logforwarding_cr β”‚ β”œβ”€β”€ collector β”‚ β”‚ β”œβ”€β”€ fluentd-2tr64 β”‚ β”œβ”€β”€ eo β”‚ β”‚ β”œβ”€β”€ csv β”‚ β”‚ β”œβ”€β”€ deployment β”‚ β”‚ └── elasticsearch-operator-7dc7d97b9d-jb4r4 β”‚ β”œβ”€β”€ es β”‚ β”‚ β”œβ”€β”€ cluster-elasticsearch β”‚ β”‚ β”‚ β”œβ”€β”€ aliases β”‚ β”‚ β”‚ β”œβ”€β”€ health β”‚ β”‚ β”‚ β”œβ”€β”€ indices β”‚ β”‚ β”‚ β”œβ”€β”€ latest_documents.json β”‚ β”‚ β”‚ β”œβ”€β”€ nodes β”‚ β”‚ β”‚ β”œβ”€β”€ nodes_stats.json β”‚ β”‚ β”‚ └── thread_pool β”‚ β”‚ β”œβ”€β”€ cr β”‚ β”‚ β”œβ”€β”€ elasticsearch-cdm-lp8l38m0-1-794d6dd989-4jxms β”‚ β”‚ └── logs β”‚ β”‚ β”œβ”€β”€ elasticsearch-cdm-lp8l38m0-1-794d6dd989-4jxms β”‚ β”œβ”€β”€ install β”‚ β”‚ β”œβ”€β”€ co_logs β”‚ β”‚ β”œβ”€β”€ install_plan β”‚ β”‚ β”œβ”€β”€ olmo_logs β”‚ β”‚ └── subscription β”‚ └── kibana β”‚ β”œβ”€β”€ cr β”‚ β”œβ”€β”€ kibana-9d69668d4-2rkvz β”œβ”€β”€ cluster-scoped-resources β”‚ └── core β”‚ β”œβ”€β”€ nodes β”‚ β”‚ β”œβ”€β”€ ip-10-0-146-180.eu-west-1.compute.internal.yaml β”‚ └── persistentvolumes β”‚ β”œβ”€β”€ pvc-0a8d65d9-54aa-4c44-9ecc-33d9381e41c1.yaml β”œβ”€β”€ event-filter.html β”œβ”€β”€ gather-debug.log └── namespaces β”œβ”€β”€ openshift-logging β”‚ β”œβ”€β”€ apps β”‚ β”‚ β”œβ”€β”€ daemonsets.yaml β”‚ β”‚ β”œβ”€β”€ deployments.yaml β”‚ β”‚ β”œβ”€β”€ replicasets.yaml β”‚ β”‚ └── statefulsets.yaml β”‚ β”œβ”€β”€ batch β”‚ β”‚ β”œβ”€β”€ cronjobs.yaml β”‚ β”‚ └── jobs.yaml β”‚ β”œβ”€β”€ core β”‚ β”‚ β”œβ”€β”€ configmaps.yaml β”‚ β”‚ β”œβ”€β”€ endpoints.yaml β”‚ β”‚ β”œβ”€β”€ events β”‚ β”‚ β”‚ β”œβ”€β”€ elasticsearch-im-app-1596020400-gm6nl.1626341a296c16a1.yaml β”‚ β”‚ β”‚ β”œβ”€β”€ elasticsearch-im-audit-1596020400-9l9n4.1626341a2af81bbd.yaml β”‚ β”‚ β”‚ β”œβ”€β”€ elasticsearch-im-infra-1596020400-v98tk.1626341a2d821069.yaml β”‚ β”‚ β”‚ β”œβ”€β”€ elasticsearch-im-app-1596020400-cc5vc.1626341a3019b238.yaml β”‚ β”‚ β”‚ β”œβ”€β”€ elasticsearch-im-audit-1596020400-s8d5s.1626341a31f7b315.yaml β”‚ β”‚ β”‚ β”œβ”€β”€ elasticsearch-im-infra-1596020400-7mgv8.1626341a35ea59ed.yaml β”‚ β”‚ β”œβ”€β”€ events.yaml β”‚ β”‚ β”œβ”€β”€ persistentvolumeclaims.yaml β”‚ β”‚ β”œβ”€β”€ pods.yaml β”‚ β”‚ β”œβ”€β”€ replicationcontrollers.yaml β”‚ β”‚ β”œβ”€β”€ secrets.yaml β”‚ β”‚ └── services.yaml β”‚ β”œβ”€β”€ openshift-logging.yaml β”‚ β”œβ”€β”€ pods β”‚ β”‚ β”œβ”€β”€ cluster-logging-operator-74dd5994f-6ttgt β”‚ β”‚ β”‚ β”œβ”€β”€ cluster-logging-operator β”‚ β”‚ β”‚ β”‚ └── cluster-logging-operator β”‚ β”‚ β”‚ β”‚ └── logs β”‚ β”‚ β”‚ β”‚ β”œβ”€β”€ current.log β”‚ β”‚ β”‚ 
β”‚ β”œβ”€β”€ previous.insecure.log β”‚ β”‚ β”‚ β”‚ └── previous.log β”‚ β”‚ β”‚ └── cluster-logging-operator-74dd5994f-6ttgt.yaml β”‚ β”‚ β”œβ”€β”€ cluster-logging-operator-registry-6df49d7d4-mxxff β”‚ β”‚ β”‚ β”œβ”€β”€ cluster-logging-operator-registry β”‚ β”‚ β”‚ β”‚ └── cluster-logging-operator-registry β”‚ β”‚ β”‚ β”‚ └── logs β”‚ β”‚ β”‚ β”‚ β”œβ”€β”€ current.log β”‚ β”‚ β”‚ β”‚ β”œβ”€β”€ previous.insecure.log β”‚ β”‚ β”‚ β”‚ └── previous.log β”‚ β”‚ β”‚ β”œβ”€β”€ cluster-logging-operator-registry-6df49d7d4-mxxff.yaml β”‚ β”‚ β”‚ └── mutate-csv-and-generate-sqlite-db β”‚ β”‚ β”‚ └── mutate-csv-and-generate-sqlite-db β”‚ β”‚ β”‚ └── logs β”‚ β”‚ β”‚ β”œβ”€β”€ current.log β”‚ β”‚ β”‚ β”œβ”€β”€ previous.insecure.log β”‚ β”‚ β”‚ └── previous.log β”‚ β”‚ β”œβ”€β”€ elasticsearch-cdm-lp8l38m0-1-794d6dd989-4jxms β”‚ β”‚ β”œβ”€β”€ elasticsearch-im-app-1596030300-bpgcx β”‚ β”‚ β”‚ β”œβ”€β”€ elasticsearch-im-app-1596030300-bpgcx.yaml β”‚ β”‚ β”‚ └── indexmanagement β”‚ β”‚ β”‚ └── indexmanagement β”‚ β”‚ β”‚ └── logs β”‚ β”‚ β”‚ β”œβ”€β”€ current.log β”‚ β”‚ β”‚ β”œβ”€β”€ previous.insecure.log β”‚ β”‚ β”‚ └── previous.log β”‚ β”‚ β”œβ”€β”€ fluentd-2tr64 β”‚ β”‚ β”‚ β”œβ”€β”€ fluentd β”‚ β”‚ β”‚ β”‚ └── fluentd β”‚ β”‚ β”‚ β”‚ └── logs β”‚ β”‚ β”‚ β”‚ β”œβ”€β”€ current.log β”‚ β”‚ β”‚ β”‚ β”œβ”€β”€ previous.insecure.log β”‚ β”‚ β”‚ β”‚ └── previous.log β”‚ β”‚ β”‚ β”œβ”€β”€ fluentd-2tr64.yaml β”‚ β”‚ β”‚ └── fluentd-init β”‚ β”‚ β”‚ └── fluentd-init β”‚ β”‚ β”‚ └── logs β”‚ β”‚ β”‚ β”œβ”€β”€ current.log β”‚ β”‚ β”‚ β”œβ”€β”€ previous.insecure.log β”‚ β”‚ β”‚ └── previous.log β”‚ β”‚ β”œβ”€β”€ kibana-9d69668d4-2rkvz β”‚ β”‚ β”‚ β”œβ”€β”€ kibana β”‚ β”‚ β”‚ β”‚ └── kibana β”‚ β”‚ β”‚ β”‚ └── logs β”‚ β”‚ β”‚ β”‚ β”œβ”€β”€ current.log β”‚ β”‚ β”‚ β”‚ β”œβ”€β”€ previous.insecure.log β”‚ β”‚ β”‚ β”‚ └── previous.log β”‚ β”‚ β”‚ β”œβ”€β”€ kibana-9d69668d4-2rkvz.yaml β”‚ β”‚ β”‚ └── kibana-proxy β”‚ β”‚ β”‚ └── kibana-proxy β”‚ β”‚ β”‚ └── logs β”‚ β”‚ β”‚ β”œβ”€β”€ current.log β”‚ β”‚ β”‚ β”œβ”€β”€ previous.insecure.log β”‚ β”‚ β”‚ └── previous.log β”‚ └── route.openshift.io β”‚ └── routes.yaml └── openshift-operators-redhat β”œβ”€β”€", "oc adm must-gather --image-stream=openshift/must-gather \\ 1 --image=quay.io/kubevirt/must-gather 2", "tar cvaf must-gather.tar.gz must-gather.local.5421342344627712289/ 1", "oc adm must-gather -- /usr/bin/gather_audit_logs", "tar cvaf must-gather.tar.gz must-gather.local.472290403699006248 1", "oc adm must-gather -- gather_network_logs", "tar cvaf must-gather.tar.gz must-gather.local.472290403699006248 1", "oc get clusterversion -o jsonpath='{.items[].spec.clusterID}{\"\\n\"}'", "oc get nodes", "oc debug node/my-cluster-node", "oc new-project dummy", "oc patch namespace dummy --type=merge -p '{\"metadata\": {\"annotations\": { \"scheduler.alpha.kubernetes.io/defaultTolerations\": \"[{\\\"operator\\\": \\\"Exists\\\"}]\"}}}'", "oc debug node/my-cluster-node", "chroot /host", "toolbox", "sos report -k crio.all=on -k crio.logs=on -k podman.all=on -k podman.logs=on 1", "Your sosreport has been generated and saved in: /host/var/tmp/sosreport-my-cluster-node-01234567-2020-05-28-eyjknxt.tar.xz 1 The checksum is: 382ffc167510fd71b4f12a4f40b97a4e", "redhat-support-tool addattachment -c 01234567 /host/var/tmp/my-sosreport.tar.xz 1", "oc debug node/my-cluster-node -- bash -c 'cat /host/var/tmp/sosreport-my-cluster-node-01234567-2020-05-28-eyjknxt.tar.xz' > /tmp/sosreport-my-cluster-node-01234567-2020-05-28-eyjknxt.tar.xz 1", "ssh core@<bootstrap_fqdn> journalctl -b -f -u 
bootkube.service", "ssh core@<bootstrap_fqdn> 'for pod in USD(sudo podman ps -a -q); do sudo podman logs USDpod; done'", "oc adm node-logs --role=master -u kubelet 1", "oc adm node-logs --role=master --path=openshift-apiserver", "oc adm node-logs --role=master --path=openshift-apiserver/audit.log", "ssh core@<master-node>.<cluster_name>.<base_domain> sudo tail -f /var/log/openshift-apiserver/audit.log", "oc adm must-gather --dest-dir /tmp/captures \\ <.> --source-dir '/tmp/tcpdump/' \\ <.> --image registry.redhat.io/openshift4/network-tools-rhel8:latest \\ <.> --node-selector 'node-role.kubernetes.io/worker' \\ <.> --host-network=true \\ <.> --timeout 30s \\ <.> -- tcpdump -i any \\ <.> -w /tmp/tcpdump/%Y-%m-%dT%H:%M:%S.pcap -W 1 -G 300", "tmp/captures β”œβ”€β”€ event-filter.html β”œβ”€β”€ ip-10-0-192-217-ec2-internal 1 β”‚ └── registry-redhat-io-openshift4-network-tools-rhel8-sha256-bca β”‚ └── 2022-01-13T19:31:31.pcap β”œβ”€β”€ ip-10-0-201-178-ec2-internal 2 β”‚ └── registry-redhat-io-openshift4-network-tools-rhel8-sha256-bca β”‚ └── 2022-01-13T19:31:30.pcap β”œβ”€β”€ ip- └── timestamp", "oc get nodes", "oc debug node/my-cluster-node", "chroot /host", "ip ad", "toolbox", "tcpdump -nn -s 0 -i ens5 -w /host/var/tmp/my-cluster-node_USD(date +%d_%m_%Y-%H_%M_%S-%Z).pcap 1", "chroot /host crictl ps", "chroot /host crictl inspect --output yaml a7fe32346b120 | grep 'pid' | awk '{print USD2}'", "nsenter -n -t 49628 -- tcpdump -nn -i ens5 -w /host/var/tmp/my-cluster-node-my-container_USD(date +%d_%m_%Y-%H_%M_%S-%Z).pcap 1", "redhat-support-tool addattachment -c 01234567 /host/var/tmp/my-tcpdump-capture-file.pcap 1", "oc debug node/my-cluster-node -- bash -c 'cat /host/var/tmp/my-tcpdump-capture-file.pcap' > /tmp/my-tcpdump-capture-file.pcap 1", "oc debug node/my-cluster-node -- bash -c 'cat /host/var/tmp/my-diagnostic-data.tar.gz' > /var/tmp/my-diagnostic-data.tar.gz 1", "oc get nodes", "oc debug node/my-cluster-node", "chroot /host", "toolbox", "redhat-support-tool addattachment -c 01234567 /host/var/tmp/my-diagnostic-data.tar.gz 1", "chroot /host", "toolbox", "dnf install -y <package_name>", "chroot /host", "vi ~/.toolboxrc", "REGISTRY=quay.io 1 IMAGE=fedora/fedora:33-x86_64 2 TOOLBOX_NAME=toolbox-fedora-33 3", "toolbox" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.12/html/support/gathering-cluster-data
Chapter 3. Enabling Linux control group version 1 (cgroup v1)
Chapter 3. Enabling Linux control group version 1 (cgroup v1)

As of OpenShift Container Platform 4.14, OpenShift Container Platform uses Linux control group version 2 (cgroup v2) in your cluster. If you are using cgroup v1 on OpenShift Container Platform 4.13 or earlier, migrating to OpenShift Container Platform 4.15 does not automatically update your cgroup configuration to version 2. A fresh installation of OpenShift Container Platform 4.14 or later uses cgroup v2 by default. However, you can enable Linux control group version 1 (cgroup v1) upon installation. Enabling cgroup v1 in OpenShift Container Platform disables all cgroup v2 controllers and hierarchies in your cluster.

cgroup v2 is the current version of the Linux cgroup API. cgroup v2 offers several improvements over cgroup v1, including a unified hierarchy, safer sub-tree delegation, new features such as Pressure Stall Information, and enhanced resource management and isolation. However, cgroup v2 has different CPU, memory, and I/O management characteristics than cgroup v1. Therefore, some workloads might experience slight differences in memory or CPU usage on clusters that run cgroup v2.

You can switch between cgroup v1 and cgroup v2, as needed, by editing the node.config object. For more information, see "Configuring the Linux cgroup on your nodes" in the "Additional resources" section.

3.1. Enabling Linux cgroup v1 during installation

You can enable Linux control group version 1 (cgroup v1) when you install a cluster by creating installation manifests.

Procedure

Create or edit the node.config object to specify the v1 cgroup:

apiVersion: config.openshift.io/v1
kind: Node
metadata:
  name: cluster
spec:
  cgroupMode: "v1"

Proceed with the installation as usual.

Additional resources

OpenShift Container Platform installation overview
Configuring the Linux cgroup on your nodes
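For orientation, here is one way the procedure above can look in practice when you generate installation manifests with openshift-install. This is a minimal sketch, not part of the documented procedure: the installation directory and the manifest file name cluster-node-cgroup-v1.yaml are illustrative placeholders.

$ openshift-install create manifests --dir <installation_directory>
$ cat <<EOF > <installation_directory>/manifests/cluster-node-cgroup-v1.yaml
apiVersion: config.openshift.io/v1
kind: Node
metadata:
  name: cluster
spec:
  cgroupMode: "v1"
EOF
$ openshift-install create cluster --dir <installation_directory>

The node.config object itself matches the example in the procedure; only the delivery mechanism, writing it as a manifest file before running openshift-install create cluster, is assumed here.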
[ "apiVersion: config.openshift.io/v1 kind: Node metadata: name: cluster spec: cgroupMode: \"v2\"" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.15/html/installation_configuration/enabling-cgroup-v1
Chapter 7. ImageStreamMapping [image.openshift.io/v1]
Chapter 7. ImageStreamMapping [image.openshift.io/v1] Description ImageStreamMapping represents a mapping from a single image stream tag to a container image as well as the reference to the container image stream the image came from. This resource is used by privileged integrators to create an image resource and to associate it with an image stream in the status tags field. Creating an ImageStreamMapping will allow any user who can view the image stream to tag or pull that image, so only create mappings where the user has proven they have access to the image contents directly. The only operation supported for this resource is create and the metadata name and namespace should be set to the image stream containing the tag that should be updated. Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). Type object Required image tag 7.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources image object Image is an immutable representation of a container image and metadata at a point in time. Images are named by taking a hash of their contents (metadata and content) and any change in format, content, or metadata results in a new name. The images resource is primarily for use by cluster administrators and integrations like the cluster image registry - end users instead access images via the imagestreamtags or imagestreamimages resources. While image metadata is stored in the API, any integration that implements the container image registry API must provide its own storage for the raw manifest data, image config, and layer contents. Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta metadata is the standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata tag string Tag is a string value this image can be located with inside the stream. 7.1.1. .image Description Image is an immutable representation of a container image and metadata at a point in time. Images are named by taking a hash of their contents (metadata and content) and any change in format, content, or metadata results in a new name. The images resource is primarily for use by cluster administrators and integrations like the cluster image registry - end users instead access images via the imagestreamtags or imagestreamimages resources. While image metadata is stored in the API, any integration that implements the container image registry API must provide its own storage for the raw manifest data, image config, and layer contents. Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). Type object Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. 
Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources dockerImageConfig string DockerImageConfig is a JSON blob that the runtime uses to set up the container. This is a part of manifest schema v2. Will not be set when the image represents a manifest list. dockerImageLayers array DockerImageLayers represents the layers in the image. May not be set if the image does not define that data or if the image represents a manifest list. dockerImageLayers[] object ImageLayer represents a single layer of the image. Some images may have multiple layers. Some may have none. dockerImageManifest string DockerImageManifest is the raw JSON of the manifest dockerImageManifestMediaType string DockerImageManifestMediaType specifies the mediaType of manifest. This is a part of manifest schema v2. dockerImageManifests array DockerImageManifests holds information about sub-manifests when the image represents a manifest list. When this field is present, no DockerImageLayers should be specified. dockerImageManifests[] object ImageManifest represents sub-manifests of a manifest list. The Digest field points to a regular Image object. dockerImageMetadata RawExtension DockerImageMetadata contains metadata about this image dockerImageMetadataVersion string DockerImageMetadataVersion conveys the version of the object, which if empty defaults to "1.0" dockerImageReference string DockerImageReference is the string that can be used to pull this image. dockerImageSignatures array (string) DockerImageSignatures provides the signatures as opaque blobs. This is a part of manifest schema v1. kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta metadata is the standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata signatures array Signatures holds all signatures of the image. signatures[] object ImageSignature holds a signature of an image. It allows to verify image identity and possibly other claims as long as the signature is trusted. Based on this information it is possible to restrict runnable images to those matching cluster-wide policy. Mandatory fields should be parsed by clients doing image verification. The others are parsed from signature's content by the server. They serve just an informative purpose. Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). 7.1.2. .image.dockerImageLayers Description DockerImageLayers represents the layers in the image. May not be set if the image does not define that data or if the image represents a manifest list. Type array 7.1.3. .image.dockerImageLayers[] Description ImageLayer represents a single layer of the image. Some images may have multiple layers. Some may have none. Type object Required name size mediaType Property Type Description mediaType string MediaType of the referenced object. name string Name of the layer as defined by the underlying store. size integer Size of the layer in bytes as defined by the underlying store. 7.1.4. 
.image.dockerImageManifests Description DockerImageManifests holds information about sub-manifests when the image represents a manifest list. When this field is present, no DockerImageLayers should be specified. Type array 7.1.5. .image.dockerImageManifests[] Description ImageManifest represents sub-manifests of a manifest list. The Digest field points to a regular Image object. Type object Required digest mediaType manifestSize architecture os Property Type Description architecture string Architecture specifies the supported CPU architecture, for example amd64 or ppc64le . digest string Digest is the unique identifier for the manifest. It refers to an Image object. manifestSize integer ManifestSize represents the size of the raw object contents, in bytes. mediaType string MediaType defines the type of the manifest, possible values are application/vnd.oci.image.manifest.v1+json, application/vnd.docker.distribution.manifest.v2+json or application/vnd.docker.distribution.manifest.v1+json. os string OS specifies the operating system, for example linux . variant string Variant is an optional field representing a variant of the CPU, for example v6 to specify a particular CPU variant of the ARM CPU. 7.1.6. .image.signatures Description Signatures holds all signatures of the image. Type array 7.1.7. .image.signatures[] Description ImageSignature holds a signature of an image. It allows to verify image identity and possibly other claims as long as the signature is trusted. Based on this information it is possible to restrict runnable images to those matching cluster-wide policy. Mandatory fields should be parsed by clients doing image verification. The others are parsed from signature's content by the server. They serve just an informative purpose. Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). Type object Required type content Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources conditions array Conditions represent the latest available observations of a signature's current state. conditions[] object SignatureCondition describes an image signature condition of particular kind at particular probe time. content string Required: An opaque binary string which is an image's signature. created Time If specified, it is the time of signature's creation. imageIdentity string A human readable string representing image's identity. It could be a product name and version, or an image pull spec (e.g. "registry.access.redhat.com/rhel7/rhel:7.2"). issuedBy object SignatureIssuer holds information about an issuer of signing certificate or key. issuedTo object SignatureSubject holds information about a person or entity who created the signature. kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta metadata is the standard object's metadata.
More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata signedClaims object (string) Contains claims from the signature. type string Required: Describes a type of stored blob. 7.1.8. .image.signatures[].conditions Description Conditions represent the latest available observations of a signature's current state. Type array 7.1.9. .image.signatures[].conditions[] Description SignatureCondition describes an image signature condition of particular kind at particular probe time. Type object Required type status Property Type Description lastProbeTime Time Last time the condition was checked. lastTransitionTime Time Last time the condition transit from one status to another. message string Human readable message indicating details about last transition. reason string (brief) reason for the condition's last transition. status string Status of the condition, one of True, False, Unknown. type string Type of signature condition, Complete or Failed. 7.1.10. .image.signatures[].issuedBy Description SignatureIssuer holds information about an issuer of signing certificate or key. Type object Property Type Description commonName string Common name (e.g. openshift-signing-service). organization string Organization name. 7.1.11. .image.signatures[].issuedTo Description SignatureSubject holds information about a person or entity who created the signature. Type object Required publicKeyID Property Type Description commonName string Common name (e.g. openshift-signing-service). organization string Organization name. publicKeyID string If present, it is a human readable key id of public key belonging to the subject used to verify image signature. It should contain at least 64 lowest bits of public key's fingerprint (e.g. 0x685ebe62bf278440). 7.2. API endpoints The following API endpoints are available: /apis/image.openshift.io/v1/namespaces/{namespace}/imagestreammappings POST : create an ImageStreamMapping 7.2.1. /apis/image.openshift.io/v1/namespaces/{namespace}/imagestreammappings Table 7.1. Global query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. HTTP method POST Description create an ImageStreamMapping Table 7.2. Body parameters Parameter Type Description body ImageStreamMapping schema Table 7.3. 
HTTP responses HTTP code Response body 200 - OK ImageStreamMapping schema 201 - Created ImageStreamMapping schema 202 - Accepted ImageStreamMapping schema 401 - Unauthorized Empty
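The endpoint reference above does not reproduce a sample payload; the following is a minimal, hypothetical sketch of an ImageStreamMapping create request, assuming an image stream named sample-app already exists in the myproject namespace and using a placeholder digest and pull spec:

apiVersion: image.openshift.io/v1
kind: ImageStreamMapping
metadata:
  name: sample-app
  namespace: myproject
image:
  metadata:
    name: sha256:0000000000000000000000000000000000000000000000000000000000000000
  dockerImageReference: registry.example.com/myproject/sample-app@sha256:0000000000000000000000000000000000000000000000000000000000000000
tag: latest

Saved as mapping.yaml, such a manifest could be submitted with oc create -f mapping.yaml, which issues the POST described above and makes the image available to pull through the sample-app:latest tag.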
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.16/html/image_apis/imagestreammapping-image-openshift-io-v1
D.3. Controlling Activation with Tags
D.3. Controlling Activation with Tags You can specify in the configuration file that only certain logical volumes should be activated on that host. For example, the following entry acts as a filter for activation requests (such as vgchange -ay ) and only activates vg1/lvol0 and any logical volumes or volume groups with the database tag in the metadata on that host. There is a special match "@*" that causes a match only if any metadata tag matches any host tag on that machine. As another example, consider a situation where every machine in the cluster has the following entry in the configuration file: If you want to activate vg1/lvol2 only on host db2 , do the following: Run lvchange --addtag @db2 vg1/lvol2 from any host in the cluster. Run lvchange -ay vg1/lvol2 . This solution involves storing host names inside the volume group metadata.
[ "activation { volume_list = [\"vg1/lvol0\", \"@database\" ] }", "tags { hosttags = 1 }" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/logical_volume_manager_administration/tag_activation
1.4. Directory Design Overview
1.4. Directory Design Overview Planning the directory service before actual deployment is the most important task to ensure the success of the directory. The design process involves gathering data about the directory requirements, such as environment and data sources, users, and the applications that use the directory. This information is integral to designing an effective directory service because it helps identify the arrangement and functionality required. The flexibility of Directory Server means the directory design can be reworked to meet unexpected or changing requirements, even after the Directory Server is deployed. 1.4.1. Design Process Outline Chapter 2, Planning the Directory Data The directory contains data such as user names, telephone numbers, and group details. This chapter describes how to analyze the various sources of data in the organization and understand their relationship with one another. It describes the types of data that can be stored in the directory and other tasks to perform to design the contents of the Directory Server. Chapter 3, Designing the Directory Schema The directory is designed to support one or more directory-enabled applications. These applications have requirements of the data stored in the directory, such as the file format. The directory schema determines the characteristics of the data stored in the directory. The standard schema shipped with Directory Server is introduced in this chapter, as well as a description of how to customize the schema and tips for maintaining a consistent schema. Chapter 4, Designing the Directory Tree Along with determining what information is contained in the Directory Server, it is important to determine how that information is going to be organized and referenced. This chapter introduces the directory tree and gives an overview of the design of the data hierarchy. Sample directory tree designs are also provided. Chapter 6, Designing the Directory Topology Topology design covers how the directory tree is divided among multiple physical Directory Servers and how these servers communicate with one another. The general principles behind design, using multiple databases, the mechanisms available for linking the distributed data together, and how the directory itself keeps track of distributed data are all described in this chapter. Chapter 7, Designing the Replication Process When replication is used, multiple Directory Servers maintain the same directory data to increase performance and provide fault tolerance. This chapter describes how replication works, what kinds of data can be replicated, common replication scenarios, and tips for building a high-availability directory service. Chapter 8, Designing Synchronization The information stored in the Red Hat Directory Server can be synchronized with information stored in Microsoft Active Directory databases for better integration with a mixed-platform infrastructure. This chapter describes how synchronization works, what kinds of data can be synchronized, and considerations for the type of information and locations in the directory tree which are best for synchronization. Chapter 9, Designing a Secure Directory Finally, plan how to protect the data in the directory and design the other aspects of the service to meet the security requirements of the users and applications. This chapter covers common security threats, an overview of security methods, the steps involved in analyzing security needs, and tips for designing access controls and protecting the integrity of the directory data.
1.4.2. Deploying the Directory The first step to deploying the Directory Server is installing a test server instance to make sure the service can handle the user load. If the service is not adequate in the initial configuration, adjust the design and test it again. Adjust the design until it is a robust service that you can confidently introduce to the enterprise. For a comprehensive overview of creating and implementing a directory pilot, see Understanding and Deploying LDAP Directory Services (T. Howes, M. Smith, G. Good, Macmillan Technical Publishing, 1999). After creating and tuning a successful test Directory Server instance, develop a plan to move the directory service to production which covers the following considerations: An estimate of the required resources A schedule of what needs to be accomplished and when A set of criteria for measuring the success of the deployment See the Red Hat Directory Server Installation Guide for information on installing the directory service and the Red Hat Directory Server Administration Guide for information on administering and maintaining the directory.
null
https://docs.redhat.com/en/documentation/red_hat_directory_server/11/html/deployment_guide/introduction_to_directory_services-directory_design_overview
Chapter 2. Ceph Object Gateway administrative API
Chapter 2. Ceph Object Gateway administrative API As a developer, you can administer the Ceph Object Gateway by interacting with the RESTful application programming interface (API). The Ceph Object Gateway makes available the features of the radosgw-admin command in a RESTful API. You can manage users, data, quotas, and usage which you can integrate with other management platforms. Note Red Hat recommends using the command-line interface when configuring the Ceph Object Gateway. The administrative API provides the following functionality: Authentication Requests User Account Management Administrative User Getting User Information Creating Modifying Removing Creating Subuser Modifying Subuser Removing Subuser User Capabilities Management Adding Removing Key Management Creating Removing Bucket Management Getting Bucket Information Checking Index Removing Linking Unlinking Policy Object Management Removing Policy Quota Management Getting User Setting User Getting Bucket Setting Bucket Getting Usage Information Removing Usage Information Standard Error Responses Prerequisites A running Red Hat Ceph Storage cluster. A RESTful client. 2.1. Administration operations An administrative Application Programming Interface (API) request will be done on a URI that starts with the configurable 'admin' resource entry point. Authorization for the administrative API duplicates the S3 authorization mechanism. Some operations require that the user holds special administrative capabilities. The response entity type, either XML or JSON, might be specified as the 'format' option in the request and defaults to JSON if not specified. Example 2.2. Administration authentication requests Amazon's S3 service uses the access key and a hash of the request header and the secret key to authenticate the request. It has the benefit of providing an authenticated request, especially large uploads, without SSL overhead. Most use cases for the S3 API involve using open-source S3 clients such as the AmazonS3Client in the Amazon SDK for Java or Python Boto. These libraries do not support the Ceph Object Gateway Admin API. You can subclass and extend these libraries to support the Ceph Admin API. Alternatively, you can create a unique Gateway client. Creating an execute() method The CephAdminAPI example class in this section illustrates how to create an execute() method that can take request parameters, authenticate the request, call the Ceph Admin API and receive a response. The CephAdminAPI class example is not supported or intended for commercial use. It is for illustrative purposes only. Calling the Ceph Object Gateway The client code contains five calls to the Ceph Object Gateway to demonstrate CRUD operations: Create a User Get a User Modify a User Create a Subuser Delete a User To use this example, get the httpcomponents-client-4.5.3 Apache HTTP components. You can download it for example here: http://hc.apache.org/downloads.cgi . Then unzip the tar file, navigate to its lib directory and copy the contents to the /jre/lib/ext directory of the JAVA_HOME directory, or a custom classpath. As you examine the CephAdminAPI class example, notice that the execute() method takes an HTTP method, a request path, an optional subresource, null if not specified, and a map of parameters. To execute with subresources, for example, subuser , and key , you will need to specify the subresource as an argument in the execute() method. The example method: Builds a URI. Builds an HTTP header string. 
Instantiates an HTTP request, for example, PUT , POST , GET , DELETE . Adds the Date header to the HTTP header string and the request header. Adds the Authorization header to the HTTP request header. Instantiates an HTTP client and passes it the instantiated HTTP request. Makes a request. Returns a response. Building the header string Building the header string is the portion of the process that involves Amazon's S3 authentication procedure. Specifically, the example method does the following: Adds a request type, for example, PUT , POST , GET , DELETE . Adds the date. Adds the requestPath. The request type should be uppercase with no leading or trailing white space. If you do not trim white space, authentication will fail. The date MUST be expressed in GMT, or authentication will fail. The example method does not have any other headers. The Amazon S3 authentication procedure sorts x-amz headers lexicographically. So if you are adding x-amz headers, be sure to add them lexicographically. Once you have built the header string, the next step is to instantiate an HTTP request and pass it the URI. The example method uses PUT for creating a user and subuser, GET for getting a user, POST for modifying a user and DELETE for deleting a user. Once you instantiate a request, add the Date header followed by the Authorization header. Amazon's S3 authentication uses the standard Authorization header, and has the following structure: The CephAdminAPI example class has a base64Sha1Hmac() method, which takes the header string and the secret key for the admin user, and returns a SHA1 HMAC as a base-64 encoded string. Each execute() call will invoke the same line of code to build the Authorization header: The following CephAdminAPI example class requires you to pass the access key, secret key, and an endpoint to the constructor. The class provides accessor methods to change them at runtime. Example The subsequent CephAdminAPIClient example illustrates how to instantiate the CephAdminAPI class, build a map of request parameters, and use the execute() method to create, get, update and delete a user. Example Additional Resources See the S3 Authentication section in the Red Hat Ceph Storage Developer Guide for additional details. For a more extensive explanation of the Amazon S3 authentication procedure, consult the Signing and Authenticating REST Requests section of Amazon Simple Storage Service documentation. 2.3. Creating an administrative user Important To run the radosgw-admin command from the Ceph Object Gateway node, ensure the node has the admin key. The admin key can be copied from any Ceph Monitor node. Prerequisites Root-level access to the Ceph Object Gateway node. Procedure Create an object gateway user: Syntax Example The radosgw-admin command-line interface will return the user. Example output Assign administrative capabilities to the user you create: Syntax Example The radosgw-admin command-line interface will return the user. The "caps": will have the capabilities you assigned to the user: Example output Now you have a user with administrative privileges. 2.4. Get user information Get the user's information. Cap users or user-info-without-keys must be set to read to run this operation. If cap user-info-without-keys is set to read or * , S3 keys and Swift keys will not be included in the response unless the user running this operation is the system user, an admin user, or the cap users is set to read .
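The Syntax and Example blocks for the administrative-user procedure above and for this get-user-information request are not reproduced in this extract; the following is a hedged sketch of both, with the user IDs, capability string, and gateway host name chosen purely for illustration:

# section 2.3: create an object gateway user and assign administrative capabilities
radosgw-admin user create --uid="admin-api-user" --display-name="Admin API user"
radosgw-admin caps add --uid="admin-api-user" --caps="users=*;buckets=*;metadata=*;usage=*;zone=*"

# section 2.4: request a user's information through the administrative API;
# 'admin' is the default, configurable entry point and foo_user is the example uid
GET /admin/user?format=json&uid=foo_user HTTP/1.1
Host: rgw.example.com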
Capabilities Syntax Request Parameters uid Description The user for which the information is requested. Type String Example foo_user Required Yes access-key Description The S3 access key of the user for which the information is requested. Type String Example ABCD0EF12GHIJ2K34LMN Required No Response Entities user Description A container for the user data information. Type Container Parent N/A user_id Description The user ID. Type String Parent user display_name Description Display name for the user. Type String Parent user suspended Description True if the user is suspended. Type Boolean Parent user max_buckets Description The maximum number of buckets to be owned by the user. Type Integer Parent user subusers Description Subusers associated with this user account. Type Container Parent user keys Description S3 keys associated with this user account. Type Container Parent user swift_keys Description Swift keys associated with this user account. Type Container Parent user caps Description User capabilities. Type Container Parent user If successful, the response contains the user information. Special Error Responses None. 2.5. Create a user Create a new user. By default, an S3 key pair will be created automatically and returned in the response. If only a access-key or secret-key is provided, the omitted key will be automatically generated. By default, a generated key is added to the keyring without replacing an existing key pair. If access-key is specified and refers to an existing key owned by the user then it will be modified. Capabilities Syntax Request Parameters uid Description The user ID to be created. Type String Example foo_user Required Yes display-name Description The display name of the user to be created. Type String Example foo_user Required Yes email Description The email address associated with the user. Type String Example [email protected] Required No key-type Description Key type to be generated, options are: swift, s3 (default). Type String Example s3 [ s3 ] Required No access-key Description Specify access key. Type String Example ABCD0EF12GHIJ2K34LMN Required No secret-key Description Specify secret key. Type String Example 0AbCDEFg1h2i34JklM5nop6QrSTUV+WxyzaBC7D8 Required No user-caps Description User capabilities. Type String Example usage=read, write; users=read Required No generate-key Description Generate a new key pair and add to the existing keyring. Type Boolean Example True [True] Required No max-buckets Description Specify the maximum number of buckets the user can own. Type Integer Example 500 [1000] Required No suspended Description Specify whether the user should be suspended Type Boolean Example False [False] Required No Response Entities user Description Specify whether the user should be suspended Type Boolean Parent No user_id Description The user ID. Type String Parent user display_name Description Display name for the user. Type String Parent user suspended Description True if the user is suspended. Type Boolean Parent user max_buckets Description The maximum number of buckets to be owned by the user. Type Integer Parent user subusers Description Subusers associated with this user account. Type Container Parent user keys Description S3 keys associated with this user account. Type Container Parent user swift_keys Description Swift keys associated with this user account. Type Container Parent user caps Description User capabilities. Type Container Parent If successful, the response contains the user information. 
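As a hedged illustration of the create-user call described above (its Syntax block is not included in this extract), a minimal request using the example values from the parameter table and an assumed gateway host might look like:

PUT /admin/user?format=json&uid=foo_user&display-name=foo_user HTTP/1.1
Host: rgw.example.com

Because neither access-key nor secret-key is supplied here, the gateway would generate an S3 key pair and return it in the response, as noted above.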
Special Error Responses UserExists Description Attempt to create existing user. Code 409 Conflict InvalidAccessKey Description Invalid access key specified. Code 400 Bad Request InvalidKeyType Description Invalid key type specified. Code 400 Bad Request InvalidSecretKey Description Invalid secret key specified. Code 400 Bad Request KeyExists Description Provided access key exists and belongs to another user. Code 409 Conflict EmailExists Description Provided email address exists. Code 409 Conflict InvalidCap Description Attempt to grant invalid admin capability. Code 400 Bad Request Additional Resources See the Red Hat Ceph Storage Developer Guide for creating subusers. 2.6. Modify a user Modify an existing user. Capabilities Syntax Request Parameters uid Description The user ID to be created. Type String Example foo_user Required Yes display-name Description The display name of the user to be created. Type String Example foo_user Required Yes email Description The email address associated with the user. Type String Example [email protected] Required No generate-key Description Generate a new key pair and add to the existing keyring. Type Boolean Example True [False] Required No access-key Description Specify access key. Type String Example ABCD0EF12GHIJ2K34LMN Required No secret-key Description Specify secret key. Type String Example 0AbCDEFg1h2i34JklM5nop6QrSTUV+WxyzaBC7D8 Required No key-type Description Key type to be generated, options are: swift, s3 (default). Type String Example s3 Required No user-caps Description User capabilities. Type String Example usage=read, write; users=read Required No max-buckets Description Specify the maximum number of buckets the user can own. Type Integer Example 500 [1000] Required No suspended Description Specify whether the user should be suspended Type Boolean Example False [False] Required No Response Entities user Description Specify whether the user should be suspended Type Boolean Parent No user_id Description The user ID. Type String Parent user display_name Description Display name for the user. Type String Parent user suspended Description True if the user is suspended. Type Boolean Parent user max_buckets Description The maximum number of buckets to be owned by the user. Type Integer Parent user subusers Description Subusers associated with this user account. Type Container Parent user keys Description S3 keys associated with this user account. Type Container Parent user swift_keys Description Swift keys associated with this user account. Type Container Parent user caps Description User capabilities. Type Container Parent If successful, the response contains the user information. Special Error Responses InvalidAccessKey Description Invalid access key specified. Code 400 Bad Request InvalidKeyType Description Invalid key type specified. Code 400 Bad Request InvalidSecretKey Description Invalid secret key specified. Code 400 Bad Request KeyExists Description Provided access key exists and belongs to another user. Code 409 Conflict EmailExists Description Provided email address exists. Code 409 Conflict InvalidCap Description Attempt to grant invalid admin capability. Code 400 Bad Request Additional Resources See the Red Hat Ceph Storage Developer Guide for modifying subusers. 2.7. Remove a user Remove an existing user. Capabilities Syntax Request Parameters uid Description The user ID to be removed. 
Type String Example foo_user Required Yes purge-data Description When specified the buckets and objects belonging to the user will also be removed. Type Boolean Example True Required No Response Entities None. Special Error Responses None. Additional Resources See Red Hat Ceph Storage Developer Guide for removing subusers. 2.8. Create a subuser Create a new subuser, primarily useful for clients using the Swift API. Note Either gen-subuser or subuser is required for a valid request. In general, for a subuser to be useful, it must be granted permissions by specifying access . As with user creation if subuser is specified without secret , then a secret key is automatically generated. Capabilities Syntax Request Parameters uid Description The user ID under which a subuser is to be created. Type String Example foo_user Required Yes subuser Description Specify the subuser ID to be created. Type String Example sub_foo Required Yes (or gen-subuser ) gen-subuser Description Specify the subuser ID to be created. Type String Example sub_foo Required Yes (or gen-subuser ) secret-key Description Specify secret key. Type String Example 0AbCDEFg1h2i34JklM5nop6QrSTUV+WxyzaBC7D8 Required No key-type Description Key type to be generated, options are: swift (default), s3. Type String Example swift [ swift ] Required No access Description Set access permissions for sub-user, should be one of read, write, readwrite, full . Type String Example read Required No generate-secret Description Generate the secret key. Type Boolean Example True [False] Required No Response Entities subusers Description Subusers associated with the user account. Type Container Parent N/A permissions Description Subuser access to user account. Type String Parent subusers If successful, the response contains the subuser information. Special Error Responses SubuserExists Description Specified subuser exists. Code 409 Conflict InvalidKeyType Description Invalid key type specified. Code 400 Bad Request InvalidSecretKey Description Invalid secret key specified. Code 400 Bad Request InvalidAccess Description Invalid subuser access specified Code 400 Bad Request 2.9. Modify a subuser Modify an existing subuser. Capabilities Syntax Request Parameters uid Description The user ID under which a subuser is to be created. Type String Example foo_user Required Yes subuser Description The subuser ID to be modified. Type String Example sub_foo Required generate-secret Description Generate a new secret key for the subuser, replacing the existing key. Type Boolean Example True [False] Required No secret Description Specify secret key. Type String Example 0AbCDEFg1h2i34JklM5nop6QrSTUV+WxyzaBC7D8 Required No key-type Description Key type to be generated, options are: swift (default), s3. Type String Example swift [ swift ] Required No access Description Set access permissions for sub-user, should be one of read, write, readwrite, full . Type String Example read Required No Response Entities subusers Description Subusers associated with the user account. Type Container Parent N/A id Description Subuser ID Type String Parent subusers permissions Description Subuser access to user account. Type String Parent subusers If successful, the response contains the subuser information. Special Error Responses InvalidKeyType Description Invalid key type specified. Code 400 Bad Request InvalidSecretKey Description Invalid secret key specified. Code 400 Bad Request InvalidAccess Description Invalid subuser access specified Code 400 Bad Request 2.10. 
Remove a subuser Remove an existing subuser. Capabilities Syntax Request Parameters uid Description The user ID to be removed. Type String Example foo_user Required Yes subuser Description The subuser ID to be removed. Type String Example sub_foo Required Yes purge-keys Description Remove keys belonging to the subuser. Type Boolean Example True [True] Required No Response Entities None. Special Error Responses None. 2.11. Add capabilities to a user Add an administrative capability to a specified user. Capabilities Syntax Request Parameters uid Description The user ID to add an administrative capability to. Type String Example foo_user Required Yes user-caps Description The administrative capability to add to the user. Type String Example usage=read, write Required Yes Response Entities user Description A container for the user data information. Type Container Parent N/A user_id Description The user ID. Type String Parent user caps Description User capabilities. Type Container Parent user If successful, the response contains the user's capabilities. Special Error Responses InvalidCap Description Attempt to grant invalid admin capability. Code 400 Bad Request 2.12. Remove capabilities from a user Remove an administrative capability from a specified user. Capabilities Syntax Request Parameters uid Description The user ID to remove an administrative capability from. Type String Example foo_user Required Yes user-caps Description The administrative capabilities to remove from the user. Type String Example usage=read, write Required Yes Response Entities user Description A container for the user data information. Type Container Parent N/A user_id Description The user ID. Type String Parent user caps Description User capabilities. Type Container Parent user If successful, the response contains the user's capabilities. Special Error Responses InvalidCap Description Attempt to remove an invalid admin capability. Code 400 Bad Request NoSuchCap Description User does not possess specified capability. Code 404 Not Found 2.13. Create a key Create a new key. If a subuser is specified then by default created keys will be swift type. If only one of access-key or secret-key is provided the omitted key will be automatically generated, that is if only secret-key is specified then access-key will be automatically generated. By default, a generated key is added to the keyring without replacing an existing key pair. If access-key is specified and refers to an existing key owned by the user then it will be modified. The response is a container listing all keys of the same type as the key created. Note When creating a swift key, specifying the option access-key will have no effect. Additionally, only one swift key might be held by each user or subuser. Capabilities Syntax Request Parameters uid Description The user ID to receive the new key. Type String Example foo_user Required Yes subuser Description The subuser ID to receive the new key. Type String Example sub_foo Required No key-type Description Key type to be generated, options are: swift, s3 (default). Type String Example s3 [ s3 ] Required No access-key Description Specify access key. Type String Example AB01C2D3EF45G6H7IJ8K Required No secret-key Description Specify secret key. Type String Example 0ab/CdeFGhij1klmnopqRSTUv1WxyZabcDEFgHij Required No generate-key Description Generate a new key pair and add to the existing keyring.
Type Boolean Example True [ True ] Required No Response Entities keys Description Keys of type created associated with this user account. Type Container Parent N/A user Description The user account associated with the key. Type String Parent keys access-key Description The access key. Type String Parent keys secret-key Description The secret key. Type String Parent keys Special Error Responses InvalidAccessKey Description Invalid access key specified. Code 400 Bad Request InvalidKeyType Description Invalid key type specified. Code 400 Bad Request InvalidSecretKey Description Invalid secret key specified. Code 400 Bad Request InvalidKeyType Description Invalid key type specified. Code 400 Bad Request KeyExists Description Provided access key exists and belongs to another user. Code 409 Conflict 2.14. Remove a key Remove an existing key. Capabilities Syntax Request Parameters access-key Description The S3 access key belonging to the S3 key pair to remove. Type String Example AB01C2D3EF45G6H7IJ8K Required Yes uid Description The user to remove the key from. Type String Example foo_user Required No subuser Description The subuser to remove the key from. Type String Example sub_foo Required No key-type Description Key type to be removed, options are: swift, s3. Note Required to remove swift key. Type String Example swift Required No Special Error Responses None. Response Entities None. 2.15. Bucket notifications As a storage administrator, you can use these APIs to provide configuration and control interfaces for the bucket notification mechanism. The API topics are named objects that contain the definition of a specific endpoint. Bucket notifications associate topics with a specific bucket. The S3 bucket operations section gives more details on bucket notifications. Note In all topic actions, the parameters are URL encoded, and sent in the message body using application/x-www-form-urlencoded content type. Note Any bucket notification already associated with the topic needs to be re-created for the topic update to take effect. Prerequisites Create bucket notifications on the Ceph Object Gateway. 2.15.1. Overview of bucket notifications Bucket notifications provide a way to send information out of the Ceph Object Gateway when certain events happen in the bucket. Bucket notifications can be sent to HTTP, AMQP0.9.1, and Kafka endpoints. A notification entry must be created to send bucket notifications for events on a specific bucket and to a specific topic. A bucket notification can be created on a subset of event types or by default for all event types. The bucket notification can filter out events based on key prefix or suffix, regular expression matching the keys, and the metadata attributes attached to the object, or the object tags. Bucket notifications have a REST API to provide configuration and control interfaces for the bucket notification mechanism. Sending a bucket notification when an object is synced to a zone lets the external system get information into the zone syncing status at the object level. The bucket notification event types s3:ObjectSynced:* and s3:ObjectSynced:Created , when configured via the bucket notification mechanism, send a notification event from the synced RGW upon successful sync of an object. Both the topics and the notification configuration should be done separately in each zone from which the notification events are being sent. 2.15.2. 
Persistent notifications Persistent notifications enable reliable and asynchronous delivery of notifications from the Ceph Object Gateway to the endpoint configured at the topic. Regular notifications are also reliable because the delivery to the endpoint is performed synchronously during the request. With persistent notifications, the Ceph Object Gateway retries sending notifications even when the endpoint is down or there are network issues during the operations, that is notifications are retried if not successfully delivered to the endpoint. Notifications are sent only after all other actions related to the notified operation are successful. If an endpoint goes down for a longer duration, the notification queue fills up and the S3 operations that have configured notifications for these endpoints will fail. Note With kafka-ack-level=none , there is no indication for message failures, and therefore messages sent while broker is down are not retried, when the broker is up again. After the broker is up again, only new notifications are seen. 2.15.3. Creating a topic You can create topics before creating bucket notifications. A topic is a Simple Notification Service (SNS) entity and all the topic operations, that is, create , delete , list , and get , are SNS operations. The topic needs to have endpoint parameters that are used when a bucket notification is created. Once the request is successful, the response includes the topic Amazon Resource Name (ARN) that can be used later to reference this topic in the bucket notification request. Note A topic_arn provides the bucket notification configuration and is generated after a topic is created. Prerequisites A running Red Hat Ceph Storage cluster. Root-level access. Installation of the Ceph Object Gateway. User access key and secret key. Endpoint parameters. Procedure Create a topic with the following request format: Syntax Here are the request parameters: Endpoint : URL of an endpoint to send notifications to. OpaqueData : opaque data is set in the topic configuration and added to all notifications triggered by the topic. persistent : indication of whether notifications to this endpoint are persistent that is asynchronous or not. By default the value is false . HTTP endpoint: URL : https:// FQDN : PORT port defaults to : Use 80/443 for HTTP[S] accordingly. verify-ssl : Indicates whether the server certificate is validated by the client or not. By default , it is true . AMQP0.9.1 endpoint: URL : amqp:// USER : PASSWORD @ FQDN : PORT [/ VHOST ]. User and password defaults to: guest and guest respectively. User and password details should be provided over HTTPS, otherwise the topic creation request is rejected. port defaults to : 5672. vhost defaults to: "/" amqp-exchange : The exchanges must exist and be able to route messages based on topics. This is a mandatory parameter for AMQP0.9.1. Different topics pointing to the same endpoint must use the same exchange. amqp-ack-level : No end to end acknowledgment is required, as messages may persist in the broker before being delivered into their final destination. Three acknowledgment methods exist: none : Message is considered delivered if sent to the broker. broker : By default, the message is considered delivered if acknowledged by the broker. routable : Message is considered delivered if the broker can route to a consumer. Note The key and value of a specific parameter do not have to reside in the same line, or in any specific order, but must use the same index. 
Attribute indexing does not need to be sequential or start from any specific value. Note The topic-name is used for the AMQP topic. Kafka endpoint: URL : kafka:// USER : PASSWORD @ FQDN : PORT . use-ssl is set to false by default. If use-ssl is set to true , secure connection is used for connecting with the broker. If ca-location is provided, and secure connection is used, the specified CA will be used, instead of the default one, to authenticate the broker. User and password can only be provided over HTTP[S]. Otherwise, the topic creation request is rejected. User and password may only be provided together with use-ssl , otherwise, the connection to the broker will fail. port defaults to : 9092. kafka-ack-level : no end to end acknowledgment required, as messages may persist in the broker before being delivered into their final destination. Two acknowledgment methods exist: none : message is considered delivered if sent to the broker. broker : By default, the message is considered delivered if acknowledged by the broker. The following is an example of the response format: Example Note The topic Amazon Resource Name (ARN) in the response will have the following format: arn:aws:sns: ZONE_GROUP : TENANT : TOPIC The following is an example of AMQP0.9.1 endpoint: Example 2.15.4. Getting topic information Returns information about a specific topic. This can include endpoint information if it is provided. Prerequisites A running Red Hat Ceph Storage cluster. Root-level access. Installation of the Ceph Object Gateway. User access key and secret key. Endpoint parameters. Procedure Get topic information with the following request format: Syntax Here is an example of the response format: The following are the tags and definitions: User : Name of the user that created the topic. Name : Name of the topic. JSON formatted endpoints include: EndpointAddress : The endpoint URL. If the endpoint URL contains user and password information, the request must be made over HTTPS. Otherwise, the topic get request is rejected. EndPointArgs : The endpoint arguments. EndpointTopic : The topic name that is sent to the endpoint can be different than the above example topic name. HasStoredSecret : true when the endpoint URL contains user and password information. Persistent : true when the topic is persistent. TopicArn : Topic ARN. OpaqueData : This is an opaque data set on the topic. 2.15.5. Listing topics List the topics that the user has defined. Prerequisites A running Red Hat Ceph Storage cluster. Root-level access. Installation of the Ceph Object Gateway. User access key and secret key. Endpoint parameters. Procedure List topic information with the following request format: Syntax Here is an example of the response format: Note If endpoint URL contains user and password information, in any of the topics, the request must be made over HTTPS. Otherwise, the topic list request is rejected. 2.15.6. Deleting topics Removing a deleted topic results in no operation and is not a failure. Prerequisites A running Red Hat Ceph Storage cluster. Root-level access. Installation of the Ceph Object Gateway. User access key and secret key. Endpoint parameters. Procedure Delete a topic with the following request format: Syntax Here is an example of the response format: 2.15.7. Using the command-line interface for topic management You can list, get, and remove topics using the command-line interface. Prerequisites Root-level access to the Ceph Object Gateway node.
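The procedure that follows lists, gets, and removes topics, but its Syntax and Example blocks are not reproduced in this extract; a hedged sketch of those radosgw-admin invocations, with an illustrative topic name, is:

radosgw-admin topic list
radosgw-admin topic get --topic=mytopic
radosgw-admin topic rm --topic=mytopic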
Procedure To get a list of all topics of a user: Syntax Example To get configuration of a specific topic: Syntax Example To remove a specific topic: Syntax Example 2.15.8. Managing notification configuration You can list, get, and remove notification configuration of buckets using the command-line interface. Prerequisites A running Red Hat Ceph Storage cluster. A Ceph Object Gateway configured. Procedure List all the bucket notification configuration: Syntax Example Get the bucket notification configuration: Syntax Example Remove a specific bucket notification configuration: Syntax Here, NOTIFICATION_ID is optional. If it is not specified, the command removes all the notification configurations of that bucket. Example 2.15.9. Event record An event holds information about the operation done by the Ceph Object Gateway and is sent as a payload over the chosen endpoint, such as HTTP, HTTPS, Kafka, or AMQ0.9.1. The event record is in JSON format. The following ObjectLifecycle:Expiration events are supported: ObjectLifecycle:Expiration:Current ObjectLifecycle:Expiration:NonCurrent ObjectLifecycle:Expiration:DeleteMarker ObjectLifecycle:Expiration:AbortMultipartUpload Example These are the event record keys and their definitions: awsRegion : Zonegroup. eventTime : Timestamp that indicates when the event was triggered. eventName : The type of the event. It can be ObjectCreated , ObjectRemoved , or ObjectLifecycle:Expiration userIdentity.principalId : The identity of the user that triggered the event. requestParameters.sourceIPAddress : The IP address of the client that triggered the event. This field is not supported. responseElements.x-amz-request-id : The request ID that triggered the event. responseElements.x_amz_id_2 : The identity of the Ceph Object Gateway on which the event was triggered. The identity format is RGWID - ZONE - ZONEGROUP . s3.configurationId : The notification ID that created the event. s3.bucket.name : The name of the bucket. s3.bucket.ownerIdentity.principalId : The owner of the bucket. s3.bucket.arn : Amazon Resource Name (ARN) of the bucket. s3.bucket.id : Identity of the bucket. s3.object.key : The object key. s3.object.size : The size of the object. s3.object.eTag : The object etag. s3.object.version : The object version in a versioned bucket. s3.object.sequencer : Monotonically increasing identifier of the change per object in the hexadecimal format. s3.object.metadata : Any metadata set on the object sent as x-amz-meta . s3.object.tags : Any tags set on the object. s3.eventId : Unique identity of the event. s3.opaqueData : Opaque data is set in the topic configuration and added to all notifications triggered by the topic. Additional Resources See the Event Message Structure for more information. 2.15.10. Supported event types The following event types are supported: * s3:ObjectCreated:* * s3:ObjectCreated:Put * s3:ObjectCreated:Post * s3:ObjectCreated:Copy * s3:ObjectCreated:CompleteMultipartUpload NOTE: In multipart upload, an ObjectCreated:CompleteMultipartUpload notification is sent at the end of the process. * s3:ObjectRemoved:* * s3:ObjectRemoved:Delete * s3:ObjectRemoved:DeleteMarkerCreated * s3:ObjectLifecycle:Expiration:Current * s3:ObjectLifecycle:Expiration:NonCurrent * s3:ObjectLifecycle:Expiration:DeleteMarker * s3:ObjectLifecycle:Expiration:AbortMultipartUpload * s3:ObjectLifecycle:Transition:Current * s3:ObjectLifecycle:Transition:NonCurrent * s3:ObjectSynced:Create 2.15.11. Get bucket information Get information about a subset of the existing buckets. 
If uid is specified without bucket then all buckets belonging to the user will be returned. If bucket alone is specified, information for that particular bucket will be retrieved. Capabilities Syntax Request Parameters bucket Description The bucket to return info on. Type String Example foo_bucket Required No uid Description The user to retrieve bucket information for. Type String Example foo_user Required No stats Description Return bucket statistics. Type Boolean Example True [False] Required No Response Entities stats Description Per bucket information. Type Container Parent N/A buckets Description Contains a list of one or more bucket containers. Type Container Parent buckets bucket Description Container for single bucket information. Type Container Parent buckets name Description The name of the bucket. Type String Parent bucket pool Description The pool the bucket is stored in. Type String Parent bucket id Description The unique bucket ID. Type String Parent bucket marker Description Internal bucket tag. Type String Parent bucket owner Description The user ID of the bucket owner. Type String Parent bucket usage Description Storage usage information. Type Container Parent bucket index Description Status of bucket index. Type String Parent bucket If successful, then the request returns a bucket's container with the bucket information. Special Error Responses IndexRepairFailed Description Bucket index repair failed. Code 409 Conflict 2.15.12. Check a bucket index Check the index of an existing bucket. Note To check multipart object accounting with check-objects , fix must be set to True. Capabilities buckets=write Syntax Request Parameters bucket Description The bucket to return info on. Type String Example foo_bucket Required Yes check-objects Description Check multipart object accounting. Type Boolean Example True [False] Required No fix Description Also fix the bucket index when checking. Type Boolean Example False [False] Required No Response Entities index Description Status of bucket index. Type String Special Error Responses IndexRepairFailed Description Bucket index repair failed. Code 409 Conflict 2.15.13. Remove a bucket Removes an existing bucket. Capabilities Syntax Request Parameters bucket Description The bucket to remove. Type String Example foo_bucket Required Yes purge-objects Description Remove a bucket's objects before deletion. Type Boolean Example True [False] Required No Response Entities None. Special Error Responses BucketNotEmpty Description Attempted to delete non-empty bucket. Code 409 Conflict ObjectRemovalFailed Description Unable to remove objects. Code 409 Conflict 2.15.14. Link a bucket Link a bucket to a specified user, unlinking the bucket from any user. Capabilities Syntax Request Parameters bucket Description The bucket to unlink. Type String Example foo_bucket Required Yes uid Description The user ID to link the bucket to. Type String Example foo_user Required Yes Response Entities bucket Description Container for single bucket information. Type Container Parent N/A name Description The name of the bucket. Type String Parent bucket pool Description The pool the bucket is stored in. Type String Parent bucket id Description The unique bucket ID. Type String Parent bucket marker Description Internal bucket tag. Type String Parent bucket owner Description The user ID of the bucket owner. Type String Parent bucket usage Description Storage usage information. Type Container Parent bucket index Description Status of bucket index. 
Type String Parent bucket Special Error Responses BucketUnlinkFailed Description Unable to unlink bucket from specified user. Code 409 Conflict BucketLinkFailed Description Unable to link bucket to specified user. Code 409 Conflict 2.15.15. Unlink a bucket Unlink a bucket from a specified user. Primarily useful for changing bucket ownership. Capabilities Syntax Request Parameters bucket Description The bucket to unlink. Type String Example foo_bucket Required Yes uid Description The user ID to link the bucket to. Type String Example foo_user Required Yes Response Entities None. Special Error Responses BucketUnlinkFailed Description Unable to unlink bucket from specified user. Type 409 Conflict 2.15.16. Get a bucket or object policy Read the policy of an object or bucket. Capabilities Syntax Request Parameters bucket Description The bucket to read the policy from. Type String Example foo_bucket Required Yes object Description The object to read the policy from. Type String Example foo.txt Required No Response Entities policy Description Access control policy. Type Container Parent N/A If successful, returns the object or bucket policy Special Error Responses IncompleteBody Description Either bucket was not specified for a bucket policy request or bucket and object were not specified for an object policy request. Code 400 Bad Request 2.15.17. Remove an object Remove an existing object. Note Does not require owner to be non-suspended. Capabilities Syntax Request Parameters bucket Description The bucket containing the object to be removed. Type String Example foo_bucket Required Yes object Description The object to remove Type String Example foo.txt Required Yes Response Entities None. Special Error Responses NoSuchObject Description Specified object does not exist. Code 404 Not Found ObjectRemovalFailed Description Unable to remove objects. Code 409 Conflict 2.15.18. Quotas The administrative Operations API enables you to set quotas on users and on buckets owned by users. Quotas include the maximum number of objects in a bucket and the maximum storage size in megabytes. To view quotas, the user must have a users=read capability. To set, modify or disable a quota, the user must have users=write capability. Valid parameters for quotas include: Bucket: The bucket option allows you to specify a quota for buckets owned by a user. Maximum Objects: The max-objects setting allows you to specify the maximum number of objects. A negative value disables this setting. Maximum Size: The max-size option allows you to specify a quota for the maximum number of bytes. A negative value disables this setting. Quota Scope: The quota-scope option sets the scope for the quota. The options are bucket and user . 2.15.19. Get a user quota To get a quota, the user must have users capability set with read permission. Syntax 2.15.20. Set a user quota To set a quota, the user must have users capability set with write permission. Syntax The content must include a JSON representation of the quota settings as encoded in the corresponding read operation. 2.15.21. Get a bucket quota Get information about a subset of the existing buckets. If uid is specified without bucket then all buckets belonging to the user will be returned. If bucket alone is specified, information for that particular bucket will be retrieved. Capabilities Syntax Request Parameters bucket Description The bucket to return info on. Type String Example foo_bucket Required No uid Description The user to retrieve bucket information for. 
Type String Example foo_user Required No stats Description Return bucket statistics. Type Boolean Example True [False] Required No Response Entities stats Description Per bucket information. Type Container Parent N/A buckets Description Contains a list of one or more bucket containers. Type Container Parent N/A bucket Description Container for single bucket information. Type Container Parent buckets name Description The name of the bucket. Type String Parent bucket pool Description The pool the bucket is stored in. Type String Parent bucket id Description The unique bucket ID. Type String Parent bucket marker Description Internal bucket tag. Type String Parent bucket owner Description The user ID of the bucket owner. Type String Parent bucket usage Description Storage usage information. Type Container Parent bucket index Description Status of bucket index. Type String Parent bucket If successful, then the request returns a bucket's container with the bucket information. Special Error Responses IndexRepairFailed Description Bucket index repair failed. Code 409 Conflict 2.15.22. Set a bucket quota To set a quota, the user must have users capability set with write permission. Syntax The content must include a JSON representation of the quota settings as encoded in the corresponding read operation. 2.15.23. Get usage information Requesting bandwidth usage information. Capabilities Syntax Request Parameters uid Description The user for which the information is requested. Type String Required Yes start Description The date, and optionally, the time of when the data request started. For example, 2012-09-25 16:00:00 . Type String Required No end Description The date, and optionally, the time of when the data request ended. For example, 2012-09-25 16:00:00 . Type String Required No show-entries Description Specifies whether data entries should be returned. Type Boolean Required No show-summary Description Specifies whether data entries should be returned. Type Boolean Required No Response Entities usage Description A container for the usage information. Type Container entries Description A container for the usage entries information. Type Container user Description A container for the user data information. Type Container owner Description The name of the user that owns the buckets. Type String bucket Description The bucket name. Type String time Description Time lower bound for which data is being specified that is rounded to the beginning of the first relevant hour. Type String epoch Description The time specified in seconds since 1/1/1970 . Type String categories Description A container for stats categories. Type Container entry Description A container for stats entry. Type Container category Description Name of request category for which the stats are provided. Type String bytes_sent Description Number of bytes sent by the Ceph Object Gateway. Type Integer bytes_received Description Number of bytes received by the Ceph Object Gateway. Type Integer ops Description Number of operations. Type Integer successful_ops Description Number of successful operations. Type Integer summary Description Number of successful operations. Type Container total Description A container for stats summary aggregated total. Type Container If successful, the response contains the requested information. 2.15.24. Remove usage information Remove usage information. With no dates specified, removes all usage information. Capabilities Syntax Request Parameters uid Description The user for which the information is requested. 
Type String Example foo_user Required Yes start Description The date, and optionally, the time of when the data request started. For example, 2012-09-25 16:00:00 . Type String Example 2012-09-25 16:00:00 Required No end Description The date, and optionally, the time of when the data request ended. For example, 2012-09-25 16:00:00 . Type String Example 2012-09-25 16:00:00 Required No remove-all Description Required when uid is not specified, in order to acknowledge multi-user data removal. Type Boolean Example True [False] Required No 2.15.25. Standard error responses The following list details standard error responses and their descriptions. AccessDenied Description Access denied. Code 403 Forbidden InternalError Description Internal server error. Code 500 Internal Server Error NoSuchUser Description User does not exist. Code 404 Not Found NoSuchBucket Description Bucket does not exist. Code 404 Not Found NoSuchKey Description No such access key. Code 404 Not Found
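As a minimal, hedged illustration of how these admin operations are called in practice, the following shell sketch signs a GET /admin/bucket?stats request the same way as the Java example included with this API: an S3 v2-style HMAC-SHA1 over the method, the date, and the resource path. The host name and the access/secret keys are the sample values from the radosgw-admin output included with this section; substitute your own, and note that the caller needs the buckets=read capability.

```bash
#!/usr/bin/env bash
# Minimal sketch of a signed Admin Ops request (S3 v2-style signature, matching
# the Java signing example that accompanies this section). HOST, ACCESS_KEY and
# SECRET_KEY are placeholders taken from the sample radosgw-admin output.
HOST="ceph-client"
ACCESS_KEY="NRWGT19TWMYOB1YDBV1Y"
SECRET_KEY="gr1VEGIV7rxcP3xvXDFCo4UDwwl2YoNrmtRlIAty"
RESOURCE="/admin/bucket"

# String to sign: METHOD, two empty lines, the RFC 1123 date, and the resource path.
DATE="$(LC_ALL=C date -u '+%a, %d %b %Y %H:%M:%S GMT')"
SIGNATURE="$(printf 'GET\n\n\n%s\n%s' "${DATE}" "${RESOURCE}" \
  | openssl sha1 -hmac "${SECRET_KEY}" -binary | base64)"

# Retrieve per-bucket statistics for foo_bucket.
curl -s \
  -H "Date: ${DATE}" \
  -H "Authorization: AWS ${ACCESS_KEY}:${SIGNATURE}" \
  "http://${HOST}${RESOURCE}?bucket=foo_bucket&stats=True&format=json"
```

The same pattern applies to the other endpoints in this section; only the HTTP method, the resource path, and the query parameters change.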
[ "PUT /admin/user?caps&format=json HTTP/1.1 Host: FULLY_QUALIFIED_DOMAIN_NAME Content-Type: text/plain Authorization: AUTHORIZATION_TOKEN usage=read", "Authorization: AWS ACCESS_KEY : HASH_OF_HEADER_AND_SECRET", "httpRequest.addHeader(\"Authorization\", \"AWS \" + this.getAccessKey() + \":\" + base64Sha1Hmac(headerString.toString(), this.getSecretKey()));", "import java.io.IOException; import java.net.URI; import java.net.URISyntaxException; import java.time.OffsetDateTime; import java.time.format.DateTimeFormatter; import java.time.ZoneId; import org.apache.http.HttpEntity; import org.apache.http.NameValuePair; import org.apache.http.Header; import org.apache.http.client.entity.UrlEncodedFormEntity; import org.apache.http.client.methods.CloseableHttpResponse; import org.apache.http.client.methods.HttpRequestBase; import org.apache.http.client.methods.HttpGet; import org.apache.http.client.methods.HttpPost; import org.apache.http.client.methods.HttpPut; import org.apache.http.client.methods.HttpDelete; import org.apache.http.impl.client.CloseableHttpClient; import org.apache.http.impl.client.HttpClients; import org.apache.http.message.BasicNameValuePair; import org.apache.http.util.EntityUtils; import org.apache.http.client.utils.URIBuilder; import java.util.Base64; import java.util.Base64.Encoder; import java.security.MessageDigest; import java.security.NoSuchAlgorithmException; import javax.crypto.spec.SecretKeySpec; import javax.crypto.Mac; import java.util.Map; import java.util.Iterator; import java.util.Set; import java.util.Map.Entry; public class CephAdminAPI { /* * Each call must specify an access key, secret key, endpoint and format. */ String accessKey; String secretKey; String endpoint; String scheme = \"http\"; //http only. int port = 80; /* * A constructor that takes an access key, secret key, endpoint and format. */ public CephAdminAPI(String accessKey, String secretKey, String endpoint){ this.accessKey = accessKey; this.secretKey = secretKey; this.endpoint = endpoint; } /* * Accessor methods for access key, secret key, endpoint and format. */ public String getEndpoint(){ return this.endpoint; } public void setEndpoint(String endpoint){ this.endpoint = endpoint; } public String getAccessKey(){ return this.accessKey; } public void setAccessKey(String accessKey){ this.accessKey = accessKey; } public String getSecretKey(){ return this.secretKey; } public void setSecretKey(String secretKey){ this.secretKey = secretKey; } /* * Takes an HTTP Method, a resource and a map of arguments and * returns a CloseableHTTPResponse. 
*/ public CloseableHttpResponse execute(String HTTPMethod, String resource, String subresource, Map arguments) { String httpMethod = HTTPMethod; String requestPath = resource; StringBuffer request = new StringBuffer(); StringBuffer headerString = new StringBuffer(); HttpRequestBase httpRequest; CloseableHttpClient httpclient; URI uri; CloseableHttpResponse httpResponse = null; try { uri = new URIBuilder() .setScheme(this.scheme) .setHost(this.getEndpoint()) .setPath(requestPath) .setPort(this.port) .build(); if (subresource != null){ uri = new URIBuilder(uri) .setCustomQuery(subresource) .build(); } for (Iterator iter = arguments.entrySet().iterator(); iter.hasNext();) { Entry entry = (Entry)iter.next(); uri = new URIBuilder(uri) .setParameter(entry.getKey().toString(), entry.getValue().toString()) .build(); } request.append(uri); headerString.append(HTTPMethod.toUpperCase().trim() + \"\\n\\n\\n\"); OffsetDateTime dateTime = OffsetDateTime.now(ZoneId.of(\"GMT\")); DateTimeFormatter formatter = DateTimeFormatter.RFC_1123_DATE_TIME; String date = dateTime.format(formatter); headerString.append(date + \"\\n\"); headerString.append(requestPath); if (HTTPMethod.equalsIgnoreCase(\"PUT\")){ httpRequest = new HttpPut(uri); } else if (HTTPMethod.equalsIgnoreCase(\"POST\")){ httpRequest = new HttpPost(uri); } else if (HTTPMethod.equalsIgnoreCase(\"GET\")){ httpRequest = new HttpGet(uri); } else if (HTTPMethod.equalsIgnoreCase(\"DELETE\")){ httpRequest = new HttpDelete(uri); } else { System.err.println(\"The HTTP Method must be PUT, POST, GET or DELETE.\"); throw new IOException(); } httpRequest.addHeader(\"Date\", date); httpRequest.addHeader(\"Authorization\", \"AWS \" + this.getAccessKey() + \":\" + base64Sha1Hmac(headerString.toString(), this.getSecretKey())); httpclient = HttpClients.createDefault(); httpResponse = httpclient.execute(httpRequest); } catch (URISyntaxException e){ System.err.println(\"The URI is not formatted properly.\"); e.printStackTrace(); } catch (IOException e){ System.err.println(\"There was an error making the request.\"); e.printStackTrace(); } return httpResponse; } /* * Takes a uri and a secret key and returns a base64-encoded * SHA-1 HMAC. 
*/ public String base64Sha1Hmac(String uri, String secretKey) { try { byte[] keyBytes = secretKey.getBytes(\"UTF-8\"); SecretKeySpec signingKey = new SecretKeySpec(keyBytes, \"HmacSHA1\"); Mac mac = Mac.getInstance(\"HmacSHA1\"); mac.init(signingKey); byte[] rawHmac = mac.doFinal(uri.getBytes(\"UTF-8\")); Encoder base64 = Base64.getEncoder(); return base64.encodeToString(rawHmac); } catch (Exception e) { throw new RuntimeException(e); } } }", "import java.io.IOException; import org.apache.http.client.methods.CloseableHttpResponse; import org.apache.http.HttpEntity; import org.apache.http.util.EntityUtils; import java.util.*; public class CephAdminAPIClient { public static void main (String[] args){ CephAdminAPI adminApi = new CephAdminAPI (\"FFC6ZQ6EMIF64194158N\", \"Xac39eCAhlTGcCAUreuwe1ZuH5oVQFa51lbEMVoT\", \"ceph-client\"); /* * Create a user */ Map requestArgs = new HashMap(); requestArgs.put(\"access\", \"usage=read, write; users=read, write\"); requestArgs.put(\"display-name\", \"New User\"); requestArgs.put(\"email\", \"[email protected]\"); requestArgs.put(\"format\", \"json\"); requestArgs.put(\"uid\", \"new-user\"); CloseableHttpResponse response = adminApi.execute(\"PUT\", \"/admin/user\", null, requestArgs); System.out.println(response.getStatusLine()); HttpEntity entity = response.getEntity(); try { System.out.println(\"\\nResponse Content is: \" + EntityUtils.toString(entity, \"UTF-8\") + \"\\n\"); response.close(); } catch (IOException e){ System.err.println (\"Encountered an I/O exception.\"); e.printStackTrace(); } /* * Get a user */ requestArgs = new HashMap(); requestArgs.put(\"format\", \"json\"); requestArgs.put(\"uid\", \"new-user\"); response = adminApi.execute(\"GET\", \"/admin/user\", null, requestArgs); System.out.println(response.getStatusLine()); entity = response.getEntity(); try { System.out.println(\"\\nResponse Content is: \" + EntityUtils.toString(entity, \"UTF-8\") + \"\\n\"); response.close(); } catch (IOException e){ System.err.println (\"Encountered an I/O exception.\"); e.printStackTrace(); } /* * Modify a user */ requestArgs = new HashMap(); requestArgs.put(\"display-name\", \"John Doe\"); requestArgs.put(\"email\", \"[email protected]\"); requestArgs.put(\"format\", \"json\"); requestArgs.put(\"uid\", \"new-user\"); requestArgs.put(\"max-buckets\", \"100\"); response = adminApi.execute(\"POST\", \"/admin/user\", null, requestArgs); System.out.println(response.getStatusLine()); entity = response.getEntity(); try { System.out.println(\"\\nResponse Content is: \" + EntityUtils.toString(entity, \"UTF-8\") + \"\\n\"); response.close(); } catch (IOException e){ System.err.println (\"Encountered an I/O exception.\"); e.printStackTrace(); } /* * Create a subuser */ requestArgs = new HashMap(); requestArgs.put(\"format\", \"json\"); requestArgs.put(\"uid\", \"new-user\"); requestArgs.put(\"subuser\", \"foobar\"); response = adminApi.execute(\"PUT\", \"/admin/user\", \"subuser\", requestArgs); System.out.println(response.getStatusLine()); entity = response.getEntity(); try { System.out.println(\"\\nResponse Content is: \" + EntityUtils.toString(entity, \"UTF-8\") + \"\\n\"); response.close(); } catch (IOException e){ System.err.println (\"Encountered an I/O exception.\"); e.printStackTrace(); } /* * Delete a user */ requestArgs = new HashMap(); requestArgs.put(\"format\", \"json\"); requestArgs.put(\"uid\", \"new-user\"); response = adminApi.execute(\"DELETE\", \"/admin/user\", null, requestArgs); System.out.println(response.getStatusLine()); entity = 
response.getEntity(); try { System.out.println(\"\\nResponse Content is: \" + EntityUtils.toString(entity, \"UTF-8\") + \"\\n\"); response.close(); } catch (IOException e){ System.err.println (\"Encountered an I/O exception.\"); e.printStackTrace(); } } }", "radosgw-admin user create --uid=\" USER_NAME \" --display-name=\" DISPLAY_NAME \"", "[user@client ~]USD radosgw-admin user create --uid=\"admin-api-user\" --display-name=\"Admin API User\"", "{ \"user_id\": \"admin-api-user\", \"display_name\": \"Admin API User\", \"email\": \"\", \"suspended\": 0, \"max_buckets\": 1000, \"auid\": 0, \"subusers\": [], \"keys\": [ { \"user\": \"admin-api-user\", \"access_key\": \"NRWGT19TWMYOB1YDBV1Y\", \"secret_key\": \"gr1VEGIV7rxcP3xvXDFCo4UDwwl2YoNrmtRlIAty\" } ], \"swift_keys\": [], \"caps\": [], \"op_mask\": \"read, write, delete\", \"default_placement\": \"\", \"placement_tags\": [], \"bucket_quota\": { \"enabled\": false, \"max_size_kb\": -1, \"max_objects\": -1 }, \"user_quota\": { \"enabled\": false, \"max_size_kb\": -1, \"max_objects\": -1 }, \"temp_url_keys\": [] }", "radosgw-admin caps add --uid=\" USER_NAME \" --caps=\"users=*\"", "[user@client ~]USD radosgw-admin caps add --uid=admin-api-user --caps=\"users=*\"", "{ \"user_id\": \"admin-api-user\", \"display_name\": \"Admin API User\", \"email\": \"\", \"suspended\": 0, \"max_buckets\": 1000, \"auid\": 0, \"subusers\": [], \"keys\": [ { \"user\": \"admin-api-user\", \"access_key\": \"NRWGT19TWMYOB1YDBV1Y\", \"secret_key\": \"gr1VEGIV7rxcP3xvXDFCo4UDwwl2YoNrmtRlIAty\" } ], \"swift_keys\": [], \"caps\": [ { \"type\": \"users\", \"perm\": \"*\" } ], \"op_mask\": \"read, write, delete\", \"default_placement\": \"\", \"placement_tags\": [], \"bucket_quota\": { \"enabled\": false, \"max_size_kb\": -1, \"max_objects\": -1 }, \"user_quota\": { \"enabled\": false, \"max_size_kb\": -1, \"max_objects\": -1 }, \"temp_url_keys\": [] }", "users=read or user-info-without-keys=read", "GET /admin/user?format=json HTTP/1.1 Host: FULLY_QUALIFIED_DOMAIN_NAME", "`users=write`", "PUT /admin/user?format=json HTTP/1.1 Host: FULLY_QUALIFIED_DOMAIN_NAME", "`users=write`", "POST /admin/user?format=json HTTP/1.1 Host: FULLY_QUALIFIED_DOMAIN_NAME", "`users=write`", "DELETE /admin/user?format=json HTTP/1.1 Host: FULLY_QUALIFIED_DOMAIN_NAME", "`users=write`", "PUT /admin/user?subuser&format=json HTTP/1.1 Host FULLY_QUALIFIED_DOMAIN_NAME", "`users=write`", "POST /admin/user?subuser&format=json HTTP/1.1 Host FULLY_QUALIFIED_DOMAIN_NAME", "`users=write`", "DELETE /admin/user?subuser&format=json HTTP/1.1 Host FULLY_QUALIFIED_DOMAIN_NAME", "`users=write`", "PUT /admin/user?caps&format=json HTTP/1.1 Host FULLY_QUALIFIED_DOMAIN_NAME", "`users=write`", "DELETE /admin/user?caps&format=json HTTP/1.1 Host FULLY_QUALIFIED_DOMAIN_NAME", "`users=write`", "PUT /admin/user?key&format=json HTTP/1.1 Host FULLY_QUALIFIED_DOMAIN_NAME", "`users=write`", "DELETE /admin/user?key&format=json HTTP/1.1 Host FULLY_QUALIFIED_DOMAIN_NAME", "POST Action=CreateTopic &Name= TOPIC_NAME [&Attributes.entry.1.key=amqp-exchange&Attributes.entry.1.value= EXCHANGE ] [&Attributes.entry.2.key=amqp-ack-level&Attributes.entry.2.value=none|broker|routable] [&Attributes.entry.3.key=verify-ssl&Attributes.entry.3.value=true|false] [&Attributes.entry.4.key=kafka-ack-level&Attributes.entry.4.value=none|broker] [&Attributes.entry.5.key=use-ssl&Attributes.entry.5.value=true|false] [&Attributes.entry.6.key=ca-location&Attributes.entry.6.value= FILE_PATH ] [&Attributes.entry.7.key=OpaqueData&Attributes.entry.7.value= 
OPAQUE_DATA ] [&Attributes.entry.8.key=push-endpoint&Attributes.entry.8.value= ENDPOINT ] [&Attributes.entry.9.key=persistent&Attributes.entry.9.value=true|false]", "<CreateTopicResponse xmlns=\"https://sns.amazonaws.com/doc/2010-03-31/\"> <CreateTopicResult> <TopicArn></TopicArn> </CreateTopicResult> <ResponseMetadata> <RequestId></RequestId> </ResponseMetadata> </CreateTopicResponse>", "client.create_topic(Name='my-topic' , Attributes={'push-endpoint': 'amqp://127.0.0.1:5672', 'amqp-exchange': 'ex1', 'amqp-ack-level': 'broker'}) \"", "POST Action=GetTopic &TopicArn= TOPIC_ARN", "<GetTopicResponse> <GetTopicRersult> <Topic> <User></User> <Name></Name> <EndPoint> <EndpointAddress></EndpointAddress> <EndpointArgs></EndpointArgs> <EndpointTopic></EndpointTopic> <HasStoredSecret></HasStoredSecret> <Persistent></Persistent> </EndPoint> <TopicArn></TopicArn> <OpaqueData></OpaqueData> </Topic> </GetTopicResult> <ResponseMetadata> <RequestId></RequestId> </ResponseMetadata> </GetTopicResponse>", "POST Action=ListTopics", "<ListTopicdResponse xmlns=\"https://sns.amazonaws.com/doc/2020-03-31/\"> <ListTopicsRersult> <Topics> <member> <User></User> <Name></Name> <EndPoint> <EndpointAddress></EndpointAddress> <EndpointArgs></EndpointArgs> <EndpointTopic></EndpointTopic> </EndPoint> <TopicArn></TopicArn> <OpaqueData></OpaqueData> </member> </Topics> </ListTopicsResult> <ResponseMetadata> <RequestId></RequestId> </ResponseMetadata> </ListTopicsResponse>", "POST Action=DeleteTopic &TopicArn= TOPIC_ARN", "<DeleteTopicResponse xmlns=\"https://sns.amazonaws.com/doc/2020-03-31/\"> <ResponseMetadata> <RequestId></RequestId> </ResponseMetadata> </DeleteTopicResponse>", "radosgw-admin topic list --uid= USER_ID", "radosgw-admin topic list --uid=example", "radosgw-admin topic get --uid= USER_ID --topic= TOPIC_NAME", "radosgw-admin topic get --uid=example --topic=example-topic", "radosgw-admin topic rm --uid= USER_ID --topic= TOPIC_NAME", "radosgw-admin topic rm --uid=example --topic=example-topic", "radosgw-admin notification list --bucket= BUCKET_NAME", "radosgw-admin notification list --bucket bkt2 { \"notifications\": [ { \"TopicArn\": \"arn:aws:sns:default::topic1\", \"Id\": \"notif1\", \"Events\": [ \"s3:ObjectCreated:*\", \"s3:ObjectRemoved:*\" ], \"Filter\": { \"S3Key\": {}, \"S3Metadata\": {}, \"S3Tags\": {} } }, { \"TopicArn\": \"arn:aws:sns:default::topic1\", \"Id\": \"notif2\", \"Events\": [ \"s3:ObjectSynced:*\" ], \"Filter\": { \"S3Key\": {}, \"S3Metadata\": {}, \"S3Tags\": {} } } ] }", "radosgw-admin notification get --bucket BUCKET_NAME --notification-id NOTIFICATION_ID", "radosgw-admin notification get --bucket bkt2 --notification-id notif2 { \"TopicArn\": \"arn:aws:sns:default::topic1\", \"Id\": \"notif2\", \"Events\": [ \"s3:ObjectSynced:*\" ], \"Filter\": { \"S3Key\": {}, \"S3Metadata\": {}, \"S3Tags\": {} } }", "radosgw-admin notification rm --bucket BUCKET_NAME [--notification-id NOTIFICATION_ID ]", "radosgw-admin notification rm --bucket bkt2 --notification-id notif1", "{\"Records\":[ { \"eventVersion\":\"2.1\", \"eventSource\":\"ceph:s3\", \"awsRegion\":\"us-east-1\", \"eventTime\":\"2019-11-22T13:47:35.124724Z\", \"eventName\":\"ObjectCreated:Put\", \"userIdentity\":{ \"principalId\":\"tester\" }, \"requestParameters\":{ \"sourceIPAddress\":\"\" }, \"responseElements\":{ \"x-amz-request-id\":\"503a4c37-85eb-47cd-8681-2817e80b4281.5330.903595\", \"x-amz-id-2\":\"14d2-zone1-zonegroup1\" }, \"s3\":{ \"s3SchemaVersion\":\"1.0\", \"configurationId\":\"mynotif1\", \"bucket\":{ 
\"name\":\"mybucket1\", \"ownerIdentity\":{ \"principalId\":\"tester\" }, \"arn\":\"arn:aws:s3:us-east-1::mybucket1\", \"id\":\"503a4c37-85eb-47cd-8681-2817e80b4281.5332.38\" }, \"object\":{ \"key\":\"myimage1.jpg\", \"size\":\"1024\", \"eTag\":\"37b51d194a7513e45b56f6524f2d51f2\", \"versionId\":\"\", \"sequencer\": \"F7E6D75DC742D108\", \"metadata\":[], \"tags\":[] } }, \"eventId\":\"\", \"opaqueData\":\"[email protected]\" } ]}", "`buckets=read`", "GET /admin/bucket?format=json HTTP/1.1 Host FULLY_QUALIFIED_DOMAIN_NAME", "GET /admin/bucket?index&format=json HTTP/1.1 Host FULLY_QUALIFIED_DOMAIN_NAME", "`buckets=write`", "DELETE /admin/bucket?format=json HTTP/1.1 Host FULLY_QUALIFIED_DOMAIN_NAME", "`buckets=write`", "PUT /admin/bucket?format=json HTTP/1.1 Host FULLY_QUALIFIED_DOMAIN_NAME", "`buckets=write`", "POST /admin/bucket?format=json HTTP/1.1 Host FULLY_QUALIFIED_DOMAIN_NAME", "`buckets=read`", "GET /admin/bucket?policy&format=json HTTP/1.1 Host FULLY_QUALIFIED_DOMAIN_NAME", "`buckets=write`", "DELETE /admin/bucket?object&format=json HTTP/1.1 Host FULLY_QUALIFIED_DOMAIN_NAME", "GET /admin/user?quota&uid= UID &quota-type=user", "PUT /admin/user?quota&uid= UID &quota-type=user", "`buckets=read`", "GET /admin/bucket?format=json HTTP/1.1 Host FULLY_QUALIFIED_DOMAIN_NAME", "PUT /admin/user?quota&uid= UID &quota-type=bucket", "`usage=read`", "GET /admin/usage?format=json HTTP/1.1 Host: FULLY_QUALIFIED_DOMAIN_NAME", "`usage=write`", "DELETE /admin/usage?format=json HTTP/1.1 Host: FULLY_QUALIFIED_DOMAIN_NAME" ]
https://docs.redhat.com/en/documentation/red_hat_ceph_storage/8/html/developer_guide/ceph-object-gateway-administrative-api
Chapter 11. Configuring alert notifications
Chapter 11. Configuring alert notifications In OpenShift Container Platform, an alert is fired when the conditions defined in an alerting rule are true. An alert provides a notification that a set of circumstances are apparent within a cluster. Firing alerts can be viewed in the Alerting UI in the OpenShift Container Platform web console by default. After an installation, you can configure OpenShift Container Platform to send alert notifications to external systems. 11.1. Sending notifications to external systems In OpenShift Container Platform 4.18, firing alerts can be viewed in the Alerting UI. Alerts are not configured by default to be sent to any notification systems. You can configure OpenShift Container Platform to send alerts to the following receiver types: PagerDuty Webhook Email Slack Microsoft Teams Routing alerts to receivers enables you to send timely notifications to the appropriate teams when failures occur. For example, critical alerts require immediate attention and are typically paged to an individual or a critical response team. Alerts that provide non-critical warning notifications might instead be routed to a ticketing system for non-immediate review. Checking that alerting is operational by using the watchdog alert OpenShift Container Platform monitoring includes a watchdog alert that fires continuously. Alertmanager repeatedly sends watchdog alert notifications to configured notification providers. The provider is usually configured to notify an administrator when it stops receiving the watchdog alert. This mechanism helps you quickly identify any communication issues between Alertmanager and the notification provider. 11.2. Additional resources About OpenShift Container Platform monitoring Configuring alert notifications
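As a hedged illustration of the receiver types listed above, the following sketch writes a minimal Alertmanager configuration that routes critical alerts to a webhook receiver and loads it into the alertmanager-main secret. The webhook URL is a placeholder, and the secret-replacement command reflects a commonly documented workflow rather than this section; see the monitoring documentation referenced above for the supported procedure.

```bash
# Sketch of an Alertmanager configuration with a webhook receiver for critical
# alerts. The receiver URL is a placeholder; adapt it to your notification system.
cat > alertmanager.yaml <<'EOF'
route:
  group_by: [namespace]
  receiver: default
  routes:
  - matchers:
    - severity = critical
    receiver: team-webhook
receivers:
- name: default
- name: team-webhook
  webhook_configs:
  - url: https://alert-gateway.example.com/hooks/ocp
EOF

# Assumed workflow: replace the alertmanager-main secret with the new configuration.
oc -n openshift-monitoring create secret generic alertmanager-main \
  --from-file=alertmanager.yaml --dry-run=client -o yaml \
  | oc -n openshift-monitoring replace -f -
```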
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.18/html/postinstallation_configuration/configuring-alert-notifications
Deploying OpenShift Data Foundation using IBM Cloud
Deploying OpenShift Data Foundation using IBM Cloud Red Hat OpenShift Data Foundation 4.14 Instructions on deploying Red Hat OpenShift Data Foundation using IBM Cloud Red Hat Storage Documentation Team Abstract Read this document for instructions about how to install Red Hat OpenShift Data Foundation using Red Hat OpenShift Container Platform on IBM cloud clusters.
null
https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.14/html/deploying_openshift_data_foundation_using_ibm_cloud/index
Chapter 4. Configuring multi-architecture compute machines on an OpenShift Container Platform cluster
Chapter 4. Configuring multi-architecture compute machines on an OpenShift Container Platform cluster An OpenShift Container Platform cluster with multi-architecture compute machines is a cluster that supports compute machines with different architectures. You can deploy a cluster with multi-architecture compute machines by creating an Azure installer-provisioned cluster using the multi-architecture installer binary. For Azure installation, see Installing a cluster on Azure with customizations . Warning The multi-architecture compute machines Technology Preview feature has limited usability with installing, upgrading, and running payloads. The following procedures explain how to generate an ARM64 boot image and create an Azure compute machine set with the ARM64 boot image. This adds ARM64 compute nodes to your cluster and deploys the desired amount of ARM64 virtual machines (VM). This section also shows how to upgrade your existing cluster to a cluster that supports multi-architecture compute machines. Clusters with multi-architecture compute machines are only available on Azure installer-provisioned infrastructures with x86_64 control plane machines. Important OpenShift Container Platform clusters with multi-architecture compute machines on Azure installer-provisioned infrastructure installations is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . 4.1. Creating an ARM64 boot image using the Azure image gallery To configure your cluster with multi-architecture compute machines, you must create an ARM64 boot image and add it to your Azure compute machine set. The following procedure describes how to manually generate an ARM64 boot image. Prerequisites You installed the Azure CLI ( az ). You created a single-architecture Azure installer-provisioned cluster with the multi-architecture installer binary. Procedure Log in to your Azure account: USD az login Create a storage account and upload the ARM64 virtual hard disk (VHD) to your storage account. The OpenShift Container Platform installation program creates a resource group, however, the boot image can also be uploaded to a custom named resource group: USD az storage account create -n USD{STORAGE_ACCOUNT_NAME} -g USD{RESOURCE_GROUP} -l westus --sku Standard_LRS 1 1 The westus object is an example region. 
Create a storage container using the storage account you generated: USD az storage container create -n USD{CONTAINER_NAME} --account-name USD{STORAGE_ACCOUNT_NAME} You must use the OpenShift Container Platform installation program JSON file to extract the URL and aarch64 VHD name: Extract the URL field and set it to RHCOS_VHD_ORIGIN_URL as the file name by running the following command: USD RHCOS_VHD_ORIGIN_URL=USD(oc -n openshift-machine-config-operator get configmap/coreos-bootimages -o jsonpath='{.data.stream}' | jq -r '.architectures.aarch64."rhel-coreos-extensions"."azure-disk".url') Extract the aarch64 VHD name and set it to BLOB_NAME as the file name by running the following command: USD BLOB_NAME=rhcos-USD(oc -n openshift-machine-config-operator get configmap/coreos-bootimages -o jsonpath='{.data.stream}' | jq -r '.architectures.aarch64."rhel-coreos-extensions"."azure-disk".release')-azure.aarch64.vhd Generate a shared access signature (SAS) token. Use this token to upload the RHCOS VHD to your storage container with the following commands: USD end=`date -u -d "30 minutes" '+%Y-%m-%dT%H:%MZ'` USD sas=`az storage container generate-sas -n USD{CONTAINER_NAME} --account-name USD{STORAGE_ACCOUNT_NAME} --https-only --permissions dlrw --expiry USDend -o tsv` Copy the RHCOS VHD into the storage container: USD az storage blob copy start --account-name USD{STORAGE_ACCOUNT_NAME} --sas-token "USDsas" \ --source-uri "USD{RHCOS_VHD_ORIGIN_URL}" \ --destination-blob "USD{BLOB_NAME}" --destination-container USD{CONTAINER_NAME} You can check the status of the copying process with the following command: USD az storage blob show -c USD{CONTAINER_NAME} -n USD{BLOB_NAME} --account-name USD{STORAGE_ACCOUNT_NAME} | jq .properties.copy Example output { "completionTime": null, "destinationSnapshot": null, "id": "1fd97630-03ca-489a-8c4e-cfe839c9627d", "incrementalCopy": null, "progress": "17179869696/17179869696", "source": "https://rhcos.blob.core.windows.net/imagebucket/rhcos-411.86.202207130959-0-azure.aarch64.vhd", "status": "success", 1 "statusDescription": null } 1 If the status parameter displays the success object, the copying process is complete. Create an image gallery using the following command: USD az sig create --resource-group USD{RESOURCE_GROUP} --gallery-name USD{GALLERY_NAME} Use the image gallery to create an image definition. In the following example command, rhcos-arm64 is the name of the image definition. USD az sig image-definition create --resource-group USD{RESOURCE_GROUP} --gallery-name USD{GALLERY_NAME} --gallery-image-definition rhcos-arm64 --publisher RedHat --offer arm --sku arm64 --os-type linux --architecture Arm64 --hyper-v-generation V2 To get the URL of the VHD and set it to RHCOS_VHD_URL as the file name, run the following command: USD RHCOS_VHD_URL=USD(az storage blob url --account-name USD{STORAGE_ACCOUNT_NAME} -c USD{CONTAINER_NAME} -n "USD{BLOB_NAME}" -o tsv) Use the RHCOS_VHD_URL file, your storage account, resource group, and image gallery to create an image version. In the following example, 1.0.0 is the image version. USD az sig image-version create --resource-group USD{RESOURCE_GROUP} --gallery-name USD{GALLERY_NAME} --gallery-image-definition rhcos-arm64 --gallery-image-version 1.0.0 --os-vhd-storage-account USD{STORAGE_ACCOUNT_NAME} --os-vhd-uri USD{RHCOS_VHD_URL} Your ARM64 boot image is now generated. 
You can access the ID of your image with the following command: USD az sig image-version show -r USDGALLERY_NAME -g USDRESOURCE_GROUP -i rhcos-arm64 -e 1.0.0 The following example image ID is used in the resourceID parameter of the machine set: Example resourceID /resourceGroups/USD{RESOURCE_GROUP}/providers/Microsoft.Compute/galleries/USD{GALLERY_NAME}/images/rhcos-arm64/versions/1.0.0 4.2. Adding a multi-architecture compute machine set to your cluster using the ARM64 boot image To add ARM64 compute nodes to your cluster, you must create an Azure compute machine set that uses the ARM64 boot image. To create your own custom compute machine set on Azure, see "Creating a compute machine set on Azure". Prerequisites You installed the OpenShift CLI ( oc ). Procedure Create a machine set and modify the resourceID and vmSize parameters with the following command. This machine set will control the ARM64 worker nodes in your cluster: USD oc create -f arm64-machine-set-0.yaml Sample YAML machine set with ARM64 boot image apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machine-role: worker machine.openshift.io/cluster-api-machine-type: worker name: <infrastructure_id>-arm64-machine-set-0 namespace: openshift-machine-api spec: replicas: 2 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-arm64-machine-set-0 template: metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machine-role: worker machine.openshift.io/cluster-api-machine-type: worker machine.openshift.io/cluster-api-machineset: <infrastructure_id>-arm64-machine-set-0 spec: lifecycleHooks: {} metadata: {} providerSpec: value: acceleratedNetworking: true apiVersion: machine.openshift.io/v1beta1 credentialsSecret: name: azure-cloud-credentials namespace: openshift-machine-api image: offer: "" publisher: "" resourceID: /resourceGroups/USD{RESOURCE_GROUP}/providers/Microsoft.Compute/galleries/USD{GALLERY_NAME}/images/rhcos-arm64/versions/1.0.0 1 sku: "" version: "" kind: AzureMachineProviderSpec location: <region> managedIdentity: <infrastructure_id>-identity networkResourceGroup: <infrastructure_id>-rg osDisk: diskSettings: {} diskSizeGB: 128 managedDisk: storageAccountType: Premium_LRS osType: Linux publicIP: false publicLoadBalancer: <infrastructure_id> resourceGroup: <infrastructure_id>-rg subnet: <infrastructure_id>-worker-subnet userDataSecret: name: worker-user-data vmSize: Standard_D4ps_v5 2 vnet: <infrastructure_id>-vnet zone: "<zone>" 1 Set the resourceID parameter to the arm64 boot image. 2 Set the vmSize parameter to the instance type used in your installation. Some example instance types are Standard_D4ps_v5 or D8ps . Verification Verify that the new ARM64 machines are running by entering the following command: USD oc get machineset -n openshift-machine-api Example output NAME DESIRED CURRENT READY AVAILABLE AGE <infrastructure_id>-arm64-machine-set-0 2 2 2 2 10m You can check that the nodes are ready and schedulable with the following command: USD oc get nodes Additional resources Creating a compute machine set on Azure 4.3. Upgrading a cluster with multi-architecture compute machines You must perform an explicit upgrade command to upgrade your existing cluster to a cluster that supports multi-architecture compute machines.
Prerequisites You installed the OpenShift CLI ( oc ). Procedure To manually upgrade your cluster, use the following command: USD oc adm upgrade --allow-explicit-upgrade --to-image <image-pullspec> 1 1 You can access the image-pullspec object from the mixed-arch mirrors page in the release.txt file.
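After adding the ARM64 machine set or upgrading to a payload that supports multi-architecture compute machines, it can be useful to confirm which architecture each node reports. This supplementary check relies on the standard kubernetes.io/arch node label and is not part of the documented procedure.

```bash
# List nodes together with their reported CPU architecture.
oc get nodes -L kubernetes.io/arch

# Show only the arm64 nodes created by the new machine set.
oc get nodes -l kubernetes.io/arch=arm64
```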
[ "az login", "az storage account create -n USD{STORAGE_ACCOUNT_NAME} -g USD{RESOURCE_GROUP} -l westus --sku Standard_LRS 1", "az storage container create -n USD{CONTAINER_NAME} --account-name USD{STORAGE_ACCOUNT_NAME}", "RHCOS_VHD_ORIGIN_URL=USD(oc -n openshift-machine-config-operator get configmap/coreos-bootimages -o jsonpath='{.data.stream}' | jq -r '.architectures.aarch64.\"rhel-coreos-extensions\".\"azure-disk\".url')", "BLOB_NAME=rhcos-USD(oc -n openshift-machine-config-operator get configmap/coreos-bootimages -o jsonpath='{.data.stream}' | jq -r '.architectures.aarch64.\"rhel-coreos-extensions\".\"azure-disk\".release')-azure.aarch64.vhd", "end=`date -u -d \"30 minutes\" '+%Y-%m-%dT%H:%MZ'`", "sas=`az storage container generate-sas -n USD{CONTAINER_NAME} --account-name USD{STORAGE_ACCOUNT_NAME} --https-only --permissions dlrw --expiry USDend -o tsv`", "az storage blob copy start --account-name USD{STORAGE_ACCOUNT_NAME} --sas-token \"USDsas\" --source-uri \"USD{RHCOS_VHD_ORIGIN_URL}\" --destination-blob \"USD{BLOB_NAME}\" --destination-container USD{CONTAINER_NAME}", "az storage blob show -c USD{CONTAINER_NAME} -n USD{BLOB_NAME} --account-name USD{STORAGE_ACCOUNT_NAME} | jq .properties.copy", "{ \"completionTime\": null, \"destinationSnapshot\": null, \"id\": \"1fd97630-03ca-489a-8c4e-cfe839c9627d\", \"incrementalCopy\": null, \"progress\": \"17179869696/17179869696\", \"source\": \"https://rhcos.blob.core.windows.net/imagebucket/rhcos-411.86.202207130959-0-azure.aarch64.vhd\", \"status\": \"success\", 1 \"statusDescription\": null }", "az sig create --resource-group USD{RESOURCE_GROUP} --gallery-name USD{GALLERY_NAME}", "az sig image-definition create --resource-group USD{RESOURCE_GROUP} --gallery-name USD{GALLERY_NAME} --gallery-image-definition rhcos-arm64 --publisher RedHat --offer arm --sku arm64 --os-type linux --architecture Arm64 --hyper-v-generation V2", "RHCOS_VHD_URL=USD(az storage blob url --account-name USD{STORAGE_ACCOUNT_NAME} -c USD{CONTAINER_NAME} -n \"USD{BLOB_NAME}\" -o tsv)", "az sig image-version create --resource-group USD{RESOURCE_GROUP} --gallery-name USD{GALLERY_NAME} --gallery-image-definition rhcos-arm64 --gallery-image-version 1.0.0 --os-vhd-storage-account USD{STORAGE_ACCOUNT_NAME} --os-vhd-uri USD{RHCOS_VHD_URL}", "az sig image-version show -r USDGALLERY_NAME -g USDRESOURCE_GROUP -i rhcos-arm64 -e 1.0.0", "/resourceGroups/USD{RESOURCE_GROUP}/providers/Microsoft.Compute/galleries/USD{GALLERY_NAME}/images/rhcos-arm64/versions/1.0.0", "oc create -f arm64-machine-set-0.yaml", "apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machine-role: worker machine.openshift.io/cluster-api-machine-type: worker name: <infrastructure_id>-arm64-machine-set-0 namespace: openshift-machine-api spec: replicas: 2 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-arm64-machine-set-0 template: metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machine-role: worker machine.openshift.io/cluster-api-machine-type: worker machine.openshift.io/cluster-api-machineset: <infrastructure_id>-arm64-machine-set-0 spec: lifecycleHooks: {} metadata: {} providerSpec: value: acceleratedNetworking: true apiVersion: machine.openshift.io/v1beta1 credentialsSecret: name: azure-cloud-credentials namespace: 
openshift-machine-api image: offer: \"\" publisher: \"\" resourceID: /resourceGroups/USD{RESOURCE_GROUP}/providers/Microsoft.Compute/galleries/USD{GALLERY_NAME}/images/rhcos-arm64/versions/1.0.0 1 sku: \"\" version: \"\" kind: AzureMachineProviderSpec location: <region> managedIdentity: <infrastructure_id>-identity networkResourceGroup: <infrastructure_id>-rg osDisk: diskSettings: {} diskSizeGB: 128 managedDisk: storageAccountType: Premium_LRS osType: Linux publicIP: false publicLoadBalancer: <infrastructure_id> resourceGroup: <infrastructure_id>-rg subnet: <infrastructure_id>-worker-subnet userDataSecret: name: worker-user-data vmSize: Standard_D4ps_v5 2 vnet: <infrastructure_id>-vnet zone: \"<zone>\"", "oc get machineset -n openshift-machine-api", "NAME DESIRED CURRENT READY AVAILABLE AGE <infrastructure_id>-arm64-machine-set-0 2 2 2 2 10m", "oc get nodes", "oc adm upgrade --allow-explicit-upgrade --to-image <image-pullspec> 1" ]
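The VHD copy started earlier in this chapter can take some time to complete. The following optional helper polls the same az storage blob show command until the copy status shown in the example output reaches success; it assumes the STORAGE_ACCOUNT_NAME, CONTAINER_NAME, and BLOB_NAME variables are still set from the earlier steps and that jq is installed.

```bash
# Optional helper: poll the blob copy until it reports "success".
while true; do
  status="$(az storage blob show -c "${CONTAINER_NAME}" -n "${BLOB_NAME}" \
    --account-name "${STORAGE_ACCOUNT_NAME}" | jq -r .properties.copy.status)"
  echo "copy status: ${status}"
  [ "${status}" = "success" ] && break
  [ "${status}" = "failed" ] && { echo "copy failed" >&2; exit 1; }
  sleep 30
done
```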
https://docs.redhat.com/en/documentation/openshift_container_platform/4.11/html/post-installation_configuration/post-install-multi-architecture-configuration
15.6. Migrating with virt-manager
15.6. Migrating with virt-manager This section covers migrating a KVM guest virtual machine with virt-manager from one host physical machine to another. Connect to the target host physical machine In the virt-manager interface , connect to the target host physical machine by selecting the File menu, then click Add Connection . Add connection The Add Connection window appears. Figure 15.1. Adding a connection to the target host physical machine Enter the following details: Hypervisor : Select QEMU/KVM . Method : Select the connection method. Username : Enter the user name for the remote host physical machine. Hostname : Enter the host name for the remote host physical machine. Note For more information on the connection options, see Section 19.5, "Adding a Remote Connection" . Click Connect . An SSH connection is used in this example, so the specified user's password must be entered in the next step. Figure 15.2. Enter password Configure shared storage Ensure that both the source and the target host are sharing storage, for example using NFS . Migrate guest virtual machines Right-click the guest that is to be migrated, and click Migrate . In the New Host field, use the drop-down list to select the host physical machine you wish to migrate the guest virtual machine to and click Migrate . Figure 15.3. Choosing the destination host physical machine and starting the migration process A progress window appears. Figure 15.4. Progress window If the migration finishes without any problems, virt-manager displays the newly migrated guest virtual machine running in the destination host. Figure 15.5. Migrated guest virtual machine running in the destination host physical machine
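If a graphical session is not available, the same live migration can be performed from the command line with virsh. This is a generic sketch rather than part of the virt-manager procedure: the guest name and destination URI are placeholders, and the shared-storage requirement described above still applies.

```bash
# Live-migrate the running guest "guest1" to the destination host over SSH.
# Both hosts must have access to the guest's storage, for example over NFS.
virsh migrate --live --verbose guest1 qemu+ssh://target.example.com/system
```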
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/virtualization_deployment_and_administration_guide/sect-kvm_live_migration-migrating_with_virt_manager
Installation Guide
Installation Guide Red Hat CodeReady Workspaces 2.1 Installing Red Hat CodeReady Workspaces 2.1 Supriya Takkhi Robert Kratky [email protected] Michal Maler [email protected] Fabrice Flore-Thebault [email protected] Yana Hontyk [email protected] Red Hat Developer Group Documentation Team [email protected]
null
https://docs.redhat.com/en/documentation/red_hat_codeready_workspaces/2.1/html/installation_guide/index
Chapter 11. Networking Tapset
Chapter 11. Networking Tapset This family of probe points is used to probe the activities of the network device and protocol layers.
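As a quick illustration of how these probe points are used (not part of the reference itself), the following one-liner attaches to the network device layer. It assumes the netdev.transmit probe point and its dev_name and length variables are available in this tapset, and that SystemTap can run on the host (root privileges and the matching kernel debuginfo packages).

```bash
# Print the device name and payload length for every transmitted packet.
stap -e 'probe netdev.transmit { printf("%s transmitted %d bytes\n", dev_name, length) }'
```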
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/systemtap_tapset_reference/networking.stp
2.11. SystemTap
2.11. SystemTap SystemTap is a tracing and probing tool that lets you monitor and analyze operating system activities, especially kernel activities, in fine detail. It provides information similar to the output of tools like top, ps, netstat, and iostat, but includes additional options for filtering and analyzing collected data. SystemTap provides a deeper, more precise analysis of system activities and application behavior to allow you to pinpoint system and application bottlenecks. For more detailed information about SystemTap, see the Red Hat Enterprise Linux 7 SystemTap Beginners Guide and the Red Hat Enterprise Linux 7 SystemTap Tapset Reference .
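As a small example of the top-like summaries described above, the following generic one-liner tallies system calls per process for five seconds and prints the busiest processes. It is illustrative only and assumes SystemTap is installed with the kernel debuginfo packages required to resolve the syscall probes.

```bash
# Count system calls per process for 5 seconds, then print the top 10 callers.
stap -e 'global calls
probe syscall.* { calls[execname()]++ }
probe timer.s(5) {
  foreach (name in calls- limit 10)
    printf("%-20s %d\n", name, calls[name])
  exit()
}'
```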
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/performance_tuning_guide/sect-red_hat_enterprise_linux-performance_tuning_guide-performance_monitoring_tools-systemtap
Chapter 1. Planning your migration of the ML2 mechanism driver from OVS to OVN
Chapter 1. Planning your migration of the ML2 mechanism driver from OVS to OVN Red Hat chose ML2/OVN as the default mechanism driver for all new deployments starting with RHOSP 15.0 because it offers immediate advantages over the ML2/OVS mechanism driver for most customers today. Those advantages multiply with each release while we continue to enhance and improve the ML2/OVN feature set. The ML2/OVS mechanism driver is deprecated in RHOSP 17.0. Over several releases, Red Hat is replacing ML2/OVS with ML2/OVN. Support is available for the deprecated ML2/OVS mechanism driver through the RHOSP 17 releases. During this time, the ML2/OVS driver remains in maintenance mode, receiving bug fixes and normal support, and most new feature development happens in the ML2/OVN mechanism driver. In RHOSP 18.0, Red Hat plans to completely remove the ML2/OVS mechanism driver and stop supporting it. If your existing Red Hat OpenStack Platform (RHOSP) deployment uses the ML2/OVS mechanism driver, start now to evaluate the benefits and feasibility of replacing the ML2/OVS mechanism driver with the ML2/OVN mechanism driver. Migration is supported in RHOSP 16.2 and will be supported in RHOSP 17.1. Migration tools are available in RHOSP 17.0 for test purposes only. Note Red Hat requires that you file a proactive support case before attempting a migration from ML2/OVS to ML2/OVN. Red Hat does not support migrations without the proactive support case. See How to submit a Proactive Case . Engage your Red Hat Technical Account Manager or Red Hat Global Professional Services early in this evaluation. In addition to helping you file the required proactive support case if you decide to migrate, Red Hat can help you plan and prepare, starting with the following basic questions. When should you migrate? Timing depends on many factors, including your business needs and the status of our continuing improvements to the ML2/OVN offering. For instance, security groups logging is planned for a future RHOSP release. If you need that feature, you might plan for a migration after the feature is available. See Limitations of the ML2/OVN mechanism driver and ML2/OVS to ML2/OVN in-place migration: validated and prohibited scenarios . In-place migration or parallel migration? Depending on a variety of factors, you can choose between the following basic approaches to migration. Parallel migration. Create a new, parallel deployment that uses ML2/OVN and then move your operations to that deployment. In-place migration. Use the ovn_migration.sh script as described in this document. Note that Red Hat supports the ovn_migration.sh script only in deployments that are managed by RHOSP director. Warning An ML2/OVS to ML2/OVN migration alters the environment in ways that might not be completely reversible. A failed or interrupted migration can leave the OpenStack environment inoperable. Before migrating in a production environment, file a proactive support case. Then work with your Red Hat Technical Account Manager or Red Hat Global Professional Services to create a backup and migration plan and test the migration in a stage environment that closely resembles your production environment. 1.1. Limitations of the ML2/OVN mechanism driver Some features available with the ML2/OVS mechanism driver are not yet supported with the ML2/OVN mechanism driver. 1.1.1. 
ML2/OVS features not yet supported by ML2/OVN Feature Notes Track this Feature Provisioning Baremetal Machines with OVN DHCP The built-in DHCP server on OVN presently can not provision baremetal nodes. It cannot serve DHCP for the provisioning networks. Chainbooting iPXE requires tagging ( --dhcp-match in dnsmasq), which is not supported in the OVN DHCP server. https://bugzilla.redhat.com/show_bug.cgi?id=1622154 1.1.2. Core OVN limitations North/south routing on VF(direct) ports on VLAN tenant networks does not work with SR-IOV because the external ports are not colocated with the logical router's gateway ports. See https://bugs.launchpad.net/neutron/+bug/1875852 . 1.2. ML2/OVS to ML2/OVN in-place migration: validated and prohibited scenarios Red Hat continues to test and refine in-place migration scenarios. Work with your Red Hat Technical Account Manager or Global Professional Services to determine whether your OVS deployment meets the criteria for a valid in-place migration scenario. 1.2.1. ML2/OVS to ML2/OVN in-place migration scenarios that have not been verified You cannot perform an in-place ML2/OVS to ML2/OVN migration in the following scenarios until Red Hat announces that the underlying issues are resolved. OVS uses trunk ports If your ML2/OVS deployment uses trunk ports, do not perform an ML2/OVS to ML2/OVN migration. The migration does not properly set up the trunked ports in the OVN environment. To track progress on this issue, see https://bugzilla.redhat.com/show_bug.cgi?id=1857652 . DVR with VLAN project (tenant) networks Do not migrate to ML2/OVN with DVR and VLAN project networks. You can migrate to ML2/OVN with centralized routing. To track progress on this issue, see https://bugzilla.redhat.com/show_bug.cgi?id=1766930 . 1.2.2. ML2/OVS to ML2/OVN in-place migration and security group rules Ensure that any custom security group rules in your originating ML2/OVS deployment are compatible with the target ML2/OVN deployment. For example, the default security group includes rules that allow egress to the DHCP server. If you deleted those rules in your ML2/OVS deployment, ML2/OVS automatically adds implicit rules that allow egress to the DHCP server. Those implicit rules are not supported by ML2/OVN, so in your target ML2/OVN environment, DHCP and metadata traffic would not reach the DHCP server and the instance would not boot. In this case, to restore DHCP access, you could add the following rules:
[ "Allow VM to contact dhcp server (ipv4) openstack security group rule create --egress --ethertype IPv4 --protocol udp --dst-port 67 USD{SEC_GROUP_ID} # Allow VM to contact metadata server (ipv4) openstack security group rule create --egress --ethertype IPv4 --protocol tcp --remote-ip 169.254.169.254 USD{SEC_GROUP_ID} # Allow VM to contact dhcp server (ipv6, non-slaac). Be aware that the remote-ip may vary depending on your use case! openstack security group rule create --egress --ethertype IPv6 --protocol udp --dst-port 547 --remote-ip ff02::1:2 USD{SEC_GROUP_ID} # Allow VM to contact metadata server (ipv6) openstack security group rule create --egress --ethertype IPv6 --protocol tcp --remote-ip fe80::a9fe:a9fe USD{SEC_GROUP_ID}" ]
https://docs.redhat.com/en/documentation/red_hat_openstack_platform/17.0/html/testing_migration_of_the_networking_service_to_the_ml2ovn_mechanism_driver/planning-your-migration-ovs-to-ovn
Chapter 8. Direct Migration Requirements
Chapter 8. Direct Migration Requirements Direct Migration is available with Migration Toolkit for Containers (MTC) 1.4.0 or later. There are two parts to Direct Migration: Direct Volume Migration Direct Image Migration Direct Migration enables the migration of persistent volumes and internal images directly from the source cluster to the destination cluster without an intermediary replication repository (object storage). 8.1. Prerequisites Expose the internal registries for both clusters (source and destination) involved in the migration for external traffic. Ensure the remote source and destination clusters can communicate using OpenShift Container Platform routes on port 443. Configure the exposed registry route in the source and destination MTC clusters; do this by specifying the spec.exposedRegistryPath field or from the MTC UI. Note If the destination cluster is the same as the host cluster (where a migration controller exists), there is no need to configure the exposed registry route for that particular MTC cluster. The spec.exposedRegistryPath is required only for Direct Image Migration and not Direct Volume Migration. Ensure the two spec flags in MigPlan custom resource (CR) indirectImageMigration and indirectVolumeMigration are set to false for Direct Migration to be performed. The default value for these flags is false . The Direct Migration feature of MTC uses the Rsync utility. 8.2. Rsync configuration for direct volume migration Direct Volume Migration (DVM) in MTC uses Rsync to synchronize files between the source and the target persistent volumes (PVs), using a direct connection between the two PVs. Rsync is a command-line tool that allows you to transfer files and directories to local and remote destinations. The rsync command used by DVM is optimized for clusters functioning as expected. The MigrationController CR exposes the following variables to configure rsync_options in Direct Volume Migration: Variable Type Default value Description rsync_opt_bwlimit int Not set When set to a positive integer, --bwlimit=<int> option is added to Rsync command. rsync_opt_archive bool true Sets the --archive option in the Rsync command. rsync_opt_partial bool true Sets the --partial option in the Rsync command. rsync_opt_delete bool true Sets the --delete option in the Rsync command. rsync_opt_hardlinks bool true Sets the --hard-links option in the Rsync command. rsync_opt_info string COPY2 DEL2 REMOVE2 SKIP2 FLIST2 PROGRESS2 STATS2 Enables detailed logging in Rsync Pod. rsync_opt_extras string Empty Reserved for any other arbitrary options. The options set through the variables above are global for all migrations. The configuration will take effect for all future migrations as soon as the Operator successfully reconciles the MigrationController CR. Any ongoing migration can use the updated settings depending on which step it currently is in. Therefore, it is recommended that the settings be applied before running a migration. Users can always update the settings as needed. Use the rsync_opt_extras variable with caution. Any options passed using this variable are appended to the rsync command, in addition to the options set through the variables above. Ensure you add white spaces when specifying more than one option. Any error in specifying options can lead to a failed migration. However, you can update the MigrationController CR as many times as you require for future migrations. Customizing the rsync_opt_info flag can adversely affect the progress reporting capabilities in MTC.
However, removing progress reporting can have a performance advantage. This option should only be used when the performance of Rsync operation is observed to be unacceptable. Note The default configuration used by DVM is tested in various environments. It is acceptable for most production use cases provided the clusters are healthy and performing well. These configuration variables should be used in case the default settings do not work and the Rsync operation fails. 8.2.1. Resource limit configurations for Rsync pods The MigrationController CR exposes following variables to configure resource usage requirements and limits on Rsync: Variable Type Default Description source_rsync_pod_cpu_limits string 1 Source rsync pod's CPU limit source_rsync_pod_memory_limits string 1Gi Source rsync pod's memory limit source_rsync_pod_cpu_requests string 400m Source rsync pod's cpu requests source_rsync_pod_memory_requests string 1Gi Source rsync pod's memory requests target_rsync_pod_cpu_limits string 1 Target rsync pod's cpu limit target_rsync_pod_cpu_requests string 400m Target rsync pod's cpu requests target_rsync_pod_memory_limits string 1Gi Target rsync pod's memory limit target_rsync_pod_memory_requests string 1Gi Target rsync pod's memory requests 8.2.1.1. Supplemental group configuration for Rsync pods If Persistent Volume Claims (PVC) are using a shared storage, the access to storage can be configured by adding supplemental groups to Rsync pod definitions in order for the pods to allow access: Variable Type Default Description src_supplemental_groups string Not Set Comma separated list of supplemental groups for source Rsync pods target_supplemental_groups string Not Set Comma separated list of supplemental groups for target Rsync Pods For example, the MigrationController CR can be updated to set the values: spec: src_supplemental_groups: "1000,2000" target_supplemental_groups: "2000,3000" 8.2.1.2. Rsync retry configuration With Migration Toolkit for Containers (MTC) 1.4.3 and later, a new ability of retrying a failed Rsync operation is introduced. By default, the migration controller retries Rsync until all of the data is successfully transferred from the source to the target volume or a specified number of retries is met. The default retry limit is set to 20 . For larger volumes, a limit of 20 retries may not be sufficient. You can increase the retry limit by using the following variable in the MigrationController CR: apiVersion: migration.openshift.io/v1alpha1 kind: MigrationController metadata: name: migration-controller namespace: openshift-migration spec: [...] rsync_backoff_limit: 40 In this example, the retry limit is increased to 40 . 8.2.1.3. Running Rsync as either root or non-root OpenShift Container Platform environments have the PodSecurityAdmission controller enabled by default. This controller requires cluster administrators to enforce Pod Security Standards by means of namespace labels. All workloads in the cluster are expected to run one of the following Pod Security Standard levels: Privileged , Baseline or Restricted . Every cluster has its own default policy set. To guarantee successful data transfer in all environments, Migration Toolkit for Containers (MTC) 1.7.5 introduced changes in Rsync pods, including running Rsync pods as non-root user by default. This ensures that data transfer is possible even for workloads that do not necessarily require higher privileges. This change was made because it is best to run workloads with the lowest level of privileges possible. 
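Returning to the Rsync tuning variables described earlier in this section, the following sketch shows one way to apply them by patching the MigrationController CR. The variable names come from the tables above; the values and the patch command itself are illustrative assumptions, so adjust them for your clusters.

```bash
# Example only: cap Rsync bandwidth and raise the source Rsync pod's limits.
oc -n openshift-migration patch migrationcontroller migration-controller \
  --type merge -p '{
    "spec": {
      "rsync_opt_bwlimit": 10000,
      "source_rsync_pod_cpu_limits": "2",
      "source_rsync_pod_memory_limits": "2Gi"
    }
  }'
```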
8.2.1.3.1. Manually overriding default non-root operation for data transfer Although running Rsync pods as non-root user works in most cases, data transfer might fail when you run workloads as root user on the source side. MTC provides two ways to manually override default non-root operation for data transfer: Configure all migrations to run an Rsync pod as root on the destination cluster for all migrations. Run an Rsync pod as root on the destination cluster per migration. In both cases, you must set the following labels on the source side of any namespaces that are running workloads with higher privileges before migration: enforce , audit , and warn. To learn more about Pod Security Admission and setting values for labels, see Controlling pod security admission synchronization . 8.2.1.3.2. Configuring the MigrationController CR as root or non-root for all migrations By default, Rsync runs as non-root. On the destination cluster, you can configure the MigrationController CR to run Rsync as root. Procedure Configure the MigrationController CR as follows: apiVersion: migration.openshift.io/v1alpha1 kind: MigrationController metadata: name: migration-controller namespace: openshift-migration spec: [...] migration_rsync_privileged: true This configuration will apply to all future migrations. 8.2.1.3.3. Configuring the MigMigration CR as root or non-root per migration On the destination cluster, you can configure the MigMigration CR to run Rsync as root or non-root, with the following non-root options: As a specific user ID (UID) As a specific group ID (GID) Procedure To run Rsync as root, configure the MigMigration CR according to this example: apiVersion: migration.openshift.io/v1alpha1 kind: MigMigration metadata: name: migration-controller namespace: openshift-migration spec: [...] runAsRoot: true To run Rsync as a specific User ID (UID) or as a specific Group ID (GID), configure the MigMigration CR according to this example: apiVersion: migration.openshift.io/v1alpha1 kind: MigMigration metadata: name: migration-controller namespace: openshift-migration spec: [...] runAsUser: 10010001 runAsGroup: 3 8.2.2. MigCluster Configuration For every MigCluster resource created in Migration Toolkit for Containers (MTC), a ConfigMap named migration-cluster-config is created in the Migration Operator's namespace on the cluster which MigCluster resource represents. The migration-cluster-config allows you to configure MigCluster specific values. The Migration Operator manages the migration-cluster-config . 
You can configure every value in the ConfigMap using the variables exposed in the MigrationController CR: Variable Type Required Description migration_stage_image_fqin string No Image to use for Stage Pods (applicable only to IndirectVolumeMigration) migration_registry_image_fqin string No Image to use for Migration Registry rsync_endpoint_type string No Type of endpoint for data transfer ( Route , ClusterIP , NodePort ) rsync_transfer_image_fqin string No Image to use for Rsync Pods (applicable only to DirectVolumeMigration) migration_rsync_privileged bool No Whether to run Rsync Pods as privileged or not migration_rsync_super_privileged bool No Whether to run Rsync Pods as super privileged containers ( spc_t SELinux context) or not cluster_subdomain string No Cluster's subdomain migration_registry_readiness_timeout int No Readiness timeout (in seconds) for Migration Registry Deployment migration_registry_liveness_timeout int No Liveness timeout (in seconds) for Migration Registry Deployment exposed_registry_validation_path string No Subpath to validate exposed registry in a MigCluster (for example /v2) 8.3. Direct migration known issues 8.3.1. Applying the Skip SELinux relabel workaround with spc_t automatically on workloads running on OpenShift Container Platform When attempting to migrate a namespace with Migration Toolkit for Containers (MTC) and a substantial volume associated with it, the rsync-server may become frozen without any further information to troubleshoot the issue. 8.3.1.1. Diagnosing the need for the Skip SELinux relabel workaround Search for an error of Unable to attach or mount volumes for pod... timed out waiting for the condition in the kubelet logs from the node where the rsync-server for the Direct Volume Migration (DVM) runs. Example kubelet log kubenswrapper[3879]: W0326 16:30:36.749224 3879 volume_linux.go:49] Setting volume ownership for /var/lib/kubelet/pods/8905d88e-6531-4d65-9c2a-eff11dc7eb29/volumes/kubernetes.io~csi/pvc-287d1988-3fd9-4517-a0c7-22539acd31e6/mount and fsGroup set. If the volume has a lot of files then setting volume ownership could be slow, see https://github.com/kubernetes/kubernetes/issues/69699 kubenswrapper[3879]: E0326 16:32:02.706363 3879 kubelet.go:1841] "Unable to attach or mount volumes for pod; skipping pod" err="unmounted volumes=[8db9d5b032dab17d4ea9495af12e085a], unattached volumes=[crane2-rsync-server-secret 8db9d5b032dab17d4ea9495af12e085a kube-api-access-dlbd2 crane2-stunnel-server-config crane2-stunnel-server-secret crane2-rsync-server-config]: timed out waiting for the condition" pod="caboodle-preprod/rsync-server" kubenswrapper[3879]: E0326 16:32:02.706496 3879 pod_workers.go:965] "Error syncing pod, skipping" err="unmounted volumes=[8db9d5b032dab17d4ea9495af12e085a], unattached volumes=[crane2-rsync-server-secret 8db9d5b032dab17d4ea9495af12e085a kube-api-access-dlbd2 crane2-stunnel-server-config crane2-stunnel-server-secret crane2-rsync-server-config]: timed out waiting for the condition" pod="caboodle-preprod/rsync-server" podUID=8905d88e-6531-4d65-9c2a-eff11dc7eb29 8.3.1.2. Resolving using the Skip SELinux relabel workaround To resolve this issue, set the migration_rsync_super_privileged parameter to true in both the source and destination MigClusters using the MigrationController custom resource (CR). 
Example MigrationController CR apiVersion: migration.openshift.io/v1alpha1 kind: MigrationController metadata: name: migration-controller namespace: openshift-migration spec: migration_rsync_super_privileged: true 1 azure_resource_group: "" cluster_name: host mig_namespace_limit: "10" mig_pod_limit: "100" mig_pv_limit: "100" migration_controller: true migration_log_reader: true migration_ui: true migration_velero: true olm_managed: true restic_timeout: 1h version: 1.8.3 1 The value of the migration_rsync_super_privileged parameter indicates whether or not to run Rsync Pods as super privileged containers ( spc_t selinux context ). Valid settings are true or false .
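The tuning variables described earlier in this chapter can be combined in a single MigrationController spec. The following is a minimal sketch that pulls together the Rsync resource limits from Section 8.2.1, the retry limit from Section 8.2.1.2, and the rsync_endpoint_type variable from the MigCluster configuration table; the values shown are illustrative only, not recommendations.

apiVersion: migration.openshift.io/v1alpha1
kind: MigrationController
metadata:
  name: migration-controller
  namespace: openshift-migration
spec:
  # Rsync pod resource tuning (illustrative values)
  source_rsync_pod_cpu_limits: "2"
  source_rsync_pod_memory_limits: 2Gi
  target_rsync_pod_cpu_limits: "2"
  target_rsync_pod_memory_limits: 2Gi
  # Retry limit for failed Rsync attempts
  rsync_backoff_limit: 40
  # Endpoint type used for the data transfer (Route, ClusterIP, or NodePort)
  rsync_endpoint_type: ClusterIP

As in the earlier examples, set only the variables that you need to change; a variable that is omitted keeps the default value listed in the tables above.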
[ "spec: src_supplemental_groups: \"1000,2000\" target_supplemental_groups: \"2000,3000\"", "apiVersion: migration.openshift.io/v1alpha1 kind: MigrationController metadata: name: migration-controller namespace: openshift-migration spec: [...] rsync_backoff_limit: 40", "apiVersion: migration.openshift.io/v1alpha1 kind: MigrationController metadata: name: migration-controller namespace: openshift-migration spec: [...] migration_rsync_privileged: true", "apiVersion: migration.openshift.io/v1alpha1 kind: MigMigration metadata: name: migration-controller namespace: openshift-migration spec: [...] runAsRoot: true", "apiVersion: migration.openshift.io/v1alpha1 kind: MigMigration metadata: name: migration-controller namespace: openshift-migration spec: [...] runAsUser: 10010001 runAsGroup: 3", "kubenswrapper[3879]: W0326 16:30:36.749224 3879 volume_linux.go:49] Setting volume ownership for /var/lib/kubelet/pods/8905d88e-6531-4d65-9c2a-eff11dc7eb29/volumes/kubernetes.io~csi/pvc-287d1988-3fd9-4517-a0c7-22539acd31e6/mount and fsGroup set. If the volume has a lot of files then setting volume ownership could be slow, see https://github.com/kubernetes/kubernetes/issues/69699 kubenswrapper[3879]: E0326 16:32:02.706363 3879 kubelet.go:1841] \"Unable to attach or mount volumes for pod; skipping pod\" err=\"unmounted volumes=[8db9d5b032dab17d4ea9495af12e085a], unattached volumes=[crane2-rsync-server-secret 8db9d5b032dab17d4ea9495af12e085a kube-api-access-dlbd2 crane2-stunnel-server-config crane2-stunnel-server-secret crane2-rsync-server-config]: timed out waiting for the condition\" pod=\"caboodle-preprod/rsync-server\" kubenswrapper[3879]: E0326 16:32:02.706496 3879 pod_workers.go:965] \"Error syncing pod, skipping\" err=\"unmounted volumes=[8db9d5b032dab17d4ea9495af12e085a], unattached volumes=[crane2-rsync-server-secret 8db9d5b032dab17d4ea9495af12e085a kube-api-access-dlbd2 crane2-stunnel-server-config crane2-stunnel-server-secret crane2-rsync-server-config]: timed out waiting for the condition\" pod=\"caboodle-preprod/rsync-server\" podUID=8905d88e-6531-4d65-9c2a-eff11dc7eb29", "apiVersion: migration.openshift.io/v1alpha1 kind: MigrationController metadata: name: migration-controller namespace: openshift-migration spec: migration_rsync_super_privileged: true 1 azure_resource_group: \"\" cluster_name: host mig_namespace_limit: \"10\" mig_pod_limit: \"100\" mig_pv_limit: \"100\" migration_controller: true migration_log_reader: true migration_ui: true migration_velero: true olm_managed: true restic_timeout: 1h version: 1.8.3" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.14/html/migration_toolkit_for_containers/mtc-direct-migration-requirements
Chapter 7. Debugging a routing context
Chapter 7. Debugging a routing context This tutorial shows how to use the Camel debugger to find logic errors for a locally running routing context. Goals In this tutorial you complete the following tasks: Set breakpoints on the nodes of interest in the two routes In the Debug perspective, step through the routes and examine the values of message variables Step through the routes again, changing the value of a message variable and observing the effect Prerequisites To start this tutorial, you need the ZooOrderApp project resulting from one of the following: Complete the Chapter 6, Adding another route to the routing context tutorial. or Complete the Chapter 2, Setting up your environment tutorial and replace your project's blueprint.xml file with the provided blueprintContexts/blueprint3.xml file, as described in the section called "About the resource files" . Setting breakpoints In the Debugger, you can set both conditional and unconditional breakpoints. In this tutorial, you only set unconditional breakpoints. To learn how to set conditional breakpoints (that are triggered when a specific condition is met during the debugging session), see the Tooling User Guide . To set unconditional breakpoints: If necessary, open your ZooOrderApp/src/main/resources/OSGI-INF/blueprint/blueprint.xml in the route editor. In Project Explorer , expand Camel Contexts > src/main/resources/OSGI-INF/blueprint/blueprint.xml to expose both route entries. Double-click the Route_route1 entry to switch focus to Route_route1 in the Design tab. On the canvas, select the Choice_choice1 node, and then click its breakpoint icon to set an unconditional breakpoint: Note In the route editor, you can disable or delete a specific breakpoint by clicking the corresponding icon on the node. You can delete all set breakpoints by right-clicking the canvas and selecting Delete all breakpoints . Set unconditional breakpoints on the following Route_Route1 nodes: Log_log1 SetHeader_setHeader1 To_Invalid Log_log2 SetHeader_setHeader2 To_Fulfill In Project Explorer , double-click Route_route2 under src/main/resources/OSGI-INF/blueprint to open Route_route2 on the canvas. Set unconditional breakpoints on the following Route_Route2 nodes: Choice_choice2 SetHeader_setHead_usa Log_usa To_US SetHeader_setHead_ger Log_ger To_GER Stepping through the routing context You can step through the routing context in two ways: Step over - Jumps to the next node of execution in the routing context, regardless of breakpoints. Resume - Jumps to the next active breakpoint in the routing context. In Project Explorer , expand the ZooOrderApp project's Camel Contexts folder to expose the blueprint.xml file. Right-click the blueprint.xml file to open its context menu, and then click Debug As > Local Camel Context (without tests) . The Camel debugger suspends execution at the first breakpoint it encounters and asks whether you want to open the Debug perspective now: Click Yes . Note If you click No , the confirmation pane appears several more times. After the third refusal, it disappears, and the Camel debugger resumes execution. To interact with the debugger at this point, you need to open the Debug perspective by clicking Window > Open Perspective > Debug . The Debug perspective opens with the routing context suspended at _choice1 in _route1 [blueprint.xml] as shown in the Debug view: Note Breakpoints are held for a maximum of five minutes before the debugger automatically resumes, moving on to the next breakpoint or to the end of the routing context, whichever comes first.
In the Variables view, expand the nodes to expose the variables and values available for each node. As you step through the routing context, the variables whose values have changed since the last breakpoint are highlighted in yellow. You might need to expand the nodes at each breakpoint to reveal variables that have changed. Click Resume to step to the next breakpoint, _log2 in _route1 [blueprint.xml] : Expand the nodes in the Variables view to examine the variables that have changed since the last breakpoint at _choice1 in Route1 [blueprint.xml] . Click Resume to step to the next breakpoint, _setHeader2 in Route1 [blueprint.xml] . Examine the variables that changed (highlighted in yellow) since the breakpoint at _log2 in Route1 [blueprint.xml] . In the Debug view, click _log2 in _route1 [blueprint.xml] to populate the Variables view with the variable values from the breakpoint _log2 in _route1 [blueprint.xml] for a quick comparison. In the Debug view, you can switch between breakpoints within the same message flow to quickly compare and monitor changing variable values in the Variables view. Note Message flows can vary in length. For messages that transit the InvalidOrders branch of Route_route1 , the message flow is short. For messages that transit the ValidOrders branch of Route_route1 , which continues on to Route_route2 , the message flow is longer. Continue stepping through the routing context. When one message completes the routing context and the next message enters it, the new message flow appears in the Debug view, tagged with a new breadcrumb ID: In this case, ID-janemurpheysmbp-home-55846-1471374645179-0-3 identifies the second message flow, corresponding to message2.xml having entered the routing context. Breadcrumb IDs are incremented by 2. Note Exchange and Message IDs are identical and remain unchanged throughout a message's passage through the routing context. Their IDs are constructed from the message flow's breadcrumb ID, and incremented by 1. So, in the case of message2.xml , its ExchangeId and MessageId are ID-janemurpheysmbp-home-55846-1471374645179-0-4 . When message3.xml enters the breakpoint _choice1 in _route1 [blueprint.xml] , examine the Processor variables. The values displayed are the metrics accumulated for message1.xml and message2.xml , which previously transited the routing context: Timing metrics are in milliseconds. Continue stepping each message through the routing context, examining variables and console output at each processing step. When message6.xml enters the breakpoint To_GER in Route2 [blueprint.xml] , the debugger begins shutting down the breadcrumb threads. In the Menu bar, click the Terminate button to terminate the Camel debugger. The Console terminates, but you must manually clear the output. Note With a thread or endpoint selected under the Camel Context node in the Debug view, you must click the Terminate button twice - first to terminate the thread or endpoint and second to terminate the Camel Context, thus the session. In the Menu bar, right-click the Debug perspective icon to open the context menu, and then select Close to close the Debug perspective. CodeReady Studio automatically returns to the perspective from which you launched the Camel debugger. In Project Explorer , right-click the project and then select Refresh to refresh the display. Note If you terminated the session prematurely, before all messages transited the routing context, you might see, under the ZooOrderApp/src/data folder, a message like this: message3.xml.camelLock . You need to remove it before you run the debugger on the project again.
To do so, right-click the .camelLock message to open its context menu, and then select Delete . When asked, click OK to confirm deletion. Expand the ZooOrderApp/target/messages/ directories to check that the messages were delivered to their expected destinations: Leave the routing context as is, with all breakpoints set and enabled. Changing the value of a variable In this section, you add variables to a watch list to easily check how their values change as messages pass through the routing context. You change the value of a variable in the body of a message and then observe how the change affects the message's route through the routing context. To rerun the Camel debugger on the ZooOrderApp project, right-click the blueprint.xml file and then click Debug As > Local Camel Context (without tests) . With message1 stopped at the first breakpoint, _choice1 in _route1 [blueprint.xml] , add the variables NodeId and RouteId (in the Exchange category) and MessageBody and CamelFileName (in the Message category) to the watch list. For each of the four variables: In the Variables view, expand the appropriate category to expose the target variable: Right-click the variable (in this case, NodeId in the Exchange category) to open the context menu and select Watch : The Expressions tab opens, listing the variable you selected to watch: Note Creating a watch list makes it easy for you to quickly check the current value of multiple variables of interest. Step message1 through the routing context until it reaches the fourth breakpoint, _Fulfill in _route1 [blueprint.xml] . In the Variables view, expand the Message category. Add the variable Destination to the watch list. The Expressions view should now contain these variables: Note The pane below the list of variables displays the value of the selected variable. The Expressions view retains all variables that you add to the list until you explicitly remove them. Step message1 through the rest of the routing context and then step message2 all of the way through. Stop message3 at _choice1 in _route1 [blueprint.xml] . In the Variables view, expand the Message category to expose the MessageBody variable. Right-click MessageBody to open its context menu, and select Change Value : Change the value of quantity from 15 to 10 (to change it from an invalid order to a valid order): This changes the in-memory value only (it does not edit the message3.xml file). Click OK . Switch to the Expressions view, and select the MessageBody variable. The pane below the list of variables displays the entire body of message3 , making it easy to check the current value of order items: Click Resume to step to the next breakpoint. Instead of following the branch leading to To_Invalid , message3 now follows the branch leading to To_Fulfill and Route_route2 . Narrowing the Camel debugger's focus You can temporarily narrow and then re-expand the debugger's focus by disabling and re-enabling breakpoints: Step message4 through the routing context, checking the Debug view, the Variables view, and the Console output at each step. Stop message4 at _choice1 in _route1 [blueprint.xml] . Switch to the Breakpoints view, and clear the check box next to each of the breakpoints listed below _choice1 . Clearing the check box of a breakpoint temporarily disables it. Click Resume to step to the next breakpoint: The debugger skips over the disabled breakpoints and jumps to _Fulfill in _route1 [blueprint.xml] . Click Resume again to step to the next breakpoint: The debugger jumps to _GER in _route2 [blueprint.xml] .
Click Resume repeatedly to quickly step message5 and message6 through the routing context. Switch to the Breakpoints view, and select the check boxes for all breakpoints to re-enable them. Verifying the effect of changing a message variable value To stop the debugger and check the results of changing the value of message3's quantity variable: In the tool bar, click the Terminate button to terminate the Camel debugger: Click the Console's clear button to clear the output. Close the Debug perspective and return to the perspective from which you launched the Camel debugger. In Project Explorer , refresh the display. Expand the ZooOrderApp/target/messages/ directories to check whether the messages were delivered as expected: You should see that only message1 was sent to the invalidOrders folder and that message3.xml appears in the validOrders/Germany folder. Next steps In the Chapter 8, Tracing a message through a route tutorial, you trace messages through your routing context to determine where you can optimize and fine-tune your routing context's performance.
null
https://docs.redhat.com/en/documentation/red_hat_fuse/7.13/html/tooling_tutorials/ridertutorialdebug
9.2. Stable Device Addresses in Red Hat Virtualization
9.2. Stable Device Addresses in Red Hat Virtualization Virtual hardware PCI address allocations are persisted in the ovirt-engine database. PCI addresses are allocated by QEMU at virtual machine creation time, and reported to VDSM by libvirt . VDSM reports them back to the Manager, where they are stored in the ovirt-engine database. When a virtual machine is started, the Manager sends VDSM the device addresses from the database. VDSM passes them to libvirt , which starts the virtual machine using the PCI device addresses that were allocated when the virtual machine was run for the first time. When a device is removed from a virtual machine, all references to it, including the stable PCI address, are also removed. If a device is added to replace the removed device, it is allocated a PCI address by QEMU , which is unlikely to be the same as the address of the device it replaced.
null
https://docs.redhat.com/en/documentation/red_hat_virtualization/4.4/html/technical_reference/stable_device_addresses_in_red_hat_enterprise_virtualization
Index
Index Symbols /boot/ directory, The /boot/ Directory /dev/shm, df Command /etc/fstab, Converting to an ext3 File System , Mounting NFS File Systems Using /etc/fstab , Mounting a File System /etc/fstab file enabling disk quotas with, Enabling Quotas /local/directory (client configuration, mounting) NFS, Configuring NFS Client /proc /proc/devices, The /proc Virtual File System /proc/filesystems, The /proc Virtual File System /proc/mdstat, The /proc Virtual File System /proc/mounts, The /proc Virtual File System /proc/mounts/, The /proc Virtual File System /proc/partitions, The /proc Virtual File System /proc/devices virtual file system (/proc), The /proc Virtual File System /proc/filesystems virtual file system (/proc), The /proc Virtual File System /proc/mdstat virtual file system (/proc), The /proc Virtual File System /proc/mounts virtual file system (/proc), The /proc Virtual File System /proc/mounts/ virtual file system (/proc), The /proc Virtual File System /proc/partitions virtual file system (/proc), The /proc Virtual File System /remote/export (client configuration, mounting) NFS, Configuring NFS Client A adding paths to a storage device, Adding a Storage Device or Path adding/removing LUN (logical unit number), Adding/Removing a Logical Unit Through rescan-scsi-bus.sh advanced RAID device creation RAID, Creating Advanced RAID Devices allocation features ext4, The ext4 File System XFS, The XFS File System Anaconda support RAID, RAID Support in the Anaconda Installer API, Fibre Channel, Fibre Channel API API, iSCSI, iSCSI API ATA standards I/O alignment and size, ATA autofs , autofs , Configuring autofs (see also NFS) autofs version 5 NFS, Improvements in autofs Version 5 over Version 4 B backup/restoration XFS, Backing Up and Restoring XFS File Systems battery-backed write caches write barriers, Battery-Backed Write Caches bcull (cache cull limits settings) FS-Cache, Setting Cache Cull Limits binding/unbinding an iface to a portal offload and interface binding iSCSI, Binding/Unbinding an iface to a Portal block device ioctls (userspace access) I/O alignment and size, Block Device ioctls blocked device, verifying Fibre Channel modifying link loss behavior, Fibre Channel brun (cache cull limits settings) FS-Cache, Setting Cache Cull Limits bstop (cache cull limits settings) FS-Cache, Setting Cache Cull Limits Btrfs File System, Btrfs (Technology Preview) C cache back end FS-Cache, FS-Cache cache cull limits FS-Cache, Setting Cache Cull Limits cache limitations with NFS FS-Cache, Cache Limitations with NFS cache setup FS-Cache, Setting up a Cache cache sharing FS-Cache, Cache Sharing cachefiles FS-Cache, FS-Cache cachefilesd FS-Cache, Setting up a Cache CCW, channel command word storage considerations during installation, DASD and zFCP Devices on IBM System Z changing dev_loss_tmo Fibre Channel modifying link loss behavior, Fibre Channel Changing the read/write state Online logical units, Changing the Read/Write State of an Online Logical Unit channel command word (CCW) storage considerations during installation, DASD and zFCP Devices on IBM System Z coherency data FS-Cache, FS-Cache command timer (SCSI) Linux SCSI layer, Command Timer commands volume_key, volume_key Commands configuration discovery iSCSI, iSCSI Discovery Configuration configuring a tftp service for diskless clients diskless systems, Configuring a tftp Service for Diskless Clients configuring an Ethernet interface to use FCoE FCoE, Configuring a Fibre Channel over Ethernet Interface configuring DHCP for diskless clients 
diskless systems, Configuring DHCP for Diskless Clients configuring RAID sets RAID, Configuring RAID Sets controlling SCSI command timer and device status Linux SCSI layer, Controlling the SCSI Command Timer and Device Status creating ext4, Creating an ext4 File System XFS, Creating an XFS File System cumulative mode (xfsrestore) XFS, Restoration D DASD and zFCP devices on IBM System z storage considerations during installation, DASD and zFCP Devices on IBM System Z debugfs (other ext4 file system utilities) ext4, Other ext4 File System Utilities deployment solid-state disks, Solid-State Disk Deployment Guidelines deployment guidelines solid-state disks, Solid-State Disk Deployment Guidelines determining remote port states Fibre Channel modifying link loss behavior, Fibre Channel dev directory, The /dev/ Directory device status Linux SCSI layer, Device States device-mapper multipathing, DM Multipath devices, removing, Removing a Storage Device dev_loss_tmo Fibre Channel modifying link loss behavior, Fibre Channel dev_loss_tmo, changing Fibre Channel modifying link loss behavior, Fibre Channel df, df Command DHCP, configuring diskless systems, Configuring DHCP for Diskless Clients DIF/DIX-enabled block devices storage considerations during installation, Block Devices with DIF/DIX Enabled direct map support (autofs version 5) NFS, Improvements in autofs Version 5 over Version 4 directories /boot/, The /boot/ Directory /dev/, The /dev/ Directory /etc/, The /etc/ Directory /mnt/, The /mnt/ Directory /opt/, The /opt/ Directory /proc/, The /proc/ Directory /srv/, The /srv/ Directory /sys/, The /sys/ Directory /usr/, The /usr/ Directory /var/, The /var/ Directory dirty logs (repairing XFS file systems) XFS, Repairing an XFS File System disabling NOP-Outs iSCSI configuration, iSCSI Root disabling write caches write barriers, Disabling Write Caches discovery iSCSI, iSCSI Discovery Configuration disk quotas, Disk Quotas additional resources, Disk Quota References assigning per file system, Setting the Grace Period for Soft Limits assigning per group, Assigning Quotas per Group assigning per user, Assigning Quotas per User disabling, Enabling and Disabling enabling, Configuring Disk Quotas , Enabling and Disabling /etc/fstab, modifying, Enabling Quotas creating quota files, Creating the Quota Database Files quotacheck, running, Creating the Quota Database Files grace period, Assigning Quotas per User hard limit, Assigning Quotas per User management of, Managing Disk Quotas quotacheck command, using to check, Keeping Quotas Accurate reporting, Reporting on Disk Quotas soft limit, Assigning Quotas per User disk storage (see disk quotas) parted (see parted) diskless systems DHCP, configuring, Configuring DHCP for Diskless Clients exported file systems, Configuring an Exported File System for Diskless Clients network booting service, Setting up a Remote Diskless System remote diskless systems, Setting up a Remote Diskless System required packages, Setting up a Remote Diskless System tftp service, configuring, Configuring a tftp Service for Diskless Clients dm-multipath iSCSI configuration, iSCSI Settings with dm-multipath dmraid RAID, dmraid dmraid (configuring RAID sets) RAID, dmraid drivers (native), Fibre Channel, Native Fibre Channel Drivers and Capabilities du, du Command dump levels XFS, Backup E e2fsck, Reverting to an Ext2 File System e2image (other ext4 file system utilities) ext4, Other ext4 File System Utilities e2label ext4, Other ext4 File System Utilities e2label (other ext4 file system 
utilities) ext4, Other ext4 File System Utilities enablind/disabling write barriers, Enabling and Disabling Write Barriers enhanced LDAP support (autofs version 5) NFS, Improvements in autofs Version 5 over Version 4 error messages write barriers, Enabling and Disabling Write Barriers etc directory, The /etc/ Directory expert mode (xfs_quota) XFS, XFS Quota Management exported file systems diskless systems, Configuring an Exported File System for Diskless Clients ext2 reverting from ext3, Reverting to an Ext2 File System ext3 converting from ext2, Converting to an ext3 File System creating, Creating an ext3 File System features, The ext3 File System ext4 allocation features, The ext4 File System creating, Creating an ext4 File System debugfs (other ext4 file system utilities), Other ext4 File System Utilities e2image (other ext4 file system utilities), Other ext4 File System Utilities e2label, Other ext4 File System Utilities e2label (other ext4 file system utilities), Other ext4 File System Utilities file system types, The ext4 File System fsync(), The ext4 File System main features, The ext4 File System mkfs.ext4, Creating an ext4 File System mounting, Mounting an ext4 File System nobarrier mount option, Mounting an ext4 File System other file system utilities, Other ext4 File System Utilities quota (other ext4 file system utilities), Other ext4 File System Utilities resize2fs (resizing ext4), Resizing an ext4 File System resizing, Resizing an ext4 File System stride (specifying stripe geometry), Creating an ext4 File System stripe geometry, Creating an ext4 File System stripe-width (specifying stripe geometry), Creating an ext4 File System tune2fs (mounting), Mounting an ext4 File System write barriers, Mounting an ext4 File System F FCoE configuring an Ethernet interface to use FCoE, Configuring a Fibre Channel over Ethernet Interface Fibre Channel over Ethernet, Configuring a Fibre Channel over Ethernet Interface required packages, Configuring a Fibre Channel over Ethernet Interface FHS, Overview of Filesystem Hierarchy Standard (FHS) , FHS Organization (see also file system) Fibre Channel online storage, Fibre Channel Fibre Channel API, Fibre Channel API Fibre Channel drivers (native), Native Fibre Channel Drivers and Capabilities Fibre Channel over Ethernet FCoE, Configuring a Fibre Channel over Ethernet Interface file system FHS standard, FHS Organization hierarchy, Overview of Filesystem Hierarchy Standard (FHS) organization, FHS Organization structure, File System Structure and Maintenance File System Btrfs, Btrfs (Technology Preview) file system types ext4, The ext4 File System GFS2, Global File System 2 XFS, The XFS File System file systems, Gathering File System Information ext2 (see ext2) ext3 (see ext3) findmnt (command) listing mounts, Listing Currently Mounted File Systems FS-Cache bcull (cache cull limits settings), Setting Cache Cull Limits brun (cache cull limits settings), Setting Cache Cull Limits bstop (cache cull limits settings), Setting Cache Cull Limits cache back end, FS-Cache cache cull limits, Setting Cache Cull Limits cache sharing, Cache Sharing cachefiles, FS-Cache cachefilesd, Setting up a Cache coherency data, FS-Cache indexing keys, FS-Cache NFS (cache limitations with), Cache Limitations with NFS NFS (using with), Using the Cache with NFS performance guarantee, Performance Guarantee setting up a cache, Setting up a Cache statistical information (tracking), Statistical Information tune2fs (setting up a cache), Setting up a Cache fsync() ext4, The ext4 
File System XFS, The XFS File System G GFS2 file system types, Global File System 2 gfs2.ko, Global File System 2 maximum size, Global File System 2 GFS2 file system maximum size, Global File System 2 gfs2.ko GFS2, Global File System 2 Global File System 2 file system types, Global File System 2 gfs2.ko, Global File System 2 maximum size, Global File System 2 gquota/gqnoenforce XFS, XFS Quota Management H Hardware RAID (see RAID) hardware RAID controller drivers RAID, Linux Hardware RAID Controller Drivers hierarchy, file system, Overview of Filesystem Hierarchy Standard (FHS) high-end arrays write barriers, High-End Arrays host Fibre Channel API, Fibre Channel API how write barriers work write barriers, How Write Barriers Work I I/O alignment and size, Storage I/O Alignment and Size ATA standards, ATA block device ioctls (userspace access), Block Device ioctls Linux I/O stack, Storage I/O Alignment and Size logical_block_size, Userspace Access LVM, Logical Volume Manager READ CAPACITY(16), SCSI SCSI standards, SCSI stacking I/O parameters, Stacking I/O Parameters storage access parameters, Parameters for Storage Access sysfs interface (userspace access), sysfs Interface tools (for partitioning and other file system functions), Partition and File System Tools userspace access, Userspace Access I/O parameters stacking I/O alignment and size, Stacking I/O Parameters iface (configuring for iSCSI offload) offload and interface binding iSCSI, Configuring an iface for iSCSI Offload iface binding/unbinding offload and interface binding iSCSI, Binding/Unbinding an iface to a Portal iface configurations, viewing offload and interface binding iSCSI, Viewing Available iface Configurations iface for software iSCSI offload and interface binding iSCSI, Configuring an iface for Software iSCSI iface settings offload and interface binding iSCSI, Viewing Available iface Configurations importance of write barriers write barriers, Importance of Write Barriers increasing file system size XFS, Increasing the Size of an XFS File System indexing keys FS-Cache, FS-Cache individual user volume_key, Using volume_key as an Individual User initiator implementations offload and interface binding iSCSI, Viewing Available iface Configurations installation storage configurations channel command word (CCW), DASD and zFCP Devices on IBM System Z DASD and zFCP devices on IBM System z, DASD and zFCP Devices on IBM System Z DIF/DIX-enabled block devices, Block Devices with DIF/DIX Enabled iSCSI detection and configuration, iSCSI Detection and Configuration LUKS/dm-crypt, encrypting block devices using, Encrypting Block Devices Using LUKS separate partitions (for /home, /opt, /usr/local), Separate Partitions for /home, /opt, /usr/local stale BIOS RAID metadata, Stale BIOS RAID Metadata updates, Storage Considerations During Installation what's new, Storage Considerations During Installation installer support RAID, RAID Support in the Anaconda Installer interactive operation (xfsrestore) XFS, Restoration interconnects (scanning) iSCSI, Scanning iSCSI Interconnects introduction, Overview iSCSI discovery, iSCSI Discovery Configuration configuration, iSCSI Discovery Configuration record types, iSCSI Discovery Configuration offload and interface binding, Configuring iSCSI Offload and Interface Binding binding/unbinding an iface to a portal, Binding/Unbinding an iface to a Portal iface (configuring for iSCSI offload), Configuring an iface for iSCSI Offload iface configurations, viewing, Viewing Available iface Configurations iface 
for software iSCSI, Configuring an iface for Software iSCSI iface settings, Viewing Available iface Configurations initiator implementations, Viewing Available iface Configurations software iSCSI, Configuring an iface for Software iSCSI viewing available iface configurations, Viewing Available iface Configurations scanning interconnects, Scanning iSCSI Interconnects software iSCSI, Configuring an iface for Software iSCSI targets, Logging in to an iSCSI Target logging in, Logging in to an iSCSI Target iSCSI API, iSCSI API iSCSI detection and configuration storage considerations during installation, iSCSI Detection and Configuration iSCSI logical unit, resizing, Resizing an iSCSI Logical Unit iSCSI root iSCSI configuration, iSCSI Root K known issues adding/removing LUN (logical unit number), Known Issues with rescan-scsi-bus.sh L lazy mount/unmount support (autofs version 5) NFS, Improvements in autofs Version 5 over Version 4 levels RAID, RAID Levels and Linear Support limit (xfs_quota expert mode) XFS, XFS Quota Management linear RAID RAID, RAID Levels and Linear Support Linux I/O stack I/O alignment and size, Storage I/O Alignment and Size logging in iSCSI targets, Logging in to an iSCSI Target logical_block_size I/O alignment and size, Userspace Access LUKS/dm-crypt, encrypting block devices using storage considerations during installation, Encrypting Block Devices Using LUKS LUN (logical unit number) adding/removing, Adding/Removing a Logical Unit Through rescan-scsi-bus.sh known issues, Known Issues with rescan-scsi-bus.sh required packages, Adding/Removing a Logical Unit Through rescan-scsi-bus.sh rescan-scsi-bus.sh, Adding/Removing a Logical Unit Through rescan-scsi-bus.sh LVM I/O alignment and size, Logical Volume Manager M main features ext4, The ext4 File System XFS, The XFS File System maximum size GFS2, Global File System 2 maximum size, GFS2 file system, Global File System 2 mdadm (configuring RAID sets) RAID, mdadm mdraid RAID, mdraid mirroring RAID, RAID Levels and Linear Support mkfs , Formatting and Labeling the Partition mkfs.ext4 ext4, Creating an ext4 File System mkfs.xfs XFS, Creating an XFS File System mnt directory, The /mnt/ Directory modifying link loss behavior, Modifying Link Loss Behavior Fibre Channel, Fibre Channel mount (client configuration) NFS, Configuring NFS Client mount (command), Using the mount Command listing mounts, Listing Currently Mounted File Systems mounting a file system, Mounting a File System moving a mount point, Moving a Mount Point options, Specifying the Mount Options shared subtrees, Sharing Mounts private mount, Sharing Mounts shared mount, Sharing Mounts slave mount, Sharing Mounts unbindable mount, Sharing Mounts mounting, Mounting a File System ext4, Mounting an ext4 File System XFS, Mounting an XFS File System moving a mount point, Moving a Mount Point multiple master map entries per autofs mount point (autofs version 5) NFS, Improvements in autofs Version 5 over Version 4 N native Fibre Channel drivers, Native Fibre Channel Drivers and Capabilities network booting service diskless systems, Setting up a Remote Diskless System Network File System (see NFS) NFS /etc/fstab , Mounting NFS File Systems Using /etc/fstab /local/directory (client configuration, mounting), Configuring NFS Client /remote/export (client configuration, mounting), Configuring NFS Client additional resources, NFS References installed documentation, Installed Documentation related books, Related Books useful websites, Useful Websites autofs augmenting, Overriding 
or Augmenting Site Configuration Files configuration, Configuring autofs LDAP, Using LDAP to Store Automounter Maps autofs version 5, Improvements in autofs Version 5 over Version 4 client autofs , autofs configuration, Configuring NFS Client mount options, Common NFS Mount Options condrestart, Starting and Stopping the NFS Server configuration with firewall, Running NFS Behind a Firewall direct map support (autofs version 5), Improvements in autofs Version 5 over Version 4 enhanced LDAP support (autofs version 5), Improvements in autofs Version 5 over Version 4 FS-Cache, Using the Cache with NFS hostname formats, Hostname Formats how it works, Introduction to NFS introducing, Network File System (NFS) lazy mount/unmount support (autofs version 5), Improvements in autofs Version 5 over Version 4 mount (client configuration), Configuring NFS Client multiple master map entries per autofs mount point (autofs version 5), Improvements in autofs Version 5 over Version 4 options (client configuration, mounting), Configuring NFS Client overriding/augmenting site configuration files (autofs), Configuring autofs proper nsswitch configuration (autofs version 5), use of, Improvements in autofs Version 5 over Version 4 RDMA, Enabling NFS over RDMA (NFSoRDMA) reloading, Starting and Stopping the NFS Server required services, Required Services restarting, Starting and Stopping the NFS Server rfc2307bis (autofs), Using LDAP to Store Automounter Maps rpcbind , NFS and rpcbind security, Securing NFS file permissions, File Permissions NFSv3 host access, NFS Security with AUTH_SYS and Export Controls NFSv4 host access, NFS Security with AUTH_GSS server (client configuration, mounting), Configuring NFS Client server configuration, Configuring the NFS Server /etc/exports , The /etc/exports Configuration File exportfs command, The exportfs Command exportfs command with NFSv4, Using exportfs with NFSv4 starting, Starting and Stopping the NFS Server status, Starting and Stopping the NFS Server stopping, Starting and Stopping the NFS Server storing automounter maps, using LDAP to store (autofs), Overriding or Augmenting Site Configuration Files TCP, Introduction to NFS troubleshooting NFS and rpcbind, Troubleshooting NFS and rpcbind UDP, Introduction to NFS write barriers, NFS NFS (cache limitations with) FS-Cache, Cache Limitations with NFS NFS (using with) FS-Cache, Using the Cache with NFS nobarrier mount option ext4, Mounting an ext4 File System XFS, Write Barriers NOP-Out requests modifying link loss iSCSI configuration, NOP-Out Interval/Timeout NOP-Outs (disabling) iSCSI configuration, iSCSI Root O offline status Linux SCSI layer, Controlling the SCSI Command Timer and Device Status offload and interface binding iSCSI, Configuring iSCSI Offload and Interface Binding Online logical units Changing the read/write state, Changing the Read/Write State of an Online Logical Unit online storage Fibre Channel, Fibre Channel overview, Online Storage Management sysfs, Online Storage Management troubleshooting, Troubleshooting Online Storage Configuration opt directory, The /opt/ Directory options (client configuration, mounting) NFS, Configuring NFS Client other file system utilities ext4, Other ext4 File System Utilities overriding/augmenting site configuration files (autofs) NFS, Configuring autofs overview, Overview online storage, Online Storage Management P Parallel NFS pNFS, pNFS parameters for storage access I/O alignment and size, Parameters for Storage Access parity RAID, RAID Levels and Linear Support parted , 
Partitions creating partitions, Creating a Partition overview, Partitions removing partitions, Removing a Partition resizing partitions, Resizing a Partition with fdisk selecting device, Viewing the Partition Table table of commands, Partitions viewing partition table, Viewing the Partition Table partition table viewing, Viewing the Partition Table partitions creating, Creating a Partition formatting mkfs , Formatting and Labeling the Partition removing, Removing a Partition resizing, Resizing a Partition with fdisk viewing list, Viewing the Partition Table path to storage devices, adding, Adding a Storage Device or Path path to storage devices, removing, Removing a Path to a Storage Device performance guarantee FS-Cache, Performance Guarantee persistent naming, Persistent Naming pNFS Parallel NFS, pNFS port states (remote), determining Fibre Channel modifying link loss behavior, Fibre Channel pquota/pqnoenforce XFS, XFS Quota Management private mount, Sharing Mounts proc directory, The /proc/ Directory project limits (setting) XFS, Setting Project Limits proper nsswitch configuration (autofs version 5), use of NFS, Improvements in autofs Version 5 over Version 4 Q queue_if_no_path iSCSI configuration, iSCSI Settings with dm-multipath modifying link loss iSCSI configuration, replacement_timeout quota (other ext4 file system utilities) ext4, Other ext4 File System Utilities quota management XFS, XFS Quota Management quotacheck , Creating the Quota Database Files quotacheck command checking quota accuracy with, Keeping Quotas Accurate quotaoff , Enabling and Disabling quotaon , Enabling and Disabling R RAID advanced RAID device creation, Creating Advanced RAID Devices Anaconda support, RAID Support in the Anaconda Installer configuring RAID sets, Configuring RAID Sets dmraid, dmraid dmraid (configuring RAID sets), dmraid Hardware RAID, RAID Types hardware RAID controller drivers, Linux Hardware RAID Controller Drivers installer support, RAID Support in the Anaconda Installer level 0, RAID Levels and Linear Support level 1, RAID Levels and Linear Support level 4, RAID Levels and Linear Support level 5, RAID Levels and Linear Support levels, RAID Levels and Linear Support linear RAID, RAID Levels and Linear Support mdadm (configuring RAID sets), mdadm mdraid, mdraid mirroring, RAID Levels and Linear Support parity, RAID Levels and Linear Support reasons to use, Redundant Array of Independent Disks (RAID) Software RAID, RAID Types striping, RAID Levels and Linear Support subsystems of RAID, Linux RAID Subsystems RDMA NFS, Enabling NFS over RDMA (NFSoRDMA) READ CAPACITY(16) I/O alignment and size, SCSI record types discovery iSCSI, iSCSI Discovery Configuration Red Hat Enterprise Linux-specific file locations /etc/sysconfig/, Special Red Hat Enterprise Linux File Locations (see also sysconfig directory) /var/cache/yum, Special Red Hat Enterprise Linux File Locations /var/lib/rpm/, Special Red Hat Enterprise Linux File Locations remote diskless systems diskless systems, Setting up a Remote Diskless System remote port Fibre Channel API, Fibre Channel API remote port states, determining Fibre Channel modifying link loss behavior, Fibre Channel removing devices, Removing a Storage Device removing paths to a storage device, Removing a Path to a Storage Device repairing file system XFS, Repairing an XFS File System repairing XFS file systems with dirty logs XFS, Repairing an XFS File System replacement_timeout modifying link loss iSCSI configuration, SCSI Error Handler , replacement_timeout 
replacement_timeoutM iSCSI configuration, iSCSI Root report (xfs_quota expert mode) XFS, XFS Quota Management required packages adding/removing LUN (logical unit number), Adding/Removing a Logical Unit Through rescan-scsi-bus.sh diskless systems, Setting up a Remote Diskless System FCoE, Configuring a Fibre Channel over Ethernet Interface rescan-scsi-bus.sh adding/removing LUN (logical unit number), Adding/Removing a Logical Unit Through rescan-scsi-bus.sh resize2fs, Reverting to an Ext2 File System resize2fs (resizing ext4) ext4, Resizing an ext4 File System resized logical units, resizing, Resizing an Online Logical Unit resizing ext4, Resizing an ext4 File System resizing an iSCSI logical unit, Resizing an iSCSI Logical Unit resizing resized logical units, Resizing an Online Logical Unit restoring a backup XFS, Restoration rfc2307bis (autofs) NFS, Using LDAP to Store Automounter Maps rpcbind , NFS and rpcbind (see also NFS) NFS, Troubleshooting NFS and rpcbind rpcinfo , Troubleshooting NFS and rpcbind status, Starting and Stopping the NFS Server rpcinfo , Troubleshooting NFS and rpcbind running sessions, retrieving information about iSCSI API, iSCSI API running status Linux SCSI layer, Controlling the SCSI Command Timer and Device Status S scanning interconnects iSCSI, Scanning iSCSI Interconnects scanning storage interconnects, Scanning Storage Interconnects SCSI command timer Linux SCSI layer, Command Timer SCSI Error Handler modifying link loss iSCSI configuration, SCSI Error Handler SCSI standards I/O alignment and size, SCSI separate partitions (for /home, /opt, /usr/local) storage considerations during installation, Separate Partitions for /home, /opt, /usr/local server (client configuration, mounting) NFS, Configuring NFS Client setting up a cache FS-Cache, Setting up a Cache shared mount, Sharing Mounts shared subtrees, Sharing Mounts private mount, Sharing Mounts shared mount, Sharing Mounts slave mount, Sharing Mounts unbindable mount, Sharing Mounts simple mode (xfsrestore) XFS, Restoration slave mount, Sharing Mounts SMB (see SMB) software iSCSI iSCSI, Configuring an iface for Software iSCSI offload and interface binding iSCSI, Configuring an iface for Software iSCSI Software RAID (see RAID) solid-state disks deployment, Solid-State Disk Deployment Guidelines deployment guidelines, Solid-State Disk Deployment Guidelines SSD, Solid-State Disk Deployment Guidelines throughput classes, Solid-State Disk Deployment Guidelines TRIM command, Solid-State Disk Deployment Guidelines specific session timeouts, configuring iSCSI configuration, Configuring Timeouts for a Specific Session srv directory, The /srv/ Directory SSD solid-state disks, Solid-State Disk Deployment Guidelines SSM System Storage Manager, System Storage Manager (SSM) Back Ends, SSM Back Ends Installation, Installing SSM list command, Displaying Information about All Detected Devices resize command, Increasing a Volume's Size snapshot command, Snapshot stacking I/O parameters I/O alignment and size, Stacking I/O Parameters stale BIOS RAID metadata storage considerations during installation, Stale BIOS RAID Metadata statistical information (tracking) FS-Cache, Statistical Information storage access parameters I/O alignment and size, Parameters for Storage Access storage considerations during installation channel command word (CCW), DASD and zFCP Devices on IBM System Z DASD and zFCP devices on IBM System z, DASD and zFCP Devices on IBM System Z DIF/DIX-enabled block devices, Block Devices with DIF/DIX Enabled iSCSI 
detection and configuration, iSCSI Detection and Configuration LUKS/dm-crypt, encrypting block devices using, Encrypting Block Devices Using LUKS separate partitions (for /home, /opt, /usr/local), Separate Partitions for /home, /opt, /usr/local stale BIOS RAID metadata, Stale BIOS RAID Metadata updates, Storage Considerations During Installation what's new, Storage Considerations During Installation Storage for Virtual Machines, Storage for Virtual Machines storage interconnects, scanning, Scanning Storage Interconnects storing automounter maps, using LDAP to store (autofs) NFS, Overriding or Augmenting Site Configuration Files stride (specifying stripe geometry) ext4, Creating an ext4 File System stripe geometry ext4, Creating an ext4 File System stripe-width (specifying stripe geometry) ext4, Creating an ext4 File System striping RAID, RAID Levels and Linear Support RAID fundamentals, Redundant Array of Independent Disks (RAID) su (mkfs.xfs sub-options) XFS, Creating an XFS File System subsystems of RAID RAID, Linux RAID Subsystems suspending XFS, Suspending an XFS File System sw (mkfs.xfs sub-options) XFS, Creating an XFS File System swap space, Swap Space creating, Adding Swap Space expanding, Adding Swap Space file creating, Creating a Swap File , Removing a Swap File LVM2 creating, Creating an LVM2 Logical Volume for Swap extending, Extending Swap on an LVM2 Logical Volume reducing, Reducing Swap on an LVM2 Logical Volume removing, Removing an LVM2 Logical Volume for Swap moving, Moving Swap Space recommended size, Swap Space removing, Removing Swap Space sys directory, The /sys/ Directory sysconfig directory, Special Red Hat Enterprise Linux File Locations sysfs overview online storage, Online Storage Management sysfs interface (userspace access) I/O alignment and size, sysfs Interface system information file systems, Gathering File System Information /dev/shm, df Command System Storage Manager SSM, System Storage Manager (SSM) Back Ends, SSM Back Ends Installation, Installing SSM list command, Displaying Information about All Detected Devices resize command, Increasing a Volume's Size snapshot command, Snapshot T targets iSCSI, Logging in to an iSCSI Target tftp service, configuring diskless systems, Configuring a tftp Service for Diskless Clients throughput classes solid-state disks, Solid-State Disk Deployment Guidelines timeouts for a specific session, configuring iSCSI configuration, Configuring Timeouts for a Specific Session tools (for partitioning and other file system functions) I/O alignment and size, Partition and File System Tools tracking statistical information FS-Cache, Statistical Information transport Fibre Channel API, Fibre Channel API TRIM command solid-state disks, Solid-State Disk Deployment Guidelines troubleshooting online storage, Troubleshooting Online Storage Configuration troubleshooting NFS and rpcbind NFS, Troubleshooting NFS and rpcbind tune2fs converting to ext3 with, Converting to an ext3 File System reverting to ext2 with, Reverting to an Ext2 File System tune2fs (mounting) ext4, Mounting an ext4 File System tune2fs (setting up a cache) FS-Cache, Setting up a Cache U udev rule (timeout) command timer (SCSI), Command Timer umount, Unmounting a File System unbindable mount, Sharing Mounts unmounting, Unmounting a File System updates storage considerations during installation, Storage Considerations During Installation uquota/uqnoenforce XFS, XFS Quota Management userspace access I/O alignment and size, Userspace Access userspace API files Fibre 
Channel API, Fibre Channel API usr directory, The /usr/ Directory V var directory, The /var/ Directory var/lib/rpm/ directory, Special Red Hat Enterprise Linux File Locations var/spool/up2date/ directory, Special Red Hat Enterprise Linux File Locations verifying if a device is blocked Fibre Channel modifying link loss behavior, Fibre Channel version what is new autofs, Improvements in autofs Version 5 over Version 4 viewing available iface configurations offload and interface binding iSCSI, Viewing Available iface Configurations virtual file system (/proc) /proc/devices, The /proc Virtual File System /proc/filesystems, The /proc Virtual File System /proc/mdstat, The /proc Virtual File System /proc/mounts, The /proc Virtual File System /proc/mounts/, The /proc Virtual File System /proc/partitions, The /proc Virtual File System volume_key commands, volume_key Commands individual user, Using volume_key as an Individual User W what's new storage considerations during installation, Storage Considerations During Installation World Wide Identifier (WWID) persistent naming, World Wide Identifier (WWID) write barriers battery-backed write caches, Battery-Backed Write Caches definition, Write Barriers disabling write caches, Disabling Write Caches enablind/disabling, Enabling and Disabling Write Barriers error messages, Enabling and Disabling Write Barriers ext4, Mounting an ext4 File System high-end arrays, High-End Arrays how write barriers work, How Write Barriers Work importance of write barriers, Importance of Write Barriers NFS, NFS XFS, Write Barriers write caches, disabling write barriers, Disabling Write Caches WWID persistent naming, World Wide Identifier (WWID) X XFS allocation features, The XFS File System backup/restoration, Backing Up and Restoring XFS File Systems creating, Creating an XFS File System cumulative mode (xfsrestore), Restoration dump levels, Backup expert mode (xfs_quota), XFS Quota Management file system types, The XFS File System fsync(), The XFS File System gquota/gqnoenforce, XFS Quota Management increasing file system size, Increasing the Size of an XFS File System interactive operation (xfsrestore), Restoration limit (xfs_quota expert mode), XFS Quota Management main features, The XFS File System mkfs.xfs, Creating an XFS File System mounting, Mounting an XFS File System nobarrier mount option, Write Barriers pquota/pqnoenforce, XFS Quota Management project limits (setting), Setting Project Limits quota management, XFS Quota Management repairing file system, Repairing an XFS File System repairing XFS file systems with dirty logs, Repairing an XFS File System report (xfs_quota expert mode), XFS Quota Management simple mode (xfsrestore), Restoration su (mkfs.xfs sub-options), Creating an XFS File System suspending, Suspending an XFS File System sw (mkfs.xfs sub-options), Creating an XFS File System uquota/uqnoenforce, XFS Quota Management write barriers, Write Barriers xfsdump, Backup xfsprogs, Suspending an XFS File System xfsrestore, Restoration xfs_admin, Other XFS File System Utilities xfs_bmap, Other XFS File System Utilities xfs_copy, Other XFS File System Utilities xfs_db, Other XFS File System Utilities xfs_freeze, Suspending an XFS File System xfs_fsr, Other XFS File System Utilities xfs_growfs, Increasing the Size of an XFS File System xfs_info, Other XFS File System Utilities xfs_mdrestore, Other XFS File System Utilities xfs_metadump, Other XFS File System Utilities xfs_quota, XFS Quota Management xfs_repair, Repairing an XFS File System xfsdump XFS, 
Backup xfsprogs XFS, Suspending an XFS File System xfsrestore XFS, Restoration xfs_admin XFS, Other XFS File System Utilities xfs_bmap XFS, Other XFS File System Utilities xfs_copy XFS, Other XFS File System Utilities xfs_db XFS, Other XFS File System Utilities xfs_freeze XFS, Suspending an XFS File System xfs_fsr XFS, Other XFS File System Utilities xfs_growfs XFS, Increasing the Size of an XFS File System xfs_info XFS, Other XFS File System Utilities xfs_mdrestore XFS, Other XFS File System Utilities xfs_metadump XFS, Other XFS File System Utilities xfs_quota XFS, XFS Quota Management xfs_repair XFS, Repairing an XFS File System
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/storage_administration_guide/ix01
Chapter 2. Configuring an Azure Stack Hub account
Chapter 2. Configuring an Azure Stack Hub account Before you can install OpenShift Container Platform, you must configure a Microsoft Azure account. Important All Azure resources that are available through public endpoints are subject to resource name restrictions, and you cannot create resources that use certain terms. For a list of terms that Azure restricts, see Resolve reserved resource name errors in the Azure documentation. 2.1. Azure Stack Hub account limits The OpenShift Container Platform cluster uses a number of Microsoft Azure Stack Hub components, and the default Quota types in Azure Stack Hub affect your ability to install OpenShift Container Platform clusters. The following table summarizes the Azure Stack Hub components whose limits can impact your ability to install and run OpenShift Container Platform clusters. Component Number of components required by default Description vCPU 56 A default cluster requires 56 vCPUs, so you must increase the account limit. By default, each cluster creates the following instances: One bootstrap machine, which is removed after installation Three control plane machines Three compute machines Because the bootstrap, control plane, and worker machines use Standard_DS4_v2 virtual machines, which use 8 vCPUs, a default cluster requires 56 vCPUs. The bootstrap node VM is used only during installation. To deploy more worker nodes, enable autoscaling, deploy large workloads, or use a different instance type, you must further increase the vCPU limit for your account to ensure that your cluster can deploy the machines that you require. VNet 1 Each default cluster requires one Virtual Network (VNet), which contains two subnets. Network interfaces 7 Each default cluster requires seven network interfaces. If you create more machines or your deployed workloads create load balancers, your cluster uses more network interfaces. Network security groups 2 Each cluster creates network security groups for each subnet in the VNet. The default cluster creates network security groups for the control plane and for the compute node subnets: controlplane Allows the control plane machines to be reached on port 6443 from anywhere node Allows worker nodes to be reached from the internet on ports 80 and 443 Network load balancers 3 Each cluster creates the following load balancers : default Public IP address that load balances requests to ports 80 and 443 across worker machines internal Private IP address that load balances requests to ports 6443 and 22623 across control plane machines external Public IP address that load balances requests to port 6443 across control plane machines If your applications create more Kubernetes LoadBalancer service objects, your cluster uses more load balancers. Public IP addresses 2 The public load balancer uses a public IP address. The bootstrap machine also uses a public IP address so that you can SSH into the machine to troubleshoot issues during installation. The IP address for the bootstrap node is used only during installation. Private IP addresses 7 The internal load balancer, each of the three control plane machines, and each of the three worker machines each use a private IP address. Additional resources Optimizing storage . 2.2. Configuring a DNS zone in Azure Stack Hub To successfully install OpenShift Container Platform on Azure Stack Hub, you must create DNS records in an Azure Stack Hub DNS zone. The DNS zone must be authoritative for the domain. 
To delegate a registrar's DNS zone to Azure Stack Hub, see Microsoft's documentation for Azure Stack Hub datacenter DNS integration . 2.3. Required Azure Stack Hub roles Your Microsoft Azure Stack Hub account must have the following roles for the subscription that you use: Owner To set roles on the Azure portal, see the Manage access to resources in Azure Stack Hub with role-based access control in the Microsoft documentation. 2.4. Creating a service principal Because OpenShift Container Platform and its installation program create Microsoft Azure resources by using the Azure Resource Manager, you must create a service principal to represent it. Prerequisites Install or update the Azure CLI . Your Azure account has the required roles for the subscription that you use. Procedure Register your environment: USD az cloud register -n AzureStackCloud --endpoint-resource-manager <endpoint> 1 1 Specify the Azure Resource Manager endpoint, `https://management.<region>.<fqdn>/`. See the Microsoft documentation for details. Set the active environment: USD az cloud set -n AzureStackCloud Update your environment configuration to use the specific API version for Azure Stack Hub: USD az cloud update --profile 2019-03-01-hybrid Log in to the Azure CLI: USD az login If you are in a multitenant environment, you must also supply the tenant ID. If your Azure account uses subscriptions, ensure that you are using the right subscription: View the list of available accounts and record the tenantId value for the subscription you want to use for your cluster: USD az account list --refresh Example output [ { "cloudName": "AzureStackCloud", "id": "9bab1460-96d5-40b3-a78e-17b15e978a80", "isDefault": true, "name": "Subscription Name", "state": "Enabled", "tenantId": "6057c7e9-b3ae-489d-a54e-de3f6bf6a8ee", "user": { "name": "[email protected]", "type": "user" } } ] View your active account details and confirm that the tenantId value matches the subscription you want to use: USD az account show Example output { "environmentName": "AzureStackCloud", "id": "9bab1460-96d5-40b3-a78e-17b15e978a80", "isDefault": true, "name": "Subscription Name", "state": "Enabled", "tenantId": "6057c7e9-b3ae-489d-a54e-de3f6bf6a8ee", 1 "user": { "name": "[email protected]", "type": "user" } } 1 Ensure that the value of the tenantId parameter is the correct subscription ID. If you are not using the right subscription, change the active subscription: USD az account set -s <subscription_id> 1 1 Specify the subscription ID. Verify the subscription ID update: USD az account show Example output { "environmentName": "AzureStackCloud", "id": "33212d16-bdf6-45cb-b038-f6565b61edda", "isDefault": true, "name": "Subscription Name", "state": "Enabled", "tenantId": "8049c7e9-c3de-762d-a54e-dc3f6be6a7ee", "user": { "name": "[email protected]", "type": "user" } } Record the tenantId and id parameter values from the output. You need these values during the OpenShift Container Platform installation. Create the service principal for your account: USD az ad sp create-for-rbac --role Contributor --name <service_principal> \ 1 --scopes /subscriptions/<subscription_id> 2 --years <years> 3 1 Specify the service principal name. 2 Specify the subscription ID. 3 Specify the number of years. By default, a service principal expires in one year. By using the --years option you can extend the validity of your service principal.
Example output Creating 'Contributor' role assignment under scope '/subscriptions/<subscription_id>' The output includes credentials that you must protect. Be sure that you do not include these credentials in your code or check the credentials into your source control. For more information, see https://aka.ms/azadsp-cli { "appId": "ac461d78-bf4b-4387-ad16-7e32e328aec6", "displayName": "<service_principal>", "password": "00000000-0000-0000-0000-000000000000", "tenantId": "8049c7e9-c3de-762d-a54e-dc3f6be6a7ee" } Record the values of the appId and password parameters from the output. You need these values during OpenShift Container Platform installation. Additional resources For more information about CCO modes, see About the Cloud Credential Operator . 2.5. Next steps Install an OpenShift Container Platform cluster: Installing a cluster quickly on Azure Stack Hub . Install an OpenShift Container Platform cluster on Azure Stack Hub with user-provisioned infrastructure by following Installing a cluster on Azure Stack Hub using ARM templates .
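For orientation, the following condensed shell sketch strings the preceding steps together. It is illustrative only: the Resource Manager endpoint and the service principal name ocp-installer-sp are assumptions, and the subscription ID is the example value from the output above.
az cloud register -n AzureStackCloud --endpoint-resource-manager https://management.mylocation.example.com/   # endpoint is an assumption
az cloud set -n AzureStackCloud
az cloud update --profile 2019-03-01-hybrid
az login
az account set -s 9bab1460-96d5-40b3-a78e-17b15e978a80   # example subscription ID shown above
az ad sp create-for-rbac --role Contributor --name ocp-installer-sp --scopes /subscriptions/9bab1460-96d5-40b3-a78e-17b15e978a80 --years 2
# Record the appId, password, and tenantId values from the JSON output for use during installation.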
[ "az cloud register -n AzureStackCloud --endpoint-resource-manager <endpoint> 1", "az cloud set -n AzureStackCloud", "az cloud update --profile 2019-03-01-hybrid", "az login", "az account list --refresh", "[ { \"cloudName\": AzureStackCloud\", \"id\": \"9bab1460-96d5-40b3-a78e-17b15e978a80\", \"isDefault\": true, \"name\": \"Subscription Name\", \"state\": \"Enabled\", \"tenantId\": \"6057c7e9-b3ae-489d-a54e-de3f6bf6a8ee\", \"user\": { \"name\": \"[email protected]\", \"type\": \"user\" } } ]", "az account show", "{ \"environmentName\": AzureStackCloud\", \"id\": \"9bab1460-96d5-40b3-a78e-17b15e978a80\", \"isDefault\": true, \"name\": \"Subscription Name\", \"state\": \"Enabled\", \"tenantId\": \"6057c7e9-b3ae-489d-a54e-de3f6bf6a8ee\", 1 \"user\": { \"name\": \"[email protected]\", \"type\": \"user\" } }", "az account set -s <subscription_id> 1", "az account show", "{ \"environmentName\": AzureStackCloud\", \"id\": \"33212d16-bdf6-45cb-b038-f6565b61edda\", \"isDefault\": true, \"name\": \"Subscription Name\", \"state\": \"Enabled\", \"tenantId\": \"8049c7e9-c3de-762d-a54e-dc3f6be6a7ee\", \"user\": { \"name\": \"[email protected]\", \"type\": \"user\" } }", "az ad sp create-for-rbac --role Contributor --name <service_principal> \\ 1 --scopes /subscriptions/<subscription_id> 2 --years <years> 3", "Creating 'Contributor' role assignment under scope '/subscriptions/<subscription_id>' The output includes credentials that you must protect. Be sure that you do not include these credentials in your code or check the credentials into your source control. For more information, see https://aka.ms/azadsp-cli { \"appId\": \"ac461d78-bf4b-4387-ad16-7e32e328aec6\", \"displayName\": <service_principal>\", \"password\": \"00000000-0000-0000-0000-000000000000\", \"tenantId\": \"8049c7e9-c3de-762d-a54e-dc3f6be6a7ee\" }" ]
https://docs.redhat.com/en/documentation/openshift_container_platform_installation/4.16/html/installing_on_azure_stack_hub/installing-azure-stack-hub-account
Chapter 1. Support policy for Red Hat build of OpenJDK
Chapter 1. Support policy for Red Hat build of OpenJDK Red Hat will support select major versions of Red Hat build of OpenJDK in its products. For consistency, these are the same versions that Oracle designates as long-term support (LTS) for the Oracle JDK. A major version of Red Hat build of OpenJDK will be supported for a minimum of six years from the time that version is first introduced. For more information, see the OpenJDK Life Cycle and Support Policy . Note RHEL 6 reached the end of life in November 2020. Because of this, RHEL 6 is not a supported configuration for Red Hat build of OpenJDK.
null
https://docs.redhat.com/en/documentation/red_hat_build_of_openjdk/11/html/release_notes_for_red_hat_build_of_openjdk_11.0.13/rn-openjdk-support-policy
28.4. Automating the Installation with Kickstart
28.4. Automating the Installation with Kickstart You can allow an installation to run unattended by using Kickstart. A Kickstart file specifies settings for an installation. Once the installation system boots, it can read a Kickstart file and carry out the installation process without any further input from a user. Note The Red Hat Enterprise Linux installation process automatically writes a Kickstart file that contains the settings for the installed system. This file is always saved as /root/anaconda-ks.cfg . You may use this file to repeat the installation with identical settings, or modify copies to specify settings for other systems. Important Firstboot does not run after a system is installed from a Kickstart file unless a desktop and the X Window System were included in the installation and graphical login was enabled. Either specify a user with the user option in the Kickstart file before installing additional systems from it (refer to Section 32.4, "Kickstart Options" for details) or log into the installed system with a virtual console as root and add users with the adduser command. Red Hat Enterprise Linux includes a graphical application to create and modify Kickstart files by selecting the options that you require. Install the system-config-kickstart package to use this utility. To load the Red Hat Enterprise Linux Kickstart editor, choose Applications → System Tools → Kickstart . Kickstart files list installation settings in plain text, with one option per line. This format lets you modify your Kickstart files with any text editor, and write scripts or applications that generate custom Kickstart files for your systems. To automate the installation process with a Kickstart file, use the ks option to specify the name and location of the file: You may use Kickstart files that are held on removable storage, a hard drive, or a network server. Refer to Table 28.2, "Kickstart sources" for the supported Kickstart sources. Table 28.2. Kickstart sources Kickstart source Option format DVD drive ks= cdrom:/directory/ks.cfg Hard Drive ks= hd:/device/directory/ks.cfg Other Device ks= file:/device/directory/ks.cfg HTTP Server ks= http://server.mydomain.com/directory/ks.cfg HTTPS Server ks= https://server.mydomain.com/directory/ks.cfg FTP Server ks= ftp://server.mydomain.com/directory/ks.cfg NFS Server ks= nfs:server.mydomain.com:/directory/ks.cfg Important You can use a device name such as /dev/sdb to identify a hard drive or a USB drive containing a Kickstart file. However, there is no guarantee that the device identifier will remain the same on multiple systems. Therefore, the recommended method for specifying a hard drive or a USB drive in Kickstart installations is by UUID. For example: You can determine a device's UUID by using the blkid command as root : To obtain a Kickstart file from a script or application on a Web server, specify the URL of the application with the ks= option. If you add the option kssendmac , the request also sends HTTP headers to the Web application. Your application can use these headers to identify the computer. This line sends a request with headers to the application http://server.mydomain.com/kickstart.cgi :
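linux ks=http://server.mydomain.com/kickstart.cgi kssendmac
As a minimal sketch of serving a Kickstart file over HTTP, assuming an Apache web server with a document root of /var/www/html (the path is an assumption and server.mydomain.com is the example host used above), you can reuse the automatically written file:
cp /root/anaconda-ks.cfg /var/www/html/ks.cfg   # start from the settings of the last installation
chmod 644 /var/www/html/ks.cfg                  # make the file readable by the web server
# Boot the target system with:
#   linux ks=http://server.mydomain.com/ks.cfg kssendmac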
[ "linux ks= location/kickstart-file.cfg", "ks=hd:UUID=ede47e6c-8b5f-49ad-9509-774fa7119281:ks.cfg", "blkid /dev/sdb1 /dev/sdb1: UUID=\"2c3a072a-3d0c-4f3a-a4a1-ab5f24f59266\" TYPE=\"ext4\"", "linux ks=http://server.mydomain.com/kickstart.cgi kssendmac" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/installation_guide/sn-automating-installation
Preface
Preface If you're using GitLab CI for your application, pipeline runs may fail due to missing secrets. Without them, integrations with Quay, JFrog, and Red Hat Advanced Cluster Security (ACS) won't work, breaking security tasks like vulnerability scanning, image signing, and SBOM generation for compliance. To prevent this, you need to securely store secrets in GitLab CI. This guide walks you through the process, ensuring your pipelines run smoothly and securely.
null
https://docs.redhat.com/en/documentation/red_hat_trusted_application_pipeline/1.4/html/configuring_gitlab_ci/pr01
Chapter 7. OperatorCondition [operators.coreos.com/v2]
Chapter 7. OperatorCondition [operators.coreos.com/v2] Description OperatorCondition is a Custom Resource of type OperatorCondition which is used to convey information to OLM about the state of an operator. Type object Required metadata 7.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata spec object OperatorConditionSpec allows an operator to report state to OLM and provides cluster admin with the ability to manually override state reported by the operator. status object OperatorConditionStatus allows OLM to convey which conditions have been observed. 7.1.1. .spec Description OperatorConditionSpec allows an operator to report state to OLM and provides cluster admin with the ability to manually override state reported by the operator. Type object Property Type Description conditions array conditions[] object Condition contains details for one aspect of the current state of this API Resource. --- This struct is intended for direct use as an array at the field path .status.conditions. For example, type FooStatus struct{ // Represents the observations of a foo's current state. // Known .status.conditions.type are: "Available", "Progressing", and "Degraded" // +patchMergeKey=type // +patchStrategy=merge // +listType=map // +listMapKey=type Conditions []metav1.Condition json:"conditions,omitempty" patchStrategy:"merge" patchMergeKey:"type" protobuf:"bytes,1,rep,name=conditions" // other fields } deployments array (string) overrides array overrides[] object Condition contains details for one aspect of the current state of this API Resource. --- This struct is intended for direct use as an array at the field path .status.conditions. For example, type FooStatus struct{ // Represents the observations of a foo's current state. // Known .status.conditions.type are: "Available", "Progressing", and "Degraded" // +patchMergeKey=type // +patchStrategy=merge // +listType=map // +listMapKey=type Conditions []metav1.Condition json:"conditions,omitempty" patchStrategy:"merge" patchMergeKey:"type" protobuf:"bytes,1,rep,name=conditions" // other fields } serviceAccounts array (string) 7.1.2. .spec.conditions Description Type array 7.1.3. .spec.conditions[] Description Condition contains details for one aspect of the current state of this API Resource. --- This struct is intended for direct use as an array at the field path .status.conditions. For example, type FooStatus struct{ // Represents the observations of a foo's current state. 
// Known .status.conditions.type are: "Available", "Progressing", and "Degraded" // +patchMergeKey=type // +patchStrategy=merge // +listType=map // +listMapKey=type Conditions []metav1.Condition json:"conditions,omitempty" patchStrategy:"merge" patchMergeKey:"type" protobuf:"bytes,1,rep,name=conditions" // other fields } Type object Required lastTransitionTime message reason status type Property Type Description lastTransitionTime string lastTransitionTime is the last time the condition transitioned from one status to another. This should be when the underlying condition changed. If that is not known, then using the time when the API field changed is acceptable. message string message is a human readable message indicating details about the transition. This may be an empty string. observedGeneration integer observedGeneration represents the .metadata.generation that the condition was set based upon. For instance, if .metadata.generation is currently 12, but the .status.conditions[x].observedGeneration is 9, the condition is out of date with respect to the current state of the instance. reason string reason contains a programmatic identifier indicating the reason for the condition's last transition. Producers of specific condition types may define expected values and meanings for this field, and whether the values are considered a guaranteed API. The value should be a CamelCase string. This field may not be empty. status string status of the condition, one of True, False, Unknown. type string type of condition in CamelCase or in foo.example.com/CamelCase. --- Many .condition.type values are consistent across resources like Available, but because arbitrary conditions can be useful (see .node.status.conditions), the ability to deconflict is important. The regex it matches is (dns1123SubdomainFmt/)?(qualifiedNameFmt) 7.1.4. .spec.overrides Description Type array 7.1.5. .spec.overrides[] Description Condition contains details for one aspect of the current state of this API Resource. --- This struct is intended for direct use as an array at the field path .status.conditions. For example, type FooStatus struct{ // Represents the observations of a foo's current state. // Known .status.conditions.type are: "Available", "Progressing", and "Degraded" // +patchMergeKey=type // +patchStrategy=merge // +listType=map // +listMapKey=type Conditions []metav1.Condition json:"conditions,omitempty" patchStrategy:"merge" patchMergeKey:"type" protobuf:"bytes,1,rep,name=conditions" // other fields } Type object Required message reason status type Property Type Description lastTransitionTime string lastTransitionTime is the last time the condition transitioned from one status to another. This should be when the underlying condition changed. If that is not known, then using the time when the API field changed is acceptable. message string message is a human readable message indicating details about the transition. This may be an empty string. observedGeneration integer observedGeneration represents the .metadata.generation that the condition was set based upon. For instance, if .metadata.generation is currently 12, but the .status.conditions[x].observedGeneration is 9, the condition is out of date with respect to the current state of the instance. reason string reason contains a programmatic identifier indicating the reason for the condition's last transition. Producers of specific condition types may define expected values and meanings for this field, and whether the values are considered a guaranteed API. 
The value should be a CamelCase string. This field may not be empty. status string status of the condition, one of True, False, Unknown. type string type of condition in CamelCase or in foo.example.com/CamelCase. --- Many .condition.type values are consistent across resources like Available, but because arbitrary conditions can be useful (see .node.status.conditions), the ability to deconflict is important. The regex it matches is (dns1123SubdomainFmt/)?(qualifiedNameFmt) 7.1.6. .status Description OperatorConditionStatus allows OLM to convey which conditions have been observed. Type object Property Type Description conditions array conditions[] object Condition contains details for one aspect of the current state of this API Resource. --- This struct is intended for direct use as an array at the field path .status.conditions. For example, type FooStatus struct{ // Represents the observations of a foo's current state. // Known .status.conditions.type are: "Available", "Progressing", and "Degraded" // +patchMergeKey=type // +patchStrategy=merge // +listType=map // +listMapKey=type Conditions []metav1.Condition json:"conditions,omitempty" patchStrategy:"merge" patchMergeKey:"type" protobuf:"bytes,1,rep,name=conditions" // other fields } 7.1.7. .status.conditions Description Type array 7.1.8. .status.conditions[] Description Condition contains details for one aspect of the current state of this API Resource. --- This struct is intended for direct use as an array at the field path .status.conditions. For example, type FooStatus struct{ // Represents the observations of a foo's current state. // Known .status.conditions.type are: "Available", "Progressing", and "Degraded" // +patchMergeKey=type // +patchStrategy=merge // +listType=map // +listMapKey=type Conditions []metav1.Condition json:"conditions,omitempty" patchStrategy:"merge" patchMergeKey:"type" protobuf:"bytes,1,rep,name=conditions" // other fields } Type object Required lastTransitionTime message reason status type Property Type Description lastTransitionTime string lastTransitionTime is the last time the condition transitioned from one status to another. This should be when the underlying condition changed. If that is not known, then using the time when the API field changed is acceptable. message string message is a human readable message indicating details about the transition. This may be an empty string. observedGeneration integer observedGeneration represents the .metadata.generation that the condition was set based upon. For instance, if .metadata.generation is currently 12, but the .status.conditions[x].observedGeneration is 9, the condition is out of date with respect to the current state of the instance. reason string reason contains a programmatic identifier indicating the reason for the condition's last transition. Producers of specific condition types may define expected values and meanings for this field, and whether the values are considered a guaranteed API. The value should be a CamelCase string. This field may not be empty. status string status of the condition, one of True, False, Unknown. type string type of condition in CamelCase or in foo.example.com/CamelCase. --- Many .condition.type values are consistent across resources like Available, but because arbitrary conditions can be useful (see .node.status.conditions), the ability to deconflict is important. The regex it matches is (dns1123SubdomainFmt/)?(qualifiedNameFmt) 7.2. 
API endpoints The following API endpoints are available: /apis/operators.coreos.com/v2/operatorconditions GET : list objects of kind OperatorCondition /apis/operators.coreos.com/v2/namespaces/{namespace}/operatorconditions DELETE : delete collection of OperatorCondition GET : list objects of kind OperatorCondition POST : create an OperatorCondition /apis/operators.coreos.com/v2/namespaces/{namespace}/operatorconditions/{name} DELETE : delete an OperatorCondition GET : read the specified OperatorCondition PATCH : partially update the specified OperatorCondition PUT : replace the specified OperatorCondition /apis/operators.coreos.com/v2/namespaces/{namespace}/operatorconditions/{name}/status GET : read status of the specified OperatorCondition PATCH : partially update status of the specified OperatorCondition PUT : replace status of the specified OperatorCondition 7.2.1. /apis/operators.coreos.com/v2/operatorconditions Table 7.1. Global query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. 
This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. pretty string If 'true', then the output is pretty printed. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. HTTP method GET Description list objects of kind OperatorCondition Table 7.2. HTTP responses HTTP code Reponse body 200 - OK OperatorConditionList schema 401 - Unauthorized Empty 7.2.2. /apis/operators.coreos.com/v2/namespaces/{namespace}/operatorconditions Table 7.3. Global path parameters Parameter Type Description namespace string object name and auth scope, such as for teams and projects Table 7.4. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method DELETE Description delete collection of OperatorCondition Table 7.5. Query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. 
Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. Table 7.6. HTTP responses HTTP code Reponse body 200 - OK Status schema 401 - Unauthorized Empty HTTP method GET Description list objects of kind OperatorCondition Table 7.7. Query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. 
Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. 
Defaults to unset timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. Table 7.8. HTTP responses HTTP code Reponse body 200 - OK OperatorConditionList schema 401 - Unauthorized Empty HTTP method POST Description create an OperatorCondition Table 7.9. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 7.10. Body parameters Parameter Type Description body OperatorCondition schema Table 7.11. HTTP responses HTTP code Reponse body 200 - OK OperatorCondition schema 201 - Created OperatorCondition schema 202 - Accepted OperatorCondition schema 401 - Unauthorized Empty 7.2.3. /apis/operators.coreos.com/v2/namespaces/{namespace}/operatorconditions/{name} Table 7.12. Global path parameters Parameter Type Description name string name of the OperatorCondition namespace string object name and auth scope, such as for teams and projects Table 7.13. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method DELETE Description delete an OperatorCondition Table 7.14. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed gracePeriodSeconds integer The duration in seconds before the object should be deleted. Value must be non-negative integer. The value zero indicates delete immediately. If this value is nil, the default grace period for the specified type will be used. 
Defaults to a per object value if not specified. zero means delete immediately. orphanDependents boolean Deprecated: please use the PropagationPolicy, this field will be deprecated in 1.7. Should the dependent objects be orphaned. If true/false, the "orphan" finalizer will be added to/removed from the object's finalizers list. Either this field or PropagationPolicy may be set, but not both. propagationPolicy string Whether and how garbage collection will be performed. Either this field or OrphanDependents may be set, but not both. The default policy is decided by the existing finalizer set in the metadata.finalizers and the resource-specific default policy. Acceptable values are: 'Orphan' - orphan the dependents; 'Background' - allow the garbage collector to delete the dependents in the background; 'Foreground' - a cascading policy that deletes all dependents in the foreground. Table 7.15. Body parameters Parameter Type Description body DeleteOptions schema Table 7.16. HTTP responses HTTP code Reponse body 200 - OK Status schema 202 - Accepted Status schema 401 - Unauthorized Empty HTTP method GET Description read the specified OperatorCondition Table 7.17. Query parameters Parameter Type Description resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset Table 7.18. HTTP responses HTTP code Reponse body 200 - OK OperatorCondition schema 401 - Unauthorized Empty HTTP method PATCH Description partially update the specified OperatorCondition Table 7.19. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 7.20. Body parameters Parameter Type Description body Patch schema Table 7.21. 
HTTP responses HTTP code Reponse body 200 - OK OperatorCondition schema 401 - Unauthorized Empty HTTP method PUT Description replace the specified OperatorCondition Table 7.22. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 7.23. Body parameters Parameter Type Description body OperatorCondition schema Table 7.24. HTTP responses HTTP code Reponse body 200 - OK OperatorCondition schema 201 - Created OperatorCondition schema 401 - Unauthorized Empty 7.2.4. /apis/operators.coreos.com/v2/namespaces/{namespace}/operatorconditions/{name}/status Table 7.25. Global path parameters Parameter Type Description name string name of the OperatorCondition namespace string object name and auth scope, such as for teams and projects Table 7.26. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method GET Description read status of the specified OperatorCondition Table 7.27. Query parameters Parameter Type Description resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset Table 7.28. HTTP responses HTTP code Reponse body 200 - OK OperatorCondition schema 401 - Unauthorized Empty HTTP method PATCH Description partially update status of the specified OperatorCondition Table 7.29. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. 
The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 7.30. Body parameters Parameter Type Description body Patch schema Table 7.31. HTTP responses HTTP code Reponse body 200 - OK OperatorCondition schema 401 - Unauthorized Empty HTTP method PUT Description replace status of the specified OperatorCondition Table 7.32. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 7.33. Body parameters Parameter Type Description body OperatorCondition schema Table 7.34. 
HTTP responses HTTP code Reponse body 200 - OK OperatorCondition schema 201 - Created OperatorCondition schema 401 - Unauthorized Empty
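As a minimal sketch of how these endpoints are typically exercised with the oc client, assuming an illustrative namespace operators and OperatorCondition name my-operator.v1.2.3:
oc get operatorconditions -n operators                            # list objects of kind OperatorCondition
oc get operatorcondition my-operator.v1.2.3 -n operators -o yaml  # read the specified OperatorCondition
# Read the status subresource through the raw API path listed above:
oc get --raw /apis/operators.coreos.com/v2/namespaces/operators/operatorconditions/my-operator.v1.2.3/status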
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.13/html/operatorhub_apis/operatorcondition-operators-coreos-com-v2
Chapter 8. Frequently asked questions
Chapter 8. Frequently asked questions Is it possible to deploy applications from OpenShift Dev Spaces to an OpenShift cluster? The user must log in to the OpenShift cluster from their running workspace using oc login . For best performance, what is the recommended storage to use for Persistent Volumes used with OpenShift Dev Spaces? Use block storage. Is it possible to deploy more than one OpenShift Dev Spaces instance on the same cluster? Only one OpenShift Dev Spaces instance can be deployed per cluster. Is it possible to install OpenShift Dev Spaces offline (that is, disconnected from the internet)? See Installing Red Hat OpenShift Dev Spaces in restricted environments on OpenShift . Is it possible to use non-default certificates with OpenShift Dev Spaces? You can use self-signed or public certificates. See Importing untrusted TLS certificates . Is it possible to run multiple workspaces simultaneously? See Enabling users to run multiple workspaces simultaneously .
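For the first question, a minimal sketch of the login step from a workspace terminal, with an illustrative API server URL and a token copied from the OpenShift web console:
oc login --server=https://api.cluster.example.com:6443 --token=<token>
oc project my-project   # illustrative target project for the deployment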
null
https://docs.redhat.com/en/documentation/red_hat_openshift_dev_spaces/3.15/html/release_notes_and_known_issues/frequently-asked-questions_devspaces
Chapter 7. Configuring instance scheduling and placement
Chapter 7. Configuring instance scheduling and placement The Compute scheduler service determines on which Compute node or host aggregate to place an instance. When the Compute (nova) service receives a request to launch or move an instance, it uses the specifications provided in the request, the flavor, and the image to find a suitable host. For example, a flavor can specify the traits an instance requires a host to have, such as the type of storage disk, or the Intel CPU instruction set extension. The Compute scheduler service uses the configuration of the following components, in the following order, to determine on which Compute node to launch or move an instance: Placement service prefilters : The Compute scheduler service uses the Placement service to filter the set of candidate Compute nodes based on specific attributes. For example, the Placement service automatically excludes disabled Compute nodes. Filters : Used by the Compute scheduler service to determine the initial set of Compute nodes on which to launch an instance. Weights : The Compute scheduler service prioritizes the filtered Compute nodes using a weighting system. The highest weight has the highest priority. In the following diagram, hosts 1 and 3 are eligible after filtering. Host 1 has the highest weight and therefore has the highest priority for scheduling. 7.1. Prefiltering using the Placement service The Compute service (nova) interacts with the Placement service when it creates and manages instances. The Placement service tracks the inventory and use of resource providers, such as a Compute node, a shared storage pool, or an IP allocation pool, and their available quantitative resources, such as the available vCPUs. Any service that needs to manage the selection and consumption of resources can use the Placement service. The Placement service also tracks the mapping of available qualitative resources to resource providers, such as the type of storage disk trait a resource provider has. The Placement service applies prefilters to the set of candidate Compute nodes based on Placement service resource provider inventories and traits. You can create prefilters based on the following criteria: Supported image types Traits Projects or tenants Availability zone 7.1.1. Filtering by requested image type support You can exclude Compute nodes that do not support the disk format of the image used to launch an instance. This is useful when your environment uses Red Hat Ceph Storage as an ephemeral backend, which does not support QCOW2 images. Enabling this feature ensures that the scheduler does not send requests to launch instances using a QCOW2 image to Compute nodes backed by Red Hat Ceph Storage. Procedure Open your OpenStackControlPlane custom resource (CR) file, openstack_control_plane.yaml , on your workstation. Add the customServiceConfig parameter to the Compute scheduler ( nova-scheduler ) template, schedulerServiceTemplate , to configure the Compute scheduler service to filter by requested image type support: Update the control plane: Wait until RHOCP creates the resources related to the OpenStackControlPlane CR. Run the following command to check the status: The OpenStackControlPlane resources are created when the status is "Setup complete". Tip Append the -w option to the end of the get command to track deployment progress. Optional: Confirm that the control plane is deployed by reviewing the pods in the openstack namespace for each of the cells you created. 
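A minimal sketch of the update and verification steps, assuming the control plane CR file is openstack_control_plane.yaml and that the scheduler option behind this prefilter is the upstream nova [scheduler] query_placement_for_image_type_support setting; the YAML nesting in the comments is an assumption and must be adapted to your existing CR:
# Excerpt of openstack_control_plane.yaml (nesting under spec.nova.template is an assumption):
#   schedulerServiceTemplate:
#     customServiceConfig: |
#       [scheduler]
#       query_placement_for_image_type_support = true
oc apply -f openstack_control_plane.yaml        # update the control plane
oc get openstackcontrolplane -n openstack -w    # wait until the status reports "Setup complete"
oc get pods -n openstack                        # review the pods for each cell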
The control plane is deployed when all the pods are either completed or running. 7.1.2. Filtering by resource provider traits Each resource provider has a set of traits. Traits are the qualitative aspects of a resource provider, for example, the type of storage disk, or the Intel CPU instruction set extension. The Compute node reports its capabilities to the Placement service as traits. An instance can specify which of these traits it requires, or which traits the resource provider must not have. The Compute scheduler can use these traits to identify a suitable Compute node or host aggregate to host an instance. To enable your cloud users to create instances on hosts that have particular traits, you can define a flavor that requires or forbids a particular trait, and you can create an image that requires or forbids a particular trait. For a list of the available traits, see the os-traits library . You can also create custom traits, as required. Additional resources Section 7.5, "Declaring custom traits and resource classes" 7.1.2.1. Creating an image that requires or forbids a resource provider trait You can create an instance image that your cloud users can use to launch instances on hosts that have particular traits. Prerequisites You installed the oc and podman command line tools on your workstation. You are logged on to a workstation that has access to the RHOSO control plane as a user with cluster-admin privileges. Procedure Access the remote shell for the OpenStackClient pod from your workstation: Change to the cloud-admin home directory: Create a new image: Identify the trait you require a host or host aggregate to have. You can select an existing trait, or create a new trait: To use an existing trait, list the existing traits to retrieve the trait name: To create a new trait, enter the following command: Custom traits must begin with the prefix CUSTOM_ and contain only the letters A through Z, the numbers 0 through 9 and the underscore "_" character. Collect the existing resource provider traits of each host: Check the existing resource provider traits for the traits you require a host or host aggregate to have: If the traits you require are not already added to the resource provider, then add the existing traits and your required traits to the resource providers for each host: Replace <TRAIT_NAME> with the name of the trait that you want to add to the resource provider. You can use the --trait option more than once to add additional traits, as required. Note This command performs a full replacement of the traits for the resource provider. Therefore, you must retrieve the list of existing resource provider traits on the host and set them again to prevent them from being removed. To schedule instances on a host or host aggregate that has a required trait, add the trait to the image extra specs. For example, to schedule instances on a host or host aggregate that supports AVX-512, add the following trait to the image extra specs: To filter out hosts or host aggregates that have a forbidden trait, add the trait to the image extra specs. For example, to prevent instances from being scheduled on a host or host aggregate that supports multi-attach volumes, add the following trait to the image extra specs: Exit the openstackclient pod: 7.1.2.2. Creating a flavor that requires or forbids a resource provider trait You can create flavors that your cloud users can use to launch instances on hosts that have particular traits. 
Prerequisites You installed the oc and podman command line tools on your workstation. You are logged on to a workstation that has access to the RHOSO control plane as a user with cluster-admin privileges. Procedure Access the remote shell for the OpenStackClient pod from your workstation: Change to the cloud-admin home directory: Create a flavor: Identify the trait you require a host or host aggregate to have. You can select an existing trait, or create a new trait: To use an existing trait, list the existing traits to retrieve the trait name: To create a new trait, enter the following command: Custom traits must begin with the prefix CUSTOM_ and contain only the letters A through Z, the numbers 0 through 9 and the underscore "_" character. Collect the existing resource provider traits of each host: Check the existing resource provider traits for the traits you require a host or host aggregate to have: If the traits you require are not already added to the resource provider, then add the existing traits and your required traits to the resource providers for each host: Replace <TRAIT_NAME> with the name of the trait that you want to add to the resource provider. You can use the --trait option more than once to add additional traits, as required. Note This command performs a full replacement of the traits for the resource provider. Therefore, you must retrieve the list of existing resource provider traits on the host and set them again to prevent them from being removed. To schedule instances on a host or host aggregate that has a required trait, add the trait to the flavor extra specs. For example, to schedule instances on a host or host aggregate that supports AVX-512, add the following trait to the flavor extra specs: To filter out hosts or host aggregates that have a forbidden trait, add the trait to the flavor extra specs. For example, to prevent instances from being scheduled on a host or host aggregate that supports multi-attach volumes, add the following trait to the flavor extra specs: Exit the openstackclient pod: 7.1.3. Filtering by isolating host aggregates You can restrict scheduling on a host aggregate to only those instances whose flavor and image traits match the metadata of the host aggregate. The combination of flavor and image metadata must require all the host aggregate traits to be eligible for scheduling on Compute nodes in that host aggregate. Prerequisites You installed oc and podman command line tools on your workstation. You are logged on to a workstation that has access to the RHOSO control plane as a user with cluster-admin privileges. Procedure Access the remote shell for the OpenStackClient pod from your workstation: Change to the cloud-admin home directory: Open your Compute environment file. To isolate host aggregates to host only instances whose flavor and image traits match the aggregate metadata, set the NovaSchedulerEnableIsolatedAggregateFiltering parameter to True in the Compute environment file. Save the updates to your Compute environment file. Add your Compute environment file to the stack with your other environment files and deploy the data plane: Identify the traits you want to isolate the host aggregate for. You can select an existing trait, or create a new trait: To use an existing trait, list the existing traits to retrieve the trait name: To create a new trait, enter the following command: Custom traits must begin with the prefix CUSTOM_ and contain only the letters A through Z, the numbers 0 through 9 and the underscore "_" character. 
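For example, to define a custom trait that marks Compute nodes tuned for HPC workloads, such as the CUSTOM_HPC_OPTIMIZED trait mentioned later in this chapter, you might run:
openstack --os-placement-api-version 1.6 trait create CUSTOM_HPC_OPTIMIZED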
Collect the existing resource provider traits of each Compute node: Check the existing resource provider traits for the traits you want to isolate the host aggregate for: If the traits you require are not already added to the resource provider, then add the existing traits and your required traits to the resource providers for each Compute node in the host aggregate: Replace <TRAIT_NAME> with the name of the trait that you want to add to the resource provider. You can use the --trait option more than once to add additional traits, as required. Note This command performs a full replacement of the traits for the resource provider. Therefore, you must retrieve the list of existing resource provider traits on the host and set them again to prevent them from being removed. Repeat steps 6 - 8 for each Compute node in the host aggregate. Add the metadata property for the trait to the host aggregate: Add the trait to a flavor or an image: Exit the openstackclient pod: 7.2. Configuring filters and weights for the Compute scheduler service You need to configure the filters and weights for the Compute scheduler service to determine the initial set of Compute nodes on which to launch an instance. Procedure On your workstation, open your OpenStackControlPlane custom resource (CR) file, openstack_control_plane.yaml . Add the filters that you want the scheduler to use to the [filter_scheduler] enabled_filters parameter, for example: Specify which attribute to use to calculate the weight of each Compute node, for example: For more information on the available attributes, see Compute scheduler weights . Optional: Configure the multiplier to apply to each weigher. For example, to specify that the available RAM of a Compute node has a higher weight than the other default weighers, and that the Compute scheduler prefers Compute nodes with more available RAM over those nodes with less available RAM, use the following configuration: Tip You can also set multipliers to a negative value. In the above example, to prefer Compute nodes with less available RAM over those nodes with more available RAM, set ram_weight_multiplier to -2.0 . Update the control plane: After the RHOCP creates the resources related to the OpenStackControlPlane CR, run the following command to check the status: The OpenStackControlPlane resources are created when the status is "Setup complete". Tip Append the -w option to the end of the get command to track deployment progress. Optional: Confirm that the control plane is deployed by reviewing the pods in the openstack namespace for each of the cells that you created. The control plane is deployed when all the pods are either completed or running. Additional resources For a list of the available Compute scheduler service filters, see Compute scheduler filters . For a list of the available weight configuration options, see Compute scheduler weights . 7.3. Compute scheduler filters You configure the enabled_filters parameter in your Compute environment file to specify the filters the Compute scheduler must apply when selecting an appropriate Compute node to host an instance. The default configuration applies the following filters: ComputeFilter : The Compute node can service the request. ComputeCapabilitiesFilter : The Compute node satisfies the flavor extra specs. ImagePropertiesFilter : The Compute node satisfies the requested image properties. ServerGroupAntiAffinityFilter : The Compute node is not already hosting an instance in a specified group. 
ServerGroupAffinityFilter : The Compute node is already hosting instances in a specified group. SameHostFilter : The Compute node can schedule an instance on the same Compute node as a set of specific instances. DifferentHostFilter : The Compute host can schedule an instance on a different Compute node from a set of specific instances. PciPassthroughFilter : The Compute host can schedule instances on Compute nodes that have the devices that the instance requests by using the flavor extra_specs. NUMATopologyFilter : The Compute host can schedule instances with a NUMA topology on NUMA-capable Compute nodes. You can add and remove filters. The following table describes all the available filters. Table 7.1. Compute scheduler filters Filter Description AggregateImagePropertiesIsolation Use this filter to match the image metadata of an instance with host aggregate metadata. If any of the host aggregate metadata matches the metadata of the image, then the Compute nodes that belong to that host aggregate are candidates for launching instances from that image. The scheduler only recognises valid image metadata properties. AggregateInstanceExtraSpecsFilter Use this filter to match namespaced properties defined in the flavor extra specs of an instance with host aggregate metadata. You must scope your flavor extra_specs keys by prefixing them with the aggregate_instance_extra_specs: namespace. If any of the host aggregate metadata matches the metadata of the flavor extra spec, then the Compute nodes that belong to that host aggregate are candidates for launching instances from that image. AggregateIoOpsFilter Use this filter to filter hosts by I/O operations with a per-aggregate filter_scheduler/max_io_ops_per_host value. If the per-aggregate value is not found, the value falls back to the global setting. If the host is in more than one aggregate and more than one value is found, the scheduler uses the minimum value. AggregateMultiTenancyIsolation Use this filter to limit the availability of Compute nodes in project-isolated host aggregates to a specified set of projects. Only projects specified using the filter_tenant_id metadata key can launch instances on Compute nodes in the host aggregate. For more information, see Creating a project-isolated host aggregate . Note The project can still place instances on other hosts. To restrict this, use the NovaSchedulerPlacementAggregateRequiredForTenants parameter. AggregateNumInstancesFilter Use this filter to limit the number of instances each Compute node in an aggregate can host. You can configure the maximum number of instances per-aggregate by using the filter_scheduler/max_instances_per_host parameter. If the per-aggregate value is not found, the value falls back to the global setting. If the Compute node is in more than one aggregate, the scheduler uses the lowest max_instances_per_host value. AggregateTypeAffinityFilter Use this filter to pass hosts if no flavor metadata key is set, or the flavor aggregate metadata value contains the name of the requested flavor. The value of the flavor metadata entry is a string that may contain either a single flavor name or a comma-separated list of flavor names, such as m1.nano or m1.nano,m1.small . AllHostsFilter Use this filter to consider all available Compute nodes for instance scheduling. Note Using this filter does not disable other filters. AvailabilityZoneFilter Use this filter to launch instances on a Compute node in the availability zone specified by the instance. 
ComputeCapabilitiesFilter Use this filter to match namespaced properties defined in the flavor extra specs of an instance against the Compute node capabilities. You must prefix the flavor extra specs with the capabilities: namespace. A more efficient alternative to using the ComputeCapabilitiesFilter filter is to use CPU traits in your flavors, which are reported to the Placement service. Traits provide consistent naming for CPU features. For more information, see Filtering by using resource provider traits . ComputeFilter Use this filter to pass all Compute nodes that are operational and enabled. This filter should always be present. DifferentHostFilter Use this filter to enable scheduling of an instance on a different Compute node from a set of specific instances. To specify these instances when launching an instance, use the --hint argument with different_host as the key and the instance UUID as the value: ImagePropertiesFilter Use this filter to filter Compute nodes based on the following properties defined on the instance image: hw_architecture - Corresponds to the architecture of the host, for example, x86, ARM, and Power. img_hv_type - Corresponds to the hypervisor type, for example, KVM, QEMU, Xen, and LXC. img_hv_requested_version - Corresponds to the hypervisor version the Compute service reports. hw_vm_mode - Corresponds to the virtual machine mode, for example, hvm, xen, uml, or exe. Compute nodes that can support the specified image properties contained in the instance are passed to the scheduler. IsolatedHostsFilter Use this filter to only schedule instances with isolated images on isolated Compute nodes. You can also prevent non-isolated images from being used to build instances on isolated Compute nodes by configuring filter_scheduler/restrict_isolated_hosts_to_isolated_images . To specify the isolated set of images and hosts, use the filter_scheduler/isolated_hosts and filter_scheduler/isolated_images configuration options, for example: IoOpsFilter Use this filter to filter out hosts that have concurrent I/O operations that exceed the configured filter_scheduler/max_io_ops_per_host , which specifies the maximum number of I/O intensive instances allowed to run on the host. MetricsFilter Use this filter to limit scheduling to Compute nodes that report the metrics configured by using metrics/weight_setting . To use this filter, add the following configuration to your Compute environment file: By default, the Compute scheduler service updates the metrics every 60 seconds. NUMATopologyFilter Use this filter to schedule instances with a NUMA topology on NUMA-capable Compute nodes. Use flavor extra_specs and image properties to specify the NUMA topology for an instance. The filter tries to match the instance NUMA topology to the Compute node topology, taking into consideration the over-subscription limits for each host NUMA cell. NumInstancesFilter Use this filter to filter out Compute nodes that have more instances running than specified by the max_instances_per_host option. PciPassthroughFilter Use this filter to schedule instances on Compute nodes that have the devices that the instance requests by using the flavor extra_specs . Use this filter if you want to reserve nodes with PCI devices, which are typically expensive and limited, for instances that request them. SameHostFilter Use this filter to enable scheduling of an instance on the same Compute node as a set of specific instances.
To specify these instances when launching an instance, use the --hint argument with same_host as the key and the instance UUID as the value: ServerGroupAffinityFilter Use this filter to schedule instances in an affinity server group on the same Compute node. To create the server group, enter the following command: To launch an instance in this group, use the --hint argument with group as the key and the group UUID as the value: ServerGroupAntiAffinityFilter Use this filter to schedule instances that belong to an anti-affinity server group on different Compute nodes. To create the server group, enter the following command: To launch an instance in this group, use the --hint argument with group as the key and the group UUID as the value: SimpleCIDRAffinityFilter Use this filter to schedule instances on Compute nodes that have a specific IP subnet range. To specify the required range, use the --hint argument to pass the keys build_near_host_ip and cidr when launching an instance: 7.4. Compute scheduler weights Each Compute node has a weight that the scheduler can use to prioritize instance scheduling. After the Compute scheduler applies the filters, it selects the Compute node with the largest weight from the remaining candidate Compute nodes. The Compute scheduler determines the weight of each Compute node by performing the following tasks: The scheduler normalizes each weight to a value between 0.0 and 1.0. The scheduler multiplies the normalized weight by the weigher multiplier. The Compute scheduler calculates the weight normalization for each resource type by using the lower and upper values for the resource availability across the candidate Compute nodes: Nodes with the lowest availability of a resource (minval) are assigned '0'. Nodes with the highest availability of a resource (maxval) are assigned '1'. Nodes with resource availability within the minval - maxval range are assigned a normalized weight calculated by using the following formula: If all the Compute nodes have the same availability for a resource then they are all normalized to 0. For example, the scheduler calculates the normalized weights for available vCPUs across 10 Compute nodes, each with a different number of available vCPUs, as follows: Compute node 1 2 3 4 5 6 7 8 9 10 No of vCPUs 5 5 10 10 15 20 20 15 10 5 Normalized weight 0 0 0.33 0.33 0.67 1 1 0.67 0.33 0 The Compute scheduler uses the following formula to calculate the weight of a Compute node: The following table describes the available configuration options for weights. Note Weights can be set on host aggregates using the aggregate metadata key with the same name as the options detailed in the following table. If set on the host aggregate, the host aggregate value takes precedence. Table 7.2. Compute scheduler weights Configuration option Type Description filter_scheduler/weight_classes String Use this parameter to configure which of the following attributes to use for calculating the weight of each Compute node: nova.scheduler.weights.ram.RAMWeigher - Weighs the available RAM on the Compute node. nova.scheduler.weights.cpu.CPUWeigher - Weighs the available CPUs on the Compute node. nova.scheduler.weights.disk.DiskWeigher - Weighs the available disks on the Compute node. nova.scheduler.weights.metrics.MetricsWeigher - Weighs the metrics of the Compute node. nova.scheduler.weights.affinity.ServerGroupSoftAffinityWeigher - Weighs the proximity of the Compute node to other nodes in the given instance group. 
nova.scheduler.weights.affinity.ServerGroupSoftAntiAffinityWeigher - Weighs the proximity of the Compute node to other nodes in the given instance group. nova.scheduler.weights.compute.BuildFailureWeigher - Weighs Compute nodes by the number of recent failed boot attempts. nova.scheduler.weights.io_ops.IoOpsWeigher - Weighs Compute nodes by their workload. nova.scheduler.weights.pci.PCIWeigher - Weighs Compute nodes by their PCI availability. nova.scheduler.weights.cross_cell.CrossCellWeigher - Weighs Compute nodes based on which cell they are in, giving preference to Compute nodes in the source cell when moving an instance. nova.scheduler.weights.HypervisorVersionWeigher - Weighs Compute nodes based on the relative hypervisor version reported by the virt driver. nova.scheduler.weights.all_weighers - (Default) Uses all the above weighers. filter_scheduler/ram_weight_multiplier Floating point Use this parameter to specify the multiplier to use to weigh hosts based on the available RAM. Set to a positive value to prefer hosts with more available RAM, which spreads instances across many hosts. Set to a negative value to prefer hosts with less available RAM, which fills up (stacks) hosts as much as possible before scheduling to a less-used host. The absolute value, whether positive or negative, controls how strong the RAM weigher is relative to other weighers. Default: 1.0 - The scheduler spreads instances across all hosts evenly. filter_scheduler/disk_weight_multiplier Floating point Use this parameter to specify the multiplier to use to weigh hosts based on the available disk space. Set to a positive value to prefer hosts with more available disk space, which spreads instances across many hosts. Set to a negative value to prefer hosts with less available disk space, which fills up (stacks) hosts as much as possible before scheduling to a less-used host. The absolute value, whether positive or negative, controls how strong the disk weigher is relative to other weighers. Default: 1.0 - The scheduler spreads instances across all hosts evenly. filter_scheduler/cpu_weight_multiplier Floating point Use this parameter to specify the multiplier to use to weigh hosts based on the available vCPUs. Set to a positive value to prefer hosts with more available vCPUs, which spreads instances across many hosts. Set to a negative value to prefer hosts with less available vCPUs, which fills up (stacks) hosts as much as possible before scheduling to a less-used host. The absolute value, whether positive or negative, controls how strong the vCPU weigher is relative to other weighers. Default: 1.0 - The scheduler spreads instances across all hosts evenly. filter_scheduler/io_ops_weight_multiplier Floating point Use this parameter to specify the multiplier to use to weigh hosts based on the host workload. Set to a negative value to prefer hosts with lighter workloads, which distributes the workload across more hosts. Set to a positive value to prefer hosts with heavier workloads, which schedules instances onto hosts that are already busy. The absolute value, whether positive or negative, controls how strong the I/O operations weigher is relative to other weighers. Default: -1.0 - The scheduler distributes the workload across more hosts. filter_scheduler/build_failure_weight_multiplier Floating point Use this parameter to specify the multiplier to use to weigh hosts based on recent build failures. Set to a positive value to increase the significance of build failures recently reported by the host. 
Hosts with recent build failures are then less likely to be chosen. Set to 0 to disable weighing compute hosts by the number of recent failures. Default: 1000000.0 filter_scheduler/cross_cell_move_weight_multiplier Floating point Use this parameter to specify the multiplier to use to weigh hosts during a cross-cell move. This option determines how much weight is placed on a host which is within the same source cell when moving an instance. By default, the scheduler prefers hosts within the same source cell when migrating an instance. Set to a positive value to prefer hosts within the same cell the instance is currently running. Set to a negative value to prefer hosts located in a different cell from that where the instance is currently running. Default: 1000000.0 filter_scheduler/pci_weight_multiplier Positive floating point Use this parameter to specify the multiplier to use to weigh hosts based on the number of PCI devices on the host and the number of PCI devices requested by an instance. If an instance requests PCI devices, then the more PCI devices a Compute node has the higher the weight allocated to the Compute node. For example, if there are three hosts available, one with a single PCI device, one with multiple PCI devices and one without any PCI devices, then the Compute scheduler prioritizes these hosts based on the demands of the instance. The scheduler should prefer the first host if the instance requests one PCI device, the second host if the instance requires multiple PCI devices and the third host if the instance does not request a PCI device. Configure this option to prevent non-PCI instances from occupying resources on hosts with PCI devices. Default: 1.0 filter_scheduler/host_subset_size Integer Use this parameter to specify the size of the subset of filtered hosts from which to select the host. You must set this option to at least 1. A value of 1 selects the first host returned by the weighing functions. The scheduler ignores any value less than 1 and uses 1 instead. Set to a value greater than 1 to prevent multiple scheduler processes handling similar requests selecting the same host, creating a potential race condition. By selecting a host randomly from the N hosts that best fit the request, the chance of a conflict is reduced. However, the higher you set this value, the less optimal the chosen host may be for a given request. Default: 1 filter_scheduler/soft_affinity_weight_multiplier Positive floating point Use this parameter to specify the multiplier to use to weigh hosts for group soft-affinity. Note You need to specify the microversion when creating a group with this policy: Default: 1.0 filter_scheduler/soft_anti_affinity_weight_multiplier Positive floating point Use this parameter to specify the multiplier to use to weigh hosts for group soft-anti-affinity. Note You need to specify the microversion when creating a group with this policy: Default: 1.0 filter_scheduler/hypervisor_version_weight_multiplier Floating point Use this parameter to specify the multiplier to use to weigh hosts based on the hypervisor version reported by the host's virt driver. Set to a negative integer or float value to prefer Compute hosts with older hypervisors. Set to 0 to disable weighing Compute hosts by the hypervisor version. Default: 1.0 - The scheduler prefers Compute hosts with newer hypervisors. metrics/weight_multiplier Floating point Use this parameter to specify the multiplier to use for weighting metrics. 
By default, weight_multiplier=1.0 , which spreads instances across possible hosts. Set to a number greater than 1.0 to increase the effect of the metric on the overall weight. Set to a number between 0.0 and 1.0 to reduce the effect of the metric on the overall weight. Set to 0.0 to ignore the metric value and return the value of the weight_of_unavailable option. Set to a negative number to prioritize the host with lower metrics, and stack instances in hosts. Default: 1.0 metrics/weight_setting Comma-separated list of metric=ratio pairs Use this parameter to specify the metrics to use for weighting, and the ratio to use to calculate the weight of each metric. Valid metric names: cpu.frequency - CPU frequency cpu.user.time - CPU user mode time cpu.kernel.time - CPU kernel time cpu.idle.time - CPU idle time cpu.iowait.time - CPU I/O wait time cpu.user.percent - CPU user mode percentage cpu.kernel.percent - CPU kernel percentage cpu.idle.percent - CPU idle percentage cpu.iowait.percent - CPU I/O wait percentage cpu.percent - Generic CPU use Example: weight_setting=cpu.user.time=1.0 metrics/required Boolean Use this parameter to specify how to handle configured metrics/weight_setting metrics that are unavailable: True - Metrics are required. If the metric is unavailable, an exception is raised. To avoid the exception, use the MetricsFilter filter in NovaSchedulerEnabledFilters . False - The unavailable metric is treated as a negative factor in the weighing process. Set the returned value by using the weight_of_unavailable configuration option. metrics/weight_of_unavailable Floating point Use this parameter to specify the weight to use if any metrics/weight_setting metric is unavailable, and metrics/required=False . Default: -10000.0 7.5. Declaring custom traits and resource classes As an administrator, you can declare which custom physical features and consumable resources are available on data plane nodes by defining a custom inventory of resources in a YAML file, provider.yaml . You can declare the availability of physical host features by defining custom traits, such as CUSTOM_DIESEL_BACKUP_POWER , CUSTOM_FIPS_COMPLIANT , and CUSTOM_HPC_OPTIMIZED . You can also declare the availability of consumable resources by defining resource classes, such as CUSTOM_DISK_IOPS , and CUSTOM_POWER_WATTS . Note You can use flavor metadata to request custom resources and custom traits. For more information, see Instance bare-metal resource class and Instance resource traits . Prerequisites You installed the oc and podman command line tools on your workstation. You are logged on to a workstation that has access to the RHOSO control plane as a user with cluster-admin privileges. Procedure Create a file in /home/stack/templates/ called provider.yaml . To configure the resource provider, add the following configuration to your provider.yaml file: Replace <node_uuid> with the UUID for the node, for example, '5213b75d-9260-42a6-b236-f39b0fd10561' . Alternatively, you can use the name property to identify the resource provider: name: 'EXAMPLE_RESOURCE_PROVIDER' . To configure the available custom resource classes for the resource provider, add the following configuration to your provider.yaml file: Replace CUSTOM_EXAMPLE_RESOURCE_CLASS with the name of the resource class. Custom resource classes must begin with the prefix CUSTOM_ and contain only the letters A through Z, the numbers 0 through 9 and the underscore "_" character. 
Replace <total_available> with the number of available CUSTOM_EXAMPLE_RESOURCE_CLASS for this resource provider. Replace <reserved> with the number of CUSTOM_EXAMPLE_RESOURCE_CLASS units that are reserved for the host and therefore not available to instances. Replace <min_unit> with the minimum units of resources a single instance can consume. Replace <max_unit> with the maximum units of resources a single instance can consume. Replace <step_size> with the increments in which units of CUSTOM_EXAMPLE_RESOURCE_CLASS are consumed. Replace <allocation_ratio> with the value to set the allocation ratio. If allocation_ratio is set to 1.0, then no overallocation is allowed. But if allocation_ratio is greater than 1.0, then the total available resource is more than the physically existing one. To configure the available traits for the resource provider, add the following configuration to your provider.yaml file: Replace CUSTOM_EXAMPLE_TRAIT with the name of the trait. Custom traits must begin with the prefix CUSTOM_ and contain only the letters A through Z, the numbers 0 through 9 and the underscore "_" character. Example provider.yaml file The following example declares one custom resource class and one custom trait for a resource provider. 1 This hypervisor has 22 units of last level cache (LLC). 2 Two of the units of LLC are reserved for the host. 3 4 The min_unit and max_unit values define how many units of resources a single VM can consume. 5 The step size defines the increments of consumption. 6 The allocation ratio configures the overallocation of resources. Save and close the provider.yaml file. Create a ConfigMap CR that configures the Compute nodes to use the provider.yaml file for the declaration of the custom traits and resources, and save it to a file named compute-provider.yaml on your workstation: For more information about creating ConfigMap objects, see Creating and using config maps. Create the ConfigMap object: Create a new custom service, compute-provider , that includes the compute-provider ConfigMap object, and save it to a file named compute-provider-service.yaml on your workstation: Create the compute-provider service: Create a new OpenStackDataPlaneNodeSet CR that defines the nodes that you want to use the provider.yaml file for the declaration of the custom traits and resources, and save it to a file named compute-provider.yaml on your workstation: For information about how to create an OpenStackDataPlaneNodeSet CR, see Creating a set of data plane nodes. Modify your compute-provider OpenStackDataPlaneNodeSet CR to use your compute-provider-service service instead of the default Compute service: Save the compute-provider.yaml OpenStackDataPlaneNodeSet CR definition file. Create the data plane resources: Verify the data plane resources have been created: Verify the services were created: Create a new OpenStackDataPlaneDeployment CR to configure the services on the data plane nodes and deploy the nodes, and save it to a file named compute-provider_deploy.yaml on your workstation: For information about how to create an OpenStackDataPlaneDeployment CR, see Deploying the data plane. Specify nodeSets to include all the OpenStackDataPlaneNodeSet CRs that you want to deploy: Replace <nodeSet_name> with the names of the OpenStackDataPlaneNodeSet CRs that you want to include in your data plane deployment. Save the compute-provider_deploy.yaml deployment file.
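For reference, a minimal sketch of what the assembled compute-provider_deploy.yaml might contain, assuming the deployment includes the default openstack-edpm node set alongside the compute-provider node set created in this procedure:
apiVersion: dataplane.openstack.org/v1beta1
kind: OpenStackDataPlaneDeployment
metadata:
  name: compute-provider
spec:
  nodeSets:
    - openstack-edpm
    - compute-provider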
Deploy the data plane: Verify that the data plane is deployed: Ensure that the deployed Compute nodes are visible on the control plane: Access the remote shell for openstackclient and verify that the deployed Compute nodes are visible on the control plane: 7.6. Creating and managing host aggregates As a cloud administrator, you can partition a Compute deployment into logical groups for performance or administrative purposes. Red Hat OpenStack Services on OpenShift (RHOSO) provides the following mechanisms for partitioning logical groups: Host aggregate A host aggregate is a grouping of Compute nodes into a logical unit based on attributes such as the hardware or performance characteristics. You can assign a Compute node to one or more host aggregates. You can map flavors and images to host aggregates by setting metadata on the host aggregate, and then matching flavor extra specs or image metadata properties to the host aggregate metadata. The Compute scheduler can use this metadata to schedule instances when the required filters are enabled. Metadata that you specify in a host aggregate limits the use of that host to any instance that has the same metadata specified in its flavor or image. You can configure weight multipliers for each host aggregate by setting the xxx_weight_multiplier configuration option in the host aggregate metadata. You can use host aggregates to handle load balancing, enforce physical isolation or redundancy, group servers with common attributes, or separate classes of hardware. When you create a host aggregate, you can specify a zone name. This name is presented to cloud users as an availability zone that they can select. Availability zones An availability zone is the cloud user view of a host aggregate. A cloud user cannot view the Compute nodes in the availability zone, or view the metadata of the availability zone. The cloud user can only see the name of the availability zone. You can assign each Compute node to only one availability zone. You can configure a default availability zone where instances will be scheduled when the cloud user does not specify a zone. You can direct cloud users to use availability zones that have specific capabilities. 7.6.1. Enabling scheduling on host aggregates To schedule instances on host aggregates that have specific attributes, update the configuration of the Compute scheduler to enable filtering based on the host aggregate metadata. Prerequisites You installed oc and podman command line tools on your workstation. You are logged on to a workstation that has access to the RHOSO control plane as a user with cluster-admin privileges. Procedure Access the remote shell for the OpenStackClient pod from your workstation: Change to the cloud-admin home directory: Open your OpenStackControlPlane custom resource (CR) file, openstack_control_plane.yaml , on your workstation. Add the following values to the enabled_filters parameter, if they are not already present: AggregateInstanceExtraSpecsFilter : Add this value to filter Compute nodes by host aggregate metadata that match flavor extra specs. Note For this filter to perform as expected, you must scope the flavor extra specs by prefixing the extra_specs key with the aggregate_instance_extra_specs: namespace. AggregateImagePropertiesIsolation : Add this value to filter Compute nodes by host aggregate metadata that match image metadata properties. 
Note To filter host aggregate metadata by using image metadata properties, the host aggregate metadata key must match a valid image metadata property. For information about valid image metadata properties, see Image configuration parameters . AvailabilityZoneFilter : Add this value to filter by availability zone when launching an instance. Note Instead of using the AvailabilityZoneFilter Compute scheduler service filter, you can use the Placement service to process availability zone requests. Update the control plane: Exit the openstackclient pod: 7.6.2. Creating a host aggregate As a cloud administrator, you can create as many host aggregates as you require. Prerequisites You have the oc and podman command line tools installed on your workstation. You are logged on to a workstation that has access to the RHOSO control plane as a user with cluster-admin privileges. Procedure Access the remote shell for the OpenStackClient pod from your workstation: Change to the cloud-admin home directory: To create a host aggregate, enter the following command: Replace <aggregate_name> with the name you want to assign to the host aggregate. Add metadata to the host aggregate: Replace <key=value> with the metadata key-value pair. If you are using the AggregateInstanceExtraSpecsFilter filter, the key can be any arbitrary string, for example, ssd=true . If you are using the AggregateImagePropertiesIsolation filter, the key must match a valid image metadata property. For more information about valid image metadata properties, see Image configuration parameters . Replace <aggregate_name> with the name of the host aggregate. Add the Compute nodes to the host aggregate: Replace <aggregate_name> with the name of the host aggregate to add the Compute node to. Replace <host_name> with the name of the Compute node to add to the host aggregate. Create a flavor or image for the host aggregate: Create a flavor: Create an image: Set one or more key-value pairs on the flavor or image that match the key-value pairs on the host aggregate. To set the key-value pairs on a flavor, use the scope aggregate_instance_extra_specs : To set the key-value pairs on an image, use valid image metadata properties as the key: Exit the openstackclient pod: 7.6.3. Creating an availability zone As a cloud administrator, you can create an availability zone that cloud users can select when they create an instance. Prerequisites You installed the oc and podman command line tools on your workstation. You are logged on to a workstation that has access to the RHOSO control plane as a user with cluster-admin privileges. Procedure Access the remote shell for the OpenStackClient pod from your workstation: Change to the cloud-admin home directory: To create an availability zone, you can create a new availability zone host aggregate, or make an existing host aggregate an availability zone: To create a new availability zone host aggregate, enter the following command: Replace <availability_zone> with the name you want to assign to the availability zone. Replace <aggregate_name> with the name you want to assign to the host aggregate. To make an existing host aggregate an availability zone, enter the following command: Replace <availability_zone> with the name you want to assign to the availability zone. Replace <aggregate_name> with the name of the host aggregate. Optional: Add metadata to the availability zone: Replace <key=value> with your metadata key-value pair. You can add as many key-value properties as required. 
Replace <aggregate_name> with the name of the availability zone host aggregate. Add Compute nodes to the availability zone host aggregate: Replace <aggregate_name> with the name of the availability zone host aggregate to add the Compute node to. Replace <host_name> with the name of the Compute node to add to the availability zone. Exit the openstackclient pod: 7.6.4. Deleting a host aggregate To delete a host aggregate, you first remove all the Compute nodes from the host aggregate. Prerequisites You installed oc and podman command line tools on your workstation. You are logged on to a workstation that has access to the RHOSO control plane as a user with cluster-admin privileges. Procedure Access the remote shell for the OpenStackClient pod from your workstation: Change to the cloud-admin home directory: To view a list of all the Compute nodes assigned to the host aggregate, enter the following command: To remove all assigned Compute nodes from the host aggregate, enter the following command for each Compute node: Replace <aggregate_name> with the name of the host aggregate to remove the Compute node from. Replace <host_name> with the name of the Compute node to remove from the host aggregate. After you remove all the Compute nodes from the host aggregate, enter the following command to delete the host aggregate: Exit the openstackclient pod: 7.6.5. Creating a project-isolated host aggregate You can create a host aggregate that is available only to specific projects. Only the projects that you assign to the host aggregate can launch instances on the host aggregate. Note Project isolation uses the Placement service to filter host aggregates for each project. This process supersedes the functionality of the AggregateMultiTenancyIsolation filter. You therefore do not need to use the AggregateMultiTenancyIsolation filter. Prerequisites You installed the oc and podman command line tools on your workstation. You are logged on to a workstation that has access to the RHOSO control plane as a user with cluster-admin privileges. Procedure Access the remote shell for the OpenStackClient pod from your workstation: Change to the cloud-admin home directory: On your workstation, open your OpenStackControlPlane custom resource (CR) file, openstack_control_plane.yaml . To schedule project instances on the project-isolated host aggregate, set the value of the query_placement_for_image_type_support parameter to True : Optional: To ensure that only the projects that you assign to a host aggregate can create instances on your cloud, set the value of the placement_aggregate_required_for_tenants parameter to True . Note The parameter placement_aggregate_required_for_tenants is set to False by default. When this parameter is False , projects that are not assigned to a host aggregate can create instances on any host aggregate. Save the updates to your Compute environment file. Update the control plane: Create the host aggregate. Retrieve the list of project IDs: Use the filter_tenant_id<suffix> metadata key to assign projects to the host aggregate: Replace <ID0> , <ID1> , and all IDs up to <IDn> with unique values for each project filter that you want to create. Replace <project_id0> , <project_id1> , and all project IDs up to <project_idn> with the ID of each project that you want to assign to the host aggregate. Replace <aggregate_name> with the name of the project-isolated host aggregate. 
For example, use the following syntax to assign projects 78f1 , 9d3t , and aa29 to the host aggregate project-isolated-aggregate : Tip You can create a host aggregate that is available only to a single specific project by omitting the suffix from the filter_tenant_id metadata key: Exit the openstackclient pod: Additional resources For more information on creating a host aggregate, see Creating and managing host aggregates .
[ "apiVersion: core.openstack.org/v1beta1 kind: OpenStackControlPlane spec: extraMounts: nova: template: schedulerServiceTemplate: customServiceConfig: | [scheduler] query_placement_for_image_type_support = true", "oc apply -f openstack_control_plane.yaml -n openstack", "oc get openstackcontrolplane -n openstack", "oc rsh -n openstack openstackclient", "cd /home/cloud-admin", "openstack image create ... trait-image", "openstack --os-placement-api-version 1.6 trait list", "openstack --os-placement-api-version 1.6 trait create CUSTOM_TRAIT_NAME", "existing_traits=USD(openstack --os-placement-api-version 1.6 resource provider trait list -f value <host_uuid> | sed 's/^/--trait /')", "echo USDexisting_traits", "openstack --os-placement-api-version 1.6 resource provider trait set USDexisting_traits --trait <TRAIT_NAME> <host_uuid>", "openstack image set --property trait:HW_CPU_X86_AVX512BW=required trait-image", "openstack image set --property trait:COMPUTE_VOLUME_MULTI_ATTACH=forbidden trait-image", "exit", "oc rsh -n openstack openstackclient", "cd /home/cloud-admin", "openstack flavor create --vcpus 1 --ram 512 --disk 2 trait-flavor", "openstack --os-placement-api-version 1.6 trait list", "openstack --os-placement-api-version 1.6 trait create CUSTOM_TRAIT_NAME", "existing_traits=USD(openstack --os-placement-api-version 1.6 resource provider trait list -f value <host_uuid> | sed 's/^/--trait /')", "echo USDexisting_traits", "openstack --os-placement-api-version 1.6 resource provider trait set USDexisting_traits --trait <TRAIT_NAME> <host_uuid>", "openstack flavor set --property trait:HW_CPU_X86_AVX512BW=required trait-flavor", "openstack flavor set --property trait:COMPUTE_VOLUME_MULTI_ATTACH=forbidden trait-flavor", "exit", "oc rsh -n openstack openstackclient", "cd /home/cloud-admin", "openstack overcloud deploy --templates -e [your environment files] -e /home/stack/templates/<compute_environment_file>.yaml", "openstack --os-placement-api-version 1.6 trait list", "openstack --os-placement-api-version 1.6 trait create CUSTOM_TRAIT_NAME", "existing_traits=USD(openstack --os-placement-api-version 1.6 resource provider trait list -f value <host_uuid> | sed 's/^/--trait /')", "echo USDexisting_traits", "openstack --os-placement-api-version 1.6 resource provider trait set USDexisting_traits --trait <TRAIT_NAME> <host_uuid>", "openstack --os-compute-api-version 2.53 aggregate set --property trait:<TRAIT_NAME>=required <aggregate_name>", "openstack --os-compute-api=2.86 flavor set --property trait:<TRAIT_NAME>=required <flavor> openstack image set --property trait:<TRAIT_NAME>=required <image>", "exit", "spec: nova: template: schedulerServiceTemplate: customServiceConfig: | [filter_scheduler] enabled_filters = AggregateInstanceExtraSpecsFilter, ComputeFilter, ComputeCapabilitiesFilter, ImagePropertiesFilter", "spec: nova: template: schedulerServiceTemplate: customServiceConfig: | [filter_scheduler] weight_classes = nova.scheduler.weights.all_weighers", "spec: nova: template: schedulerServiceTemplate: customServiceConfig: | [filter_scheduler] weight_classes = nova.scheduler.weights.all_weighers [filter_scheduler] ram_weight_multiplier = 2.0", "oc apply -f openstack_control_plane.yaml -n openstack", "oc get openstackcontrolplane -n openstack", "oc get pods -n openstack", "openstack server create --image cedef40a-ed67-4d10-800e-17455edce175 --flavor 1 --hint different_host=a0cf03a5-d921-4877-bb5c-86d26cf818e1 --hint different_host=8c19174f-4220-44f0-824a-cd1eeef10287 server-1", "parameter_defaults: 
ComputeExtraConfig: nova::config::nova_config: filter_scheduler/isolated_hosts: value: server1, server2 filter_scheduler/isolated_images: value: 342b492c-128f-4a42-8d3a-c5088cf27d13, ebd267a6-ca86-4d6c-9a0e-bd132d6b7d09", "parameter_defaults: ComputeExtraConfig: nova::config::nova_config: DEFAULT/compute_monitors: value: 'cpu.virt_driver'", "openstack server create --image cedef40a-ed67-4d10-800e-17455edce175 --flavor 1 --hint same_host=a0cf03a5-d921-4877-bb5c-86d26cf818e1 --hint same_host=8c19174f-4220-44f0-824a-cd1eeef10287 server-1", "openstack server group create --policy affinity <group_name>", "openstack server create --image <image> --flavor <flavor> --hint group=<group_uuid> <instance_name>", "openstack server group create --policy anti-affinity <group_name>", "openstack server create --image <image> --flavor <flavor> --hint group=<group_uuid> <instance_name>", "openstack server create --image <image> --flavor <flavor> --hint build_near_host_ip=<ip_address> --hint cidr=<subnet_mask> <instance_name>", "(node_resource_availability - minval) / (maxval - minval)", "(w1_multiplier * norm(w1)) + (w2_multiplier * norm(w2)) +", "openstack --os-compute-api-version 2.15 server group create --policy soft-affinity <group_name>", "openstack --os-compute-api-version 2.15 server group create --policy soft-affinity <group_name>", "meta: schema_version: '1.0' providers: - identification: uuid: <node_uuid>", "meta: schema_version: '1.0' providers: - identification: uuid: <node_uuid> inventories: additional: - CUSTOM_EXAMPLE_RESOURCE_CLASS: total: <total_available> reserved: <reserved> min_unit: <min_unit> max_unit: <max_unit> step_size: <step_size> allocation_ratio: <allocation_ratio>", "meta: schema_version: '1.0' providers: - identification: uuid: <node_uuid> inventories: additional: traits: additional: - 'CUSTOM_EXAMPLE_TRAIT'", "meta: schema_version: 1.0 providers: - identification: uuid: USDCOMPUTE_NODE inventories: additional: CUSTOM_LLC: # Describing LLC on this compute node # max_unit indicates maximum size of single LLC # total indicates sum of sizes of all LLC total: 22 1 reserved: 2 2 min_unit: 1 3 max_unit: 11 4 step_size: 1 5 allocation_ratio: 1.0 6 traits: additional: # Describing that this compute node enables support for # P-state control - CUSTOM_P_STATE_ENABLED", "apiVersion: v1 kind: ConfigMap metadata: name: compute-provider namespace: openstack data: provider.yaml: |", "oc create -f compute-provider.yaml", "apiVersion: dataplane.openstack.org/v1beta1 kind: OpenStackDataPlaneService name: compute-provider namespace: openstack spec: label: dataplane-deployment-compute playbook: osp.edpm.nova secrets: [] dataSources: - secretRef: name: nova-cell1-compute-config - secretRef: name: nova-migration-ssh-key - configMapRef: name: compute-provider - configMapRef: name: nova-extra-config optional: true", "oc apply -f compute-provider-service.yaml", "apiVersion: dataplane.openstack.org/v1beta1 kind: OpenStackDataPlaneNodeSet metadata: name: compute-provider", "apiVersion: dataplane.openstack.org/v1beta1 kind: OpenStackDataPlaneNodeSet metadata: name: compute-provider spec: services: - download-cache - configure-network - validate-network - install-os - configure-os - run-os - ovn - libvirt - compute-provider-service #replaced the nova service - telemetry", "oc create -f compute-provider.yaml", "oc get openstackdataplanenodeset NAME STATUS MESSAGE compute-provider False Deployment not started", "oc get openstackdataplaneservice NAME AGE download-cache 6d7h configure-network 6d7h configure-os 
6d6h install-os 6d6h run-os 6d6h validate-network 6d6h ovn 6d6h libvirt 6d6h compute-provider 6d6h telemetry 6d6h", "apiVersion: dataplane.openstack.org/v1beta1 kind: OpenStackDataPlaneDeployment metadata: name: compute-provider", "apiVersion: dataplane.openstack.org/v1beta1 kind: OpenStackDataPlaneDeployment metadata: name: compute-provider spec: nodeSets: - openstack-edpm - compute-provider - - <nodeSet_name>", "oc create -f compute-provider_deploy.yaml", "oc get openstackdataplanedeployment NAME STATUS MESSAGE compute-provider True Deployment Completed oc get openstackdataplanenodeset NAME STATUS MESSAGE openstack-edpm True Deployed compute-provider True Deployed", "oc rsh nova-cell0-conductor-0 nova-manage cell_v2 discover_hosts --verbose", "oc rsh -n openstack openstackclient openstack hypervisor list", "oc rsh -n openstack openstackclient", "cd /home/cloud-admin", "oc apply -f openstack_control_plane.yaml -n openstack", "exit", "oc rsh -n openstack openstackclient", "cd /home/cloud-admin", "openstack aggregate create <aggregate_name>", "openstack aggregate set --property <key=value> --property <key=value> <aggregate_name>", "openstack aggregate add host <aggregate_name> <host_name>", "openstack flavor create --ram <size_mb> --disk <size_gb> --vcpus <no_reserved_vcpus> host-agg-flavor", "openstack image create host-agg-image", "openstack flavor set --property aggregate_instance_extra_specs:ssd=true host-agg-flavor", "openstack image set --property os_type=linux host-agg-image", "exit", "oc rsh -n openstack openstackclient", "cd /home/cloud-admin", "openstack aggregate create --zone <availability_zone> <aggregate_name>", "openstack aggregate set --zone <availability_zone> <aggregate_name>", "openstack aggregate set --property <key=value> <aggregate_name>", "openstack aggregate add host <aggregate_name> <host_name>", "exit", "oc rsh -n openstack openstackclient", "cd /home/cloud-admin", "openstack aggregate show <aggregate_name>", "openstack aggregate remove host <aggregate_name> <host_name>", "openstack aggregate delete <aggregate_name>", "exit", "oc rsh -n openstack openstackclient", "cd /home/cloud-admin", "[scheduler] query_placement_for_image_type_support = True", "oc apply -f openstack_control_plane.yaml -n openstack", "openstack project list", "openstack aggregate set --property filter_tenant_id<ID0>=<project_id0> --property filter_tenant_id<ID1>=<project_id1> --property filter_tenant_id<IDn>=<project_idn> <aggregate_name>", "openstack aggregate set --property filter_tenant_id0=78f1 --property filter_tenant_id1=9d3t --property filter_tenant_id2=aa29 project-isolated-aggregate", "openstack aggregate set --property filter_tenant_id=78f1 single-project-isolated-aggregate", "exit" ]
https://docs.redhat.com/en/documentation/red_hat_openstack_services_on_openshift/18.0/html/configuring_the_compute_service_for_instance_creation/assembly_configuring-instance-scheduling-and-placement_memory
Chapter 2. Service Mesh 1.x
Chapter 2. Service Mesh 1.x 2.1. Service Mesh Release Notes Warning You are viewing documentation for a Red Hat OpenShift Service Mesh release that is no longer supported. Service Mesh version 1.0 and 1.1 control planes are no longer supported. For information about upgrading your service mesh control plane, see Upgrading Service Mesh . For information about the support status of a particular Red Hat OpenShift Service Mesh release, see the Product lifecycle page . 2.1.1. Making open source more inclusive Red Hat is committed to replacing problematic language in our code, documentation, and web properties. We are beginning with these four terms: master, slave, blacklist, and whitelist. Because of the enormity of this endeavor, these changes will be implemented gradually over several upcoming releases. For more details, see our CTO Chris Wright's message . 2.1.2. Introduction to Red Hat OpenShift Service Mesh Red Hat OpenShift Service Mesh addresses a variety of problems in a microservice architecture by creating a centralized point of control in an application. It adds a transparent layer on existing distributed applications without requiring any changes to the application code. Microservice architectures split the work of enterprise applications into modular services, which can make scaling and maintenance easier. However, as an enterprise application built on a microservice architecture grows in size and complexity, it becomes difficult to understand and manage. Service Mesh can address those architecture problems by capturing or intercepting traffic between services and can modify, redirect, or create new requests to other services. Service Mesh, which is based on the open source Istio project , provides an easy way to create a network of deployed services that provides discovery, load balancing, service-to-service authentication, failure recovery, metrics, and monitoring. A service mesh also provides more complex operational functionality, including A/B testing, canary releases, access control, and end-to-end authentication. 2.1.3. Getting support If you experience difficulty with a procedure described in this documentation, or with OpenShift Container Platform in general, visit the Red Hat Customer Portal . From the Customer Portal, you can: Search or browse through the Red Hat Knowledgebase of articles and solutions relating to Red Hat products. Submit a support case to Red Hat Support. Access other product documentation. To identify issues with your cluster, you can use Insights in OpenShift Cluster Manager . Insights provides details about issues and, if available, information on how to solve a problem. If you have a suggestion for improving this documentation or have found an error, submit a Jira issue for the most relevant documentation component. Please provide specific details, such as the section name and OpenShift Container Platform version. When opening a support case, it is helpful to provide debugging information about your cluster to Red Hat Support. The must-gather tool enables you to collect diagnostic information about your OpenShift Container Platform cluster, including virtual machines and other data related to Red Hat OpenShift Service Mesh. For prompt support, supply diagnostic information for both OpenShift Container Platform and Red Hat OpenShift Service Mesh. 2.1.3.1. 
About the must-gather tool The oc adm must-gather CLI command collects the information from your cluster that is most likely needed for debugging issues, including: Resource definitions Service logs By default, the oc adm must-gather command uses the default plug-in image and writes into ./must-gather.local . Alternatively, you can collect specific information by running the command with the appropriate arguments as described in the following sections: To collect data related to one or more specific features, use the --image argument with an image, as listed in a following section. For example: USD oc adm must-gather --image=registry.redhat.io/container-native-virtualization/cnv-must-gather-rhel8:v4.9.0 To collect the audit logs, use the -- /usr/bin/gather_audit_logs argument, as described in a following section. For example: USD oc adm must-gather -- /usr/bin/gather_audit_logs Note Audit logs are not collected as part of the default set of information to reduce the size of the files. When you run oc adm must-gather , a new pod with a random name is created in a new project on the cluster. The data is collected on that pod and saved in a new directory that starts with must-gather.local . This directory is created in the current working directory. For example: NAMESPACE NAME READY STATUS RESTARTS AGE ... openshift-must-gather-5drcj must-gather-bklx4 2/2 Running 0 72s openshift-must-gather-5drcj must-gather-s8sdh 2/2 Running 0 72s ... 2.1.3.2. Prerequisites Access to the cluster as a user with the cluster-admin role. The OpenShift Container Platform CLI ( oc ) installed. 2.1.3.3. About collecting service mesh data You can use the oc adm must-gather CLI command to collect information about your cluster, including features and objects associated with Red Hat OpenShift Service Mesh. Prerequisites Access to the cluster as a user with the cluster-admin role. The OpenShift Container Platform CLI ( oc ) installed. Procedure To collect Red Hat OpenShift Service Mesh data with must-gather , you must specify the Red Hat OpenShift Service Mesh image. USD oc adm must-gather --image=registry.redhat.io/openshift-service-mesh/istio-must-gather-rhel8 To collect Red Hat OpenShift Service Mesh data for a specific Service Mesh control plane namespace with must-gather , you must specify the Red Hat OpenShift Service Mesh image and namespace. In this example, replace <namespace> with your Service Mesh control plane namespace, such as istio-system . USD oc adm must-gather --image=registry.redhat.io/openshift-service-mesh/istio-must-gather-rhel8 gather <namespace> 2.1.4. Red Hat OpenShift Service Mesh supported configurations The following are the only supported configurations for the Red Hat OpenShift Service Mesh: OpenShift Container Platform version 4.6 or later. Note OpenShift Online and Red Hat OpenShift Dedicated are not supported for Red Hat OpenShift Service Mesh. The deployment must be contained within a single OpenShift Container Platform cluster that is not federated. This release of Red Hat OpenShift Service Mesh is only available on OpenShift Container Platform x86_64. This release only supports configurations where all Service Mesh components are contained in the OpenShift Container Platform cluster in which it operates. It does not support management of microservices that reside outside of the cluster, or in a multi-cluster scenario. This release only supports configurations that do not integrate external services such as virtual machines.
For additional information about Red Hat OpenShift Service Mesh lifecycle and supported configurations, refer to the Support Policy . 2.1.4.1. Supported configurations for Kiali on Red Hat OpenShift Service Mesh The Kiali observability console is only supported on the two most recent releases of the Chrome, Edge, Firefox, or Safari browsers. 2.1.4.2. Supported Mixer adapters This release only supports the following Mixer adapter: 3scale Istio Adapter 2.1.5. New Features Red Hat OpenShift Service Mesh provides a number of key capabilities uniformly across a network of services: Traffic Management - Control the flow of traffic and API calls between services, make calls more reliable, and make the network more robust in the face of adverse conditions. Service Identity and Security - Provide services in the mesh with a verifiable identity and provide the ability to protect service traffic as it flows over networks of varying degrees of trustworthiness. Policy Enforcement - Apply organizational policy to the interaction between services, ensure access policies are enforced and resources are fairly distributed among consumers. Policy changes are made by configuring the mesh, not by changing application code. Telemetry - Gain understanding of the dependencies between services and the nature and flow of traffic between them, providing the ability to quickly identify issues. 2.1.5.1. New features Red Hat OpenShift Service Mesh 1.1.18.2 This release of Red Hat OpenShift Service Mesh addresses Common Vulnerabilities and Exposures (CVEs). 2.1.5.1.1. Component versions included in Red Hat OpenShift Service Mesh version 1.1.18.2 Component Version Istio 1.4.10 Jaeger 1.30.2 Kiali 1.12.21.1 3scale Istio Adapter 1.0.0 2.1.5.2. New features Red Hat OpenShift Service Mesh 1.1.18.1 This release of Red Hat OpenShift Service Mesh addresses Common Vulnerabilities and Exposures (CVEs). 2.1.5.2.1. Component versions included in Red Hat OpenShift Service Mesh version 1.1.18.1 Component Version Istio 1.4.10 Jaeger 1.30.2 Kiali 1.12.20.1 3scale Istio Adapter 1.0.0 2.1.5.3. New features Red Hat OpenShift Service Mesh 1.1.18 This release of Red Hat OpenShift Service Mesh addresses Common Vulnerabilities and Exposures (CVEs). 2.1.5.3.1. Component versions included in Red Hat OpenShift Service Mesh version 1.1.18 Component Version Istio 1.4.10 Jaeger 1.24.0 Kiali 1.12.18 3scale Istio Adapter 1.0.0 2.1.5.4. New features Red Hat OpenShift Service Mesh 1.1.17.1 This release of Red Hat OpenShift Service Mesh addresses Common Vulnerabilities and Exposures (CVEs). 2.1.5.4.1. Change in how Red Hat OpenShift Service Mesh handles URI fragments Red Hat OpenShift Service Mesh contains a remotely exploitable vulnerability, CVE-2021-39156 , where an HTTP request with a fragment (a section in the end of a URI that begins with a # character) in the URI path could bypass the Istio URI path-based authorization policies. For instance, an Istio authorization policy denies requests sent to the URI path /user/profile . In the vulnerable versions, a request with URI path /user/profile#section1 bypasses the deny policy and routes to the backend (with the normalized URI path /user/profile%23section1 ), possibly leading to a security incident. You are impacted by this vulnerability if you use authorization policies with DENY actions and operation.paths , or ALLOW actions and operation.notPaths . With the mitigation, the fragment part of the request's URI is removed before the authorization and routing. 
This prevents a request with a fragment in its URI from bypassing authorization policies which are based on the URI without the fragment part. 2.1.5.4.2. Required update for authorization policies Istio generates hostnames for both the hostname itself and all matching ports. For instance, a virtual service or Gateway for a host of "httpbin.foo" generates a config matching "httpbin.foo and httpbin.foo:*". However, exact match authorization policies only match the exact string given for the hosts or notHosts fields. Your cluster is impacted if you have AuthorizationPolicy resources using exact string comparison for the rule to determine hosts or notHosts . You must update your authorization policy rules to use prefix match instead of exact match. For example, replacing hosts: ["httpbin.com"] with hosts: ["httpbin.com:*"] in the first AuthorizationPolicy example. First example AuthorizationPolicy using prefix match apiVersion: security.istio.io/v1beta1 kind: AuthorizationPolicy metadata: name: httpbin namespace: foo spec: action: DENY rules: - from: - source: namespaces: ["dev"] to: - operation: hosts: ["httpbin.com","httpbin.com:*"] Second example AuthorizationPolicy using prefix match apiVersion: security.istio.io/v1beta1 kind: AuthorizationPolicy metadata: name: httpbin namespace: default spec: action: DENY rules: - to: - operation: hosts: ["httpbin.example.com:*"] 2.1.5.5. New features Red Hat OpenShift Service Mesh 1.1.17 This release of Red Hat OpenShift Service Mesh addresses Common Vulnerabilities and Exposures (CVEs) and bug fixes. 2.1.5.6. New features Red Hat OpenShift Service Mesh 1.1.16 This release of Red Hat OpenShift Service Mesh addresses Common Vulnerabilities and Exposures (CVEs) and bug fixes. 2.1.5.7. New features Red Hat OpenShift Service Mesh 1.1.15 This release of Red Hat OpenShift Service Mesh addresses Common Vulnerabilities and Exposures (CVEs) and bug fixes. 2.1.5.8. New features Red Hat OpenShift Service Mesh 1.1.14 This release of Red Hat OpenShift Service Mesh addresses Common Vulnerabilities and Exposures (CVEs) and bug fixes. Important There are manual steps that must be completed to address CVE-2021-29492 and CVE-2021-31920. 2.1.5.8.1. Manual updates required by CVE-2021-29492 and CVE-2021-31920 Istio contains a remotely exploitable vulnerability where an HTTP request path with multiple slashes or escaped slash characters ( %2F or %5C ) could potentially bypass an Istio authorization policy when path-based authorization rules are used. For example, assume an Istio cluster administrator defines an authorization DENY policy to reject the request at path /admin . A request sent to the URL path //admin will NOT be rejected by the authorization policy. According to RFC 3986 , the path //admin with multiple slashes should technically be treated as a different path from the /admin . However, some backend services choose to normalize the URL paths by merging multiple slashes into a single slash. This can result in a bypass of the authorization policy ( //admin does not match /admin ), and a user can access the resource at path /admin in the backend; this would represent a security incident. Your cluster is impacted by this vulnerability if you have authorization policies using ALLOW action + notPaths field or DENY action + paths field patterns. These patterns are vulnerable to unexpected policy bypasses. Your cluster is NOT impacted by this vulnerability if: You don't have authorization policies. Your authorization policies don't define paths or notPaths fields.
Your authorization policies use ALLOW action + paths field or DENY action + notPaths field patterns. These patterns could only cause unexpected rejection instead of policy bypasses. The upgrade is optional for these cases. Note The Red Hat OpenShift Service Mesh configuration location for path normalization is different from the Istio configuration. 2.1.5.8.2. Updating the path normalization configuration Istio authorization policies can be based on the URL paths in the HTTP request. Path normalization , also known as URI normalization, modifies and standardizes the incoming requests' paths so that the normalized paths can be processed in a standard way. Syntactically different paths may be equivalent after path normalization. Istio supports the following normalization schemes on the request paths before evaluating against the authorization policies and routing the requests: Table 2.1. Normalization schemes Option Description Example Notes NONE No normalization is done. Anything received by Envoy will be forwarded exactly as-is to any backend service. ../%2Fa../b is evaluated by the authorization policies and sent to your service. This setting is vulnerable to CVE-2021-31920. BASE This is currently the option used in the default installation of Istio. This applies the normalize_path option on Envoy proxies, which follows RFC 3986 with extra normalization to convert backslashes to forward slashes. /a/../b is normalized to /b . \da is normalized to /da . This setting is vulnerable to CVE-2021-31920. MERGE_SLASHES Slashes are merged after the BASE normalization. /a//b is normalized to /a/b . Update to this setting to mitigate CVE-2021-31920. DECODE_AND_MERGE_SLASHES The strictest setting when you allow all traffic by default. This setting is recommended, with the caveat that you must thoroughly test your authorization policies routes. Percent-encoded slash and backslash characters ( %2F , %2f , %5C and %5c ) are decoded to / or \ , before the MERGE_SLASHES normalization. /a%2fb is normalized to /a/b . Update to this setting to mitigate CVE-2021-31920. This setting is more secure, but also has the potential to break applications. Test your applications before deploying to production. The normalization algorithms are conducted in the following order: Percent-decode %2F , %2f , %5C and %5c . The RFC 3986 and other normalization implemented by the normalize_path option in Envoy. Merge slashes. Warning While these normalization options represent recommendations from HTTP standards and common industry practices, applications may interpret a URL in any way it chooses to. When using denial policies, ensure that you understand how your application behaves. 2.1.5.8.3. Path normalization configuration examples Ensuring Envoy normalizes request paths to match your backend services' expectations is critical to the security of your system. The following examples can be used as a reference for you to configure your system. The normalized URL paths, or the original URL paths if NONE is selected, will be: Used to check against the authorization policies. Forwarded to the backend application. Table 2.2. Configuration examples If your application... Choose... Relies on the proxy to do normalization BASE , MERGE_SLASHES or DECODE_AND_MERGE_SLASHES Normalizes request paths based on RFC 3986 and does not merge slashes. BASE Normalizes request paths based on RFC 3986 and merges slashes, but does not decode percent-encoded slashes. 
MERGE_SLASHES Normalizes request paths based on RFC 3986 , decodes percent-encoded slashes, and merges slashes. DECODE_AND_MERGE_SLASHES Processes request paths in a way that is incompatible with RFC 3986 . NONE 2.1.5.8.4. Configuring your SMCP for path normalization To configure path normalization for Red Hat OpenShift Service Mesh, specify the following in your ServiceMeshControlPlane . Use the configuration examples to help determine the settings for your system. SMCP v1 pathNormalization spec: global: pathNormalization: <option> 2.1.5.9. New features Red Hat OpenShift Service Mesh 1.1.13 This release of Red Hat OpenShift Service Mesh addresses Common Vulnerabilities and Exposures (CVEs) and bug fixes. 2.1.5.10. New features Red Hat OpenShift Service Mesh 1.1.12 This release of Red Hat OpenShift Service Mesh addresses Common Vulnerabilities and Exposures (CVEs) and bug fixes. 2.1.5.11. New features Red Hat OpenShift Service Mesh 1.1.11 This release of Red Hat OpenShift Service Mesh addresses Common Vulnerabilities and Exposures (CVEs) and bug fixes. 2.1.5.12. New features Red Hat OpenShift Service Mesh 1.1.10 This release of Red Hat OpenShift Service Mesh addresses Common Vulnerabilities and Exposures (CVEs) and bug fixes. 2.1.5.13. New features Red Hat OpenShift Service Mesh 1.1.9 This release of Red Hat OpenShift Service Mesh addresses Common Vulnerabilities and Exposures (CVEs) and bug fixes. 2.1.5.14. New features Red Hat OpenShift Service Mesh 1.1.8 This release of Red Hat OpenShift Service Mesh addresses Common Vulnerabilities and Exposures (CVEs) and bug fixes. 2.1.5.15. New features Red Hat OpenShift Service Mesh 1.1.7 This release of Red Hat OpenShift Service Mesh addresses Common Vulnerabilities and Exposures (CVEs) and bug fixes. 2.1.5.16. New features Red Hat OpenShift Service Mesh 1.1.6 This release of Red Hat OpenShift Service Mesh addresses Common Vulnerabilities and Exposures (CVEs) and bug fixes. 2.1.5.17. New features Red Hat OpenShift Service Mesh 1.1.5 This release of Red Hat OpenShift Service Mesh addresses Common Vulnerabilities and Exposures (CVEs) and bug fixes. This release also added support for configuring cipher suites. 2.1.5.18. New features Red Hat OpenShift Service Mesh 1.1.4 This release of Red Hat OpenShift Service Mesh addresses Common Vulnerabilities and Exposures (CVEs) and bug fixes. Note There are manual steps that must be completed to address CVE-2020-8663. 2.1.5.18.1. Manual updates required by CVE-2020-8663 The fix for CVE-2020-8663 : envoy: Resource exhaustion when accepting too many connections added a configurable limit on downstream connections. The configuration option for this limit must be configured to mitigate this vulnerability. Important These manual steps are required to mitigate this CVE whether you are using the 1.1 version or the 1.0 version of Red Hat OpenShift Service Mesh. This new configuration option is called overload.global_downstream_max_connections , and it is configurable as a proxy runtime setting. Perform the following steps to configure limits at the Ingress Gateway. 
Procedure Create a file named bootstrap-override.json with the following text to force the proxy to override the bootstrap template and load runtime configuration from disk: Create a secret from the bootstrap-override.json file, replacing <SMCPnamespace> with the namespace where you created the service mesh control plane (SMCP): USD oc create secret generic -n <SMCPnamespace> gateway-bootstrap --from-file=bootstrap-override.json Update the SMCP configuration to activate the override. Updated SMCP configuration example #1 apiVersion: maistra.io/v1 kind: ServiceMeshControlPlane spec: istio: gateways: istio-ingressgateway: env: ISTIO_BOOTSTRAP_OVERRIDE: /var/lib/istio/envoy/custom-bootstrap/bootstrap-override.json secretVolumes: - mountPath: /var/lib/istio/envoy/custom-bootstrap name: custom-bootstrap secretName: gateway-bootstrap To set the new configuration option, create a secret that has the desired value for the overload.global_downstream_max_connections setting. The following example uses a value of 10000 : USD oc create secret generic -n <SMCPnamespace> gateway-settings --from-literal=overload.global_downstream_max_connections=10000 Update the SMCP again to mount the secret in the location where Envoy is looking for runtime configuration: Updated SMCP configuration example #2 apiVersion: maistra.io/v1 kind: ServiceMeshControlPlane spec: template: default #Change the version to "v1.0" if you are on the 1.0 stream. version: v1.1 istio: gateways: istio-ingressgateway: env: ISTIO_BOOTSTRAP_OVERRIDE: /var/lib/istio/envoy/custom-bootstrap/bootstrap-override.json secretVolumes: - mountPath: /var/lib/istio/envoy/custom-bootstrap name: custom-bootstrap secretName: gateway-bootstrap # below is the new secret mount - mountPath: /var/lib/istio/envoy/runtime name: gateway-settings secretName: gateway-settings 2.1.5.18.2. Upgrading from Elasticsearch 5 to Elasticsearch 6 When updating from Elasticsearch 5 to Elasticsearch 6, you must delete your Jaeger instance, then recreate the Jaeger instance because of an issue with certificates. Re-creating the Jaeger instance triggers creating a new set of certificates. If you are using persistent storage the same volumes can be mounted for the new Jaeger instance as long as the Jaeger name and namespace for the new Jaeger instance are the same as the deleted Jaeger instance. Procedure if Jaeger is installed as part of Red Hat Service Mesh Determine the name of your Jaeger custom resource file: USD oc get jaeger -n istio-system You should see something like the following: NAME AGE jaeger 3d21h Copy the generated custom resource file into a temporary directory: USD oc get jaeger jaeger -oyaml -n istio-system > /tmp/jaeger-cr.yaml Delete the Jaeger instance: USD oc delete jaeger jaeger -n istio-system Recreate the Jaeger instance from your copy of the custom resource file: USD oc create -f /tmp/jaeger-cr.yaml -n istio-system Delete the copy of the generated custom resource file: USD rm /tmp/jaeger-cr.yaml Procedure if Jaeger not installed as part of Red Hat Service Mesh Before you begin, create a copy of your Jaeger custom resource file. Delete the Jaeger instance by deleting the custom resource file: USD oc delete -f <jaeger-cr-file> For example: USD oc delete -f jaeger-prod-elasticsearch.yaml Recreate your Jaeger instance from the backup copy of your custom resource file: USD oc create -f <jaeger-cr-file> Validate that your Pods have restarted: USD oc get pods -n jaeger-system -w 2.1.5.19. 
New features Red Hat OpenShift Service Mesh 1.1.3 This release of Red Hat OpenShift Service Mesh addresses Common Vulnerabilities and Exposures (CVEs) and bug fixes. 2.1.5.20. New features Red Hat OpenShift Service Mesh 1.1.2 This release of Red Hat OpenShift Service Mesh addresses a security vulnerability. 2.1.5.21. New features Red Hat OpenShift Service Mesh 1.1.1 This release of Red Hat OpenShift Service Mesh adds support for a disconnected installation. 2.1.5.22. New features Red Hat OpenShift Service Mesh 1.1.0 This release of Red Hat OpenShift Service Mesh adds support for Istio 1.4.6 and Jaeger 1.17.1. 2.1.5.22.1. Manual updates from 1.0 to 1.1 If you are updating from Red Hat OpenShift Service Mesh 1.0 to 1.1, you must update the ServiceMeshControlPlane resource to update the control plane components to the new version. In the web console, click the Red Hat OpenShift Service Mesh Operator. Click the Project menu and choose the project where your ServiceMeshControlPlane is deployed from the list, for example istio-system . Click the name of your control plane, for example basic-install . Click YAML and add a version field to the spec: of your ServiceMeshControlPlane resource. For example, to update to Red Hat OpenShift Service Mesh 1.1.0, add version: v1.1 . The version field specifies the version of Service Mesh to install and defaults to the latest available version. Note Note that support for Red Hat OpenShift Service Mesh v1.0 ended in October, 2020. You must upgrade to either v1.1 or v2.0. 2.1.6. Deprecated features Some features available in releases have been deprecated or removed. Deprecated functionality is still included in OpenShift Container Platform and continues to be supported; however, it will be removed in a future release of this product and is not recommended for new deployments. 2.1.6.1. Deprecated features Red Hat OpenShift Service Mesh 1.1.5 The following custom resources were deprecated in release 1.1.5 and were removed in release 1.1.12 Policy - The Policy resource is deprecated and will be replaced by the PeerAuthentication resource in a future release. MeshPolicy - The MeshPolicy resource is deprecated and will be replaced by the PeerAuthentication resource in a future release. v1alpha1 RBAC API -The v1alpha1 RBAC policy is deprecated by the v1beta1 AuthorizationPolicy . RBAC (Role Based Access Control) defines ServiceRole and ServiceRoleBinding objects. ServiceRole ServiceRoleBinding RbacConfig - RbacConfig implements the Custom Resource Definition for controlling Istio RBAC behavior. ClusterRbacConfig (versions prior to Red Hat OpenShift Service Mesh 1.0) ServiceMeshRbacConfig (Red Hat OpenShift Service Mesh version 1.0 and later) In Kiali, the login and LDAP strategies are deprecated. A future version will introduce authentication using OpenID providers. The following components are also deprecated in this release and will be replaced by the Istiod component in a future release. Mixer - access control and usage policies Pilot - service discovery and proxy configuration Citadel - certificate generation Galley - configuration validation and distribution 2.1.7. Known issues These limitations exist in Red Hat OpenShift Service Mesh: Red Hat OpenShift Service Mesh does not support IPv6 , as it is not supported by the upstream Istio project, nor fully supported by OpenShift Container Platform. 
Graph layout - The layout for the Kiali graph can render differently, depending on your application architecture and the data to display (number of graph nodes and their interactions). Because it is difficult if not impossible to create a single layout that renders nicely for every situation, Kiali offers a choice of several different layouts. To choose a different layout, you can choose a different Layout Schema from the Graph Settings menu. The first time you access related services such as Jaeger and Grafana, from the Kiali console, you must accept the certificate and re-authenticate using your OpenShift Container Platform login credentials. This happens due to an issue with how the framework displays embedded pages in the console. 2.1.7.1. Service Mesh known issues These are the known issues in Red Hat OpenShift Service Mesh: Jaeger/Kiali Operator upgrade blocked with operator pending When upgrading the Jaeger or Kiali Operators with Service Mesh 1.0.x installed, the operator status shows as Pending. Workaround: See the linked Knowledge Base article for more information. Istio-14743 Due to limitations in the version of Istio that this release of Red Hat OpenShift Service Mesh is based on, there are several applications that are currently incompatible with Service Mesh. See the linked community issue for details. MAISTRA-858 The following Envoy log messages describing deprecated options and configurations associated with Istio 1.1.x are expected: [2019-06-03 07:03:28.943][19][warning][misc] [external/envoy/source/common/protobuf/utility.cc:129] Using deprecated option 'envoy.api.v2.listener.Filter.config'. This configuration will be removed from Envoy soon. [2019-08-12 22:12:59.001][13][warning][misc] [external/envoy/source/common/protobuf/utility.cc:174] Using deprecated option 'envoy.api.v2.Listener.use_original_dst' from file lds.proto. This configuration will be removed from Envoy soon. MAISTRA-806 Evicted Istio Operator Pod causes mesh and CNI not to deploy. Workaround: If the istio-operator pod is evicted while deploying the control pane, delete the evicted istio-operator pod. MAISTRA-681 When the control plane has many namespaces, it can lead to performance issues. MAISTRA-465 The Maistra Operator fails to create a service for operator metrics. MAISTRA-453 If you create a new project and deploy pods immediately, sidecar injection does not occur. The operator fails to add the maistra.io/member-of before the pods are created, therefore the pods must be deleted and recreated for sidecar injection to occur. MAISTRA-158 Applying multiple gateways referencing the same hostname will cause all gateways to stop functioning. 2.1.7.2. Kiali known issues Note New issues for Kiali should be created in the OpenShift Service Mesh project with the Component set to Kiali . These are the known issues in Kiali: KIALI-2206 When you are accessing the Kiali console for the first time, and there is no cached browser data for Kiali, the "View in Grafana" link on the Metrics tab of the Kiali Service Details page redirects to the wrong location. The only way you would encounter this issue is if you are accessing Kiali for the first time. KIALI-507 Kiali does not support Internet Explorer 11. This is because the underlying frameworks do not support Internet Explorer. To access the Kiali console, use one of the two most recent versions of the Chrome, Edge, Firefox or Safari browser. 2.1.7.3. 
Red Hat OpenShift distributed tracing known issues These limitations exist in Red Hat OpenShift distributed tracing: Apache Spark is not supported. The streaming deployment via AMQ/Kafka is unsupported on IBM Z and IBM Power Systems. These are the known issues for Red Hat OpenShift distributed tracing: TRACING-2057 The Kafka API has been updated to v1beta2 to support the Strimzi Kafka Operator 0.23.0. However, this API version is not supported by AMQ Streams 1.6.3. If you have the following environment, your Jaeger services will not be upgraded, and you cannot create new Jaeger services or modify existing Jaeger services: Jaeger Operator channel: 1.17.x stable or 1.20.x stable AMQ Streams Operator channel: amq-streams-1.6.x To resolve this issue, switch the subscription channel for your AMQ Streams Operator to either amq-streams-1.7.x or stable . 2.1.8. Fixed issues The following issues have been resolved in the current release: 2.1.8.1. Service Mesh fixed issues MAISTRA-2371 Handle tombstones in listerInformer. The updated cache codebase was not handling tombstones when translating the events from the namespace caches to the aggregated cache, leading to a panic in the goroutine. OSSM-542 Galley is not using the new certificate after rotation. OSSM-99 Workloads generated from direct pod without labels may crash Kiali. OSSM-93 IstioConfigList can't filter by two or more names. OSSM-92 Cancelling unsaved changes on the VS/DR YAML edit page does not cancel the changes. OSSM-90 Traces not available on the service details page. MAISTRA-1649 Headless services conflict when in different namespaces. When deploying headless services within different namespaces, the endpoint configuration is merged and results in invalid Envoy configurations being pushed to the sidecars. MAISTRA-1541 Panic in kubernetesenv when the controller is not set on owner reference. If a pod has an ownerReference which does not specify the controller, this will cause a panic within the kubernetesenv cache.go code. MAISTRA-1352 Cert-manager Custom Resource Definitions (CRD) from the control plane installation have been removed for this release and future releases. If you have already installed Red Hat OpenShift Service Mesh, the CRDs must be removed manually if cert-manager is not being used. MAISTRA-1001 Closing HTTP/2 connections could lead to segmentation faults in istio-proxy . MAISTRA-932 Added the requires metadata to add a dependency relationship between Jaeger Operator and OpenShift Elasticsearch Operator. Ensures that when the Jaeger Operator is installed, it automatically deploys the OpenShift Elasticsearch Operator if it is not available. MAISTRA-862 Galley dropped watches and stopped providing configuration to other components after many namespace deletions and re-creations. MAISTRA-833 Pilot stopped delivering configuration after many namespace deletions and re-creations. MAISTRA-684 The default Jaeger version in the istio-operator is 1.12.0, which does not match Jaeger version 1.13.1 that shipped in Red Hat OpenShift Service Mesh 0.12.TechPreview. MAISTRA-622 In Maistra 0.12.0/TP12, permissive mode does not work. The user has the option to use Plain text mode or Mutual TLS mode, but not permissive. MAISTRA-572 Jaeger cannot be used with Kiali. In this release Jaeger is configured to use the OAuth proxy, but is also only configured to work through a browser and does not allow service access. Kiali cannot properly communicate with the Jaeger endpoint and it considers Jaeger to be disabled. See also TRACING-591 .
MAISTRA-357 In OpenShift 4 Beta on AWS, it is not possible, by default, to access a TCP or HTTPS service through the ingress gateway on a port other than port 80. The AWS load balancer has a health check that verifies if port 80 on the service endpoint is active. Without a service running on port 80, the load balancer health check fails. MAISTRA-348 OpenShift 4 Beta on AWS does not support ingress gateway traffic on ports other than 80 or 443. If you configure your ingress gateway to handle TCP traffic with a port number other than 80 or 443, you have to use the service hostname provided by the AWS load balancer rather than the OpenShift router as a workaround. MAISTRA-193 Unexpected console info messages are visible when health checking is enabled for citadel. Bug 1821432 Toggle controls in OpenShift Container Platform Control Resource details page do not update the CR correctly. UI Toggle controls in the Service Mesh Control Plane (SMCP) Overview page in the OpenShift Container Platform web console sometimes update the wrong field in the resource. To update a ServiceMeshControlPlane resource, edit the YAML content directly or update the resource from the command line instead of clicking the toggle controls. 2.1.8.2. Kiali fixed issues KIALI-3239 If a Kiali Operator pod has failed with a status of "Evicted" it blocks the Kiali operator from deploying. The workaround is to delete the Evicted pod and redeploy the Kiali operator. KIALI-3118 After changes to the ServiceMeshMemberRoll, for example adding or removing projects, the Kiali pod restarts and then displays errors on the Graph page while the Kiali pod is restarting. KIALI-3096 Runtime metrics fail in Service Mesh. There is an OAuth filter between the Service Mesh and Prometheus, requiring a bearer token to be passed to Prometheus before access is granted. Kiali has been updated to use this token when communicating to the Prometheus server, but the application metrics are currently failing with 403 errors. KIALI-3070 This bug only affects custom dashboards, not the default dashboards. When you select labels in metrics settings and refresh the page, your selections are retained in the menu but your selections are not displayed on the charts. KIALI-2686 When the control plane has many namespaces, it can lead to performance issues. 2.1.8.3. Red Hat OpenShift distributed tracing fixed issues TRACING-2337 Jaeger is logging a repetitive warning message in the Jaeger logs similar to the following: {"level":"warn","ts":1642438880.918793,"caller":"channelz/logging.go:62","msg":"[core]grpc: Server.Serve failed to create ServerTransport: connection error: desc = \"transport: http2Server.HandleStreams received bogus greeting from client: \\\"\\\\x16\\\\x03\\\\x01\\\\x02\\\\x00\\\\x01\\\\x00\\\\x01\\\\xfc\\\\x03\\\\x03vw\\\\x1a\\\\xc9T\\\\xe7\\\\xdaCj\\\\xb7\\\\x8dK\\\\xa6\\\"\"","system":"grpc","grpc_log":true} This issue was resolved by exposing only the HTTP(S) port of the query service, and not the gRPC port. TRACING-2009 The Jaeger Operator has been updated to include support for the Strimzi Kafka Operator 0.23.0. TRACING-1907 The Jaeger agent sidecar injection was failing due to missing config maps in the application namespace. The config maps were getting automatically deleted due to an incorrect OwnerReference field setting and as a result, the application pods were not moving past the "ContainerCreating" stage. The incorrect settings have been removed. TRACING-1725 Follow-up to TRACING-1631. 
Additional fix to ensure that Elasticsearch certificates are properly reconciled when there are multiple Jaeger production instances, using same name but within different namespaces. See also BZ-1918920 . TRACING-1631 Multiple Jaeger production instances, using same name but within different namespaces, causing Elasticsearch certificate issue. When multiple service meshes were installed, all of the Jaeger Elasticsearch instances had the same Elasticsearch secret instead of individual secrets, which prevented the OpenShift Elasticsearch Operator from communicating with all of the Elasticsearch clusters. TRACING-1300 Failed connection between Agent and Collector when using Istio sidecar. An update of the Jaeger Operator enabled TLS communication by default between a Jaeger sidecar agent and the Jaeger Collector. TRACING-1208 Authentication "500 Internal Error" when accessing Jaeger UI. When trying to authenticate to the UI using OAuth, a 500 error is returned because the oauth-proxy sidecar does not trust the custom CA bundle defined at installation time with the additionalTrustBundle . TRACING-1166 It is not currently possible to use the Jaeger streaming strategy within a disconnected environment. When a Kafka cluster is being provisioned, it results in an error: Failed to pull image registry.redhat.io/amq7/amq-streams-kafka-24-rhel7@sha256:f9ceca004f1b7dccb3b82d9a8027961f9fe4104e0ed69752c0bdd8078b4a1076 . TRACING-809 Jaeger Ingester is incompatible with Kafka 2.3. When there are two or more instances of the Jaeger Ingester and enough traffic, it will continuously generate rebalancing messages in the logs. This is due to a regression in Kafka 2.3 that was fixed in Kafka 2.3.1. For more information, see Jaegertracing-1819 . BZ-1918920 / LOG-1619 The Elasticsearch pods do not get restarted automatically after an update. Workaround: Restart the pods manually. 2.2. Understanding Service Mesh Warning You are viewing documentation for a Red Hat OpenShift Service Mesh release that is no longer supported. Service Mesh version 1.0 and 1.1 control planes are no longer supported. For information about upgrading your service mesh control plane, see Upgrading Service Mesh . For information about the support status of a particular Red Hat OpenShift Service Mesh release, see the Product lifecycle page . Red Hat OpenShift Service Mesh provides a platform for behavioral insight and operational control over your networked microservices in a service mesh. With Red Hat OpenShift Service Mesh, you can connect, secure, and monitor microservices in your OpenShift Container Platform environment. 2.2.1. Understanding service mesh A service mesh is the network of microservices that make up applications in a distributed microservice architecture and the interactions between those microservices. When a Service Mesh grows in size and complexity, it can become harder to understand and manage. Based on the open source Istio project, Red Hat OpenShift Service Mesh adds a transparent layer on existing distributed applications without requiring any changes to the service code. You add Red Hat OpenShift Service Mesh support to services by deploying a special sidecar proxy to relevant services in the mesh that intercepts all network communication between microservices. You configure and manage the Service Mesh using the Service Mesh control plane features.
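The two resources at the center of that configuration can be sketched briefly. The following is a minimal, illustrative example of a ServiceMeshControlPlane that deploys a version 1.1 control plane and a ServiceMeshMemberRoll that adds one project to the mesh; the istio-system and bookinfo project names are placeholders, and a real deployment would normally set additional fields.
Example ServiceMeshControlPlane and ServiceMeshMemberRoll sketch
apiVersion: maistra.io/v1
kind: ServiceMeshControlPlane
metadata:
  name: basic-install
  namespace: istio-system      # project where the control plane is deployed
spec:
  version: v1.1                # control plane version to install
---
apiVersion: maistra.io/v1
kind: ServiceMeshMemberRoll
metadata:
  name: default                # the member roll must be named "default"
  namespace: istio-system      # same project as the control plane
spec:
  members:
    - bookinfo                 # example application project that joins the mesh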
Red Hat OpenShift Service Mesh gives you an easy way to create a network of deployed services that provide: Discovery Load balancing Service-to-service authentication Failure recovery Metrics Monitoring Red Hat OpenShift Service Mesh also provides more complex operational functions including: A/B testing Canary releases Access control End-to-end authentication 2.2.2. Red Hat OpenShift Service Mesh Architecture Red Hat OpenShift Service Mesh is logically split into a data plane and a control plane: The data plane is a set of intelligent proxies deployed as sidecars. These proxies intercept and control all inbound and outbound network communication between microservices in the service mesh. Sidecar proxies also communicate with Mixer, the general-purpose policy and telemetry hub. Envoy proxy intercepts all inbound and outbound traffic for all services in the service mesh. Envoy is deployed as a sidecar to the relevant service in the same pod. The control plane manages and configures proxies to route traffic, and configures Mixers to enforce policies and collect telemetry. Mixer enforces access control and usage policies (such as authorization, rate limits, quotas, authentication, and request tracing) and collects telemetry data from the Envoy proxy and other services. Pilot configures the proxies at runtime. Pilot provides service discovery for the Envoy sidecars, traffic management capabilities for intelligent routing (for example, A/B tests or canary deployments), and resiliency (timeouts, retries, and circuit breakers). Citadel issues and rotates certificates. Citadel provides strong service-to-service and end-user authentication with built-in identity and credential management. You can use Citadel to upgrade unencrypted traffic in the service mesh. Operators can enforce policies based on service identity rather than on network controls using Citadel. Galley ingests the service mesh configuration, then validates, processes, and distributes the configuration. Galley protects the other service mesh components from obtaining user configuration details from OpenShift Container Platform. Red Hat OpenShift Service Mesh also uses the istio-operator to manage the installation of the control plane. An Operator is a piece of software that enables you to implement and automate common activities in your OpenShift Container Platform cluster. It acts as a controller, allowing you to set or change the desired state of objects in your cluster. 2.2.3. Understanding Kiali Kiali provides visibility into your service mesh by showing you the microservices in your service mesh, and how they are connected. 2.2.3.1. Kiali overview Kiali provides observability into the Service Mesh running on OpenShift Container Platform. Kiali helps you define, validate, and observe your Istio service mesh. It helps you to understand the structure of your service mesh by inferring the topology, and also provides information about the health of your service mesh. Kiali provides an interactive graph view of your namespace in real time that provides visibility into features like circuit breakers, request rates, latency, and even graphs of traffic flows. Kiali offers insights about components at different levels, from Applications to Services and Workloads, and can display the interactions with contextual information and charts on the selected graph node or edge. Kiali also provides the ability to validate your Istio configurations, such as gateways, destination rules, virtual services, mesh policies, and more. 
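To make the kinds of configuration that Kiali validates more concrete, the following is a small, hypothetical VirtualService and DestinationRule pair that splits traffic between two versions of a service; the ratings service and the bookinfo project are placeholder names. Kiali would, for example, flag a subset that is referenced in the VirtualService but not defined in the DestinationRule.
Example traffic-splitting configuration sketch
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: ratings
  namespace: bookinfo
spec:
  hosts:
    - ratings
  http:
    - route:
        - destination:
            host: ratings
            subset: v1
          weight: 90           # send 90% of requests to the v1 subset
        - destination:
            host: ratings
            subset: v2
          weight: 10           # send 10% of requests to the v2 subset
---
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: ratings
  namespace: bookinfo
spec:
  host: ratings
  subsets:
    - name: v1
      labels:
        version: v1            # pods labeled version=v1
    - name: v2
      labels:
        version: v2            # pods labeled version=v2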
Kiali provides detailed metrics, and a basic Grafana integration is available for advanced queries. Distributed tracing is provided by integrating Jaeger into the Kiali console. Kiali is installed by default as part of the Red Hat OpenShift Service Mesh. 2.2.3.2. Kiali architecture Kiali is based on the open source Kiali project . Kiali is composed of two components: the Kiali application and the Kiali console. Kiali application (back end) - This component runs in the container application platform and communicates with the service mesh components, retrieves and processes data, and exposes this data to the console. The Kiali application does not need storage. When deploying the application to a cluster, configurations are set in ConfigMaps and secrets. Kiali console (front end) - The Kiali console is a web application. The Kiali application serves the Kiali console, which then queries the back end for data to present it to the user. In addition, Kiali depends on external services and components provided by the container application platform and Istio. Red Hat Service Mesh (Istio) - Istio is a Kiali requirement. Istio is the component that provides and controls the service mesh. Although Kiali and Istio can be installed separately, Kiali depends on Istio and will not work if it is not present. Kiali needs to retrieve Istio data and configurations, which are exposed through Prometheus and the cluster API. Prometheus - A dedicated Prometheus instance is included as part of the Red Hat OpenShift Service Mesh installation. When Istio telemetry is enabled, metrics data are stored in Prometheus. Kiali uses this Prometheus data to determine the mesh topology, display metrics, calculate health, show possible problems, and so on. Kiali communicates directly with Prometheus and assumes the data schema used by Istio Telemetry. Prometheus is an Istio dependency and a hard dependency for Kiali, and many of Kiali's features will not work without Prometheus. Cluster API - Kiali uses the API of the OpenShift Container Platform (cluster API) to fetch and resolve service mesh configurations. Kiali queries the cluster API to retrieve, for example, definitions for namespaces, services, deployments, pods, and other entities. Kiali also makes queries to resolve relationships between the different cluster entities. The cluster API is also queried to retrieve Istio configurations like virtual services, destination rules, route rules, gateways, quotas, and so on. Jaeger - Jaeger is optional, but is installed by default as part of the Red Hat OpenShift Service Mesh installation. When you install the distributed tracing platform as part of the default Red Hat OpenShift Service Mesh installation, the Kiali console includes a tab to display distributed tracing data. Note that tracing data will not be available if you disable Istio's distributed tracing feature. Also note that user must have access to the namespace where the Service Mesh control plane is installed to view tracing data. Grafana - Grafana is optional, but is installed by default as part of the Red Hat OpenShift Service Mesh installation. When available, the metrics pages of Kiali display links to direct the user to the same metric in Grafana. Note that user must have access to the namespace where the Service Mesh control plane is installed to view links to the Grafana dashboard and view Grafana data. 2.2.3.3. 
Kiali features The Kiali console is integrated with Red Hat Service Mesh and provides the following capabilities: Health - Quickly identify issues with applications, services, or workloads. Topology - Visualize how your applications, services, or workloads communicate via the Kiali graph. Metrics - Predefined metrics dashboards let you chart service mesh and application performance for Go, Node.js. Quarkus, Spring Boot, Thorntail and Vert.x. You can also create your own custom dashboards. Tracing - Integration with Jaeger lets you follow the path of a request through various microservices that make up an application. Validations - Perform advanced validations on the most common Istio objects (Destination Rules, Service Entries, Virtual Services, and so on). Configuration - Optional ability to create, update and delete Istio routing configuration using wizards or directly in the YAML editor in the Kiali Console. 2.2.4. Understanding Jaeger Every time a user takes an action in an application, a request is executed by the architecture that may require dozens of different services to participate to produce a response. The path of this request is a distributed transaction. Jaeger lets you perform distributed tracing, which follows the path of a request through various microservices that make up an application. Distributed tracing is a technique that is used to tie the information about different units of work together-usually executed in different processes or hosts-to understand a whole chain of events in a distributed transaction. Distributed tracing lets developers visualize call flows in large service oriented architectures. It can be invaluable in understanding serialization, parallelism, and sources of latency. Jaeger records the execution of individual requests across the whole stack of microservices, and presents them as traces. A trace is a data/execution path through the system. An end-to-end trace is comprised of one or more spans. A span represents a logical unit of work in Jaeger that has an operation name, the start time of the operation, and the duration. Spans may be nested and ordered to model causal relationships. 2.2.4.1. Distributed tracing overview As a service owner, you can use distributed tracing to instrument your services to gather insights into your service architecture. You can use distributed tracing for monitoring, network profiling, and troubleshooting the interaction between components in modern, cloud-native, microservices-based applications. With distributed tracing you can perform the following functions: Monitor distributed transactions Optimize performance and latency Perform root cause analysis Red Hat OpenShift distributed tracing consists of two main components: Red Hat OpenShift distributed tracing platform - This component is based on the open source Jaeger project . Red Hat OpenShift distributed tracing data collection - This component is based on the open source OpenTelemetry project . Both of these components are based on the vendor-neutral OpenTracing APIs and instrumentation. 2.2.4.2. Distributed tracing architecture The distributed tracing platform is based on the open source Jaeger project . The distributed tracing platform is made up of several components that work together to collect, store, and display tracing data. Jaeger Client (Tracer, Reporter, instrumented application, client libraries)- Jaeger clients are language specific implementations of the OpenTracing API. 
They can be used to instrument applications for distributed tracing either manually or with a variety of existing open source frameworks, such as Camel (Fuse), Spring Boot (RHOAR), MicroProfile (RHOAR/Thorntail), Wildfly (EAP), and many more, that are already integrated with OpenTracing. Jaeger Agent (Server Queue, Processor Workers) - The Jaeger agent is a network daemon that listens for spans sent over User Datagram Protocol (UDP), which it batches and sends to the collector. The agent is meant to be placed on the same host as the instrumented application. This is typically accomplished by having a sidecar in container environments like Kubernetes. Jaeger Collector (Queue, Workers) - Similar to the Agent, the Collector is able to receive spans and place them in an internal queue for processing. This allows the collector to return immediately to the client/agent instead of waiting for the span to make its way to the storage. Storage (Data Store) - Collectors require a persistent storage backend. Jaeger has a pluggable mechanism for span storage. Note that for this release, the only supported storage is Elasticsearch. Query (Query Service) - Query is a service that retrieves traces from storage. Ingester (Ingester Service) - Jaeger can use Apache Kafka as a buffer between the collector and the actual backing storage (Elasticsearch). Ingester is a service that reads data from Kafka and writes to another storage backend (Elasticsearch). Jaeger Console - Jaeger provides a user interface that lets you visualize your distributed tracing data. On the Search page, you can find traces and explore details of the spans that make up an individual trace. 2.2.4.3. Red Hat OpenShift distributed tracing features Red Hat OpenShift distributed tracing provides the following capabilities: Integration with Kiali - When properly configured, you can view distributed tracing data from the Kiali console. High scalability - The distributed tracing back end is designed to have no single points of failure and to scale with the business needs. Distributed Context Propagation - Enables you to connect data from different components together to create a complete end-to-end trace. Backwards compatibility with Zipkin - Red Hat OpenShift distributed tracing has APIs that enable it to be used as a drop-in replacement for Zipkin, but Red Hat is not supporting Zipkin compatibility in this release. 2.2.5. steps Prepare to install Red Hat OpenShift Service Mesh in your OpenShift Container Platform environment. 2.3. Service Mesh and Istio differences Warning You are viewing documentation for a Red Hat OpenShift Service Mesh release that is no longer supported. Service Mesh version 1.0 and 1.1 control planes are no longer supported. For information about upgrading your service mesh control plane, see Upgrading Service Mesh . For information about the support status of a particular Red Hat OpenShift Service Mesh release, see the Product lifecycle page . An installation of Red Hat OpenShift Service Mesh differs from upstream Istio community installations in multiple ways. The modifications to Red Hat OpenShift Service Mesh are sometimes necessary to resolve issues, provide additional features, or to handle differences when deploying on OpenShift Container Platform. The current release of Red Hat OpenShift Service Mesh differs from the current upstream Istio community release in the following ways: 2.3.1. 
Multitenant installations Whereas upstream Istio takes a single tenant approach, Red Hat OpenShift Service Mesh supports multiple independent control planes within the cluster. Red Hat OpenShift Service Mesh uses a multitenant operator to manage the control plane lifecycle. Red Hat OpenShift Service Mesh installs a multitenant control plane by default. You specify the projects that can access the Service Mesh, and isolate the Service Mesh from other control plane instances. 2.3.1.1. Multitenancy versus cluster-wide installations The main difference between a multitenant installation and a cluster-wide installation is the scope of privileges used by istiod. The components no longer use the cluster-scoped Role Based Access Control (RBAC) resource ClusterRoleBinding . Every project in the ServiceMeshMemberRoll members list will have a RoleBinding for each service account associated with the control plane deployment, and each control plane deployment will only watch those member projects. Each member project has a maistra.io/member-of label added to it, where the member-of value is the project containing the control plane installation. Red Hat OpenShift Service Mesh configures each member project to ensure network access between itself, the control plane, and other member projects. The exact configuration differs depending on how OpenShift Container Platform software-defined networking (SDN) is configured. See About OpenShift SDN for additional details. If the OpenShift Container Platform cluster is configured to use the SDN plug-in: NetworkPolicy : Red Hat OpenShift Service Mesh creates a NetworkPolicy resource in each member project allowing ingress to all pods from the other members and the control plane. If you remove a member from Service Mesh, this NetworkPolicy resource is deleted from the project. Note This also restricts ingress to only member projects. If you require ingress from non-member projects, you need to create a NetworkPolicy to allow that traffic through. Multitenant : Red Hat OpenShift Service Mesh joins the NetNamespace for each member project to the NetNamespace of the control plane project (the equivalent of running oc adm pod-network join-projects --to control-plane-project member-project ). If you remove a member from the Service Mesh, its NetNamespace is isolated from the control plane (the equivalent of running oc adm pod-network isolate-projects member-project ). Subnet : No additional configuration is performed. 2.3.1.2. Cluster scoped resources Upstream Istio has two cluster scoped resources that it relies on: the MeshPolicy and the ClusterRbacConfig . These are not compatible with a multitenant cluster and have been replaced as described below. ServiceMeshPolicy replaces MeshPolicy for configuration of control-plane-wide authentication policies. This must be created in the same project as the control plane. ServiceMeshRbacConfig replaces ClusterRbacConfig for configuration of control-plane-wide role based access control. This must be created in the same project as the control plane. 2.3.2. Differences between Istio and Red Hat OpenShift Service Mesh An installation of Red Hat OpenShift Service Mesh differs from an installation of Istio in multiple ways. The modifications to Red Hat OpenShift Service Mesh are sometimes necessary to resolve issues, provide additional features, or to handle differences when deploying on OpenShift Container Platform. 2.3.2.1. Command line tool The command line tool for Red Hat OpenShift Service Mesh is oc.
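As a brief, non-exhaustive illustration, the commands below show how tasks that upstream users might run with istioctl are performed with oc against the mesh custom resources; the istio-system and bookinfo project names and the gateway.yaml file are placeholders.
USD oc get smcp -n istio-system                           # check the status of the ServiceMeshControlPlane
USD oc get smmr default -n istio-system                   # list the projects that are members of the mesh
USD oc get virtualservices,destinationrules -n bookinfo   # review routing configuration
USD oc apply -f gateway.yaml -n bookinfo                  # create or update an Istio Gateway definition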
Red Hat OpenShift Service Mesh does not support istioctl. 2.3.2.2. Automatic injection The upstream Istio community installation automatically injects the sidecar into pods within the projects you have labeled. Red Hat OpenShift Service Mesh does not automatically inject the sidecar to any pods, but requires you to opt in to injection using an annotation without labeling projects. This method requires fewer privileges and does not conflict with other OpenShift capabilities such as builder pods. To enable automatic injection you specify the sidecar.istio.io/inject annotation as described in the Automatic sidecar injection section. 2.3.2.3. Istio Role Based Access Control features Istio Role Based Access Control (RBAC) provides a mechanism you can use to control access to a service. You can identify subjects by user name or by specifying a set of properties and apply access controls accordingly. The upstream Istio community installation includes options to perform exact header matches, match wildcards in headers, or check for a header containing a specific prefix or suffix. Red Hat OpenShift Service Mesh extends the ability to match request headers by using a regular expression. Specify a property key of request.regex.headers with a regular expression. Upstream Istio community matching request headers example apiVersion: "rbac.istio.io/v1alpha1" kind: ServiceRoleBinding metadata: name: httpbin-client-binding namespace: httpbin spec: subjects: - user: "cluster.local/ns/istio-system/sa/istio-ingressgateway-service-account" properties: request.headers[<header>]: "value" Red Hat OpenShift Service Mesh matching request headers by using regular expressions apiVersion: "rbac.istio.io/v1alpha1" kind: ServiceRoleBinding metadata: name: httpbin-client-binding namespace: httpbin spec: subjects: - user: "cluster.local/ns/istio-system/sa/istio-ingressgateway-service-account" properties: request.regex.headers[<header>]: "<regular expression>" 2.3.2.4. OpenSSL Red Hat OpenShift Service Mesh replaces BoringSSL with OpenSSL. OpenSSL is a software library that contains an open source implementation of the Secure Sockets Layer (SSL) and Transport Layer Security (TLS) protocols. The Red Hat OpenShift Service Mesh Proxy binary dynamically links the OpenSSL libraries (libssl and libcrypto) from the underlying Red Hat Enterprise Linux operating system. 2.3.2.5. Component modifications A maistra-version label has been added to all resources. All Ingress resources have been converted to OpenShift Route resources. Grafana, Tracing (Jaeger), and Kiali are enabled by default and exposed through OpenShift routes. Godebug has been removed from all templates The istio-multi ServiceAccount and ClusterRoleBinding have been removed, as well as the istio-reader ClusterRole. 2.3.2.6. Envoy, Secret Discovery Service, and certificates Red Hat OpenShift Service Mesh does not support QUIC-based services. Deployment of TLS certificates using the Secret Discovery Service (SDS) functionality of Istio is not currently supported in Red Hat OpenShift Service Mesh. The Istio implementation depends on a nodeagent container that uses hostPath mounts. 2.3.2.7. Istio Container Network Interface (CNI) plug-in Red Hat OpenShift Service Mesh includes CNI plug-in, which provides you with an alternate way to configure application pod networking. The CNI plug-in replaces the init-container network configuration eliminating the need to grant service accounts and projects access to Security Context Constraints (SCCs) with elevated privileges. 
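To illustrate the opt-in injection model described in the Automatic injection section above, the following is a minimal sketch of a Deployment that requests sidecar injection through the pod template annotation; the application name, project, and image are hypothetical.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: ratings-v1
  namespace: bookinfo
spec:
  replicas: 1
  selector:
    matchLabels:
      app: ratings
  template:
    metadata:
      labels:
        app: ratings
      annotations:
        sidecar.istio.io/inject: "true"       # opt this workload in to sidecar injection
    spec:
      containers:
        - name: ratings
          image: quay.io/example/ratings:v1   # hypothetical application image
          ports:
            - containerPort: 8080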
2.3.2.8. Routes for Istio Gateways OpenShift routes for Istio Gateways are automatically managed in Red Hat OpenShift Service Mesh. Every time an Istio Gateway is created, updated or deleted inside the service mesh, an OpenShift route is created, updated or deleted. A Red Hat OpenShift Service Mesh control plane component called Istio OpenShift Routing (IOR) synchronizes the gateway route. For more information, see Automatic route creation. 2.3.2.8.1. Catch-all domains Catch-all domains ("*") are not supported. If one is found in the Gateway definition, Red Hat OpenShift Service Mesh will create the route, but will rely on OpenShift to create a default hostname. This means that the newly created route will not be a catch all ("*") route, instead it will have a hostname in the form <route-name>[-<project>].<suffix> . See the OpenShift documentation for more information about how default hostnames work and how a cluster administrator can customize it. 2.3.2.8.2. Subdomains Subdomains (e.g.: "*.domain.com") are supported. However this ability doesn't come enabled by default in OpenShift Container Platform. This means that Red Hat OpenShift Service Mesh will create the route with the subdomain, but it will only be in effect if OpenShift Container Platform is configured to enable it. 2.3.2.8.3. Transport layer security Transport Layer Security (TLS) is supported. This means that, if the Gateway contains a tls section, the OpenShift Route will be configured to support TLS. Additional resources Automatic route creation 2.3.3. Kiali and service mesh Installing Kiali via the Service Mesh on OpenShift Container Platform differs from community Kiali installations in multiple ways. These modifications are sometimes necessary to resolve issues, provide additional features, or to handle differences when deploying on OpenShift Container Platform. Kiali has been enabled by default. Ingress has been enabled by default. Updates have been made to the Kiali ConfigMap. Updates have been made to the ClusterRole settings for Kiali. Do not edit the ConfigMap, because your changes might be overwritten by the Service Mesh or Kiali Operators. Files that the Kiali Operator manages have a kiali.io/ label or annotation. Updating the Operator files should be restricted to those users with cluster-admin privileges. If you use Red Hat OpenShift Dedicated, updating the Operator files should be restricted to those users with dedicated-admin privileges. 2.3.4. Distributed tracing and service mesh Installing the distributed tracing platform with the Service Mesh on OpenShift Container Platform differs from community Jaeger installations in multiple ways. These modifications are sometimes necessary to resolve issues, provide additional features, or to handle differences when deploying on OpenShift Container Platform. Distributed tracing has been enabled by default for Service Mesh. Ingress has been enabled by default for Service Mesh. The name for the Zipkin port name has changed to jaeger-collector-zipkin (from http ) Jaeger uses Elasticsearch for storage by default when you select either the production or streaming deployment option. The community version of Istio provides a generic "tracing" route. Red Hat OpenShift Service Mesh uses a "jaeger" route that is installed by the Red Hat OpenShift distributed tracing platform Operator and is already protected by OAuth. Red Hat OpenShift Service Mesh uses a sidecar for the Envoy proxy, and Jaeger also uses a sidecar, for the Jaeger agent. 
These two sidecars are configured separately and should not be confused with each other. The proxy sidecar creates spans related to the pod's ingress and egress traffic. The agent sidecar receives the spans emitted by the application and sends them to the Jaeger Collector. 2.4. Preparing to install Service Mesh Warning You are viewing documentation for a Red Hat OpenShift Service Mesh release that is no longer supported. Service Mesh version 1.0 and 1.1 control planes are no longer supported. For information about upgrading your service mesh control plane, see Upgrading Service Mesh . For information about the support status of a particular Red Hat OpenShift Service Mesh release, see the Product lifecycle page . Before you can install Red Hat OpenShift Service Mesh, review the installation activities, ensure that you meet the prerequisites: 2.4.1. Prerequisites Possess an active OpenShift Container Platform subscription on your Red Hat account. If you do not have a subscription, contact your sales representative for more information. Review the OpenShift Container Platform 4.7 overview . Install OpenShift Container Platform 4.7. Install OpenShift Container Platform 4.7 on AWS Install OpenShift Container Platform 4.7 on user-provisioned AWS Install OpenShift Container Platform 4.7 on bare metal Install OpenShift Container Platform 4.7 on vSphere Note If you are installing Red Hat OpenShift Service Mesh on a restricted network , follow the instructions for your chosen OpenShift Container Platform infrastructure. Install the version of the OpenShift Container Platform command line utility (the oc client tool) that matches your OpenShift Container Platform version and add it to your path. If you are using OpenShift Container Platform 4.7, see About the OpenShift CLI . 2.4.2. Red Hat OpenShift Service Mesh supported configurations The following are the only supported configurations for the Red Hat OpenShift Service Mesh: OpenShift Container Platform version 4.6 or later. Note OpenShift Online and Red Hat OpenShift Dedicated are not supported for Red Hat OpenShift Service Mesh. The deployment must be contained within a single OpenShift Container Platform cluster that is not federated. This release of Red Hat OpenShift Service Mesh is only available on OpenShift Container Platform x86_64. This release only supports configurations where all Service Mesh components are contained in the OpenShift Container Platform cluster in which it operates. It does not support management of microservices that reside outside of the cluster, or in a multi-cluster scenario. This release only supports configurations that do not integrate external services such as virtual machines. For additional information about Red Hat OpenShift Service Mesh lifecycle and supported configurations, refer to the Support Policy . 2.4.2.1. Supported configurations for Kiali on Red Hat OpenShift Service Mesh The Kiali observability console is only supported on the two most recent releases of the Chrome, Edge, Firefox, or Safari browsers. 2.4.2.2. Supported Mixer adapters This release only supports the following Mixer adapter: 3scale Istio Adapter 2.4.3. Operator overview Red Hat OpenShift Service Mesh requires the following four Operators: OpenShift Elasticsearch - (Optional) Provides database storage for tracing and logging with the distributed tracing platform. It is based on the open source Elasticsearch project. 
Red Hat OpenShift distributed tracing platform - Provides distributed tracing to monitor and troubleshoot transactions in complex distributed systems. It is based on the open source Jaeger project. Kiali - Provides observability for your service mesh. Allows you to view configurations, monitor traffic, and analyze traces in a single console. It is based on the open source Kiali project. Red Hat OpenShift Service Mesh - Allows you to connect, secure, control, and observe the microservices that comprise your applications. The Service Mesh Operator defines and monitors the ServiceMeshControlPlane resources that manage the deployment, updating, and deletion of the Service Mesh components. It is based on the open source Istio project. Warning See Configuring the log store for details on configuring the default Jaeger parameters for Elasticsearch in a production environment. 2.4.4. Next steps Install Red Hat OpenShift Service Mesh in your OpenShift Container Platform environment. 2.5. Installing Service Mesh Warning You are viewing documentation for a Red Hat OpenShift Service Mesh release that is no longer supported. Service Mesh version 1.0 and 1.1 control planes are no longer supported. For information about upgrading your service mesh control plane, see Upgrading Service Mesh . For information about the support status of a particular Red Hat OpenShift Service Mesh release, see the Product lifecycle page . Installing the Service Mesh involves installing the OpenShift Elasticsearch, Jaeger, Kiali, and Service Mesh Operators, creating and managing a ServiceMeshControlPlane resource to deploy the control plane, and creating a ServiceMeshMemberRoll resource to specify the namespaces associated with the Service Mesh. Note Mixer's policy enforcement is disabled by default. You must enable it to run policy tasks. See Update Mixer policy enforcement for instructions on enabling Mixer policy enforcement. Note Multi-tenant control plane installations are the default configuration. Note The Service Mesh documentation uses istio-system as the example project, but you can deploy the service mesh to any project. 2.5.1. Prerequisites Follow the Preparing to install Red Hat OpenShift Service Mesh process. An account with the cluster-admin role. The Service Mesh installation process uses the OperatorHub to install the ServiceMeshControlPlane custom resource definition within the openshift-operators project. The Red Hat OpenShift Service Mesh Operator defines and monitors the ServiceMeshControlPlane resources related to the deployment, update, and deletion of the control plane. Starting with Red Hat OpenShift Service Mesh 1.1.18.2, you must install the OpenShift Elasticsearch Operator, the Jaeger Operator, and the Kiali Operator before the Red Hat OpenShift Service Mesh Operator can install the control plane. 2.5.2. Installing the OpenShift Elasticsearch Operator The default Red Hat OpenShift distributed tracing platform deployment uses in-memory storage because it is designed to be installed quickly for those evaluating Red Hat OpenShift distributed tracing, giving demonstrations, or using Red Hat OpenShift distributed tracing platform in a test environment. If you plan to use Red Hat OpenShift distributed tracing platform in production, you must install and configure a persistent storage option, in this case, Elasticsearch. Prerequisites You have access to the OpenShift Container Platform web console. You have access to the cluster as a user with the cluster-admin role.
If you use Red Hat OpenShift Dedicated, you must have an account with the dedicated-admin role. Warning Do not install Community versions of the Operators. Community Operators are not supported. Note If you have already installed the OpenShift Elasticsearch Operator as part of OpenShift Logging, you do not need to install the OpenShift Elasticsearch Operator again. The Red Hat OpenShift distributed tracing platform Operator creates the Elasticsearch instance using the installed OpenShift Elasticsearch Operator. Procedure Log in to the OpenShift Container Platform web console as a user with the cluster-admin role. If you use Red Hat OpenShift Dedicated, you must have an account with the dedicated-admin role. Navigate to Operators OperatorHub . Type Elasticsearch into the filter box to locate the OpenShift Elasticsearch Operator. Click the OpenShift Elasticsearch Operator provided by Red Hat to display information about the Operator. Click Install . On the Install Operator page, select the stable Update Channel. This automatically updates your Operator as new versions are released. Accept the default All namespaces on the cluster (default) . This installs the Operator in the default openshift-operators-redhat project and makes the Operator available to all projects in the cluster. Note The Elasticsearch installation requires the openshift-operators-redhat namespace for the OpenShift Elasticsearch Operator. The other Red Hat OpenShift distributed tracing Operators are installed in the openshift-operators namespace. Accept the default Automatic approval strategy. By accepting the default, when a new version of this Operator is available, Operator Lifecycle Manager (OLM) automatically upgrades the running instance of your Operator without human intervention. If you select Manual updates, when a newer version of an Operator is available, OLM creates an update request. As a cluster administrator, you must then manually approve that update request to have the Operator updated to the new version. Note The Manual approval strategy requires a user with appropriate credentials to approve the Operator install and subscription process. Click Install . On the Installed Operators page, select the openshift-operators-redhat project. Wait until you see that the OpenShift Elasticsearch Operator shows a status of "InstallSucceeded" before continuing. 2.5.3. Installing the Red Hat OpenShift distributed tracing platform Operator To install Red Hat OpenShift distributed tracing platform, you use the OperatorHub to install the Red Hat OpenShift distributed tracing platform Operator. By default, the Operator is installed in the openshift-operators project. Prerequisites You have access to the OpenShift Container Platform web console. You have access to the cluster as a user with the cluster-admin role. If you use Red Hat OpenShift Dedicated, you must have an account with the dedicated-admin role. If you require persistent storage, you must also install the OpenShift Elasticsearch Operator before installing the Red Hat OpenShift distributed tracing platform Operator. Warning Do not install Community versions of the Operators. Community Operators are not supported. Procedure Log in to the OpenShift Container Platform web console as a user with the cluster-admin role. If you use Red Hat OpenShift Dedicated, you must have an account with the dedicated-admin role. Navigate to Operators OperatorHub . Type distributing tracing platform into the filter to locate the Red Hat OpenShift distributed tracing platform Operator. 
Click the Red Hat OpenShift distributed tracing platform Operator provided by Red Hat to display information about the Operator. Click Install . On the Install Operator page, select the stable Update Channel. This automatically updates your Operator as new versions are released. Accept the default All namespaces on the cluster (default) . This installs the Operator in the default openshift-operators project and makes the Operator available to all projects in the cluster. Accept the default Automatic approval strategy. By accepting the default, when a new version of this Operator is available, Operator Lifecycle Manager (OLM) automatically upgrades the running instance of your Operator without human intervention. If you select Manual updates, when a newer version of an Operator is available, OLM creates an update request. As a cluster administrator, you must then manually approve that update request to have the Operator updated to the new version. Note The Manual approval strategy requires a user with appropriate credentials to approve the Operator install and subscription process. Click Install . Navigate to Operators Installed Operators . On the Installed Operators page, select the openshift-operators project. Wait until you see that the Red Hat OpenShift distributed tracing platform Operator shows a status of "Succeeded" before continuing. 2.5.4. Installing the Kiali Operator You must install the Kiali Operator for the Red Hat OpenShift Service Mesh Operator to install the Service Mesh control plane. Warning Do not install Community versions of the Operators. Community Operators are not supported. Prerequisites Access to the OpenShift Container Platform web console. Procedure Log in to the OpenShift Container Platform web console. Navigate to Operators OperatorHub . Type Kiali into the filter box to find the Kiali Operator. Click the Kiali Operator provided by Red Hat to display information about the Operator. Click Install . On the Operator Installation page, select the stable Update Channel. Select All namespaces on the cluster (default) . This installs the Operator in the default openshift-operators project and makes the Operator available to all projects in the cluster. Select the Automatic Approval Strategy. Note The Manual approval strategy requires a user with appropriate credentials to approve the Operator install and subscription process. Click Install . The Installed Operators page displays the Kiali Operator's installation progress. 2.5.5. Installing the Operators To install Red Hat OpenShift Service Mesh, install following Operators in this order. Repeat the procedure for each Operator. OpenShift Elasticsearch Red Hat OpenShift distributed tracing platform Kiali Red Hat OpenShift Service Mesh Note If you have already installed the OpenShift Elasticsearch Operator as part of OpenShift Logging, you do not need to install the OpenShift Elasticsearch Operator again. The Red Hat OpenShift distributed tracing platform Operator will create the Elasticsearch instance using the installed OpenShift Elasticsearch Operator. Procedure Log in to the OpenShift Container Platform web console as a user with the cluster-admin role. If you use Red Hat OpenShift Dedicated, you must have an account with the dedicated-admin role. In the OpenShift Container Platform web console, click Operators OperatorHub . Type the name of the Operator into the filter box and select the Red Hat version of the Operator. Community versions of the Operators are not supported. Click Install . 
On the Install Operator page for each Operator, accept the default settings. Click Install . Wait until the Operator has installed before repeating the steps for the next Operator in the list. The OpenShift Elasticsearch Operator is installed in the openshift-operators-redhat namespace and is available for all namespaces in the cluster. The Red Hat OpenShift distributed tracing platform Operator is installed in the openshift-distributed-tracing namespace and is available for all namespaces in the cluster. The Kiali and Red Hat OpenShift Service Mesh Operators are installed in the openshift-operators namespace and are available for all namespaces in the cluster. After you have installed all four Operators, click Operators Installed Operators to verify that your Operators are installed. 2.5.6. Deploying the Red Hat OpenShift Service Mesh control plane The ServiceMeshControlPlane resource defines the configuration to be used during installation. You can deploy the default configuration provided by Red Hat or customize the ServiceMeshControlPlane file to fit your business needs. You can deploy the Service Mesh control plane by using the OpenShift Container Platform web console or from the command line using the oc client tool. 2.5.6.1. Deploying the control plane from the web console Follow this procedure to deploy the Red Hat OpenShift Service Mesh control plane by using the web console. In this example, istio-system is the name of the control plane project. Prerequisites The Red Hat OpenShift Service Mesh Operator must be installed. Review the instructions for how to customize the Red Hat OpenShift Service Mesh installation. An account with the cluster-admin role. Procedure Log in to the OpenShift Container Platform web console as a user with the cluster-admin role. Create a project named istio-system . Navigate to Home Projects . Click Create Project . Enter istio-system in the Name field. Click Create . Navigate to Operators Installed Operators . If necessary, select istio-system from the Project menu. You may have to wait a few moments for the Operators to be copied to the new project. Click the Red Hat OpenShift Service Mesh Operator. Under Provided APIs , the Operator provides links to create two resource types: A ServiceMeshControlPlane resource A ServiceMeshMemberRoll resource Under Istio Service Mesh Control Plane , click Create ServiceMeshControlPlane . On the Create Service Mesh Control Plane page, modify the YAML for the default ServiceMeshControlPlane template as needed. Note For additional information about customizing the control plane, see customizing the Red Hat OpenShift Service Mesh installation. For production, you must change the default Jaeger template. Click Create to create the control plane. The Operator creates pods, services, and Service Mesh control plane components based on your configuration parameters. Click the Istio Service Mesh Control Plane tab. Click the name of the new control plane. Click the Resources tab to see the Red Hat OpenShift Service Mesh control plane resources the Operator created and configured. 2.5.6.2. Deploying the control plane from the CLI Follow this procedure to deploy the Red Hat OpenShift Service Mesh control plane from the command line. Prerequisites The Red Hat OpenShift Service Mesh Operator must be installed. Review the instructions for how to customize the Red Hat OpenShift Service Mesh installation. An account with the cluster-admin role. Access to the OpenShift CLI ( oc ).
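For reference, the following is a minimal, illustrative sketch of a ServiceMeshControlPlane resource that could serve as a starting point for the istio-installation.yaml file used in the procedure below. The resource name and the field values shown are examples only, not the complete default template; they use only attributes described elsewhere in this documentation, so base any production configuration on the customization documentation referenced in the procedure.
apiVersion: maistra.io/v1
kind: ServiceMeshControlPlane
metadata:
  name: basic-install        # illustrative name
  namespace: istio-system    # the control plane project
spec:
  istio:
    global:
      mtls:
        enabled: false       # permissive mode; set to true for strict mTLS
    gateways:
      istio-ingressgateway:
        autoscaleEnabled: false
      istio-egressgateway:
        autoscaleEnabled: false
For production deployments, you must also change the default Jaeger template, as noted in the procedure.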
Procedure Log in to the OpenShift Container Platform CLI as a user with the cluster-admin role. USD oc login --username=<NAMEOFUSER> https://<HOSTNAME>:6443 Create a project named istio-system . USD oc new-project istio-system Create a ServiceMeshControlPlane file named istio-installation.yaml using the example found in "Customize the Red Hat OpenShift Service Mesh installation". You can customize the values as needed to match your use case. For production deployments, you must change the default Jaeger template. Run the following command to deploy the control plane: USD oc create -n istio-system -f istio-installation.yaml Execute the following command to see the status of the control plane installation. USD oc get smcp -n istio-system The installation has finished successfully when the STATUS column is ComponentsReady . Run the following command to watch the progress of the Pods during the installation process: USD oc get pods -n istio-system -w You should see output similar to the following: Example output NAME READY STATUS RESTARTS AGE grafana-7bf5764d9d-2b2f6 2/2 Running 0 28h istio-citadel-576b9c5bbd-z84z4 1/1 Running 0 28h istio-egressgateway-5476bc4656-r4zdv 1/1 Running 0 28h istio-galley-7d57b47bb7-lqdxv 1/1 Running 0 28h istio-ingressgateway-dbb8f7f46-ct6n5 1/1 Running 0 28h istio-pilot-546bf69578-ccg5x 2/2 Running 0 28h istio-policy-77fd498655-7pvjw 2/2 Running 0 28h istio-sidecar-injector-df45bd899-ctxdt 1/1 Running 0 28h istio-telemetry-66f697d6d5-cj28l 2/2 Running 0 28h jaeger-896945cbc-7lqrr 2/2 Running 0 11h kiali-78d9c5b87c-snjzh 1/1 Running 0 22h prometheus-6dff867c97-gr2n5 2/2 Running 0 28h For a multitenant installation, Red Hat OpenShift Service Mesh supports multiple independent control planes within the cluster. You can create reusable configurations with ServiceMeshControlPlane templates. For more information, see Creating control plane templates . 2.5.7. Creating the Red Hat OpenShift Service Mesh member roll The ServiceMeshMemberRoll lists the projects that belong to the Service Mesh control plane. Only projects listed in the ServiceMeshMemberRoll are affected by the control plane. A project does not belong to a service mesh until you add it to the member roll for a particular control plane deployment. You must create a ServiceMeshMemberRoll resource named default in the same project as the ServiceMeshControlPlane , for example istio-system . 2.5.7.1. Creating the member roll from the web console You can add one or more projects to the Service Mesh member roll from the web console. In this example, istio-system is the name of the Service Mesh control plane project. Prerequisites An installed, verified Red Hat OpenShift Service Mesh Operator. List of existing projects to add to the service mesh. Procedure Log in to the OpenShift Container Platform web console. If you do not already have services for your mesh, or you are starting from scratch, create a project for your applications. It must be different from the project where you installed the Service Mesh control plane. Navigate to Home Projects . Click Create Project . Enter a name in the Name field. Click Create . Navigate to Operators Installed Operators . Click the Project menu and choose the project where your ServiceMeshControlPlane resource is deployed from the list, for example istio-system . Click the Red Hat OpenShift Service Mesh Operator. Click the Istio Service Mesh Member Roll tab. Click Create ServiceMeshMemberRoll . Click Members , then enter the name of your project in the Value field.
You can add any number of projects, but a project can only belong to one ServiceMeshMemberRoll resource. Click Create . 2.5.7.2. Creating the member roll from the CLI You can add a project to the ServiceMeshMemberRoll from the command line. Prerequisites An installed, verified Red Hat OpenShift Service Mesh Operator. List of projects to add to the service mesh. Access to the OpenShift CLI ( oc ). Procedure Log in to the OpenShift Container Platform CLI. USD oc login --username=<NAMEOFUSER> https://<HOSTNAME>:6443 If you do not already have services for your mesh, or you are starting from scratch, create a project for your applications. It must be different from the project where you installed the Service Mesh control plane. USD oc new-project <your-project> To add your projects as members, modify the following example YAML. You can add any number of projects, but a project can only belong to one ServiceMeshMemberRoll resource. In this example, istio-system is the name of the Service Mesh control plane project. Example servicemeshmemberroll-default.yaml apiVersion: maistra.io/v1 kind: ServiceMeshMemberRoll metadata: name: default namespace: istio-system spec: members: # a list of projects joined into the service mesh - your-project-name - another-project-name Run the following command to upload and create the ServiceMeshMemberRoll resource in the istio-system namespace. USD oc create -n istio-system -f servicemeshmemberroll-default.yaml Run the following command to verify the ServiceMeshMemberRoll was created successfully. USD oc get smmr -n istio-system default The installation has finished successfully when the STATUS column is Configured . 2.5.8. Adding or removing projects from the service mesh You can add or remove projects from an existing Service Mesh ServiceMeshMemberRoll resource using the web console. You can add any number of projects, but a project can only belong to one ServiceMeshMemberRoll resource. The ServiceMeshMemberRoll resource is deleted when its corresponding ServiceMeshControlPlane resource is deleted. 2.5.8.1. Adding or removing projects from the member roll using the web console Prerequisites An installed, verified Red Hat OpenShift Service Mesh Operator. An existing ServiceMeshMemberRoll resource. Name of the project with the ServiceMeshMemberRoll resource. Names of the projects you want to add or remove from the mesh. Procedure Log in to the OpenShift Container Platform web console. Navigate to Operators Installed Operators . Click the Project menu and choose the project where your ServiceMeshControlPlane resource is deployed from the list, for example istio-system . Click the Red Hat OpenShift Service Mesh Operator. Click the Istio Service Mesh Member Roll tab. Click the default link. Click the YAML tab. Modify the YAML to add or remove projects as members. You can add any number of projects, but a project can only belong to one ServiceMeshMemberRoll resource. Click Save . Click Reload . 2.5.8.2. Adding or removing projects from the member roll using the CLI You can modify an existing Service Mesh member roll using the command line. Prerequisites An installed, verified Red Hat OpenShift Service Mesh Operator. An existing ServiceMeshMemberRoll resource. Name of the project with the ServiceMeshMemberRoll resource. Names of the projects you want to add or remove from the mesh. Access to the OpenShift CLI ( oc ). Procedure Log in to the OpenShift Container Platform CLI. Edit the ServiceMeshMemberRoll resource. 
USD oc edit smmr -n <controlplane-namespace> Modify the YAML to add or remove projects as members. You can add any number of projects, but a project can only belong to one ServiceMeshMemberRoll resource. Example servicemeshmemberroll-default.yaml apiVersion: maistra.io/v1 kind: ServiceMeshMemberRoll metadata: name: default namespace: istio-system #control plane project spec: members: # a list of projects joined into the service mesh - your-project-name - another-project-name 2.5.9. Manual updates If you choose to update manually, the Operator Lifecycle Manager (OLM) controls the installation, upgrade, and role-based access control (RBAC) of Operators in a cluster. OLM runs by default in OpenShift Container Platform. OLM uses CatalogSources, which use the Operator Registry API, to query for available Operators as well as upgrades for installed Operators. For more information about how OpenShift Container Platform handles upgrades, refer to the Operator Lifecycle Manager documentation. 2.5.9.1. Updating sidecar proxies To update the configuration for sidecar proxies, the application administrator must restart the application pods. If your deployment uses automatic sidecar injection, you can update the pod template in the deployment by adding or modifying an annotation. Run the following command to redeploy the pods: USD oc patch deployment/<deployment> -p '{"spec":{"template":{"metadata":{"annotations":{"kubectl.kubernetes.io/restartedAt": "'`date -Iseconds`'"}}}}}' If your deployment does not use automatic sidecar injection, you must manually update the sidecars by modifying the sidecar container image specified in the deployment or pod, and then restart the pods. 2.5.10. Next steps Prepare to deploy applications on Red Hat OpenShift Service Mesh. 2.6. Customizing security in a Service Mesh Warning You are viewing documentation for a Red Hat OpenShift Service Mesh release that is no longer supported. Service Mesh version 1.0 and 1.1 control planes are no longer supported. For information about upgrading your service mesh control plane, see Upgrading Service Mesh . For information about the support status of a particular Red Hat OpenShift Service Mesh release, see the Product lifecycle page . If your service mesh application is constructed with a complex array of microservices, you can use Red Hat OpenShift Service Mesh to customize the security of the communication between those services. The infrastructure of OpenShift Container Platform along with the traffic management features of Service Mesh can help you manage the complexity of your applications and provide service and identity security for microservices. 2.6.1. Enabling mutual Transport Layer Security (mTLS) Mutual Transport Layer Security (mTLS) is a protocol where two parties authenticate each other. It is the default mode of authentication in some protocols (IKE, SSH) and optional in others (TLS). mTLS can be used without changes to the application or service code. TLS is handled entirely by the service mesh infrastructure, between the two sidecar proxies. By default, Red Hat OpenShift Service Mesh is set to permissive mode, where the sidecars in Service Mesh accept both plain-text traffic and connections that are encrypted using mTLS. If a service in your mesh is communicating with a service outside the mesh, strict mTLS could break communication between those services. Use permissive mode while you migrate your workloads to Service Mesh. 2.6.1.1.
Enabling strict mTLS across the mesh If your workloads do not communicate with services outside your mesh and communication will not be interrupted by only accepting encrypted connections, you can enable mTLS across your mesh quickly. Set spec.istio.global.mtls.enabled to true in your ServiceMeshControlPlane resource. The Operator creates the required resources. apiVersion: maistra.io/v1 kind: ServiceMeshControlPlane spec: istio: global: mtls: enabled: true 2.6.1.1.1. Configuring sidecars for incoming connections for specific services You can also configure mTLS for individual services or namespaces by creating a policy. apiVersion: "authentication.istio.io/v1alpha1" kind: "Policy" metadata: name: default namespace: <NAMESPACE> spec: peers: - mtls: {} 2.6.1.2. Configuring sidecars for outgoing connections Create a destination rule to configure Service Mesh to use mTLS when sending requests to other services in the mesh. apiVersion: "networking.istio.io/v1alpha3" kind: "DestinationRule" metadata: name: "default" namespace: <CONTROL_PLANE_NAMESPACE> spec: host: "*.local" trafficPolicy: tls: mode: ISTIO_MUTUAL 2.6.1.3. Setting the minimum and maximum protocol versions If your environment has specific requirements for encrypted traffic in your service mesh, you can control the cryptographic functions that are allowed by setting the spec.istio.global.tls.minProtocolVersion or spec.istio.global.tls.maxProtocolVersion attribute in your ServiceMeshControlPlane resource. Those values, configured in your control plane resource, define the minimum and maximum TLS version used by mesh components when communicating securely over TLS. apiVersion: maistra.io/v1 kind: ServiceMeshControlPlane spec: istio: global: tls: minProtocolVersion: TLSv1_2 maxProtocolVersion: TLSv1_3 The default is TLS_AUTO and does not specify a version of TLS. Table 2.3. Valid values Value Description TLS_AUTO default TLSv1_0 TLS version 1.0 TLSv1_1 TLS version 1.1 TLSv1_2 TLS version 1.2 TLSv1_3 TLS version 1.3 2.6.2. Configuring cipher suites and ECDH curves Cipher suites and Elliptic-curve Diffie-Hellman (ECDH curves) can help you secure your service mesh. You can define a comma-separated list of cipher suites using spec.istio.global.tls.cipherSuites and ECDH curves using spec.istio.global.tls.ecdhCurves in your ServiceMeshControlPlane resource. If either of these attributes is empty, then the default values are used. The cipherSuites setting is effective if your service mesh uses TLS 1.2 or earlier. It has no effect when negotiating with TLS 1.3. Set your cipher suites in the comma-separated list in order of priority. For example, ecdhCurves: CurveP256, CurveP384 sets CurveP256 as a higher priority than CurveP384 . Note You must include either TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256 or TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256 when you configure the cipher suite. HTTP/2 support requires at least one of these cipher suites.
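For example, the following ServiceMeshControlPlane snippet is a minimal, illustrative sketch that sets both attributes under spec.istio.global.tls. The specific suites and curves shown are example choices taken from the supported values listed below, not recommended defaults.
apiVersion: maistra.io/v1
kind: ServiceMeshControlPlane
spec:
  istio:
    global:
      tls:
        # Comma-separated, highest priority first; include at least one
        # HTTP/2-capable suite as described in the preceding note
        cipherSuites: TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256
        # ECDH curves, highest priority first
        ecdhCurves: CurveP256,CurveP384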
The supported cipher suites are: TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256 TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256 TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256 TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256 TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384 TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384 TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256 TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256 TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA TLS_RSA_WITH_AES_128_GCM_SHA256 TLS_RSA_WITH_AES_256_GCM_SHA384 TLS_RSA_WITH_AES_128_CBC_SHA256 TLS_RSA_WITH_AES_128_CBC_SHA TLS_RSA_WITH_AES_256_CBC_SHA TLS_ECDHE_RSA_WITH_3DES_EDE_CBC_SHA TLS_RSA_WITH_3DES_EDE_CBC_SHA The supported ECDH Curves are: CurveP256 CurveP384 CurveP521 X25519 2.6.3. Adding an external certificate authority key and certificate By default, Red Hat OpenShift Service Mesh generates self-signed root certificate and key, and uses them to sign the workload certificates. You can also use the user-defined certificate and key to sign workload certificates, with user-defined root certificate. This task demonstrates an example to plug certificates and key into Service Mesh. Prerequisites You must have installed Red Hat OpenShift Service Mesh with mutual TLS enabled to configure certificates. This example uses the certificates from the Maistra repository . For production, use your own certificates from your certificate authority. You must deploy the Bookinfo sample application to verify the results with these instructions. 2.6.3.1. Adding an existing certificate and key To use an existing signing (CA) certificate and key, you must create a chain of trust file that includes the CA certificate, key, and root certificate. You must use the following exact file names for each of the corresponding certificates. The CA certificate is called ca-cert.pem , the key is ca-key.pem , and the root certificate, which signs ca-cert.pem , is called root-cert.pem . If your workload uses intermediate certificates, you must specify them in a cert-chain.pem file. Add the certificates to Service Mesh by following these steps. Save the example certificates from the Maistra repo locally and replace <path> with the path to your certificates. Create a secret cacert that includes the input files ca-cert.pem , ca-key.pem , root-cert.pem and cert-chain.pem . USD oc create secret generic cacerts -n istio-system --from-file=<path>/ca-cert.pem \ --from-file=<path>/ca-key.pem --from-file=<path>/root-cert.pem \ --from-file=<path>/cert-chain.pem In the ServiceMeshControlPlane resource set global.mtls.enabled to true and security.selfSigned set to false . Service Mesh reads the certificates and key from the secret-mount files. apiVersion: maistra.io/v1 kind: ServiceMeshControlPlane spec: istio: global: mtls: enabled: true security: selfSigned: false To make sure the workloads add the new certificates promptly, delete the secrets generated by Service Mesh, named istio.* . In this example, istio.default . Service Mesh issues new certificates for the workloads. USD oc delete secret istio.default 2.6.3.2. Verifying your certificates Use the Bookinfo sample application to verify your certificates are mounted correctly. First, retrieve the mounted certificates. Then, verify the certificates mounted on the pod. Store the pod name in the variable RATINGSPOD . USD RATINGSPOD=`oc get pods -l app=ratings -o jsonpath='{.items[0].metadata.name}'` Run the following commands to retrieve the certificates mounted on the proxy. 
USD oc exec -it USDRATINGSPOD -c istio-proxy -- /bin/cat /etc/certs/root-cert.pem > /tmp/pod-root-cert.pem The file /tmp/pod-root-cert.pem contains the root certificate propagated to the pod. USD oc exec -it USDRATINGSPOD -c istio-proxy -- /bin/cat /etc/certs/cert-chain.pem > /tmp/pod-cert-chain.pem The file /tmp/pod-cert-chain.pem contains the workload certificate and the CA certificate propagated to the pod. Verify the root certificate is the same as the one specified by the Operator. Replace <path> with the path to your certificates. USD openssl x509 -in <path>/root-cert.pem -text -noout > /tmp/root-cert.crt.txt USD openssl x509 -in /tmp/pod-root-cert.pem -text -noout > /tmp/pod-root-cert.crt.txt USD diff /tmp/root-cert.crt.txt /tmp/pod-root-cert.crt.txt Expect the output to be empty. Verify the CA certificate is the same as the one specified by Operator. Replace <path> with the path to your certificates. USD sed '0,/^-----END CERTIFICATE-----/d' /tmp/pod-cert-chain.pem > /tmp/pod-cert-chain-ca.pem USD openssl x509 -in <path>/ca-cert.pem -text -noout > /tmp/ca-cert.crt.txt USD openssl x509 -in /tmp/pod-cert-chain-ca.pem -text -noout > /tmp/pod-cert-chain-ca.crt.txt USD diff /tmp/ca-cert.crt.txt /tmp/pod-cert-chain-ca.crt.txt Expect the output to be empty. Verify the certificate chain from the root certificate to the workload certificate. Replace <path> with the path to your certificates. USD head -n 21 /tmp/pod-cert-chain.pem > /tmp/pod-cert-chain-workload.pem USD openssl verify -CAfile <(cat <path>/ca-cert.pem <path>/root-cert.pem) /tmp/pod-cert-chain-workload.pem Example output /tmp/pod-cert-chain-workload.pem: OK 2.6.3.3. Removing the certificates To remove the certificates you added, follow these steps. Remove the secret cacerts . USD oc delete secret cacerts -n istio-system Redeploy Service Mesh with a self-signed root certificate in the ServiceMeshControlPlane resource. apiVersion: maistra.io/v1 kind: ServiceMeshControlPlane spec: istio: global: mtls: enabled: true security: selfSigned: true 2.7. Traffic management Warning You are viewing documentation for a Red Hat OpenShift Service Mesh release that is no longer supported. Service Mesh version 1.0 and 1.1 control planes are no longer supported. For information about upgrading your service mesh control plane, see Upgrading Service Mesh . For information about the support status of a particular Red Hat OpenShift Service Mesh release, see the Product lifecycle page . You can control the flow of traffic and API calls between services in Red Hat OpenShift Service Mesh. For example, some services in your service mesh may need to communicate within the mesh and others may need to be hidden. Manage the traffic to hide specific backend services, expose services, create testing or versioning deployments, or add a security layer on a set of services. 2.7.1. Using gateways You can use a gateway to manage inbound and outbound traffic for your mesh to specify which traffic you want to enter or leave the mesh. Gateway configurations are applied to standalone Envoy proxies that are running at the edge of the mesh, rather than sidecar Envoy proxies running alongside your service workloads. Unlike other mechanisms for controlling traffic entering your systems, such as the Kubernetes Ingress APIs, Red Hat OpenShift Service Mesh gateways allow you to use the full power and flexibility of traffic routing. 
The Red Hat OpenShift Service Mesh gateway resource can layer 4-6 load balancing properties, such as ports, to expose and configure Red Hat OpenShift Service Mesh TLS settings. Instead of adding application-layer traffic routing (L7) to the same API resource, you can bind a regular Red Hat OpenShift Service Mesh virtual service to the gateway and manage gateway traffic like any other data plane traffic in a service mesh. Gateways are primarily used to manage ingress traffic, but you can also configure egress gateways. An egress gateway lets you configure a dedicated exit node for the traffic leaving the mesh. This enables you to limit which services have access to external networks, which adds security control to your service mesh. You can also use a gateway to configure a purely internal proxy. Gateway example A gateway resource describes a load balancer operating at the edge of the mesh receiving incoming or outgoing HTTP/TCP connections. The specification describes a set of ports that should be exposed, the type of protocol to use, SNI configuration for the load balancer, and so on. The following example shows a sample gateway configuration for external HTTPS ingress traffic: apiVersion: networking.istio.io/v1alpha3 kind: Gateway metadata: name: ext-host-gwy spec: selector: istio: ingressgateway # use istio default controller servers: - port: number: 443 name: https protocol: HTTPS hosts: - ext-host.example.com tls: mode: SIMPLE serverCertificate: /tmp/tls.crt privateKey: /tmp/tls.key This gateway configuration lets HTTPS traffic from ext-host.example.com into the mesh on port 443, but doesn't specify any routing for the traffic. To specify routing and for the gateway to work as intended, you must also bind the gateway to a virtual service. You do this using the virtual service's gateways field, as shown in the following example: apiVersion: networking.istio.io/v1alpha3 kind: VirtualService metadata: name: virtual-svc spec: hosts: - ext-host.example.com gateways: - ext-host-gwy You can then configure the virtual service with routing rules for the external traffic. 2.7.2. Configuring an ingress gateway An ingress gateway is a load balancer operating at the edge of the mesh that receives incoming HTTP/TCP connections. It configures exposed ports and protocols but does not include any traffic routing configuration. Traffic routing for ingress traffic is instead configured with routing rules, the same way as for internal service requests. The following steps show how to create a gateway and configure a VirtualService to expose a service in the Bookinfo sample application to outside traffic for paths /productpage and /login . Procedure Create a gateway to accept traffic. Create a YAML file, and copy the following YAML into it. Gateway example gateway.yaml apiVersion: networking.istio.io/v1alpha3 kind: Gateway metadata: name: bookinfo-gateway spec: selector: istio: ingressgateway servers: - port: number: 80 name: http protocol: HTTP hosts: - "*" Apply the YAML file. USD oc apply -f gateway.yaml Create a VirtualService object to rewrite the host header. Create a YAML file, and copy the following YAML into it. Virtual service example apiVersion: networking.istio.io/v1alpha3 kind: VirtualService metadata: name: bookinfo spec: hosts: - "*" gateways: - bookinfo-gateway http: - match: - uri: exact: /productpage - uri: prefix: /static - uri: exact: /login - uri: exact: /logout - uri: prefix: /api/v1/products route: - destination: host: productpage port: number: 9080 Apply the YAML file. 
USD oc apply -f vs.yaml Test that the gateway and VirtualService have been set correctly. Set the Gateway URL. export GATEWAY_URL=USD(oc -n istio-system get route istio-ingressgateway -o jsonpath='{.spec.host}') Set the port number. In this example, istio-system is the name of the Service Mesh control plane project. export TARGET_PORT=USD(oc -n istio-system get route istio-ingressgateway -o jsonpath='{.spec.port.targetPort}') Test a page that has been explicitly exposed. curl -s -I "USDGATEWAY_URL/productpage" The expected result is 200 . 2.7.3. Managing ingress traffic In Red Hat OpenShift Service Mesh, the Ingress Gateway enables features such as monitoring, security, and route rules to apply to traffic that enters the cluster. Use a Service Mesh gateway to expose a service outside of the service mesh. 2.7.3.1. Determining the ingress IP and ports Ingress configuration differs depending on if your environment supports an external load balancer. An external load balancer is set in the ingress IP and ports for the cluster. To determine if your cluster's IP and ports are configured for external load balancers, run the following command. In this example, istio-system is the name of the Service Mesh control plane project. USD oc get svc istio-ingressgateway -n istio-system That command returns the NAME , TYPE , CLUSTER-IP , EXTERNAL-IP , PORT(S) , and AGE of each item in your namespace. If the EXTERNAL-IP value is set, your environment has an external load balancer that you can use for the ingress gateway. If the EXTERNAL-IP value is <none> , or perpetually <pending> , your environment does not provide an external load balancer for the ingress gateway. You can access the gateway using the service's node port . 2.7.3.1.1. Determining ingress ports with a load balancer Follow these instructions if your environment has an external load balancer. Procedure Run the following command to set the ingress IP and ports. This command sets a variable in your terminal. USD export INGRESS_HOST=USD(oc -n istio-system get service istio-ingressgateway -o jsonpath='{.status.loadBalancer.ingress[0].ip}') Run the following command to set the ingress port. USD export INGRESS_PORT=USD(oc -n istio-system get service istio-ingressgateway -o jsonpath='{.spec.ports[?(@.name=="http2")].port}') Run the following command to set the secure ingress port. USD export SECURE_INGRESS_PORT=USD(oc -n istio-system get service istio-ingressgateway -o jsonpath='{.spec.ports[?(@.name=="https")].port}') Run the following command to set the TCP ingress port. USD export TCP_INGRESS_PORT=USD(kubectl -n istio-system get service istio-ingressgateway -o jsonpath='{.spec.ports[?(@.name=="tcp")].port}') Note In some environments, the load balancer may be exposed using a hostname instead of an IP address. For that case, the ingress gateway's EXTERNAL-IP value is not an IP address. Instead, it's a hostname, and the command fails to set the INGRESS_HOST environment variable. In that case, use the following command to correct the INGRESS_HOST value: USD export INGRESS_HOST=USD(oc -n istio-system get service istio-ingressgateway -o jsonpath='{.status.loadBalancer.ingress[0].hostname}') 2.7.3.1.2. Determining ingress ports without a load balancer If your environment does not have an external load balancer, determine the ingress ports and use a node port instead. Procedure Set the ingress ports. 
USD export INGRESS_PORT=USD(oc -n istio-system get service istio-ingressgateway -o jsonpath='{.spec.ports[?(@.name=="http2")].nodePort}') Run the following command to set the secure ingress port. USD export SECURE_INGRESS_PORT=USD(oc -n istio-system get service istio-ingressgateway -o jsonpath='{.spec.ports[?(@.name=="https")].nodePort}') Run the following command to set the TCP ingress port. USD export TCP_INGRESS_PORT=USD(kubectl -n istio-system get service istio-ingressgateway -o jsonpath='{.spec.ports[?(@.name=="tcp")].nodePort}') 2.7.4. Automatic route creation OpenShift routes for Istio Gateways are automatically managed in Red Hat OpenShift Service Mesh. Every time an Istio Gateway is created, updated or deleted inside the service mesh, an OpenShift route is created, updated or deleted. 2.7.4.1. Enabling Automatic Route Creation A Red Hat OpenShift Service Mesh control plane component called Istio OpenShift Routing (IOR) synchronizes the gateway route. Enable IOR as part of the control plane deployment. If the Gateway contains a TLS section, the OpenShift Route will be configured to support TLS. In the ServiceMeshControlPlane resource, add the ior_enabled parameter and set it to true . For example, see the following resource snippet: spec: istio: gateways: istio-egressgateway: autoscaleEnabled: false autoscaleMin: 1 autoscaleMax: 5 istio-ingressgateway: autoscaleEnabled: false autoscaleMin: 1 autoscaleMax: 5 ior_enabled: true 2.7.4.2. Subdomains Red Hat OpenShift Service Mesh creates the route with the subdomain, but OpenShift Container Platform must be configured to enable it. Subdomains, for example *.domain.com , are supported but not by default. Configure an OpenShift Container Platform wildcard policy before configuring a wildcard host Gateway. For more information, see the "Links" section. If the following gateway is created: apiVersion: networking.istio.io/v1alpha3 kind: Gateway metadata: name: gateway1 spec: selector: istio: ingressgateway servers: - port: number: 80 name: http protocol: HTTP hosts: - www.bookinfo.com - bookinfo.example.com Then, the following OpenShift Routes are created automatically. You can check that the routes are created with the following command. USD oc -n <control_plane_namespace> get routes Expected output NAME HOST/PORT PATH SERVICES PORT TERMINATION WILDCARD gateway1-lvlfn bookinfo.example.com istio-ingressgateway <all> None gateway1-scqhv www.bookinfo.com istio-ingressgateway <all> None If the gateway is deleted, Red Hat OpenShift Service Mesh deletes the routes. However, routes created manually are never modified by Red Hat OpenShift Service Mesh. 2.7.5. Understanding service entries A service entry adds an entry to the service registry that Red Hat OpenShift Service Mesh maintains internally. After you add the service entry, the Envoy proxies send traffic to the service as if it is a service in your mesh. Service entries allow you to do the following: Manage traffic for services that run outside of the service mesh. Redirect and forward traffic for external destinations (such as, APIs consumed from the web) or traffic to services in legacy infrastructure. Define retry, timeout, and fault injection policies for external destinations. Run a mesh service in a Virtual Machine (VM) by adding VMs to your mesh. Note Add services from a different cluster to the mesh to configure a multicluster Red Hat OpenShift Service Mesh mesh on Kubernetes. 
Service entry examples The following example is a mesh-external service entry that adds the ext-resource external dependency to the Red Hat OpenShift Service Mesh service registry: apiVersion: networking.istio.io/v1alpha3 kind: ServiceEntry metadata: name: svc-entry spec: hosts: - ext-svc.example.com ports: - number: 443 name: https protocol: HTTPS location: MESH_EXTERNAL resolution: DNS Specify the external resource using the hosts field. You can qualify it fully or use a wildcard prefixed domain name. You can configure virtual services and destination rules to control traffic to a service entry in the same way you configure traffic for any other service in the mesh. For example, the following destination rule configures the traffic route to use mutual TLS to secure the connection to the ext-svc.example.com external service that is configured using the service entry: apiVersion: networking.istio.io/v1alpha3 kind: DestinationRule metadata: name: ext-res-dr spec: host: ext-svc.example.com trafficPolicy: tls: mode: MUTUAL clientCertificate: /etc/certs/myclientcert.pem privateKey: /etc/certs/client_private_key.pem caCertificates: /etc/certs/rootcacerts.pem 2.7.6. Using VirtualServices You can route requests dynamically to multiple versions of a microservice through Red Hat OpenShift Service Mesh with a virtual service. With virtual services, you can: Address multiple application services through a single virtual service. If your mesh uses Kubernetes, for example, you can configure a virtual service to handle all services in a specific namespace. A virtual service enables you to turn a monolithic application into a service consisting of distinct microservices with a seamless consumer experience. Configure traffic rules in combination with gateways to control ingress and egress traffic. 2.7.6.1. Configuring VirtualServices Requests are routed to services within a service mesh with virtual services. Each virtual service consists of a set of routing rules that are evaluated in order. Red Hat OpenShift Service Mesh matches each given request to the virtual service to a specific real destination within the mesh. Without virtual services, Red Hat OpenShift Service Mesh distributes traffic using round-robin load balancing between all service instances. With a virtual service, you can specify traffic behavior for one or more hostnames. Routing rules in the virtual service tell Red Hat OpenShift Service Mesh how to send the traffic for the virtual service to appropriate destinations. Route destinations can be versions of the same service or entirely different services. Procedure Create a YAML file using the following example to route requests to different versions of the Bookinfo sample application service depending on which user connects to the application. Example VirtualService.yaml apiVersion: networking.istio.io/v1alpha3 kind: VirtualService metadata: name: reviews spec: hosts: - reviews http: - match: - headers: end-user: exact: jason route: - destination: host: reviews subset: v2 - route: - destination: host: reviews subset: v3 Run the following command to apply VirtualService.yaml , where VirtualService.yaml is the path to the file. USD oc apply -f <VirtualService.yaml> 2.7.6.2. VirtualService configuration reference Parameter Description The hosts field lists the virtual service's destination address to which the routing rules apply. This is the address(es) that are used to send requests to the service. 
The virtual service hostname can be an IP address, a DNS name, or a short name that resolves to a fully qualified domain name. The http section contains the virtual service's routing rules which describe match conditions and actions for routing HTTP/1.1, HTTP2, and gRPC traffic sent to the destination as specified in the hosts field. A routing rule consists of the destination where you want the traffic to go and any specified match conditions. The first routing rule in the example has a condition that begins with the match field. In this example, this routing applies to all requests from the user jason . Add the headers , end-user , and exact fields to select the appropriate requests. The destination field in the route section specifies the actual destination for traffic that matches this condition. Unlike the virtual service's host, the destination's host must be a real destination that exists in the Red Hat OpenShift Service Mesh service registry. This can be a mesh service with proxies or a non-mesh service added using a service entry. In this example, the hostname is a Kubernetes service name: 2.7.7. Understanding destination rules Destination rules are applied after virtual service routing rules are evaluated, so they apply to the traffic's real destination. Virtual services route traffic to a destination. Destination rules configure what happens to traffic at that destination. By default, Red Hat OpenShift Service Mesh uses a round-robin load balancing policy, where each service instance in the pool gets a request in turn. Red Hat OpenShift Service Mesh also supports the following models, which you can specify in destination rules for requests to a particular service or service subset. Random: Requests are forwarded at random to instances in the pool. Weighted: Requests are forwarded to instances in the pool according to a specific percentage. Least requests: Requests are forwarded to instances with the least number of requests. Destination rule example The following example destination rule configures three different subsets for the my-svc destination service, with different load balancing policies: apiVersion: networking.istio.io/v1alpha3 kind: DestinationRule metadata: name: my-destination-rule spec: host: my-svc trafficPolicy: loadBalancer: simple: RANDOM subsets: - name: v1 labels: version: v1 - name: v2 labels: version: v2 trafficPolicy: loadBalancer: simple: ROUND_ROBIN - name: v3 labels: version: v3 This guide references the Bookinfo sample application to provide examples of routing in an example application. Install the Bookinfo application to learn how these routing examples work. 2.7.8. Bookinfo routing tutorial The Service Mesh Bookinfo sample application consists of four separate microservices, each with multiple versions. After installing the Bookinfo sample application, three different versions of the reviews microservice run concurrently. When you access the Bookinfo app /product page in a browser and refresh several times, sometimes the book review output contains star ratings and other times it does not. Without an explicit default service version to route to, Service Mesh routes requests to all available versions one after the other. This tutorial helps you apply rules that route all traffic to v1 (version 1) of the microservices. Later, you can apply a rule to route traffic based on the value of an HTTP request header. Prerequisites: Deploy the Bookinfo sample application to work with the following examples. 2.7.8.1. 
Applying a virtual service In the following procedure, the virtual service routes all traffic to v1 of each micro-service by applying virtual services that set the default version for the micro-services. Procedure Apply the virtual services. USD oc apply -f https://raw.githubusercontent.com/Maistra/istio/maistra-2.2/samples/bookinfo/networking/virtual-service-all-v1.yaml To verify that you applied the virtual services, display the defined routes with the following command: USD oc get virtualservices -o yaml That command returns a resource of kind: VirtualService in YAML format. You have configured Service Mesh to route to the v1 version of the Bookinfo microservices including the reviews service version 1. 2.7.8.2. Testing the new route configuration Test the new configuration by refreshing the /productpage of the Bookinfo application. Procedure Set the value for the GATEWAY_URL parameter. You can use this variable to find the URL for your Bookinfo product page later. In this example, istio-system is the name of the control plane project. export GATEWAY_URL=USD(oc -n istio-system get route istio-ingressgateway -o jsonpath='{.spec.host}') Run the following command to retrieve the URL for the product page. echo "http://USDGATEWAY_URL/productpage" Open the Bookinfo site in your browser. The reviews part of the page displays with no rating stars, no matter how many times you refresh. This is because you configured Service Mesh to route all traffic for the reviews service to the version reviews:v1 and this version of the service does not access the star ratings service. Your service mesh now routes traffic to one version of a service. 2.7.8.3. Route based on user identity Change the route configuration so that all traffic from a specific user is routed to a specific service version. In this case, all traffic from a user named jason will be routed to the service reviews:v2 . Service Mesh does not have any special, built-in understanding of user identity. This example is enabled by the fact that the productpage service adds a custom end-user header to all outbound HTTP requests to the reviews service. Procedure Run the following command to enable user-based routing in the Bookinfo sample application. USD oc apply -f https://raw.githubusercontent.com/Maistra/istio/maistra-2.2/samples/bookinfo/networking/virtual-service-reviews-test-v2.yaml Run the following command to confirm the rule is created. This command returns all resources of kind: VirtualService in YAML format. USD oc get virtualservice reviews -o yaml On the /productpage of the Bookinfo app, log in as user jason with no password. Refresh the browser. The star ratings appear to each review. Log in as another user (pick any name you want). Refresh the browser. Now the stars are gone. Traffic is now routed to reviews:v1 for all users except Jason. You have successfully configured the Bookinfo sample application to route traffic based on user identity. 2.7.9. Additional resources For more information about configuring an OpenShift Container Platform wildcard policy, see Using wildcard routes . 2.8. Deploying applications on Service Mesh Warning You are viewing documentation for a Red Hat OpenShift Service Mesh release that is no longer supported. Service Mesh version 1.0 and 1.1 control planes are no longer supported. For information about upgrading your service mesh control plane, see Upgrading Service Mesh . For information about the support status of a particular Red Hat OpenShift Service Mesh release, see the Product lifecycle page . 
When you deploy an application into the Service Mesh, there are several differences between the behavior of applications in the upstream community version of Istio and the behavior of applications within a Red Hat OpenShift Service Mesh installation. 2.8.1. Prerequisites Review Comparing Red Hat OpenShift Service Mesh and upstream Istio community installations Review Installing Red Hat OpenShift Service Mesh 2.8.2. Creating control plane templates You can create reusable configurations with ServiceMeshControlPlane templates. Individual users can extend the templates they create with their own configurations. Templates can also inherit configuration information from other templates. For example, you can create an accounting control plane for the accounting team and a marketing control plane for the marketing team. If you create a development template and a production template, members of the marketing team and the accounting team can extend the development and production templates with team specific customization. When you configure control plane templates, which follow the same syntax as the ServiceMeshControlPlane , users inherit settings in a hierarchical fashion. The Operator is delivered with a default template with default settings for Red Hat OpenShift Service Mesh. To add custom templates you must create a ConfigMap named smcp-templates in the openshift-operators project and mount the ConfigMap in the Operator container at /usr/local/share/istio-operator/templates . 2.8.2.1. Creating the ConfigMap Follow this procedure to create the ConfigMap. Prerequisites An installed, verified Service Mesh Operator. An account with the cluster-admin role. Location of the Operator deployment. Access to the OpenShift Container Platform Command-line Interface (CLI) also known as oc . Procedure Log in to the OpenShift Container Platform CLI as a cluster administrator. From the CLI, run this command to create the ConfigMap named smcp-templates in the openshift-operators project and replace <templates-directory> with the location of the ServiceMeshControlPlane files on your local disk: USD oc create configmap --from-file=<templates-directory> smcp-templates -n openshift-operators Locate the Operator ClusterServiceVersion name. USD oc get clusterserviceversion -n openshift-operators | grep 'Service Mesh' Example output maistra.v1.0.0 Red Hat OpenShift Service Mesh 1.0.0 Succeeded Edit the Operator cluster service version to instruct the Operator to use the smcp-templates ConfigMap. USD oc edit clusterserviceversion -n openshift-operators maistra.v1.0.0 Add a volume mount and volume to the Operator deployment. deployments: - name: istio-operator spec: template: spec: containers: volumeMounts: - name: discovery-cache mountPath: /home/istio-operator/.kube/cache/discovery - name: smcp-templates mountPath: /usr/local/share/istio-operator/templates/ volumes: - name: discovery-cache emptyDir: medium: Memory - name: smcp-templates configMap: name: smcp-templates ... Save your changes and exit the editor. You can now use the template parameter in the ServiceMeshControlPlane to specify a template. apiVersion: maistra.io/v1 kind: ServiceMeshControlPlane metadata: name: minimal-install spec: template: default 2.8.3. Enabling automatic sidecar injection When deploying an application, you must opt-in to injection by configuring the annotation sidecar.istio.io/inject in spec.template.metadata.annotations to true in the deployment object. 
Opting in ensures that the sidecar injection does not interfere with other OpenShift Container Platform features such as builder pods used by numerous frameworks within the OpenShift Container Platform ecosystem. Prerequisites Identify the namespaces that are part of your service mesh and the deployments that need automatic sidecar injection. Procedure To find your deployments use the oc get command. USD oc get deployment -n <namespace> For example, to view the deployment file for the 'ratings-v1' microservice in the bookinfo namespace, use the following command to see the resource in YAML format. USD oc get deployment -n bookinfo ratings-v1 -o yaml Open the application's deployment configuration YAML file in an editor. Add the sidecar.istio.io/inject annotation to spec.template.metadata.annotations in your Deployment YAML and set its value to true as shown in the following example. Example snippet from bookinfo deployment-ratings-v1.yaml apiVersion: apps/v1 kind: Deployment metadata: name: ratings-v1 namespace: bookinfo labels: app: ratings version: v1 spec: template: metadata: annotations: sidecar.istio.io/inject: 'true' Save the Deployment configuration file. Add the file back to the project that contains your app. USD oc apply -n <namespace> -f deployment.yaml In this example, bookinfo is the name of the project that contains the ratings-v1 app and deployment-ratings-v1.yaml is the file you edited. USD oc apply -n bookinfo -f deployment-ratings-v1.yaml To verify that the resource uploaded successfully, run the following command. USD oc get deployment -n <namespace> <deploymentName> -o yaml For example, USD oc get deployment -n bookinfo ratings-v1 -o yaml 2.8.4. Setting proxy environment variables through annotations Configuration for the Envoy sidecar proxies is managed by the ServiceMeshControlPlane . You can set environment variables for the sidecar proxy for applications by adding pod annotations to the deployment in the injection-template.yaml file. The environment variables are injected into the sidecar. Example injection-template.yaml apiVersion: apps/v1 kind: Deployment metadata: name: resource spec: replicas: 7 selector: matchLabels: app: resource template: metadata: annotations: sidecar.maistra.io/proxyEnv: "{ \"maistra_test_env\": \"env_value\", \"maistra_test_env_2\": \"env_value_2\" }" Warning You should never include maistra.io/ labels and annotations when creating your own custom resources. These labels and annotations indicate that the resources are generated and managed by the Operator. If you are copying content from an Operator-generated resource when creating your own resources, do not include labels or annotations that start with maistra.io/ . Resources that include these labels or annotations will be overwritten or deleted by the Operator during the reconciliation. 2.8.5. Updating Mixer policy enforcement In previous versions of Red Hat OpenShift Service Mesh, Mixer's policy enforcement was enabled by default. Mixer policy enforcement is now disabled by default. You must enable it before running policy tasks. Prerequisites Access to the OpenShift Container Platform Command-line Interface (CLI) also known as oc . Note The examples use <istio-system> as the control plane namespace. Replace this value with the namespace where you deployed the Service Mesh Control Plane (SMCP). Procedure Log in to the OpenShift Container Platform CLI.
Run this command to check the current Mixer policy enforcement status: USD oc get cm -n <istio-system> istio -o jsonpath='{.data.mesh}' | grep disablePolicyChecks If disablePolicyChecks: true , edit the Service Mesh ConfigMap: USD oc edit cm -n <istio-system> istio Locate disablePolicyChecks: true within the ConfigMap and change the value to false . Save the configuration and exit the editor. Re-check the Mixer policy enforcement status to ensure it is set to false . 2.8.5.1. Setting the correct network policy Service Mesh creates network policies in the Service Mesh control plane and member namespaces to allow traffic between them. Before you deploy, consider the following conditions to ensure that services in your service mesh that were previously exposed through an OpenShift Container Platform route continue to work correctly. Traffic into the service mesh must always go through the ingress-gateway for Istio to work properly. Deploy services external to the service mesh in separate namespaces that are not in any service mesh. Non-mesh services that need to be deployed within a namespace that is a member of the service mesh should label their deployments maistra.io/expose-route: "true" , which ensures OpenShift Container Platform routes to these services still work. 2.8.6. Bookinfo example application The Bookinfo example application allows you to test your Red Hat OpenShift Service Mesh 2.2.3 installation on OpenShift Container Platform. The Bookinfo application displays information about a book, similar to a single catalog entry of an online book store. The application displays a page that describes the book, book details (ISBN, number of pages, and other information), and book reviews. The Bookinfo application consists of these microservices: The productpage microservice calls the details and reviews microservices to populate the page. The details microservice contains book information. The reviews microservice contains book reviews. It also calls the ratings microservice. The ratings microservice contains book ranking information that accompanies a book review. There are three versions of the reviews microservice: Version v1 does not call the ratings Service. Version v2 calls the ratings Service and displays each rating as one to five black stars. Version v3 calls the ratings Service and displays each rating as one to five red stars. 2.8.6.1. Installing the Bookinfo application This tutorial walks you through how to create a sample application by creating a project, deploying the Bookinfo application to that project, and viewing the running application in Service Mesh. Prerequisites: OpenShift Container Platform 4.1 or higher installed. Red Hat OpenShift Service Mesh 2.2.3 installed. Access to the OpenShift CLI ( oc ). An account with the cluster-admin role. Note The Bookinfo sample application cannot be installed on IBM Z and IBM Power Systems. Note The commands in this section assume the Service Mesh control plane project is istio-system . If you installed the control plane in another namespace, edit each command before you run it. Procedure Log in to the OpenShift Container Platform web console as a user with cluster-admin rights. If you use Red Hat OpenShift Dedicated, you must have an account with the dedicated-admin role. Click Home Projects . Click Create Project . Enter bookinfo as the Project Name , enter a Display Name , and enter a Description , then click Create . Alternatively, you can run this command from the CLI to create the bookinfo project. USD oc new-project bookinfo Click Operators Installed Operators .
Click the Project menu and use the Service Mesh control plane namespace. In this example, use istio-system . Click the Red Hat OpenShift Service Mesh Operator. Click the Istio Service Mesh Member Roll tab. If you have already created a Istio Service Mesh Member Roll, click the name, then click the YAML tab to open the YAML editor. If you have not created a ServiceMeshMemberRoll , click Create ServiceMeshMemberRoll . Click Members , then enter the name of your project in the Value field. Click Create to save the updated Service Mesh Member Roll. Or, save the following example to a YAML file. Bookinfo ServiceMeshMemberRoll example servicemeshmemberroll-default.yaml apiVersion: maistra.io/v1 kind: ServiceMeshMemberRoll metadata: name: default spec: members: - bookinfo Run the following command to upload that file and create the ServiceMeshMemberRoll resource in the istio-system namespace. In this example, istio-system is the name of the Service Mesh control plane project. USD oc create -n istio-system -f servicemeshmemberroll-default.yaml Run the following command to verify the ServiceMeshMemberRoll was created successfully. USD oc get smmr -n istio-system -o wide The installation has finished successfully when the STATUS column is Configured . NAME READY STATUS AGE MEMBERS default 1/1 Configured 70s ["bookinfo"] From the CLI, deploy the Bookinfo application in the `bookinfo` project by applying the bookinfo.yaml file: USD oc apply -n bookinfo -f https://raw.githubusercontent.com/Maistra/istio/maistra-2.2/samples/bookinfo/platform/kube/bookinfo.yaml You should see output similar to the following: service/details created serviceaccount/bookinfo-details created deployment.apps/details-v1 created service/ratings created serviceaccount/bookinfo-ratings created deployment.apps/ratings-v1 created service/reviews created serviceaccount/bookinfo-reviews created deployment.apps/reviews-v1 created deployment.apps/reviews-v2 created deployment.apps/reviews-v3 created service/productpage created serviceaccount/bookinfo-productpage created deployment.apps/productpage-v1 created Create the ingress gateway by applying the bookinfo-gateway.yaml file: USD oc apply -n bookinfo -f https://raw.githubusercontent.com/Maistra/istio/maistra-2.2/samples/bookinfo/networking/bookinfo-gateway.yaml You should see output similar to the following: gateway.networking.istio.io/bookinfo-gateway created virtualservice.networking.istio.io/bookinfo created Set the value for the GATEWAY_URL parameter: USD export GATEWAY_URL=USD(oc -n istio-system get route istio-ingressgateway -o jsonpath='{.spec.host}') 2.8.6.2. Adding default destination rules Before you can use the Bookinfo application, you must first add default destination rules. There are two preconfigured YAML files, depending on whether or not you enabled mutual transport layer security (TLS) authentication. 
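The following sketch illustrates the kind of destination rule that the mutual TLS variant applies for the reviews service. It is based on the upstream Bookinfo sample and is shown for reference only; it is not the verbatim contents of the preconfigured file:

apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: reviews
spec:
  host: reviews
  # Assumed mTLS setting; the non-mTLS variant typically omits this trafficPolicy block
  trafficPolicy:
    tls:
      mode: ISTIO_MUTUAL
  subsets:
  - name: v1
    labels:
      version: v1
  - name: v2
    labels:
      version: v2
  - name: v3
    labels:
      version: v3

The subsets defined here are what the routing examples in the preceding sections refer to when they direct traffic to reviews:v1 or reviews:v2 .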
Procedure To add destination rules, run one of the following commands: If you did not enable mutual TLS: USD oc apply -n bookinfo -f https://raw.githubusercontent.com/Maistra/istio/maistra-2.2/samples/bookinfo/networking/destination-rule-all.yaml If you enabled mutual TLS: USD oc apply -n bookinfo -f https://raw.githubusercontent.com/Maistra/istio/maistra-2.2/samples/bookinfo/networking/destination-rule-all-mtls.yaml You should see output similar to the following: destinationrule.networking.istio.io/productpage created destinationrule.networking.istio.io/reviews created destinationrule.networking.istio.io/ratings created destinationrule.networking.istio.io/details created 2.8.6.3. Verifying the Bookinfo installation To confirm that the sample Bookinfo application was successfully deployed, perform the following steps. Prerequisites Red Hat OpenShift Service Mesh installed. Complete the steps for installing the Bookinfo sample app. Procedure from CLI Log in to the OpenShift Container Platform CLI. Verify that all pods are ready with this command: USD oc get pods -n bookinfo All pods should have a status of Running . You should see output similar to the following: NAME READY STATUS RESTARTS AGE details-v1-55b869668-jh7hb 2/2 Running 0 12m productpage-v1-6fc77ff794-nsl8r 2/2 Running 0 12m ratings-v1-7d7d8d8b56-55scn 2/2 Running 0 12m reviews-v1-868597db96-bdxgq 2/2 Running 0 12m reviews-v2-5b64f47978-cvssp 2/2 Running 0 12m reviews-v3-6dfd49b55b-vcwpf 2/2 Running 0 12m Run the following command to retrieve the URL for the product page: echo "http://USDGATEWAY_URL/productpage" Copy and paste the output into a web browser to verify the Bookinfo product page is deployed. Procedure from Kiali web console Obtain the address for the Kiali web console. Log in to the OpenShift Container Platform web console as a user with cluster-admin rights. If you use Red Hat OpenShift Dedicated, you must have an account with the dedicated-admin role. Navigate to Networking Routes . On the Routes page, select the Service Mesh control plane project, for example istio-system , from the Namespace menu. The Location column displays the linked address for each route. Click the link in the Location column for Kiali. Click Log In With OpenShift . The Kiali Overview screen presents tiles for each project namespace. In Kiali, click Graph . Select bookinfo from the Namespace list, and App graph from the Graph Type list. Click Display idle nodes from the Display menu. This displays nodes that are defined but have not received or sent requests. It can confirm that an application is properly defined, but that no request traffic has been reported. Use the Duration menu to increase the time period to help ensure older traffic is captured. Use the Refresh Rate menu to refresh traffic more or less often, or not at all. Click Services , Workloads or Istio Config to see list views of bookinfo components, and confirm that they are healthy. 2.8.6.4. Removing the Bookinfo application Follow these steps to remove the Bookinfo application. Prerequisites OpenShift Container Platform 4.1 or higher installed. Red Hat OpenShift Service Mesh 2.2.3 installed. Access to the OpenShift CLI ( oc ). 2.8.6.4.1. Delete the Bookinfo project Procedure Log in to the OpenShift Container Platform web console. Click Home Projects . Click the bookinfo menu , and then click Delete Project . Type bookinfo in the confirmation dialog box, and then click Delete . Alternatively, you can run this command using the CLI to delete the bookinfo project.
USD oc delete project bookinfo 2.8.6.4.2. Remove the Bookinfo project from the Service Mesh member roll Procedure Log in to the OpenShift Container Platform web console. Click Operators Installed Operators . Click the Project menu and choose istio-system from the list. Click the Istio Service Mesh Member Roll link under Provided APIS for the Red Hat OpenShift Service Mesh Operator. Click the ServiceMeshMemberRoll menu and select Edit Service Mesh Member Roll . Edit the default Service Mesh Member Roll YAML and remove bookinfo from the members list. Alternatively, you can run this command using the CLI to remove the bookinfo project from the ServiceMeshMemberRoll . In this example, istio-system is the name of the Service Mesh control plane project. USD oc -n istio-system patch --type='json' smmr default -p '[{"op": "remove", "path": "/spec/members", "value":["'"bookinfo"'"]}]' Click Save to update Service Mesh Member Roll. 2.8.7. Generating example traces and analyzing trace data Jaeger is an open source distributed tracing system. With Jaeger, you can perform a trace that follows the path of a request through various microservices which make up an application. Jaeger is installed by default as part of the Service Mesh. This tutorial uses Service Mesh and the Bookinfo sample application to demonstrate how you can use Jaeger to perform distributed tracing. Prerequisites: OpenShift Container Platform 4.1 or higher installed. Red Hat OpenShift Service Mesh 2.2.3 installed. Jaeger enabled during the installation. Bookinfo example application installed. Procedure After installing the Bookinfo sample application, send traffic to the mesh. Enter the following command several times. USD curl "http://USDGATEWAY_URL/productpage" This command simulates a user visiting the productpage microservice of the application. In the OpenShift Container Platform console, navigate to Networking Routes and search for the Jaeger route, which is the URL listed under Location . Alternatively, use the CLI to query for details of the route. In this example, istio-system is the Service Mesh control plane namespace: USD export JAEGER_URL=USD(oc get route -n istio-system jaeger -o jsonpath='{.spec.host}') Enter the following command to reveal the URL for the Jaeger console. Paste the result in a browser and navigate to that URL. echo USDJAEGER_URL Log in using the same user name and password as you use to access the OpenShift Container Platform console. In the left pane of the Jaeger dashboard, from the Service menu, select productpage.bookinfo and click Find Traces at the bottom of the pane. A list of traces is displayed. Click one of the traces in the list to open a detailed view of that trace. If you click the first one in the list, which is the most recent trace, you see the details that correspond to the latest refresh of the /productpage . 2.9. Data visualization and observability Warning You are viewing documentation for a Red Hat OpenShift Service Mesh release that is no longer supported. Service Mesh version 1.0 and 1.1 control planes are no longer supported. For information about upgrading your service mesh control plane, see Upgrading Service Mesh . For information about the support status of a particular Red Hat OpenShift Service Mesh release, see the Product lifecycle page . You can view your application's topology, health and metrics in the Kiali console. If your service is having issues, the Kiali console offers ways to visualize the data flow through your service. 
You can view insights about the mesh components at different levels, including abstract applications, services, and workloads. It also provides an interactive graph view of your namespace in real time. Before you begin You can observe the data flow through your application if you have an application installed. If you do not have your own application installed, you can see how observability works in Red Hat OpenShift Service Mesh by installing the Bookinfo sample application . 2.9.1. Viewing service mesh data The Kiali operator works with the telemetry data gathered in Red Hat OpenShift Service Mesh to provide graphs and real-time network diagrams of the applications, services, and workloads in your namespace. To access the Kiali console you must have Red Hat OpenShift Service Mesh installed and projects configured for the service mesh. Procedure Use the perspective switcher to switch to the Administrator perspective. Click Home Projects . Click the name of your project. For example, click bookinfo . In the Launcher section, click Kiali . Log in to the Kiali console with the same user name and password that you use to access the OpenShift Container Platform console. When you first log in to the Kiali Console, you see the Overview page which displays all the namespaces in your service mesh that you have permission to view. If you are validating the console installation, there might not be any data to display. 2.9.2. Viewing service mesh data in the Kiali console The Kiali Graph offers a powerful visualization of your mesh traffic. The topology combines real-time request traffic with your Istio configuration information to present immediate insight into the behavior of your service mesh, letting you quickly pinpoint issues. Multiple Graph Types let you visualize traffic as a high-level service topology, a low-level workload topology, or as an application-level topology. There are several graphs to choose from: The App graph shows an aggregate workload for all applications that are labeled the same. The Service graph shows a node for each service in your mesh but excludes all applications and workloads from the graph. It provides a high level view and aggregates all traffic for defined services. The Versioned App graph shows a node for each version of an application. All versions of an application are grouped together. The Workload graph shows a node for each workload in your service mesh. This graph does not require you to use the application and version labels. If your application does not use version labels, use this graph. Graph nodes are decorated with a variety of information, pointing out various routing options like virtual services and service entries, as well as special configuration like fault-injection and circuit breakers. It can identify mTLS issues, latency issues, error traffic and more. The Graph is highly configurable, can show traffic animation, and has powerful Find and Hide abilities. Click the Legend button to view information about the shapes, colors, arrows, and badges displayed in the graph. To view a summary of metrics, select any node or edge in the graph to display its metric details in the summary details panel. 2.9.2.1. Changing graph layouts in Kiali The layout for the Kiali graph can render differently depending on your application architecture and the data to display. For example, the number of graph nodes and their interactions can determine how the Kiali graph is rendered.
Because it is not possible to create a single layout that renders nicely for every situation, Kiali offers a choice of several different layouts. Prerequisites If you do not have your own application installed, install the Bookinfo sample application. Then generate traffic for the Bookinfo application by entering the following command several times. USD curl "http://USDGATEWAY_URL/productpage" This command simulates a user visiting the productpage microservice of the application. Procedure Launch the Kiali console. Click Log In With OpenShift . In Kiali console, click Graph to view a namespace graph. From the Namespace menu, select your application namespace, for example, bookinfo . To choose a different graph layout, do either or both of the following: Select different graph data groupings from the menu at the top of the graph. App graph Service graph Versioned App graph (default) Workload graph Select a different graph layout from the Legend at the bottom of the graph. Layout default dagre Layout 1 cose-bilkent Layout 2 cola 2.10. Custom resources Warning You are viewing documentation for a Red Hat OpenShift Service Mesh release that is no longer supported. Service Mesh version 1.0 and 1.1 control planes are no longer supported. For information about upgrading your service mesh control plane, see Upgrading Service Mesh . For information about the support status of a particular Red Hat OpenShift Service Mesh release, see the Product lifecycle page . You can customize your Red Hat OpenShift Service Mesh by modifying the default Service Mesh custom resource or by creating a new custom resource. 2.10.1. Prerequisites An account with the cluster-admin role. Completed the Preparing to install Red Hat OpenShift Service Mesh process. Have installed the operators. 2.10.2. Red Hat OpenShift Service Mesh custom resources Note The istio-system project is used as an example throughout the Service Mesh documentation, but you can use other projects as necessary. A custom resource allows you to extend the API in an Red Hat OpenShift Service Mesh project or cluster. When you deploy Service Mesh it creates a default ServiceMeshControlPlane that you can modify to change the project parameters. The Service Mesh operator extends the API by adding the ServiceMeshControlPlane resource type, which enables you to create ServiceMeshControlPlane objects within projects. By creating a ServiceMeshControlPlane object, you instruct the Operator to install a Service Mesh control plane into the project, configured with the parameters you set in the ServiceMeshControlPlane object. This example ServiceMeshControlPlane definition contains all of the supported parameters and deploys Red Hat OpenShift Service Mesh 1.1.18.2 images based on Red Hat Enterprise Linux (RHEL). Important The 3scale Istio Adapter is deployed and configured in the custom resource file. It also requires a working 3scale account ( SaaS or On-Premises ). 
Example istio-installation.yaml apiVersion: maistra.io/v1 kind: ServiceMeshControlPlane metadata: name: basic-install spec: istio: global: proxy: resources: requests: cpu: 100m memory: 128Mi limits: cpu: 500m memory: 128Mi gateways: istio-egressgateway: autoscaleEnabled: false istio-ingressgateway: autoscaleEnabled: false ior_enabled: false mixer: policy: autoscaleEnabled: false telemetry: autoscaleEnabled: false resources: requests: cpu: 100m memory: 1G limits: cpu: 500m memory: 4G pilot: autoscaleEnabled: false traceSampling: 100 kiali: enabled: true grafana: enabled: true tracing: enabled: true jaeger: template: all-in-one 2.10.3. ServiceMeshControlPlane parameters The following examples illustrate use of the ServiceMeshControlPlane parameters and the tables provide additional information about supported parameters. Important The resources you configure for Red Hat OpenShift Service Mesh with these parameters, including CPUs, memory, and the number of pods, are based on the configuration of your OpenShift Container Platform cluster. Configure these parameters based on the available resources in your current cluster configuration. 2.10.3.1. Istio global example Here is an example that illustrates the Istio global parameters for the ServiceMeshControlPlane and a description of the available parameters with appropriate values. Note In order for the 3scale Istio Adapter to work, disablePolicyChecks must be false . Example global parameters istio: global: tag: 1.1.0 hub: registry.redhat.io/openshift-service-mesh/ proxy: resources: requests: cpu: 10m memory: 128Mi limits: mtls: enabled: false disablePolicyChecks: true policyCheckFailOpen: false imagePullSecrets: - MyPullSecret Table 2.4. Global parameters Parameter Description Values Default value disablePolicyChecks This parameter enables/disables policy checks. true / false true policyCheckFailOpen This parameter indicates whether traffic is allowed to pass through to the Envoy sidecar when the Mixer policy service cannot be reached. true / false false tag The tag that the Operator uses to pull the Istio images. A valid container image tag. 1.1.0 hub The hub that the Operator uses to pull Istio images. A valid image repository. maistra/ or registry.redhat.io/openshift-service-mesh/ mtls This parameter controls whether to enable/disable Mutual Transport Layer Security (mTLS) between services by default. true / false false imagePullSecrets If access to the registry providing the Istio images is secure, list an imagePullSecret here. redhat-registry-pullsecret OR quay-pullsecret None These parameters are specific to the proxy subset of global parameters. Table 2.5. Proxy parameters Type Parameter Description Values Default value requests cpu The amount of CPU resources requested for Envoy proxy. CPU resources, specified in cores or millicores (for example, 200m, 0.5, 1) based on your environment's configuration. 10m memory The amount of memory requested for Envoy proxy Available memory in bytes(for example, 200Ki, 50Mi, 5Gi) based on your environment's configuration. 128Mi limits cpu The maximum amount of CPU resources requested for Envoy proxy. CPU resources, specified in cores or millicores (for example, 200m, 0.5, 1) based on your environment's configuration. 2000m memory The maximum amount of memory Envoy proxy is permitted to use. Available memory in bytes (for example, 200Ki, 50Mi, 5Gi) based on your environment's configuration. 1024Mi 2.10.3.2. 
Istio gateway configuration Here is an example that illustrates the Istio gateway parameters for the ServiceMeshControlPlane and a description of the available parameters with appropriate values. Example gateway parameters gateways: egress: enabled: true runtime: deployment: autoScaling: enabled: true maxReplicas: 5 minReplicas: 1 enabled: true ingress: enabled: true runtime: deployment: autoScaling: enabled: true maxReplicas: 5 minReplicas: 1 Table 2.6. Istio Gateway parameters Parameter Description Values Default value gateways.egress.runtime.deployment.autoScaling.enabled This parameter enables/disables autoscaling. true / false true gateways.egress.runtime.deployment.autoScaling.minReplicas The minimum number of pods to deploy for the egress gateway based on the autoscaleEnabled setting. A valid number of allocatable pods based on your environment's configuration. 1 gateways.egress.runtime.deployment.autoScaling.maxReplicas The maximum number of pods to deploy for the egress gateway based on the autoscaleEnabled setting. A valid number of allocatable pods based on your environment's configuration. 5 gateways.ingress.runtime.deployment.autoScaling.enabled This parameter enables/disables autoscaling. true / false true gateways.ingress.runtime.deployment.autoScaling.minReplicas The minimum number of pods to deploy for the ingress gateway based on the autoscaleEnabled setting. A valid number of allocatable pods based on your environment's configuration. 1 gateways.ingress.runtime.deployment.autoScaling.maxReplicas The maximum number of pods to deploy for the ingress gateway based on the autoscaleEnabled setting. A valid number of allocatable pods based on your environment's configuration. 5 Cluster administrators can refer to Using wildcard routes for instructions on how to enable subdomains. 2.10.3.3. Istio Mixer configuration Here is an example that illustrates the Mixer parameters for the ServiceMeshControlPlane and a description of the available parameters with appropriate values. Example mixer parameters mixer: enabled: true policy: autoscaleEnabled: false telemetry: autoscaleEnabled: false resources: requests: cpu: 10m memory: 128Mi limits: Table 2.7. Istio Mixer policy parameters Parameter Description Values Default value enabled This parameter enables/disables Mixer. true / false true autoscaleEnabled This parameter enables/disables autoscaling. Disable this for small environments. true / false true autoscaleMin The minimum number of pods to deploy based on the autoscaleEnabled setting. A valid number of allocatable pods based on your environment's configuration. 1 autoscaleMax The maximum number of pods to deploy based on the autoscaleEnabled setting. A valid number of allocatable pods based on your environment's configuration. 5 Table 2.8. Istio Mixer telemetry parameters Type Parameter Description Values Default requests cpu The percentage of CPU resources requested for Mixer telemetry. CPU resources in millicores based on your environment's configuration. 10m memory The amount of memory requested for Mixer telemetry. Available memory in bytes (for example, 200Ki, 50Mi, 5Gi) based on your environment's configuration. 128Mi limits cpu The maximum percentage of CPU resources Mixer telemetry is permitted to use. CPU resources in millicores based on your environment's configuration. 4800m memory The maximum amount of memory Mixer telemetry is permitted to use. Available memory in bytes (for example, 200Ki, 50Mi, 5Gi) based on your environment's configuration. 4G 2.10.3.4. 
Istio Pilot configuration You can configure Pilot to schedule or set limits on resource allocation. The following example illustrates the Pilot parameters for the ServiceMeshControlPlane and a description of the available parameters with appropriate values. Example pilot parameters spec: runtime: components: pilot: deployment: autoScaling: enabled: true minReplicas: 1 maxReplicas: 5 targetCPUUtilizationPercentage: 85 pod: tolerations: - key: node.kubernetes.io/unreachable operator: Exists effect: NoExecute tolerationSeconds: 60 affinity: podAntiAffinity: requiredDuringScheduling: - key: istio topologyKey: kubernetes.io/hostname operator: In values: - pilot container: resources: limits: cpu: 100m memory: 128M Table 2.9. Istio Pilot parameters Parameter Description Values Default value cpu The percentage of CPU resources requested for Pilot. CPU resources in millicores based on your environment's configuration. 10m memory The amount of memory requested for Pilot. Available memory in bytes (for example, 200Ki, 50Mi, 5Gi) based on your environment's configuration. 128Mi autoscaleEnabled This parameter enables/disables autoscaling. Disable this for small environments. true / false true traceSampling This value controls how often random sampling occurs. Note: Increase for development or testing. A valid percentage. 1.0 2.10.4. Configuring Kiali When the Service Mesh Operator creates the ServiceMeshControlPlane it also processes the Kiali resource. The Kiali Operator then uses this object when creating Kiali instances. The default Kiali parameters specified in the ServiceMeshControlPlane are as follows: Example Kiali parameters apiVersion: maistra.io/v1 kind: ServiceMeshControlPlane spec: kiali: enabled: true dashboard: viewOnlyMode: false ingress: enabled: true Table 2.10. Kiali parameters Parameter Description Values Default value This parameter enables/disables Kiali. Kiali is enabled by default. true / false true This parameter enables/disables view-only mode for the Kiali console. When view-only mode is enabled, users cannot use the console to make changes to the Service Mesh. true / false false This parameter enables/disables ingress for Kiali. true / false true 2.10.4.1. Configuring Kiali for Grafana When you install Kiali and Grafana as part of Red Hat OpenShift Service Mesh the Operator configures the following by default: Grafana is enabled as an external service for Kiali Grafana authorization for the Kiali console Grafana URL for the Kiali console Kiali can automatically detect the Grafana URL. However if you have a custom Grafana installation that is not easily auto-detectable by Kiali, you must update the URL value in the ServiceMeshControlPlane resource. Additional Grafana parameters spec: kiali: enabled: true dashboard: viewOnlyMode: false grafanaURL: "https://grafana-istio-system.127.0.0.1.nip.io" ingress: enabled: true 2.10.4.2. Configuring Kiali for Jaeger When you install Kiali and Jaeger as part of Red Hat OpenShift Service Mesh the Operator configures the following by default: Jaeger is enabled as an external service for Kiali Jaeger authorization for the Kiali console Jaeger URL for the Kiali console Kiali can automatically detect the Jaeger URL. However if you have a custom Jaeger installation that is not easily auto-detectable by Kiali, you must update the URL value in the ServiceMeshControlPlane resource. 
Additional Jaeger parameters spec: kiali: enabled: true dashboard: viewOnlyMode: false jaegerURL: "http://jaeger-query-istio-system.127.0.0.1.nip.io" ingress: enabled: true 2.10.5. Configuring Jaeger When the Service Mesh Operator creates the ServiceMeshControlPlane resource it can also create the resources for distributed tracing. Service Mesh uses Jaeger for distributed tracing. You can specify your Jaeger configuration in either of two ways: Configure Jaeger in the ServiceMeshControlPlane resource. There are some limitations with this approach. Configure Jaeger in a custom Jaeger resource and then reference that Jaeger instance in the ServiceMeshControlPlane resource. If a Jaeger resource matching the value of name exists, the control plane will use the existing installation. This approach lets you fully customize your Jaeger configuration. The default Jaeger parameters specified in the ServiceMeshControlPlane are as follows: Default all-in-one Jaeger parameters apiVersion: maistra.io/v1 kind: ServiceMeshControlPlane spec: version: v1.1 istio: tracing: enabled: true jaeger: template: all-in-one Table 2.11. Jaeger parameters Parameter Description Values Default value This parameter enables/disables installing and deploying tracing by the Service Mesh Operator. Installing Jaeger is enabled by default. To use an existing Jaeger deployment, set this value to false . true / false true This parameter specifies which Jaeger deployment strategy to use. all-in-one - For development, testing, demonstrations, and proof of concept. production-elasticsearch - For production use. all-in-one Note The default template in the ServiceMeshControlPlane resource is the all-in-one deployment strategy which uses in-memory storage. For production, the only supported storage option is Elasticsearch, therefore you must configure the ServiceMeshControlPlane to request the production-elasticsearch template when you deploy Service Mesh within a production environment. 2.10.5.1. Configuring Elasticsearch The default Jaeger deployment strategy uses the all-in-one template so that the installation can be completed using minimal resources. However, because the all-in-one template uses in-memory storage, it is only recommended for development, demo, or testing purposes and should NOT be used for production environments. If you are deploying Service Mesh and Jaeger in a production environment you must change the template to the production-elasticsearch template, which uses Elasticsearch for Jaeger's storage needs. Elasticsearch is a memory intensive application. The initial set of nodes specified in the default OpenShift Container Platform installation may not be large enough to support the Elasticsearch cluster. You should modify the default Elasticsearch configuration to match your use case and the resources you have requested for your OpenShift Container Platform installation. You can adjust both the CPU and memory limits for each component by modifying the resources block with valid CPU and memory values. Additional nodes must be added to the cluster if you want to run with the recommended amount (or more) of memory. Ensure that you do not exceed the resources requested for your OpenShift Container Platform installation. 
Default "production" Jaeger parameters with Elasticsearch apiVersion: maistra.io/v1 kind: ServiceMeshControlPlane spec: istio: tracing: enabled: true ingress: enabled: true jaeger: template: production-elasticsearch elasticsearch: nodeCount: 3 redundancyPolicy: resources: requests: cpu: "1" memory: "16Gi" limits: cpu: "1" memory: "16Gi" Table 2.12. Elasticsearch parameters Parameter Description Values Default Value Examples This parameter enables/disables tracing in Service Mesh. Jaeger is installed by default. true / false true This parameter enables/disables ingress for Jaeger. true / false true This parameter specifies which Jaeger deployment strategy to use. all-in-one / production-elasticsearch all-in-one Number of Elasticsearch nodes to create. Integer value. 1 Proof of concept = 1, Minimum deployment =3 Number of central processing units for requests, based on your environment's configuration. Specified in cores or millicores (for example, 200m, 0.5, 1). 1Gi Proof of concept = 500m, Minimum deployment =1 Available memory for requests, based on your environment's configuration. Specified in bytes (for example, 200Ki, 50Mi, 5Gi). 500m Proof of concept = 1Gi, Minimum deployment = 16Gi* Limit on number of central processing units, based on your environment's configuration. Specified in cores or millicores (for example, 200m, 0.5, 1). Proof of concept = 500m, Minimum deployment =1 Available memory limit based on your environment's configuration. Specified in bytes (for example, 200Ki, 50Mi, 5Gi). Proof of concept = 1Gi, Minimum deployment = 16Gi* * Each Elasticsearch node can operate with a lower memory setting though this is not recommended for production deployments. For production use, you should have no less than 16Gi allocated to each pod by default, but preferably allocate as much as you can, up to 64Gi per pod. Procedure Log in to the OpenShift Container Platform web console as a user with the cluster-admin role. Navigate to Operators Installed Operators . Click the Red Hat OpenShift Service Mesh Operator. Click the Istio Service Mesh Control Plane tab. Click the name of your control plane file, for example, basic-install . Click the YAML tab. Edit the Jaeger parameters, replacing the default all-in-one template with parameters for the production-elasticsearch template, modified for your use case. Ensure that the indentation is correct. Click Save . Click Reload . OpenShift Container Platform redeploys Jaeger and creates the Elasticsearch resources based on the specified parameters. 2.10.5.2. Connecting to an existing Jaeger instance In order for the SMCP to connect to an existing Jaeger instance, the following must be true: The Jaeger instance is deployed in the same namespace as the control plane, for example, into the istio-system namespace. To enable secure communication between services, you should enable the oauth-proxy, which secures communication to your Jaeger instance, and make sure the secret is mounted into your Jaeger instance so Kiali can communicate with it. To use a custom or already existing Jaeger instance, set spec.istio.tracing.enabled to "false" to disable the deployment of a Jaeger instance. Supply the correct jaeger-collector endpoint to Mixer by setting spec.istio.global.tracer.zipkin.address to the hostname and port of your jaeger-collector service. The hostname of the service is usually <jaeger-instance-name>-collector.<namespace>.svc.cluster.local . 
Supply the correct jaeger-query endpoint to Kiali for gathering traces by setting spec.istio.kiali.jaegerInClusterURL to the hostname of your jaeger-query service - the port is normally not required, as it uses 443 by default. The hostname of the service is usually <jaeger-instance-name>-query.<namespace>.svc.cluster.local . Supply the dashboard URL of your Jaeger instance to Kiali to enable accessing Jaeger through the Kiali console. You can retrieve the URL from the OpenShift route that is created by the Jaeger Operator. If your Jaeger resource is called external-jaeger and resides in the istio-system project, you can retrieve the route using the following command: USD oc get route -n istio-system external-jaeger Example output NAME HOST/PORT PATH SERVICES [...] external-jaeger external-jaeger-istio-system.apps.test external-jaeger-query [...] The value under HOST/PORT is the externally accessible URL of the Jaeger dashboard. Example Jaeger resource apiVersion: jaegertracing.io/v1 kind: "Jaeger" metadata: name: "external-jaeger" # Deploy to the Control Plane Namespace namespace: istio-system spec: # Set Up Authentication ingress: enabled: true security: oauth-proxy openshift: # This limits user access to the Jaeger instance to users who have access # to the control plane namespace. Make sure to set the correct namespace here sar: '{"namespace": "istio-system", "resource": "pods", "verb": "get"}' htpasswdFile: /etc/proxy/htpasswd/auth volumeMounts: - name: secret-htpasswd mountPath: /etc/proxy/htpasswd volumes: - name: secret-htpasswd secret: secretName: htpasswd The following ServiceMeshControlPlane example assumes that you have deployed Jaeger using the Jaeger Operator and the example Jaeger resource. Example ServiceMeshControlPlane with external Jaeger apiVersion: maistra.io/v1 kind: ServiceMeshControlPlane metadata: name: external-jaeger namespace: istio-system spec: version: v1.1 istio: tracing: # Disable Jaeger deployment by service mesh operator enabled: false global: tracer: zipkin: # Set Endpoint for Trace Collection address: external-jaeger-collector.istio-system.svc.cluster.local:9411 kiali: # Set Jaeger dashboard URL dashboard: jaegerURL: https://external-jaeger-istio-system.apps.test # Set Endpoint for Trace Querying jaegerInClusterURL: external-jaeger-query.istio-system.svc.cluster.local
2.10.5.4. Configuring the Elasticsearch index cleaner job When the Service Mesh Operator creates the ServiceMeshControlPlane it also creates the custom resource (CR) for Jaeger. The Red Hat OpenShift distributed tracing platform Operator then uses this CR when creating Jaeger instances. When using Elasticsearch storage, by default a job is created to clean old traces from it. To configure the options for this job, you edit the Jaeger custom resource (CR), to customize it for your use case. The relevant options are listed below. apiVersion: jaegertracing.io/v1 kind: Jaeger spec: strategy: production storage: type: elasticsearch esIndexCleaner: enabled: false numberOfDays: 7 schedule: "55 23 * * *" Table 2.14. Elasticsearch index cleaner parameters Parameter Values Description enabled: true/ false Enable or disable the index cleaner job.
numberOfDays: integer value Number of days to wait before deleting an index. schedule: "55 23 * * *" Cron expression for the job to run For more information about configuring Elasticsearch with OpenShift Container Platform, see Configuring the log store . 2.10.6. 3scale configuration The following table explains the parameters for the 3scale Istio Adapter in the ServiceMeshControlPlane resource. Example 3scale parameters spec: addons: 3Scale: enabled: false PARAM_THREESCALE_LISTEN_ADDR: 3333 PARAM_THREESCALE_LOG_LEVEL: info PARAM_THREESCALE_LOG_JSON: true PARAM_THREESCALE_LOG_GRPC: false PARAM_THREESCALE_REPORT_METRICS: true PARAM_THREESCALE_METRICS_PORT: 8080 PARAM_THREESCALE_CACHE_TTL_SECONDS: 300 PARAM_THREESCALE_CACHE_REFRESH_SECONDS: 180 PARAM_THREESCALE_CACHE_ENTRIES_MAX: 1000 PARAM_THREESCALE_CACHE_REFRESH_RETRIES: 1 PARAM_THREESCALE_ALLOW_INSECURE_CONN: false PARAM_THREESCALE_CLIENT_TIMEOUT_SECONDS: 10 PARAM_THREESCALE_GRPC_CONN_MAX_SECONDS: 60 PARAM_USE_CACHED_BACKEND: false PARAM_BACKEND_CACHE_FLUSH_INTERVAL_SECONDS: 15 PARAM_BACKEND_CACHE_POLICY_FAIL_CLOSED: true Table 2.15. 3scale parameters Parameter Description Values Default value enabled Whether to use the 3scale adapter true / false false PARAM_THREESCALE_LISTEN_ADDR Sets the listen address for the gRPC server Valid port number 3333 PARAM_THREESCALE_LOG_LEVEL Sets the minimum log output level. debug , info , warn , error , or none info PARAM_THREESCALE_LOG_JSON Controls whether the log is formatted as JSON true / false true PARAM_THREESCALE_LOG_GRPC Controls whether the log contains gRPC info true / false true PARAM_THREESCALE_REPORT_METRICS Controls whether 3scale system and backend metrics are collected and reported to Prometheus true / false true PARAM_THREESCALE_METRICS_PORT Sets the port that the 3scale /metrics endpoint can be scrapped from Valid port number 8080 PARAM_THREESCALE_CACHE_TTL_SECONDS Time period, in seconds, to wait before purging expired items from the cache Time period in seconds 300 PARAM_THREESCALE_CACHE_REFRESH_SECONDS Time period before expiry when cache elements are attempted to be refreshed Time period in seconds 180 PARAM_THREESCALE_CACHE_ENTRIES_MAX Max number of items that can be stored in the cache at any time. Set to 0 to disable caching Valid number 1000 PARAM_THREESCALE_CACHE_REFRESH_RETRIES The number of times unreachable hosts are retried during a cache update loop Valid number 1 PARAM_THREESCALE_ALLOW_INSECURE_CONN Allow to skip certificate verification when calling 3scale APIs. Enabling this is not recommended. true / false false PARAM_THREESCALE_CLIENT_TIMEOUT_SECONDS Sets the number of seconds to wait before terminating requests to 3scale System and Backend Time period in seconds 10 PARAM_THREESCALE_GRPC_CONN_MAX_SECONDS Sets the maximum amount of seconds (+/-10% jitter) a connection may exist before it is closed Time period in seconds 60 PARAM_USE_CACHE_BACKEND If true, attempt to create an in-memory apisonator cache for authorization requests true / false false PARAM_BACKEND_CACHE_FLUSH_INTERVAL_SECONDS If the backend cache is enabled, this sets the interval in seconds for flushing the cache against 3scale Time period in seconds 15 PARAM_BACKEND_CACHE_POLICY_FAIL_CLOSED Whenever the backend cache cannot retrieve authorization data, whether to deny (closed) or allow (open) requests true / false true 2.11. Using the 3scale Istio adapter Warning You are viewing documentation for a Red Hat OpenShift Service Mesh release that is no longer supported. 
Service Mesh version 1.0 and 1.1 control planes are no longer supported. For information about upgrading your service mesh control plane, see Upgrading Service Mesh . For information about the support status of a particular Red Hat OpenShift Service Mesh release, see the Product lifecycle page . The 3scale Istio Adapter is an optional adapter that allows you to label a service running within the Red Hat OpenShift Service Mesh and integrate that service with the 3scale API Management solution. It is not required for Red Hat OpenShift Service Mesh. 2.11.1. Integrate the 3scale adapter with Red Hat OpenShift Service Mesh You can use these examples to configure requests to your services using the 3scale Istio Adapter. Prerequisites: Red Hat OpenShift Service Mesh version 1.x A working 3scale account ( SaaS or 3scale 2.5 On-Premises ) Enabling backend cache requires 3scale 2.9 or greater Red Hat OpenShift Service Mesh prerequisites Note To configure the 3scale Istio Adapter, refer to Red Hat OpenShift Service Mesh custom resources for instructions on adding adapter parameters to the custom resource file. Note Pay particular attention to the kind: handler resource. You must update this with your 3scale account credentials. You can optionally add a service_id to a handler, but this is kept for backwards compatibility only, since it would render the handler only useful for one service in your 3scale account. If you add service_id to a handler, enabling 3scale for other services requires you to create more handlers with different service_ids . Use a single handler per 3scale account by following the steps below: Procedure Create a handler for your 3scale account and specify your account credentials. Omit any service identifier. apiVersion: "config.istio.io/v1alpha2" kind: handler metadata: name: threescale spec: adapter: threescale params: system_url: "https://<organization>-admin.3scale.net/" access_token: "<ACCESS_TOKEN>" connection: address: "threescale-istio-adapter:3333" Optionally, you can provide a backend_url field within the params section to override the URL provided by the 3scale configuration. This may be useful if the adapter runs on the same cluster as the 3scale on-premise instance, and you wish to leverage the internal cluster DNS. Edit or patch the Deployment resource of any services belonging to your 3scale account as follows: Add the "service-mesh.3scale.net/service-id" label with a value corresponding to a valid service_id . Add the "service-mesh.3scale.net/credentials" label with its value being the name of the handler resource from step 1. Do step 2 to link it to your 3scale account credentials and to its service identifier, whenever you intend to add more services. Modify the rule configuration with your 3scale configuration to dispatch the rule to the threescale handler. Rule configuration example apiVersion: "config.istio.io/v1alpha2" kind: rule metadata: name: threescale spec: match: destination.labels["service-mesh.3scale.net"] == "true" actions: - handler: threescale.handler instances: - threescale-authorization.instance 2.11.1.1. Generating 3scale custom resources The adapter includes a tool that allows you to generate the handler , instance , and rule custom resources. Table 2.16. 
Usage Option Description Required Default value -h, --help Produces help output for available options No --name Unique name for this URL, token pair Yes -n, --namespace Namespace to generate templates No istio-system -t, --token 3scale access token Yes -u, --url 3scale Admin Portal URL Yes --backend-url 3scale backend URL. If set, it overrides the value that is read from system configuration No -s, --service 3scale API/Service ID No --auth 3scale authentication pattern to specify (1=API Key, 2=App Id/App Key, 3=OIDC) No Hybrid -o, --output File to save produced manifests to No Standard output --version Outputs the CLI version and exits immediately No 2.11.1.1.1. Generate templates from URL examples Note Run the following commands via oc exec from the 3scale adapter container image in Generating manifests from a deployed adapter . Use the 3scale-config-gen command to help avoid YAML syntax and indentation errors. You can omit the --service if you use the annotations. This command must be invoked from within the container image via oc exec . Procedure Use the 3scale-config-gen command to autogenerate templates files allowing the token, URL pair to be shared by multiple services as a single handler: The following example generates the templates with the service ID embedded in the handler: Additional resources Tokens . 2.11.1.2. Generating manifests from a deployed adapter Note NAME is an identifier you use to identify with the service you are managing with 3scale. The CREDENTIALS_NAME reference is an identifier that corresponds to the match section in the rule configuration. This is automatically set to the NAME identifier if you are using the CLI tool. Its value does not need to be anything specific: the label value should just match the contents of the rule. See Routing service traffic through the adapter for more information. Run this command to generate manifests from a deployed adapter in the istio-system namespace: This will produce sample output to the terminal. Edit these samples if required and create the objects using the oc create command. When the request reaches the adapter, the adapter needs to know how the service maps to an API on 3scale. You can provide this information in two ways: Label the workload (recommended) Hard code the handler as service_id Update the workload with the required annotations: Note You only need to update the service ID provided in this example if it is not already embedded in the handler. The setting in the handler takes precedence . 2.11.1.3. Routing service traffic through the adapter Follow these steps to drive traffic for your service through the 3scale adapter. Prerequisites Credentials and service ID from your 3scale administrator. Procedure Match the rule destination.labels["service-mesh.3scale.net/credentials"] == "threescale" that you previously created in the configuration, in the kind: rule resource. Add the above label to PodTemplateSpec on the Deployment of the target workload to integrate a service. the value, threescale , refers to the name of the generated handler. This handler stores the access token required to call 3scale. Add the destination.labels["service-mesh.3scale.net/service-id"] == "replace-me" label to the workload to pass the service ID to the adapter via the instance at request time. 2.11.2. Configure the integration settings in 3scale Follow this procedure to configure the 3scale integration settings. Note For 3scale SaaS customers, Red Hat OpenShift Service Mesh is enabled as part of the Early Access program. 
Procedure Navigate to [your_API_name] Integration Click Settings . Select the Istio option under Deployment . The API Key (user_key) option under Authentication is selected by default. Click Update Product to save your selection. Click Configuration . Click Update Configuration . 2.11.3. Caching behavior Responses from 3scale System APIs are cached by default within the adapter. Entries will be purged from the cache when they become older than the cacheTTLSeconds value. Also by default, automatic refreshing of cached entries will be attempted seconds before they expire, based on the cacheRefreshSeconds value. You can disable automatic refreshing by setting this value higher than the cacheTTLSeconds value. Caching can be disabled entirely by setting cacheEntriesMax to a non-positive value. By using the refreshing process, cached values whose hosts become unreachable will be retried before eventually being purged when past their expiry. 2.11.4. Authenticating requests This release supports the following authentication methods: Standard API Keys : single randomized strings or hashes acting as an identifier and a secret token. Application identifier and key pairs : immutable identifier and mutable secret key strings. OpenID authentication method : client ID string parsed from the JSON Web Token. 2.11.4.1. Applying authentication patterns Modify the instance custom resource, as illustrated in the following authentication method examples, to configure authentication behavior. You can accept the authentication credentials from: Request headers Request parameters Both request headers and query parameters Note When specifying values from headers, they must be lower case. For example, if you want to send a header as User-Key , this must be referenced in the configuration as request.headers["user-key"] . 2.11.4.1.1. API key authentication method Service Mesh looks for the API key in query parameters and request headers as specified in the user option in the subject custom resource parameter. It checks the values in the order given in the custom resource file. You can restrict the search for the API key to either query parameters or request headers by omitting the unwanted option. In this example, Service Mesh looks for the API key in the user_key query parameter. If the API key is not in the query parameter, Service Mesh then checks the user-key header. API key authentication method example apiVersion: "config.istio.io/v1alpha2" kind: instance metadata: name: threescale-authorization namespace: istio-system spec: template: authorization params: subject: user: request.query_params["user_key"] | request.headers["user-key"] | "" action: path: request.url_path method: request.method | "get" If you want the adapter to examine a different query parameter or request header, change the name as appropriate. For example, to check for the API key in a query parameter named "key", change request.query_params["user_key"] to request.query_params["key"] . 2.11.4.1.2. Application ID and application key pair authentication method Service Mesh looks for the application ID and application key in query parameters and request headers, as specified in the properties option in the subject custom resource parameter. The application key is optional. It checks the values in the order given in the custom resource file. You can restrict the search for the credentials to either query parameters or request headers by not including the unwanted option. 
In this example, Service Mesh looks for the application ID and application key in the query parameters first, moving on to the request headers if needed. Application ID and application key pair authentication method example apiVersion: "config.istio.io/v1alpha2" kind: instance metadata: name: threescale-authorization namespace: istio-system spec: template: authorization params: subject: app_id: request.query_params["app_id"] | request.headers["app-id"] | "" app_key: request.query_params["app_key"] | request.headers["app-key"] | "" action: path: request.url_path method: request.method | "get" If you want the adapter to examine a different query parameter or request header, change the name as appropriate. For example, to check for the application ID in a query parameter named identification , change request.query_params["app_id"] to request.query_params["identification"] . 2.11.4.1.3. OpenID authentication method To use the OpenID Connect (OIDC) authentication method , use the properties value on the subject field to set client_id , and optionally app_key . You can manipulate this object using the methods described previously. In the example configuration shown below, the client identifier (application ID) is parsed from the JSON Web Token (JWT) under the label azp . You can modify this as needed. OpenID authentication method example apiVersion: "config.istio.io/v1alpha2" kind: instance metadata: name: threescale-authorization spec: template: threescale-authorization params: subject: properties: app_key: request.query_params["app_key"] | request.headers["app-key"] | "" client_id: request.auth.claims["azp"] | "" action: path: request.url_path method: request.method | "get" service: destination.labels["service-mesh.3scale.net/service-id"] | "" For this integration to work correctly, OIDC must still be done in 3scale for the client to be created in the identity provider (IdP). You should create a Request authorization for the service you want to protect in the same namespace as that service. The JWT is passed in the Authorization header of the request. In the sample RequestAuthentication defined below, replace issuer , jwksUri , and selector as appropriate. OpenID Policy example apiVersion: security.istio.io/v1beta1 kind: RequestAuthentication metadata: name: jwt-example namespace: bookinfo spec: selector: matchLabels: app: productpage jwtRules: - issuer: >- http://keycloak-keycloak.34.242.107.254.nip.io/auth/realms/3scale-keycloak jwksUri: >- http://keycloak-keycloak.34.242.107.254.nip.io/auth/realms/3scale-keycloak/protocol/openid-connect/certs 2.11.4.1.4. Hybrid authentication method You can choose to not enforce a particular authentication method and accept any valid credentials for either method. If both an API key and an application ID/application key pair are provided, Service Mesh uses the API key. In this example, Service Mesh checks for an API key in the query parameters, then the request headers. If there is no API key, it then checks for an application ID and key in the query parameters, then the request headers. 
Hybrid authentication method example apiVersion: "config.istio.io/v1alpha2" kind: instance metadata: name: threescale-authorization spec: template: authorization params: subject: user: request.query_params["user_key"] | request.headers["user-key"] | properties: app_id: request.query_params["app_id"] | request.headers["app-id"] | "" app_key: request.query_params["app_key"] | request.headers["app-key"] | "" client_id: request.auth.claims["azp"] | "" action: path: request.url_path method: request.method | "get" service: destination.labels["service-mesh.3scale.net/service-id"] | "" 2.11.5. 3scale Adapter metrics The adapter, by default reports various Prometheus metrics that are exposed on port 8080 at the /metrics endpoint. These metrics provide insight into how the interactions between the adapter and 3scale are performing. The service is labeled to be automatically discovered and scraped by Prometheus. 2.11.6. 3scale Istio adapter verification You might want to check whether the 3scale Istio adapter is working as expected. If your adapter is not working, use the following steps to help troubleshoot the problem. Procedure Ensure the 3scale-adapter pod is running in the Service Mesh control plane namespace: USD oc get pods -n <istio-system> Check that the 3scale-adapter pod has printed out information about itself booting up, such as its version: USD oc logs <istio-system> When performing requests to the services protected by the 3scale adapter integration, always try requests that lack the right credentials and ensure they fail. Check the 3scale adapter logs to gather additional information. Additional resources Inspecting pod and container logs . 2.11.7. 3scale Istio adapter troubleshooting checklist As the administrator installing the 3scale Istio adapter, there are a number of scenarios that might be causing your integration to not function properly. Use the following list to troubleshoot your installation: Incorrect YAML indentation. Missing YAML sections. Forgot to apply the changes in the YAML to the cluster. Forgot to label the service workloads with the service-mesh.3scale.net/credentials key. Forgot to label the service workloads with service-mesh.3scale.net/service-id when using handlers that do not contain a service_id so they are reusable per account. The Rule custom resource points to the wrong handler or instance custom resources, or the references lack the corresponding namespace suffix. The Rule custom resource match section cannot possibly match the service you are configuring, or it points to a destination workload that is not currently running or does not exist. Wrong access token or URL for the 3scale Admin Portal in the handler. The Instance custom resource's params/subject/properties section fails to list the right parameters for app_id , app_key , or client_id , either because they specify the wrong location such as the query parameters, headers, and authorization claims, or the parameter names do not match the requests used for testing. Failing to use the configuration generator without realizing that it actually lives in the adapter container image and needs oc exec to invoke it. 2.12. Removing Service Mesh Warning You are viewing documentation for a Red Hat OpenShift Service Mesh release that is no longer supported. Service Mesh version 1.0 and 1.1 control planes are no longer supported. For information about upgrading your service mesh control plane, see Upgrading Service Mesh . 
For information about the support status of a particular Red Hat OpenShift Service Mesh release, see the Product lifecycle page . To remove Red Hat OpenShift Service Mesh from an existing OpenShift Container Platform instance, remove the control plane before removing the operators. 2.12.1. Removing the Red Hat OpenShift Service Mesh control plane To uninstall Service Mesh from an existing OpenShift Container Platform instance, you first delete the Service Mesh control plane and the Operators. Then, you run commands to remove residual resources. 2.12.1.1. Removing the Service Mesh control plane using the web console You can remove the Red Hat OpenShift Service Mesh control plane by using the web console. Procedure Log in to the OpenShift Container Platform web console. Click the Project menu and select the project where you installed the Service Mesh control plane, for example istio-system . Navigate to Operators Installed Operators . Click Service Mesh Control Plane under Provided APIs . Click the ServiceMeshControlPlane menu . Click Delete Service Mesh Control Plane . Click Delete on the confirmation dialog window to remove the ServiceMeshControlPlane . 2.12.1.2. Removing the Service Mesh control plane using the CLI You can remove the Red Hat OpenShift Service Mesh control plane by using the CLI. In this example, istio-system is the name of the control plane project. Procedure Log in to the OpenShift Container Platform CLI. Run the following command to delete the ServiceMeshMemberRoll resource. USD oc delete smmr -n istio-system default Run this command to retrieve the name of the installed ServiceMeshControlPlane : USD oc get smcp -n istio-system Replace <name_of_custom_resource> with the output from the previous command, and run this command to remove the custom resource: USD oc delete smcp -n istio-system <name_of_custom_resource> 2.12.2. Removing the installed Operators You must remove the Operators to successfully remove Red Hat OpenShift Service Mesh. After you remove the Red Hat OpenShift Service Mesh Operator, you must remove the Kiali Operator, the Red Hat OpenShift distributed tracing platform Operator, and the OpenShift Elasticsearch Operator. 2.12.2.1. Removing the Operators Follow this procedure to remove the Operators that make up Red Hat OpenShift Service Mesh. Repeat the steps for each of the following Operators. Red Hat OpenShift Service Mesh Kiali Red Hat OpenShift distributed tracing platform OpenShift Elasticsearch Procedure Log in to the OpenShift Container Platform web console. From the Operators Installed Operators page, scroll or type a keyword into the Filter by name field to find each Operator. Then, click the Operator name. On the Operator Details page, select Uninstall Operator from the Actions menu. Follow the prompts to uninstall each Operator. 2.12.2.2. Clean up Operator resources Follow this procedure to manually remove resources left behind after removing the Red Hat OpenShift Service Mesh Operator using the OpenShift Container Platform web console. Prerequisites An account with cluster administration access. Access to the OpenShift Container Platform Command-line Interface (CLI), also known as oc . Procedure Log in to the OpenShift Container Platform CLI as a cluster administrator. Run the following commands to clean up resources after uninstalling the Operators. If you intend to keep using Jaeger as a standalone service without service mesh, do not delete the Jaeger resources. Note The Operators are installed in the openshift-operators namespace by default.
If you installed the Operators in another namespace, replace openshift-operators with the name of the project where the Red Hat OpenShift Service Mesh Operator was installed. USD oc delete validatingwebhookconfiguration/openshift-operators.servicemesh-resources.maistra.io USD oc delete mutatingwebhookconfiguration/openshift-operators.servicemesh-resources.maistra.io USD oc delete -n openshift-operators daemonset/istio-node USD oc delete clusterrole/istio-admin clusterrole/istio-cni clusterrolebinding/istio-cni USD oc delete clusterrole istio-view istio-edit USD oc delete clusterrole jaegers.jaegertracing.io-v1-admin jaegers.jaegertracing.io-v1-crdview jaegers.jaegertracing.io-v1-edit jaegers.jaegertracing.io-v1-view USD oc get crds -o name | grep '.*\.istio\.io' | xargs -r -n 1 oc delete USD oc get crds -o name | grep '.*\.maistra\.io' | xargs -r -n 1 oc delete USD oc get crds -o name | grep '.*\.kiali\.io' | xargs -r -n 1 oc delete USD oc delete crds jaegers.jaegertracing.io USD oc delete svc admission-controller -n <operator-project> USD oc delete project <istio-system-project>
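After the cleanup commands finish, you can optionally confirm that no Service Mesh resources remain. The following checks are a minimal verification sketch, assuming the default istio-system control plane project and the openshift-operators Operator namespace; adjust the names to match your installation.
oc get crds -o name | grep -E 'istio|maistra|kiali'                                    # should return no CRDs
oc get validatingwebhookconfigurations,mutatingwebhookconfigurations -o name | grep maistra   # should return nothing
oc get project istio-system                                                            # should report that the project is not found
If any of these commands still return objects, rerun the corresponding delete command from the preceding list.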
[ "oc adm must-gather --image=registry.redhat.io/container-native-virtualization/cnv-must-gather-rhel8:v4.9.0", "oc adm must-gather -- /usr/bin/gather_audit_logs", "NAMESPACE NAME READY STATUS RESTARTS AGE openshift-must-gather-5drcj must-gather-bklx4 2/2 Running 0 72s openshift-must-gather-5drcj must-gather-s8sdh 2/2 Running 0 72s", "oc adm must-gather --image=registry.redhat.io/openshift-service-mesh/istio-must-gather-rhel8", "oc adm must-gather --image=registry.redhat.io/openshift-service-mesh/istio-must-gather-rhel8 gather <namespace>", "apiVersion: security.istio.io/v1beta1 kind: AuthorizationPolicy metadata: name: httpbin namespace: foo spec: action: DENY rules: - from: - source: namespaces: [\"dev\"] to: - operation: hosts: [\"httpbin.com\",\"httpbin.com:*\"]", "apiVersion: security.istio.io/v1beta1 kind: AuthorizationPolicy metadata: name: httpbin namespace: default spec: action: DENY rules: - to: - operation: hosts: [\"httpbin.example.com:*\"]", "spec: global: pathNormalization: <option>", "{ \"runtime\": { \"symlink_root\": \"/var/lib/istio/envoy/runtime\" } }", "oc create secret generic -n <SMCPnamespace> gateway-bootstrap --from-file=bootstrap-override.json", "apiVersion: maistra.io/v1 kind: ServiceMeshControlPlane spec: istio: gateways: istio-ingressgateway: env: ISTIO_BOOTSTRAP_OVERRIDE: /var/lib/istio/envoy/custom-bootstrap/bootstrap-override.json secretVolumes: - mountPath: /var/lib/istio/envoy/custom-bootstrap name: custom-bootstrap secretName: gateway-bootstrap", "oc create secret generic -n <SMCPnamespace> gateway-settings --from-literal=overload.global_downstream_max_connections=10000", "apiVersion: maistra.io/v1 kind: ServiceMeshControlPlane spec: template: default #Change the version to \"v1.0\" if you are on the 1.0 stream. version: v1.1 istio: gateways: istio-ingressgateway: env: ISTIO_BOOTSTRAP_OVERRIDE: /var/lib/istio/envoy/custom-bootstrap/bootstrap-override.json secretVolumes: - mountPath: /var/lib/istio/envoy/custom-bootstrap name: custom-bootstrap secretName: gateway-bootstrap # below is the new secret mount - mountPath: /var/lib/istio/envoy/runtime name: gateway-settings secretName: gateway-settings", "oc get jaeger -n istio-system", "NAME AGE jaeger 3d21h", "oc get jaeger jaeger -oyaml -n istio-system > /tmp/jaeger-cr.yaml", "oc delete jaeger jaeger -n istio-system", "oc create -f /tmp/jaeger-cr.yaml -n istio-system", "rm /tmp/jaeger-cr.yaml", "oc delete -f <jaeger-cr-file>", "oc delete -f jaeger-prod-elasticsearch.yaml", "oc create -f <jaeger-cr-file>", "oc get pods -n jaeger-system -w", "spec: version: v1.1", "{\"level\":\"warn\",\"ts\":1642438880.918793,\"caller\":\"channelz/logging.go:62\",\"msg\":\"[core]grpc: Server.Serve failed to create ServerTransport: connection error: desc = \\\"transport: http2Server.HandleStreams received bogus greeting from client: \\\\\\\"\\\\\\\\x16\\\\\\\\x03\\\\\\\\x01\\\\\\\\x02\\\\\\\\x00\\\\\\\\x01\\\\\\\\x00\\\\\\\\x01\\\\\\\\xfc\\\\\\\\x03\\\\\\\\x03vw\\\\\\\\x1a\\\\\\\\xc9T\\\\\\\\xe7\\\\\\\\xdaCj\\\\\\\\xb7\\\\\\\\x8dK\\\\\\\\xa6\\\\\\\"\\\"\",\"system\":\"grpc\",\"grpc_log\":true}", "apiVersion: \"rbac.istio.io/v1alpha1\" kind: ServiceRoleBinding metadata: name: httpbin-client-binding namespace: httpbin spec: subjects: - user: \"cluster.local/ns/istio-system/sa/istio-ingressgateway-service-account\" properties: request.headers[<header>]: \"value\"", "apiVersion: \"rbac.istio.io/v1alpha1\" kind: ServiceRoleBinding metadata: name: httpbin-client-binding namespace: httpbin spec: subjects: - user: 
\"cluster.local/ns/istio-system/sa/istio-ingressgateway-service-account\" properties: request.regex.headers[<header>]: \"<regular expression>\"", "oc login --username=<NAMEOFUSER> https://<HOSTNAME>:6443", "oc new-project istio-system", "oc create -n istio-system -f istio-installation.yaml", "oc get smcp -n istio-system", "NAME READY STATUS PROFILES VERSION AGE basic-install 11/11 ComponentsReady [\"default\"] v1.1.18 4m25s", "oc get pods -n istio-system -w", "NAME READY STATUS RESTARTS AGE grafana-7bf5764d9d-2b2f6 2/2 Running 0 28h istio-citadel-576b9c5bbd-z84z4 1/1 Running 0 28h istio-egressgateway-5476bc4656-r4zdv 1/1 Running 0 28h istio-galley-7d57b47bb7-lqdxv 1/1 Running 0 28h istio-ingressgateway-dbb8f7f46-ct6n5 1/1 Running 0 28h istio-pilot-546bf69578-ccg5x 2/2 Running 0 28h istio-policy-77fd498655-7pvjw 2/2 Running 0 28h istio-sidecar-injector-df45bd899-ctxdt 1/1 Running 0 28h istio-telemetry-66f697d6d5-cj28l 2/2 Running 0 28h jaeger-896945cbc-7lqrr 2/2 Running 0 11h kiali-78d9c5b87c-snjzh 1/1 Running 0 22h prometheus-6dff867c97-gr2n5 2/2 Running 0 28h", "oc login --username=<NAMEOFUSER> https://<HOSTNAME>:6443", "oc new-project <your-project>", "apiVersion: maistra.io/v1 kind: ServiceMeshMemberRoll metadata: name: default namespace: istio-system spec: members: # a list of projects joined into the service mesh - your-project-name - another-project-name", "oc create -n istio-system -f servicemeshmemberroll-default.yaml", "oc get smmr -n istio-system default", "oc edit smmr -n <controlplane-namespace>", "apiVersion: maistra.io/v1 kind: ServiceMeshMemberRoll metadata: name: default namespace: istio-system #control plane project spec: members: # a list of projects joined into the service mesh - your-project-name - another-project-name", "oc patch deployment/<deployment> -p '{\"spec\":{\"template\":{\"metadata\":{\"annotations\":{\"kubectl.kubernetes.io/restartedAt\": \"'`date -Iseconds`'\"}}}}}'", "apiVersion: maistra.io/v1 kind: ServiceMeshControlPlane spec: istio: global: mtls: enabled: true", "apiVersion: \"authentication.istio.io/v1alpha1\" kind: \"Policy\" metadata: name: default namespace: <NAMESPACE> spec: peers: - mtls: {}", "apiVersion: \"networking.istio.io/v1alpha3\" kind: \"DestinationRule\" metadata: name: \"default\" namespace: <CONTROL_PLANE_NAMESPACE>> spec: host: \"*.local\" trafficPolicy: tls: mode: ISTIO_MUTUAL", "apiVersion: maistra.io/v1 kind: ServiceMeshControlPlane spec: istio: global: tls: minProtocolVersion: TLSv1_2 maxProtocolVersion: TLSv1_3", "oc create secret generic cacerts -n istio-system --from-file=<path>/ca-cert.pem --from-file=<path>/ca-key.pem --from-file=<path>/root-cert.pem --from-file=<path>/cert-chain.pem", "apiVersion: maistra.io/v1 kind: ServiceMeshControlPlane spec: istio: global: mtls: enabled: true security: selfSigned: false", "oc delete secret istio.default", "RATINGSPOD=`oc get pods -l app=ratings -o jsonpath='{.items[0].metadata.name}'`", "oc exec -it USDRATINGSPOD -c istio-proxy -- /bin/cat /etc/certs/root-cert.pem > /tmp/pod-root-cert.pem", "oc exec -it USDRATINGSPOD -c istio-proxy -- /bin/cat /etc/certs/cert-chain.pem > /tmp/pod-cert-chain.pem", "openssl x509 -in <path>/root-cert.pem -text -noout > /tmp/root-cert.crt.txt", "openssl x509 -in /tmp/pod-root-cert.pem -text -noout > /tmp/pod-root-cert.crt.txt", "diff /tmp/root-cert.crt.txt /tmp/pod-root-cert.crt.txt", "sed '0,/^-----END CERTIFICATE-----/d' /tmp/pod-cert-chain.pem > /tmp/pod-cert-chain-ca.pem", "openssl x509 -in <path>/ca-cert.pem -text -noout > /tmp/ca-cert.crt.txt", 
"openssl x509 -in /tmp/pod-cert-chain-ca.pem -text -noout > /tmp/pod-cert-chain-ca.crt.txt", "diff /tmp/ca-cert.crt.txt /tmp/pod-cert-chain-ca.crt.txt", "head -n 21 /tmp/pod-cert-chain.pem > /tmp/pod-cert-chain-workload.pem", "openssl verify -CAfile <(cat <path>/ca-cert.pem <path>/root-cert.pem) /tmp/pod-cert-chain-workload.pem", "/tmp/pod-cert-chain-workload.pem: OK", "oc delete secret cacerts -n istio-system", "apiVersion: maistra.io/v1 kind: ServiceMeshControlPlane spec: istio: global: mtls: enabled: true security: selfSigned: true", "apiVersion: networking.istio.io/v1alpha3 kind: Gateway metadata: name: ext-host-gwy spec: selector: istio: ingressgateway # use istio default controller servers: - port: number: 443 name: https protocol: HTTPS hosts: - ext-host.example.com tls: mode: SIMPLE serverCertificate: /tmp/tls.crt privateKey: /tmp/tls.key", "apiVersion: networking.istio.io/v1alpha3 kind: VirtualService metadata: name: virtual-svc spec: hosts: - ext-host.example.com gateways: - ext-host-gwy", "apiVersion: networking.istio.io/v1alpha3 kind: Gateway metadata: name: bookinfo-gateway spec: selector: istio: ingressgateway servers: - port: number: 80 name: http protocol: HTTP hosts: - \"*\"", "oc apply -f gateway.yaml", "apiVersion: networking.istio.io/v1alpha3 kind: VirtualService metadata: name: bookinfo spec: hosts: - \"*\" gateways: - bookinfo-gateway http: - match: - uri: exact: /productpage - uri: prefix: /static - uri: exact: /login - uri: exact: /logout - uri: prefix: /api/v1/products route: - destination: host: productpage port: number: 9080", "oc apply -f vs.yaml", "export GATEWAY_URL=USD(oc -n istio-system get route istio-ingressgateway -o jsonpath='{.spec.host}')", "export TARGET_PORT=USD(oc -n istio-system get route istio-ingressgateway -o jsonpath='{.spec.port.targetPort}')", "curl -s -I \"USDGATEWAY_URL/productpage\"", "oc get svc istio-ingressgateway -n istio-system", "export INGRESS_HOST=USD(oc -n istio-system get service istio-ingressgateway -o jsonpath='{.status.loadBalancer.ingress[0].ip}')", "export INGRESS_PORT=USD(oc -n istio-system get service istio-ingressgateway -o jsonpath='{.spec.ports[?(@.name==\"http2\")].port}')", "export SECURE_INGRESS_PORT=USD(oc -n istio-system get service istio-ingressgateway -o jsonpath='{.spec.ports[?(@.name==\"https\")].port}')", "export TCP_INGRESS_PORT=USD(kubectl -n istio-system get service istio-ingressgateway -o jsonpath='{.spec.ports[?(@.name==\"tcp\")].port}')", "export INGRESS_HOST=USD(oc -n istio-system get service istio-ingressgateway -o jsonpath='{.status.loadBalancer.ingress[0].hostname}')", "export INGRESS_PORT=USD(oc -n istio-system get service istio-ingressgateway -o jsonpath='{.spec.ports[?(@.name==\"http2\")].nodePort}')", "export SECURE_INGRESS_PORT=USD(oc -n istio-system get service istio-ingressgateway -o jsonpath='{.spec.ports[?(@.name==\"https\")].nodePort}')", "export TCP_INGRESS_PORT=USD(kubectl -n istio-system get service istio-ingressgateway -o jsonpath='{.spec.ports[?(@.name==\"tcp\")].nodePort}')", "spec: istio: gateways: istio-egressgateway: autoscaleEnabled: false autoscaleMin: 1 autoscaleMax: 5 istio-ingressgateway: autoscaleEnabled: false autoscaleMin: 1 autoscaleMax: 5 ior_enabled: true", "apiVersion: networking.istio.io/v1alpha3 kind: Gateway metadata: name: gateway1 spec: selector: istio: ingressgateway servers: - port: number: 80 name: http protocol: HTTP hosts: - www.bookinfo.com - bookinfo.example.com", "oc -n <control_plane_namespace> get routes", "NAME HOST/PORT PATH SERVICES PORT TERMINATION 
WILDCARD gateway1-lvlfn bookinfo.example.com istio-ingressgateway <all> None gateway1-scqhv www.bookinfo.com istio-ingressgateway <all> None", "apiVersion: networking.istio.io/v1alpha3 kind: ServiceEntry metadata: name: svc-entry spec: hosts: - ext-svc.example.com ports: - number: 443 name: https protocol: HTTPS location: MESH_EXTERNAL resolution: DNS", "apiVersion: networking.istio.io/v1alpha3 kind: DestinationRule metadata: name: ext-res-dr spec: host: ext-svc.example.com trafficPolicy: tls: mode: MUTUAL clientCertificate: /etc/certs/myclientcert.pem privateKey: /etc/certs/client_private_key.pem caCertificates: /etc/certs/rootcacerts.pem", "apiVersion: networking.istio.io/v1alpha3 kind: VirtualService metadata: name: reviews spec: hosts: - reviews http: - match: - headers: end-user: exact: jason route: - destination: host: reviews subset: v2 - route: - destination: host: reviews subset: v3", "oc apply -f <VirtualService.yaml>", "spec: hosts:", "spec: http: - match:", "spec: http: - match: - destination:", "apiVersion: networking.istio.io/v1alpha3 kind: DestinationRule metadata: name: my-destination-rule spec: host: my-svc trafficPolicy: loadBalancer: simple: RANDOM subsets: - name: v1 labels: version: v1 - name: v2 labels: version: v2 trafficPolicy: loadBalancer: simple: ROUND_ROBIN - name: v3 labels: version: v3", "oc apply -f https://raw.githubusercontent.com/Maistra/istio/maistra-2.2/samples/bookinfo/networking/virtual-service-all-v1.yaml", "oc get virtualservices -o yaml", "export GATEWAY_URL=USD(oc -n istio-system get route istio-ingressgateway -o jsonpath='{.spec.host}')", "echo \"http://USDGATEWAY_URL/productpage\"", "oc apply -f https://raw.githubusercontent.com/Maistra/istio/maistra-2.2/samples/bookinfo/networking/virtual-service-reviews-test-v2.yaml", "oc get virtualservice reviews -o yaml", "oc create configmap --from-file=<templates-directory> smcp-templates -n openshift-operators", "oc get clusterserviceversion -n openshift-operators | grep 'Service Mesh'", "maistra.v1.0.0 Red Hat OpenShift Service Mesh 1.0.0 Succeeded", "oc edit clusterserviceversion -n openshift-operators maistra.v1.0.0", "deployments: - name: istio-operator spec: template: spec: containers: volumeMounts: - name: discovery-cache mountPath: /home/istio-operator/.kube/cache/discovery - name: smcp-templates mountPath: /usr/local/share/istio-operator/templates/ volumes: - name: discovery-cache emptyDir: medium: Memory - name: smcp-templates configMap: name: smcp-templates", "apiVersion: maistra.io/v1 kind: ServiceMeshControlPlane metadata: name: minimal-install spec: template: default", "oc get deployment -n <namespace>", "get deployment -n bookinfo ratings-v1 -o yaml", "apiVersion: apps/v1 kind: Deployment metadata: name: ratings-v1 namespace: bookinfo labels: app: ratings version: v1 spec: template: metadata: annotations: sidecar.istio.io/inject: 'true'", "oc apply -n <namespace> -f deployment.yaml", "oc apply -n bookinfo -f deployment-ratings-v1.yaml", "oc get deployment -n <namespace> <deploymentName> -o yaml", "oc get deployment -n bookinfo ratings-v1 -o yaml", "apiVersion: apps/v1 kind: Deployment metadata: name: resource spec: replicas: 7 selector: matchLabels: app: resource template: metadata: annotations: sidecar.maistra.io/proxyEnv: \"{ \\\"maistra_test_env\\\": \\\"env_value\\\", \\\"maistra_test_env_2\\\": \\\"env_value_2\\\" }\"", "oc get cm -n <istio-system> istio -o jsonpath='{.data.mesh}' | grep disablePolicyChecks", "oc edit cm -n <istio-system> istio", "oc new-project bookinfo", "apiVersion: 
maistra.io/v1 kind: ServiceMeshMemberRoll metadata: name: default spec: members: - bookinfo", "oc create -n istio-system -f servicemeshmemberroll-default.yaml", "oc get smmr -n istio-system -o wide", "NAME READY STATUS AGE MEMBERS default 1/1 Configured 70s [\"bookinfo\"]", "oc apply -n bookinfo -f https://raw.githubusercontent.com/Maistra/istio/maistra-2.2/samples/bookinfo/platform/kube/bookinfo.yaml", "service/details created serviceaccount/bookinfo-details created deployment.apps/details-v1 created service/ratings created serviceaccount/bookinfo-ratings created deployment.apps/ratings-v1 created service/reviews created serviceaccount/bookinfo-reviews created deployment.apps/reviews-v1 created deployment.apps/reviews-v2 created deployment.apps/reviews-v3 created service/productpage created serviceaccount/bookinfo-productpage created deployment.apps/productpage-v1 created", "oc apply -n bookinfo -f https://raw.githubusercontent.com/Maistra/istio/maistra-2.2/samples/bookinfo/networking/bookinfo-gateway.yaml", "gateway.networking.istio.io/bookinfo-gateway created virtualservice.networking.istio.io/bookinfo created", "export GATEWAY_URL=USD(oc -n istio-system get route istio-ingressgateway -o jsonpath='{.spec.host}')", "oc apply -n bookinfo -f https://raw.githubusercontent.com/Maistra/istio/maistra-2.2/samples/bookinfo/networking/destination-rule-all.yaml", "oc apply -n bookinfo -f https://raw.githubusercontent.com/Maistra/istio/maistra-2.2/samples/bookinfo/networking/destination-rule-all-mtls.yaml", "destinationrule.networking.istio.io/productpage created destinationrule.networking.istio.io/reviews created destinationrule.networking.istio.io/ratings created destinationrule.networking.istio.io/details created", "oc get pods -n bookinfo", "NAME READY STATUS RESTARTS AGE details-v1-55b869668-jh7hb 2/2 Running 0 12m productpage-v1-6fc77ff794-nsl8r 2/2 Running 0 12m ratings-v1-7d7d8d8b56-55scn 2/2 Running 0 12m reviews-v1-868597db96-bdxgq 2/2 Running 0 12m reviews-v2-5b64f47978-cvssp 2/2 Running 0 12m reviews-v3-6dfd49b55b-vcwpf 2/2 Running 0 12m", "echo \"http://USDGATEWAY_URL/productpage\"", "oc delete project bookinfo", "oc -n istio-system patch --type='json' smmr default -p '[{\"op\": \"remove\", \"path\": \"/spec/members\", \"value\":[\"'\"bookinfo\"'\"]}]'", "curl \"http://USDGATEWAY_URL/productpage\"", "export JAEGER_URL=USD(oc get route -n istio-system jaeger -o jsonpath='{.spec.host}')", "echo USDJAEGER_URL", "curl \"http://USDGATEWAY_URL/productpage\"", "apiVersion: maistra.io/v1 kind: ServiceMeshControlPlane metadata: name: basic-install spec: istio: global: proxy: resources: requests: cpu: 100m memory: 128Mi limits: cpu: 500m memory: 128Mi gateways: istio-egressgateway: autoscaleEnabled: false istio-ingressgateway: autoscaleEnabled: false ior_enabled: false mixer: policy: autoscaleEnabled: false telemetry: autoscaleEnabled: false resources: requests: cpu: 100m memory: 1G limits: cpu: 500m memory: 4G pilot: autoscaleEnabled: false traceSampling: 100 kiali: enabled: true grafana: enabled: true tracing: enabled: true jaeger: template: all-in-one", "istio: global: tag: 1.1.0 hub: registry.redhat.io/openshift-service-mesh/ proxy: resources: requests: cpu: 10m memory: 128Mi limits: mtls: enabled: false disablePolicyChecks: true policyCheckFailOpen: false imagePullSecrets: - MyPullSecret", "gateways: egress: enabled: true runtime: deployment: autoScaling: enabled: true maxReplicas: 5 minReplicas: 1 enabled: true ingress: enabled: true runtime: deployment: autoScaling: enabled: true 
maxReplicas: 5 minReplicas: 1", "mixer: enabled: true policy: autoscaleEnabled: false telemetry: autoscaleEnabled: false resources: requests: cpu: 10m memory: 128Mi limits:", "spec: runtime: components: pilot: deployment: autoScaling: enabled: true minReplicas: 1 maxReplicas: 5 targetCPUUtilizationPercentage: 85 pod: tolerations: - key: node.kubernetes.io/unreachable operator: Exists effect: NoExecute tolerationSeconds: 60 affinity: podAntiAffinity: requiredDuringScheduling: - key: istio topologyKey: kubernetes.io/hostname operator: In values: - pilot container: resources: limits: cpu: 100m memory: 128M", "apiVersion: maistra.io/v1 kind: ServiceMeshControlPlane spec: kiali: enabled: true dashboard: viewOnlyMode: false ingress: enabled: true", "enabled", "dashboard viewOnlyMode", "ingress enabled", "spec: kiali: enabled: true dashboard: viewOnlyMode: false grafanaURL: \"https://grafana-istio-system.127.0.0.1.nip.io\" ingress: enabled: true", "spec: kiali: enabled: true dashboard: viewOnlyMode: false jaegerURL: \"http://jaeger-query-istio-system.127.0.0.1.nip.io\" ingress: enabled: true", "apiVersion: maistra.io/v1 kind: ServiceMeshControlPlane spec: version: v1.1 istio: tracing: enabled: true jaeger: template: all-in-one", "tracing: enabled:", "jaeger: template:", "apiVersion: maistra.io/v1 kind: ServiceMeshControlPlane spec: istio: tracing: enabled: true ingress: enabled: true jaeger: template: production-elasticsearch elasticsearch: nodeCount: 3 redundancyPolicy: resources: requests: cpu: \"1\" memory: \"16Gi\" limits: cpu: \"1\" memory: \"16Gi\"", "tracing: enabled:", "ingress: enabled:", "jaeger: template:", "elasticsearch: nodeCount:", "requests: cpu:", "requests: memory:", "limits: cpu:", "limits: memory:", "oc get route -n istio-system external-jaeger", "NAME HOST/PORT PATH SERVICES [...] external-jaeger external-jaeger-istio-system.apps.test external-jaeger-query [...]", "apiVersion: jaegertracing.io/v1 kind: \"Jaeger\" metadata: name: \"external-jaeger\" # Deploy to the Control Plane Namespace namespace: istio-system spec: # Set Up Authentication ingress: enabled: true security: oauth-proxy openshift: # This limits user access to the Jaeger instance to users who have access # to the control plane namespace. 
Make sure to set the correct namespace here sar: '{\"namespace\": \"istio-system\", \"resource\": \"pods\", \"verb\": \"get\"}' htpasswdFile: /etc/proxy/htpasswd/auth volumeMounts: - name: secret-htpasswd mountPath: /etc/proxy/htpasswd volumes: - name: secret-htpasswd secret: secretName: htpasswd", "apiVersion: maistra.io/v1 kind: ServiceMeshControlPlane metadata: name: external-jaeger namespace: istio-system spec: version: v1.1 istio: tracing: # Disable Jaeger deployment by service mesh operator enabled: false global: tracer: zipkin: # Set Endpoint for Trace Collection address: external-jaeger-collector.istio-system.svc.cluster.local:9411 kiali: # Set Jaeger dashboard URL dashboard: jaegerURL: https://external-jaeger-istio-system.apps.test # Set Endpoint for Trace Querying jaegerInClusterURL: external-jaeger-query.istio-system.svc.cluster.local", "apiVersion: maistra.io/v1 kind: ServiceMeshControlPlane spec: istio: tracing: enabled: true ingress: enabled: true jaeger: template: production-elasticsearch elasticsearch: nodeCount: 3 redundancyPolicy: resources: requests: cpu: \"1\" memory: \"16Gi\" limits: cpu: \"1\" memory: \"16Gi\"", "tracing: enabled:", "ingress: enabled:", "jaeger: template:", "elasticsearch: nodeCount:", "requests: cpu:", "requests: memory:", "limits: cpu:", "limits: memory:", "apiVersion: jaegertracing.io/v1 kind: Jaeger spec: strategy: production storage: type: elasticsearch esIndexCleaner: enabled: false numberOfDays: 7 schedule: \"55 23 * * *\"", "spec: addons: 3Scale: enabled: false PARAM_THREESCALE_LISTEN_ADDR: 3333 PARAM_THREESCALE_LOG_LEVEL: info PARAM_THREESCALE_LOG_JSON: true PARAM_THREESCALE_LOG_GRPC: false PARAM_THREESCALE_REPORT_METRICS: true PARAM_THREESCALE_METRICS_PORT: 8080 PARAM_THREESCALE_CACHE_TTL_SECONDS: 300 PARAM_THREESCALE_CACHE_REFRESH_SECONDS: 180 PARAM_THREESCALE_CACHE_ENTRIES_MAX: 1000 PARAM_THREESCALE_CACHE_REFRESH_RETRIES: 1 PARAM_THREESCALE_ALLOW_INSECURE_CONN: false PARAM_THREESCALE_CLIENT_TIMEOUT_SECONDS: 10 PARAM_THREESCALE_GRPC_CONN_MAX_SECONDS: 60 PARAM_USE_CACHED_BACKEND: false PARAM_BACKEND_CACHE_FLUSH_INTERVAL_SECONDS: 15 PARAM_BACKEND_CACHE_POLICY_FAIL_CLOSED: true", "apiVersion: \"config.istio.io/v1alpha2\" kind: handler metadata: name: threescale spec: adapter: threescale params: system_url: \"https://<organization>-admin.3scale.net/\" access_token: \"<ACCESS_TOKEN>\" connection: address: \"threescale-istio-adapter:3333\"", "apiVersion: \"config.istio.io/v1alpha2\" kind: rule metadata: name: threescale spec: match: destination.labels[\"service-mesh.3scale.net\"] == \"true\" actions: - handler: threescale.handler instances: - threescale-authorization.instance", "3scale-config-gen --name=admin-credentials --url=\"https://<organization>-admin.3scale.net:443\" --token=\"[redacted]\"", "3scale-config-gen --url=\"https://<organization>-admin.3scale.net\" --name=\"my-unique-id\" --service=\"123456789\" --token=\"[redacted]\"", "export NS=\"istio-system\" URL=\"https://replaceme-admin.3scale.net:443\" NAME=\"name\" TOKEN=\"token\" exec -n USD{NS} USD(oc get po -n USD{NS} -o jsonpath='{.items[?(@.metadata.labels.app==\"3scale-istio-adapter\")].metadata.name}') -it -- ./3scale-config-gen --url USD{URL} --name USD{NAME} --token USD{TOKEN} -n USD{NS}", "export CREDENTIALS_NAME=\"replace-me\" export SERVICE_ID=\"replace-me\" export DEPLOYMENT=\"replace-me\" patch=\"USD(oc get deployment \"USD{DEPLOYMENT}\" patch=\"USD(oc get deployment \"USD{DEPLOYMENT}\" --template='{\"spec\":{\"template\":{\"metadata\":{\"labels\":{ {{ range USDk,USDv := 
.spec.template.metadata.labels }}\"{{ USDk }}\":\"{{ USDv }}\",{{ end }}\"service-mesh.3scale.net/service-id\":\"'\"USD{SERVICE_ID}\"'\",\"service-mesh.3scale.net/credentials\":\"'\"USD{CREDENTIALS_NAME}\"'\"}}}}}' )\" patch deployment \"USD{DEPLOYMENT}\" --patch ''\"USD{patch}\"''", "apiVersion: \"config.istio.io/v1alpha2\" kind: instance metadata: name: threescale-authorization namespace: istio-system spec: template: authorization params: subject: user: request.query_params[\"user_key\"] | request.headers[\"user-key\"] | \"\" action: path: request.url_path method: request.method | \"get\"", "apiVersion: \"config.istio.io/v1alpha2\" kind: instance metadata: name: threescale-authorization namespace: istio-system spec: template: authorization params: subject: app_id: request.query_params[\"app_id\"] | request.headers[\"app-id\"] | \"\" app_key: request.query_params[\"app_key\"] | request.headers[\"app-key\"] | \"\" action: path: request.url_path method: request.method | \"get\"", "apiVersion: \"config.istio.io/v1alpha2\" kind: instance metadata: name: threescale-authorization spec: template: threescale-authorization params: subject: properties: app_key: request.query_params[\"app_key\"] | request.headers[\"app-key\"] | \"\" client_id: request.auth.claims[\"azp\"] | \"\" action: path: request.url_path method: request.method | \"get\" service: destination.labels[\"service-mesh.3scale.net/service-id\"] | \"\"", "apiVersion: security.istio.io/v1beta1 kind: RequestAuthentication metadata: name: jwt-example namespace: bookinfo spec: selector: matchLabels: app: productpage jwtRules: - issuer: >- http://keycloak-keycloak.34.242.107.254.nip.io/auth/realms/3scale-keycloak jwksUri: >- http://keycloak-keycloak.34.242.107.254.nip.io/auth/realms/3scale-keycloak/protocol/openid-connect/certs", "apiVersion: \"config.istio.io/v1alpha2\" kind: instance metadata: name: threescale-authorization spec: template: authorization params: subject: user: request.query_params[\"user_key\"] | request.headers[\"user-key\"] | properties: app_id: request.query_params[\"app_id\"] | request.headers[\"app-id\"] | \"\" app_key: request.query_params[\"app_key\"] | request.headers[\"app-key\"] | \"\" client_id: request.auth.claims[\"azp\"] | \"\" action: path: request.url_path method: request.method | \"get\" service: destination.labels[\"service-mesh.3scale.net/service-id\"] | \"\"", "oc get pods -n <istio-system>", "oc logs <istio-system>", "oc delete smmr -n istio-system default", "oc get smcp -n istio-system", "oc delete smcp -n istio-system <name_of_custom_resource>", "oc delete validatingwebhookconfiguration/openshift-operators.servicemesh-resources.maistra.io", "oc delete mutatingwebhookconfiguration/openshift-operators.servicemesh-resources.maistra.io", "oc delete -n openshift-operators daemonset/istio-node", "oc delete clusterrole/istio-admin clusterrole/istio-cni clusterrolebinding/istio-cni", "oc delete clusterrole istio-view istio-edit", "oc delete clusterrole jaegers.jaegertracing.io-v1-admin jaegers.jaegertracing.io-v1-crdview jaegers.jaegertracing.io-v1-edit jaegers.jaegertracing.io-v1-view", "oc get crds -o name | grep '.*\\.istio\\.io' | xargs -r -n 1 oc delete", "oc get crds -o name | grep '.*\\.maistra\\.io' | xargs -r -n 1 oc delete", "oc get crds -o name | grep '.*\\.kiali\\.io' | xargs -r -n 1 oc delete", "oc delete crds jaegers.jaegertracing.io", "oc delete svc admission-controller -n <operator-project>", "oc delete project <istio-system-project>" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.7/html/service_mesh/service-mesh-1-x
Chapter 1. About Pipelines as Code
Chapter 1. About Pipelines as Code With Pipelines as Code, cluster administrators and users with the required privileges can define pipeline templates as part of source code Git repositories. When triggered by a source code push or a pull request for the configured Git repository, Pipelines as Code runs the pipeline and reports the status. 1.1. Key features Pipelines as Code supports the following features: Pull request status and control on the platform hosting the Git repository. GitHub Checks API to set the status of a pipeline run, including rechecks. GitHub pull request and commit events. Pull request actions in comments, such as /retest . Git events filtering and a separate pipeline for each event. Automatic task resolution in OpenShift Pipelines, including local tasks, Tekton Hub, and remote URLs. Retrieval of configurations using GitHub blobs and objects API. Access Control List (ACL) over a GitHub organization or using a Prow style OWNERS file. The tkn pac CLI plugin for managing bootstrapping and Pipelines as Code repositories. Support for GitHub App, GitHub Webhook, Bitbucket Data Center, and Bitbucket Cloud.
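For illustration, a pipeline definition that Pipelines as Code picks up is typically stored as a PipelineRun file in the .tekton directory of the repository. The following is a minimal sketch, not an official template: the file contents, branch name, and the single echo task are assumptions, while the pipelinesascode.tekton.dev annotations are the mechanism Pipelines as Code uses to match Git events.
apiVersion: tekton.dev/v1beta1
kind: PipelineRun
metadata:
  name: pull-request-checks
  annotations:
    # Run on pull requests that target the main branch (assumed branch name)
    pipelinesascode.tekton.dev/on-event: "[pull_request]"
    pipelinesascode.tekton.dev/on-target-branch: "[main]"
spec:
  pipelineSpec:
    tasks:
      - name: say-hello
        taskSpec:
          steps:
            - name: echo
              image: registry.access.redhat.com/ubi9/ubi-minimal
              script: |
                echo "Pipelines as Code triggered this PipelineRun"
When a matching pull request event arrives from the configured Git provider, Pipelines as Code creates this PipelineRun on the cluster and reports its status back to the pull request.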
null
https://docs.redhat.com/en/documentation/red_hat_openshift_pipelines/1.18/html/pipelines_as_code/about-pipelines-as-code
Chapter 11. Tuning a Red Hat OpenStack Platform environment
Chapter 11. Tuning a Red Hat OpenStack Platform environment 11.1. Pinning emulator threads Emulator threads handle interrupt requests and non-blocking processes for virtual machine hardware emulation. These threads float across the CPUs that the guest uses for processing. If threads used for the poll mode driver (PMD) or real-time processing run on these guest CPUs, you can experience packet loss or missed deadlines. You can separate emulator threads from VM processing tasks by pinning the threads to their own guest CPUs, increasing performance as a result. To improve performance, reserve a subset of host CPUs for hosting emulator threads. Procedure Deploy an overcloud with NovaComputeCpuSharedSet defined for a given role. The value of NovaComputeCpuSharedSet applies to the cpu_shared_set parameter in the nova.conf file for hosts within that role. Create a flavor to build instances with emulator threads separated into a shared pool. Add the hw:emulator_threads_policy extra specification, and set the value to share . Instances created with this flavor will use the instance CPUs defined in the cpu_shared_set parameter in the nova.conf file. Note You must set the cpu_shared_set parameter in the nova.conf file to enable the share policy for this extra specification. Preferably, use heat to set this parameter, because manual edits to the nova.conf file might not persist across redeployments. Verification Identify the host and name for a given instance. Use SSH to log on to the identified host as tripleo-admin. 11.2. Configuring trust between virtual and physical functions You can configure trust between physical functions (PFs) and virtual functions (VFs), so that VFs can perform privileged actions, such as enabling promiscuous mode, or modifying a hardware address. Prerequisites An operational installation of Red Hat OpenStack Platform including director Procedure Complete the following steps to configure and deploy the overcloud with trust between physical and virtual functions: Add the NeutronPhysicalDevMappings parameter in the parameter_defaults section to link the logical network name to the physical interface. Add the new property, trusted , to the SR-IOV parameters. Note You must include double quotation marks around the value "true". 11.3. Utilizing trusted VF networks Create a network of type vlan . Create a subnet. Create a port. Set the vnic-type option to direct , and the binding-profile option to true . Create an instance, and bind it to the previously-created trusted port. Verification Confirm the trusted VF configuration on the hypervisor: On the compute node where you created the instance, enter the following command: Verify that the trust status of the VF is trust on . The example output contains details of an environment that contains two ports. Note that vf 6 contains the text trust on . You can disable spoof checking if you set port_security_enabled: false in the Networking service (neutron) network, or if you include the argument --disable-port-security when you run the openstack port create command. 11.4. Preventing packet loss by managing RX-TX queue size You can experience packet loss at high packet rates above 3.5 million packets per second (mpps) for many reasons, such as: a network interrupt, an SMI, or packet processing latency in the Virtual Network Function. To prevent packet loss, increase the queue size from the default of 512 to a maximum of 1024. Prerequisites Access to the undercloud host and credentials for the stack user.
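For reference, the custom environment file that the following procedure creates usually needs only two entries. This is a sketch, assuming the NovaLibvirtRxQueueSize and NovaLibvirtTxQueueSize heat parameters, which are commonly used for this purpose; confirm the parameter names against your RHOSP release.
parameter_defaults:
  # Increase the virtio RX and TX queue depth from the default of 512 to 1024
  NovaLibvirtRxQueueSize: 1024
  NovaLibvirtTxQueueSize: 1024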
Procedure Log in to the undercloud host as the stack user. Source the stackrc undercloud credentials file: Create a custom environment YAML file and, under parameter_defaults , add definitions to increase the RX and TX queue size, as in the sketch that precedes this procedure: Run the deployment command and include the core heat templates, other environment files, and the environment file that contains your RX and TX queue size changes: Example Verification Observe the values for RX queue size and TX queue size in the nova.conf file. You should see the following: Check the values for RX queue size and TX queue size in the VM instance XML file generated by libvirt on the Compute host: Create a new instance. Obtain the Compute host and instance name: Sample output You should see output similar to the following: Log in to the Compute host and dump the instance definition. Example Sample output You should see output similar to the following: 11.5. Configuring a NUMA-aware vSwitch Important This feature is available in this release as a Technology Preview , and therefore is not fully supported by Red Hat. It should only be used for testing, and should not be deployed in a production environment. For more information about Technology Preview features, see Scope of Coverage Details . Before you implement a NUMA-aware vSwitch, examine the following components of your hardware configuration: The number of physical networks. The placement of PCI cards. The physical architecture of the servers. Memory-mapped I/O (MMIO) devices, such as PCIe NICs, are associated with specific NUMA nodes. When a VM and the NIC are on different NUMA nodes, there is a significant decrease in performance. To increase performance, align PCIe NIC placement and instance processing on the same NUMA node. Use this feature to ensure that instances that share a physical network are located on the same NUMA node. To optimize utilization of datacenter hardware, you must use multiple physnets. Warning To configure NUMA-aware networks for optimal server utilization, you must understand the mapping of the PCIe slot and the NUMA node. For detailed information on your specific hardware, refer to your vendor's documentation. If you fail to plan or implement your NUMA-aware vSwitch correctly, you can cause the servers to use only a single NUMA node. To prevent a cross-NUMA configuration, place the VM on the correct NUMA node by providing the location of the NIC to Nova. Prerequisites You have enabled the filter NUMATopologyFilter . Procedure Set a new NeutronPhysnetNUMANodesMapping parameter to map the physical network to the NUMA node that you associate with the physical network. If you use tunnels, such as VxLAN or GRE, you must also set the NeutronTunnelNUMANodes parameter. Example Here is an example with two physical networks tunneled to NUMA node 0: one project network associated with NUMA node 0, and one management network without any affinity. In this example, assign the physnet of the device named eno2 to NUMA number 0. Observe the physnet settings in the example heat template: Verification Follow these steps to test your NUMA-aware vSwitch: Observe the configuration in the file /var/lib/config-data/puppet-generated/nova_libvirt/etc/nova/nova.conf : Confirm the new configuration with the lscpu command: Launch a VM with the NIC attached to the appropriate network. Additional resources Discovering your NUMA node topology Section 11.6, "Known limitations for NUMA-aware vSwitches" 11.6.
Known limitations for NUMA-aware vSwitches Important This feature is available in this release as a Technology Preview , and therefore is not fully supported by Red Hat. It should only be used for testing, and should not be deployed in a production environment. For more information about Technology Preview features, see Scope of Coverage Details . This section lists the constraints for implementing a NUMA-aware vSwitch in a Red Hat OpenStack Platform (RHOSP) network functions virtualization infrastructure (NFVi). You cannot start a VM that has two NICs connected to physnets on different NUMA nodes, if you did not specify a two-node guest NUMA topology. You cannot start a VM that has one NIC connected to a physnet and another NIC connected to a tunneled network on different NUMA nodes, if you did not specify a two-node guest NUMA topology. You cannot start a VM that has one vhost port and one VF on different NUMA nodes, if you did not specify a two-node guest NUMA topology. NUMA-aware vSwitch parameters are specific to overcloud roles. For example, Compute node 1 and Compute node 2 can have different NUMA topologies. If the interfaces of a VM have NUMA affinity, ensure that the affinity is for a single NUMA node only. You can locate any interface without NUMA affinity on any NUMA node. Configure NUMA affinity for data plane networks, not management networks. NUMA affinity for tunneled networks is a global setting that applies to all VMs. 11.7. Quality of Service (QoS) in NFVi environments You can offer varying service levels for VM instances by using quality of service (QoS) policies to apply rate limits to egress and ingress traffic on Red Hat OpenStack Platform (RHOSP) networks in a network functions virtualization infrastructure (NFVi). In NFVi environments, QoS support is limited to the following rule types: minimum bandwidth on SR-IOV, if supported by vendor. bandwidth limit on SR-IOV and OVS-DPDK egress interfaces. Additional resources Configuring Quality of Service (QoS) policies 11.8. Creating an HCI overcloud that uses DPDK You can deploy your NFV infrastructure with hyperconverged nodes, by co-locating and configuring Compute and Ceph Storage services for optimized resource usage. For more information about hyper-converged infrastructure (HCI), see Deploying a hyperconverged infrastructure . The sections that follow provide examples of various configurations. 11.8.1. Example NUMA node configuration For increased performance, place the tenant network and Ceph object service daemon (OSD)s in one NUMA node, such as NUMA-0, and the VNF and any non-NFV VMs in another NUMA node, such as NUMA-1. CPU allocation: NUMA-0 NUMA-1 Number of Ceph OSDs * 4 HT Guest vCPU for the VNF and non-NFV VMs DPDK lcore - 2 HT DPDK lcore - 2 HT DPDK PMD - 2 HT DPDK PMD - 2 HT Example of CPU allocation: NUMA-0 NUMA-1 Ceph OSD 32,34,36,38,40,42,76,78,80,82,84,86 DPDK-lcore 0,44 1,45 DPDK-pmd 2,46 3,47 nova 5,7,9,11,13,15,17,19,21,23,25,27,29,31,33,35,37,39,41,43,49,51,53,55,57,59,61,63,65,67,69,71,73,75,77,79,81,83,85,87 11.8.2. Example Ceph configuration file This section describes a sample Red Hat Ceph Storage configuration file. You can model your configuration file on this one, by substituting values that are appropriate for your Red Hat OpenStack Platform environment. [osd] osd_numa_node = 0 # 1 osd_memory_target_autotune = true # 2 [mgr] mgr/cephadm/autotune_memory_target_ratio = 0.2 # 3 Assign CPU resources for Ceph Object Storage Daemons (OSDs) processes with the following parameters. 
The values shown here are examples. Adjust the values as appropriate based on your workload and hardware. 1 osd_numa_node : sets the affinity of Ceph processes to a NUMA node, for example, 0 for NUMA-0 , 1 for NUMA-1 , and so on. -1 sets the affinity to no NUMA node. In this example, osd_numa_node is set to NUMA-0 . As shown in Section 11.8.3, "Example DPDK configuration file" , IsolCpusList contains odd-numbered CPUs on NUMA-1 , after elements of OvsPmdCoreList are removed. Because the latency-sensitive Compute service (nova) workload is hosted on NUMA-1 , you must isolate the Ceph workload on NUMA-0 . This example assumes that both the disk controllers and network interfaces for the storage network are on NUMA-0 . 2 osd_memory_target_autotune : when set to true, the OSD daemons adjust their memory consumption based on the osd_memory_target configuration option. 3 autotune_memory_target_ratio : used to allocate memory for OSDs. The default is 0.7 . 70% of the total RAM in the system is the starting point, from which any memory consumed by non-autotuned Ceph daemons is subtracted. When osd_memory_target_autotune is true for all OSDs, the remaining memory is divided among the OSDs. For HCI deployments, the mgr/cephadm/autotune_memory_target_ratio can be set to 0.2 so that more memory is available for the Compute service. Adjust as needed to ensure each OSD has at least 5 GB of memory. Additional resources Section 11.8.6, "Deploying the HCI-DPDK overcloud" 11.8.3. Example DPDK configuration file parameter_defaults: ComputeHCIParameters: KernelArgs: "default_hugepagesz=1GB hugepagesz=1G hugepages=240 intel_iommu=on iommu=pt # 1 isolcpus=2,46,3,47,5,7,9,11,13,15,17,19,21,23,25,27,29,31,33,35,37,39,41,43,49,51,53,55,57,59,61,63,65,67,69,71,73,75,77,79,81,83,85,87" TunedProfileName: "cpu-partitioning" IsolCpusList: # 2 "2,46,3,47,5,7,9,11,13,15,17,19,21,23,25,27,29,31,33,35,37,39,41,43,49,51, 53,55,57,59,61,63,65,67,69,71,73,75,77,79,81,83,85,87" VhostuserSocketGroup: hugetlbfs OvsDpdkSocketMemory: "4096,4096" # 3 OvsDpdkMemoryChannels: "4" OvsPmdCoreList: "2,46,3,47" # 4 1 KernelArgs: To calculate hugepages , subtract the value of the NovaReservedHostMemory parameter from total memory. 2 IsolCpusList: Assign a set of CPU cores that you want to isolate from the host processes with this parameter. Add the value of the OvsPmdCoreList parameter to the value of the NovaComputeCpuDedicatedSet parameter to calculate the value for the IsolCpusList parameter. 3 OvsDpdkSocketMemory: Specify the amount of memory in MB to pre-allocate from the hugepage pool per NUMA node with the OvsDpdkSocketMemory parameter. For more information about calculating OVS-DPDK parameters, see OVS-DPDK parameters . 4 OvsPmdCoreList: Specify the CPU cores that are used for the DPDK poll mode drivers (PMD) with this parameter. Choose CPU cores that are associated with the local NUMA nodes of the DPDK interfaces. Allocate 2 HT sibling threads for each NUMA node to calculate the value for the OvsPmdCoreList parameter. 11.8.4.
Example nova configuration file parameter_defaults: ComputeHCIExtraConfig: nova::cpu_allocation_ratio: 16 # 2 NovaReservedHugePages: # 1 - node:0,size:1GB,count:4 - node:1,size:1GB,count:4 NovaReservedHostMemory: 123904 # 2 # All left over cpus from NUMA-1 NovaComputeCpuDedicatedSet: # 3 ['5','7','9','11','13','15','17','19','21','23','25','27','29','31','33','35','37','39','41','43','49','51','| 53','55','57','59','61','63','65','67','69','71','73','75','77','79','81','83','85','87 1 NovaReservedHugePages: Pre-allocate memory in MB from the hugepage pool with the NovaReservedHugePages parameter. It is the same memory total as the value for the OvsDpdkSocketMemory parameter. 2 NovaReservedHostMemory: Reserve memory in MB for tasks on the host with the NovaReservedHostMemory parameter. Use the following guidelines to calculate the amount of memory that you must reserve: 5 GB for each OSD. 0.5 GB overhead for each VM. 4GB for general host processing. Ensure that you allocate sufficient memory to prevent potential performance degradation caused by cross-NUMA OSD operation. 3 NovaComputeCpuDedicatedSet: List the CPUs not found in OvsPmdCoreList , or Ceph_osd_docker_cpuset_cpus with the NovaComputeCpuDedicatedSet parameter. The CPUs must be in the same NUMA node as the DPDK NICs. 11.8.5. Recommended configuration for HCI-DPDK deployments Table 11.1. Tunable parameters for HCI deployments Block Device Type OSDs, Memory, vCPUs per device NVMe Memory : 5GB per OSD OSDs per device: 4 vCPUs per device: 3 SSD Memory : 5GB per OSD OSDs per device: 1 vCPUs per device: 4 HDD Memory : 5GB per OSD OSDs per device: 1 vCPUs per device: 1 Use the same NUMA node for the following functions: Disk controller Storage networks Storage CPU and memory Allocate another NUMA node for the following functions of the DPDK provider network: NIC PMD CPUs Socket memory 11.8.6. Deploying the HCI-DPDK overcloud Follow these steps to deploy a hyperconverged overcloud that uses DPDK. Prerequisites Red Hat OpenStack Platform (RHOSP) 17.1 or later. The latest version of Red Hat Ceph Storage 6.1. Procedure Generate the roles_data.yaml file for the Controller and the ComputeHCIOvsDpdk roles. Create and configure a new flavor with the openstack flavor create and openstack flavor set commands. Deploy Ceph by using RHOSP director and the Ceph configuration file. Example Deploy the overcloud with the custom roles_data.yaml file that you generated. Example Important This example deploys Ceph RBD (block storage) without Ceph RGW (object storage). To include RGW in the deployment, use cephadm.yaml instead of cephadm-rbd-only.yaml . Additional resources Composable services and custom roles in Customizing your Red Hat OpenStack Platform deployment Section 11.8.2, "Example Ceph configuration file" Configuring the Red Hat Ceph Storage cluster in Deploying Red Hat Ceph Storage and Red Hat OpenStack Platform together with director . 11.9. Synchronize your compute nodes with Timemaster Important This feature is available in this release as a Technology Preview , and therefore is not fully supported by Red Hat. It should only be used for testing, and should not be deployed in a production environment. For more information about Technology Preview features, see Scope of Coverage Details . Use time protocols to maintain a consistent timestamp between systems. Red Hat OpenStack Platform (RHOSP) includes support for Precision Time Protocol (PTP) and Network Time Protocol (NTP). 
You can use NTP to synchronize clocks in your network in the millisecond range, and you can use PTP to synchronize clocks to a higher, sub-microsecond accuracy. An example use case for PTP is a virtual radio access network (vRAN) that contains multiple antennas which provide higher throughput with more risk of interference. Timemaster is a program that uses ptp4l and phc2sys in combination with chronyd or ntpd to synchronize the system clock to NTP and PTP time sources. The phc2sys and ptp4l programs use Shared Memory Driver (SHM) reference clocks to send PTP time to chronyd or ntpd , which compares the time sources to synchronize the system clock. The implementation of the PTPv2 protocol in the Red Hat Enterprise Linux (RHEL) kernel is linuxptp . The linuxptp package includes the ptp4l program for PTP boundary clock and ordinary clock synchronization, and the phc2sys program for hardware time stamping. For more information about PTP, see: Introduction to PTP in the Red Hat Enterprise Linux System Administrator's Guide . Chrony is an implementation of the NTP protocol. The two main components of Chrony are chronyd , which is the Chrony daemon, and chronyc , which is the Chrony command line interface. For more information about Chrony, see Using the Chrony suite to configure NTP in the Red Hat Enterprise Linux System Administrator's Guide . The following image is an overview of a packet journey in a PTP configuration. Figure 11.1. PTP packet journey overview The following image is an overview of a packet journey in the Compute node in a PTP configuration. Figure 11.2. PTP packet journey detail 11.9.1. Timemaster hardware requirements Ensure that you have the following hardware functionality: You have configured the NICs with hardware timestamping capability. You have configured the switch to allow multicast packets. You have configured the switch to also function as a boundary or transparent clock. You can verify the hardware timestamping with the command ethtool -T <device> . You can use either a transparent or boundary clock switch for better accuracy and less latency. You can use an uplink switch for the boundary clock. The boundary clock switch uses an 8-bit correctionField on the PTPv2 header to correct delay variations and ensure greater accuracy on the end clock. In a transparent clock switch, the end clock calculates the delay variation, not the correctionField . 11.9.2. Configuring Timemaster The default Red Hat OpenStack Platform (RHOSP) service for time synchronization in overcloud nodes is OS::TripleO::Services::Timesync . Known limitations Enable NTP for virtualized controllers, and enable PTP for bare metal nodes. Virtio interfaces are incompatible, because ptp4l requires a compatible PTP device. Use a physical function (PF) for a VM with SR-IOV. A virtual function (VF) does not expose the registers necessary for PTP, and a VM uses kvm_ptp to calculate time. High Availability (HA) interfaces with multiple sources and multiple network paths are incompatible. Procedure To enable the Timemaster service on the nodes that belong to a role that you choose, replace the line that contains OS::TripleO::Services::Timesync with the line OS::TripleO::Services::TimeMaster in the roles_data.yaml file section for that role. #- OS::TripleO::Services::Timesync - OS::TripleO::Services::TimeMaster Configure the heat parameters for the compute role that you use.
#Example ComputeSriovParameters: PTPInterfaces: '0:eno1,1:eno2' PTPMessageTransport: 'UDPv4' Include the new environment file in the openstack overcloud deploy command with any other environment files that are relevant to your environment: Replace <existing_overcloud_environment_files> with the list of environment files that are part of your existing deployment. Replace <new_environment_file> with the new environment file or files that you want to include in the overcloud deployment process. Verification Use the command phc_ctl , installed with ptp4linux , to query the NIC hardware clock. 11.9.3. Example timemaster configuration 11.9.4. Example timemaster operation
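In addition to checking the timemaster service status, you can confirm that the PTP reference clocks are actually feeding chronyd. The following is a minimal verification sketch, assuming chronyd is the configured ntp_program; the exact source names and output depend on your environment:

chronyc sources
chronyc tracking

The SHM reference clocks that phc2sys registers for each PTP interface should appear as time sources alongside any NTP servers, and chronyc tracking shows which source the system clock is currently synchronized to.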
[ "parameter_defaults: ComputeOvsDpdkParameters: NovaComputeCpuSharedSet: \"0-1,16-17\" NovaComputeCpuDedicatedSet: \"2-15,18-31\"", "openstack flavor create --ram <size_mb> --disk <size_gb> --vcpus <vcpus> <flavor>", "openstack flavor set <flavor> --property hw:emulator_threads_policy=share", "openstack server show <instance_id>", "ssh tripleo-admin@compute-1 [compute-1]USD sudo virsh dumpxml instance-00001 | grep `'emulatorpin cpuset'`", "parameter_defaults: NeutronPhysicalDevMappings: - sriov2:p5p2", "parameter_defaults: NeutronPhysicalDevMappings: - sriov2:p5p2 NovaPCIPassthrough: - vendor_id: \"8086\" product_id: \"1572\" physical_network: \"sriov2\" trusted: \"true\"", "openstack network create trusted_vf_network --provider-network-type vlan --provider-segment 111 --provider-physical-network sriov2 --external --disable-port-security", "openstack subnet create --network trusted_vf_network --ip-version 4 --subnet-range 192.168.111.0/24 --no-dhcp subnet-trusted_vf_network", "openstack port create --network sriov111 --vnic-type direct --binding-profile trusted=true sriov111_port_trusted", "openstack server create --image rhel --flavor dpdk --network internal --port trusted_vf_network_port_trusted --config-drive True --wait rhel-dpdk-sriov_trusted", "ip link 7: p5p2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9000 qdisc mq state UP mode DEFAULT group default qlen 1000 link/ether b4:96:91:1c:40:fa brd ff:ff:ff:ff:ff:ff vf 6 MAC fa:16:3e:b8:91:c2, vlan 111, spoof checking off, link-state auto, trust on, query_rss off vf 7 MAC fa:16:3e:84:cf:c8, vlan 111, spoof checking off, link-state auto, trust off, query_rss off", "source ~/stackrc", "parameter_defaults: NovaLibvirtRxQueueSize: 1024 NovaLibvirtTxQueueSize: 1024", "openstack overcloud deploy --templates -e <other_environment_files> -e /home/stack/my_tx-rx_queue_sizes.yaml", "egrep \"^[rt]x_queue_size\" /var/lib/config-data/puppet-generated/ nova_libvirt/etc/nova/nova.conf", "rx_queue_size=1024 tx_queue_size=1024", "openstack server show testvm-queue-sizes -c OS-EXT-SRV-ATTR: hypervisor_hostname -c OS-EXT-SRV-ATTR:instance_name", "+-------------------------------------+------------------------------------+ | Field | Value | +-------------------------------------+------------------------------------+ | OS-EXT-SRV-ATTR:hypervisor_hostname | overcloud-novacompute-1.sales | | OS-EXT-SRV-ATTR:instance_name | instance-00000059 | +-------------------------------------+------------------------------------+", "podman exec nova_libvirt virsh dumpxml instance-00000059", "<interface type='vhostuser'> <mac address='56:48:4f:4d:5e:6f'/> <source type='unix' path='/tmp/vhost-user1' mode='server'/> <model type='virtio'/> <driver name='vhost' rx_queue_size='1024' tx_queue_size='1024' /> <address type='pci' domain='0x0000' bus='0x00' slot='0x10' function='0x0'/> </interface>", "parameter_defaults: NeutronPhysnetNUMANodesMapping: {<physnet_name>: [<NUMA_NODE>]} NeutronTunnelNUMANodes: <NUMA_NODE>,<NUMA_NODE>", "parameter_defaults: NeutronBridgeMappings: - tenant:br-link0 NeutronPhysnetNUMANodesMapping: {tenant: [1], mgmt: [0,1]} NeutronTunnelNUMANodes: 0", "ethtool -i eno2 bus-info: 0000:18:00.1 cat /sys/devices/pci0000:16/0000:16:02.0/0000:18:00.1/numa_node 0", "NeutronBridgeMappings: 'physnet1:br-physnet1' NeutronPhysnetNUMANodesMapping: {physnet1: [0] } - type: ovs_user_bridge name: br-physnet1 mtu: 9000 members: - type: ovs_dpdk_port name: dpdk2 members: - type: interface name: eno2", "[neutron_physnet_tenant] numa_nodes=1 [neutron_tunnel] numa_nodes=1", 
"lscpu", "[osd] osd_numa_node = 0 # 1 osd_memory_target_autotune = true # 2 [mgr] mgr/cephadm/autotune_memory_target_ratio = 0.2 # 3", "parameter_defaults: ComputeHCIParameters: KernelArgs: \"default_hugepagesz=1GB hugepagesz=1G hugepages=240 intel_iommu=on iommu=pt # 1 isolcpus=2,46,3,47,5,7,9,11,13,15,17,19,21,23,25,27,29,31,33,35,37,39,41,43,49,51,53,55,57,59,61,63,65,67,69,71,73,75,77,79,81,83,85,87\" TunedProfileName: \"cpu-partitioning\" IsolCpusList: # 2 \"2,46,3,47,5,7,9,11,13,15,17,19,21,23,25,27,29,31,33,35,37,39,41,43,49,51, 53,55,57,59,61,63,65,67,69,71,73,75,77,79,81,83,85,87\" VhostuserSocketGroup: hugetlbfs OvsDpdkSocketMemory: \"4096,4096\" # 3 OvsDpdkMemoryChannels: \"4\" OvsPmdCoreList: \"2,46,3,47\" # 4", "parameter_defaults: ComputeHCIExtraConfig: nova::cpu_allocation_ratio: 16 # 2 NovaReservedHugePages: # 1 - node:0,size:1GB,count:4 - node:1,size:1GB,count:4 NovaReservedHostMemory: 123904 # 2 # All left over cpus from NUMA-1 NovaComputeCpuDedicatedSet: # 3 ['5','7','9','11','13','15','17','19','21','23','25','27','29','31','33','35','37','39','41','43','49','51','| 53','55','57','59','61','63','65','67','69','71','73','75','77','79','81','83','85','87", "openstack overcloud roles generate -o ~/<templates>/roles_data.yaml Controller ComputeHCIOvsDpdk", "openstack overcloud ceph deploy --config initial-ceph.conf", "openstack overcloud deploy --templates --timeout 360 -r ~/<templates>/roles_data.yaml -e /usr/share/openstack-tripleo-heat-templates/environments/ cephadm/cephadm-rbd-only.yaml -e /usr/share/openstack-tripleo-heat-templates/environments/network-isolation.yaml -e /usr/share/openstack-tripleo-heat-templates/environments/services-docker/neutron-ovs-dpdk.yaml -e ~/<templates>/<custom environment file>", "ethtool -T p5p1 Time stamping parameters for p5p1: Capabilities: hardware-transmit (SOF_TIMESTAMPING_TX_HARDWARE) software-transmit (SOF_TIMESTAMPING_TX_SOFTWARE) hardware-receive (SOF_TIMESTAMPING_RX_HARDWARE) software-receive (SOF_TIMESTAMPING_RX_SOFTWARE) software-system-clock (SOF_TIMESTAMPING_SOFTWARE) hardware-raw-clock (SOF_TIMESTAMPING_RAW_HARDWARE) PTP Hardware Clock: 6 Hardware Transmit Timestamp Modes: off (HWTSTAMP_TX_OFF) on (HWTSTAMP_TX_ON) Hardware Receive Filter Modes: none (HWTSTAMP_FILTER_NONE) ptpv1-l4-sync (HWTSTAMP_FILTER_PTP_V1_L4_SYNC) ptpv1-l4-delay-req (HWTSTAMP_FILTER_PTP_V1_L4_DELAY_REQ) ptpv2-event (HWTSTAMP_FILTER_PTP_V2_EVENT)", "#- OS::TripleO::Services::Timesync - OS::TripleO::Services::TimeMaster", "#Example ComputeSriovParameters: PTPInterfaces: '0:eno1,1:eno2' PTPMessageTransport: 'UDPv4'", "openstack overcloud deploy --templates ... 
-e <existing_overcloud_environment_files> -e <new_environment_file1> -e <new_environment_file2> ...", "phc_ctl <clock_name> get phc_ctl <clock_name> cmp", "cat /etc/timemaster.conf Configuration file for timemaster #[ntp_server ntp-server.local] #minpoll 4 #maxpoll 4 [ptp_domain 0] interfaces eno1 #ptp4l_setting network_transport l2 #delay 10e-6 [timemaster] ntp_program chronyd include /etc/chrony.conf server clock.redhat.com iburst minpoll 6 maxpoll 10 [ntp.conf] includefile /etc/ntp.conf includefile /etc/ptp4l.conf network_transport L2 [chronyd] path /usr/sbin/chronyd [ntpd] path /usr/sbin/ntpd options -u ntp:ntp -g [phc2sys] path /usr/sbin/phc2sys #options -w [ptp4l] path /usr/sbin/ptp4l #options -2 -i eno1", "systemctl status timemaster ● timemaster.service - Synchronize system clock to NTP and PTP time sources Loaded: loaded (/usr/lib/systemd/system/timemaster.service; enabled; vendor preset: disabled) Active: active (running) since Tue 2020-08-25 19:10:18 UTC; 2min 6s ago Main PID: 2573 (timemaster) Tasks: 6 (limit: 357097) Memory: 5.1M CGroup: /system.slice/timemaster.service β”œβ”€2573 /usr/sbin/timemaster -f /etc/timemaster.conf β”œβ”€2577 /usr/sbin/chronyd -n -f /var/run/timemaster/chrony.conf β”œβ”€2582 /usr/sbin/ptp4l -l 5 -f /var/run/timemaster/ptp4l.0.conf -H -i eno1 β”œβ”€2583 /usr/sbin/phc2sys -l 5 -a -r -R 1.00 -z /var/run/timemaster/ptp4l.0.socket -t [0:eno1] -n 0 -E ntpshm -M 0 β”œβ”€2587 /usr/sbin/ptp4l -l 5 -f /var/run/timemaster/ptp4l.1.conf -H -i eno2 └─2588 /usr/sbin/phc2sys -l 5 -a -r -R 1.00 -z /var/run/timemaster/ptp4l.1.socket -t [0:eno2] -n 0 -E ntpshm -M 1 Aug 25 19:11:53 computesriov-0 ptp4l[2587]: [152.562] [0:eno2] selected local clock e4434b.fffe.4a0c24 as best master" ]
https://docs.redhat.com/en/documentation/red_hat_openstack_platform/17.1/html/configuring_network_functions_virtualization/tune-rhosp-nfv-env_rhosp-nfv
Chapter 19. Improving cluster stability in high latency environments using worker latency profiles
Chapter 19. Improving cluster stability in high latency environments using worker latency profiles If the cluster administrator has performed latency tests for platform verification, they can discover the need to adjust the operation of the cluster to ensure stability in cases of high latency. The cluster administrator needs to change only one parameter, recorded in a file, which controls four parameters affecting how supervisory processes read status and interpret the health of the cluster. Changing only the one parameter provides cluster tuning in an easy, supportable manner. The Kubelet process provides the starting point for monitoring cluster health. The Kubelet sets status values for all nodes in the OpenShift Container Platform cluster. The Kubernetes Controller Manager ( kube controller ) reads the status values every 10 seconds, by default. If the kube controller cannot read a node status value, it loses contact with that node after a configured period. The default behavior is: The node controller on the control plane updates the node health to Unhealthy and marks the node Ready condition as Unknown . In response, the scheduler stops scheduling pods to that node. The Node Lifecycle Controller adds a node.kubernetes.io/unreachable taint with a NoExecute effect to the node and schedules any pods on the node for eviction after five minutes, by default. This behavior can cause problems if your network is prone to latency issues, especially if you have nodes at the network edge. In some cases, the Kubernetes Controller Manager might not receive an update from a healthy node due to network latency. Pods are then evicted from the node even though the node is healthy. To avoid this problem, you can use worker latency profiles to adjust how frequently the Kubelet reports status and how long the Kubernetes Controller Manager waits for status updates before taking action. These adjustments help to ensure that your cluster runs properly if network latency between the control plane and the worker nodes is not optimal. These worker latency profiles contain three sets of parameters that are predefined with carefully tuned values to control the reaction of the cluster to increased latency. There is no need to experimentally find the best values manually. You can configure worker latency profiles when installing a cluster or at any time you notice increased latency in your cluster network. 19.1. Understanding worker latency profiles Worker latency profiles are predefined sets of four carefully tuned parameters. The four parameters that implement these values are node-status-update-frequency , node-monitor-grace-period , default-not-ready-toleration-seconds and default-unreachable-toleration-seconds . These parameters can use values that allow you to control the reaction of the cluster to latency issues without needing to determine the best values by using manual methods. Important Setting these parameters manually is not supported. Incorrect parameter settings adversely affect cluster stability. All worker latency profiles configure the following parameters: node-status-update-frequency Specifies how often the kubelet posts node status to the API server. node-monitor-grace-period Specifies the amount of time in seconds that the Kubernetes Controller Manager waits for an update from a kubelet before marking the node unhealthy and adding the node.kubernetes.io/not-ready or node.kubernetes.io/unreachable taint to the node.
default-not-ready-toleration-seconds Specifies the amount of time in seconds after marking a node unhealthy that the Kube API Server Operator waits before evicting pods from that node. default-unreachable-toleration-seconds Specifies the amount of time in seconds after marking a node unreachable that the Kube API Server Operator waits before evicting pods from that node. The following Operators monitor the changes to the worker latency profiles and respond accordingly: The Machine Config Operator (MCO) updates the node-status-update-frequency parameter on the worker nodes. The Kubernetes Controller Manager updates the node-monitor-grace-period parameter on the control plane nodes. The Kubernetes API Server Operator updates the default-not-ready-toleration-seconds and default-unreachable-toleration-seconds parameters on the control plane nodes. Although the default configuration works in most cases, OpenShift Container Platform offers two other worker latency profiles for situations where the network is experiencing higher latency than usual. The three worker latency profiles are described in the following sections: Default worker latency profile With the Default profile, each Kubelet updates its status every 10 seconds ( node-status-update-frequency ). The Kube Controller Manager checks the statuses of the Kubelet every 5 seconds. The Kubernetes Controller Manager waits 40 seconds ( node-monitor-grace-period ) for a status update from Kubelet before considering the Kubelet unhealthy. If no status is made available to the Kubernetes Controller Manager, it then marks the node with the node.kubernetes.io/not-ready or node.kubernetes.io/unreachable taint and evicts the pods on that node. If a pod is on a node that has the NoExecute taint, the pod runs according to tolerationSeconds . If the pod does not set tolerationSeconds , it is evicted after 300 seconds ( default-not-ready-toleration-seconds and default-unreachable-toleration-seconds settings of the Kube API Server ). Profile Component Parameter Value Default kubelet node-status-update-frequency 10s Kubelet Controller Manager node-monitor-grace-period 40s Kubernetes API Server Operator default-not-ready-toleration-seconds 300s Kubernetes API Server Operator default-unreachable-toleration-seconds 300s Medium worker latency profile Use the MediumUpdateAverageReaction profile if the network latency is slightly higher than usual. The MediumUpdateAverageReaction profile reduces the frequency of kubelet updates to 20 seconds and changes the period that the Kubernetes Controller Manager waits for those updates to 2 minutes. The pod eviction period for a pod on that node is reduced to 60 seconds. If the pod has the tolerationSeconds parameter, the eviction waits for the period specified by that parameter. The Kubernetes Controller Manager waits for 2 minutes to consider a node unhealthy. In another minute, the eviction process starts. Profile Component Parameter Value MediumUpdateAverageReaction kubelet node-status-update-frequency 20s Kubelet Controller Manager node-monitor-grace-period 2m Kubernetes API Server Operator default-not-ready-toleration-seconds 60s Kubernetes API Server Operator default-unreachable-toleration-seconds 60s Low worker latency profile Use the LowUpdateSlowReaction profile if the network latency is extremely high. The LowUpdateSlowReaction profile reduces the frequency of kubelet updates to 1 minute and changes the period that the Kubernetes Controller Manager waits for those updates to 5 minutes.
The pod eviction period for a pod on that node is reduced to 60 seconds. If the pod has the tolerationSeconds parameter, the eviction waits for the period specified by that parameter. The Kubernetes Controller Manager waits for 5 minutes to consider a node unhealthy. In another minute, the eviction process starts. Profile Component Parameter Value LowUpdateSlowReaction kubelet node-status-update-frequency 1m Kubelet Controller Manager node-monitor-grace-period 5m Kubernetes API Server Operator default-not-ready-toleration-seconds 60s Kubernetes API Server Operator default-unreachable-toleration-seconds 60s 19.2. Implementing worker latency profiles at cluster creation Important To edit the configuration of the installation program, first use the command openshift-install create manifests to create the default node manifest and other manifest YAML files. This file structure must exist before you can add workerLatencyProfile . The platform on which you are installing might have varying requirements. Refer to the Installing section of the documentation for your specific platform. The workerLatencyProfile must be added to the manifest in the following sequence: Create the manifest needed to build the cluster, using a folder name appropriate for your installation. Create a YAML file to define config.node . The file must be in the manifests directory. When defining workerLatencyProfile in the manifest for the first time, specify any of the profiles at cluster creation time: Default , MediumUpdateAverageReaction or LowUpdateSlowReaction . Verification Here is an example manifest creation showing the spec.workerLatencyProfile Default value in the manifest file: USD openshift-install create manifests --dir=<cluster-install-dir> Edit the manifest and add the value. In this example we use vi to show an example manifest file with the "Default" workerLatencyProfile value added: USD vi <cluster-install-dir>/manifests/config-node-default-profile.yaml Example output apiVersion: config.openshift.io/v1 kind: Node metadata: name: cluster spec: workerLatencyProfile: "Default" 19.3. Using and changing worker latency profiles To change a worker latency profile to deal with network latency, edit the node.config object to add the name of the profile. You can change the profile at any time as latency increases or decreases. You must move one worker latency profile at a time. For example, you cannot move directly from the Default profile to the LowUpdateSlowReaction worker latency profile. You must move from the Default worker latency profile to the MediumUpdateAverageReaction profile first, then to LowUpdateSlowReaction . Similarly, when returning to the Default profile, you must move from the low profile to the medium profile first, then to Default . Note You can also configure worker latency profiles upon installing an OpenShift Container Platform cluster. 
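Before you change the profile, you can confirm which profile is currently in effect. The following is a minimal check, assuming cluster-admin access; an empty result means that no profile is set and the Default profile applies:

oc get nodes.config/cluster -o jsonpath='{.spec.workerLatencyProfile}'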
Procedure To move from the default worker latency profile: Move to the medium worker latency profile: Edit the node.config object: USD oc edit nodes.config/cluster Add spec.workerLatencyProfile: MediumUpdateAverageReaction : Example node.config object apiVersion: config.openshift.io/v1 kind: Node metadata: annotations: include.release.openshift.io/ibm-cloud-managed: "true" include.release.openshift.io/self-managed-high-availability: "true" include.release.openshift.io/single-node-developer: "true" release.openshift.io/create-only: "true" creationTimestamp: "2022-07-08T16:02:51Z" generation: 1 name: cluster ownerReferences: - apiVersion: config.openshift.io/v1 kind: ClusterVersion name: version uid: 36282574-bf9f-409e-a6cd-3032939293eb resourceVersion: "1865" uid: 0c0f7a4c-4307-4187-b591-6155695ac85b spec: workerLatencyProfile: MediumUpdateAverageReaction 1 # ... 1 Specifies the medium worker latency policy. Scheduling on each worker node is disabled as the change is being applied. Optional: Move to the low worker latency profile: Edit the node.config object: USD oc edit nodes.config/cluster Change the spec.workerLatencyProfile value to LowUpdateSlowReaction : Example node.config object apiVersion: config.openshift.io/v1 kind: Node metadata: annotations: include.release.openshift.io/ibm-cloud-managed: "true" include.release.openshift.io/self-managed-high-availability: "true" include.release.openshift.io/single-node-developer: "true" release.openshift.io/create-only: "true" creationTimestamp: "2022-07-08T16:02:51Z" generation: 1 name: cluster ownerReferences: - apiVersion: config.openshift.io/v1 kind: ClusterVersion name: version uid: 36282574-bf9f-409e-a6cd-3032939293eb resourceVersion: "1865" uid: 0c0f7a4c-4307-4187-b591-6155695ac85b spec: workerLatencyProfile: LowUpdateSlowReaction 1 # ... 1 Specifies use of the low worker latency policy. Scheduling on each worker node is disabled as the change is being applied. Verification When all nodes return to the Ready condition, you can use the following command to look in the Kubernetes Controller Manager to ensure it was applied: USD oc get KubeControllerManager -o yaml | grep -i workerlatency -A 5 -B 5 Example output # ... - lastTransitionTime: "2022-07-11T19:47:10Z" reason: ProfileUpdated status: "False" type: WorkerLatencyProfileProgressing - lastTransitionTime: "2022-07-11T19:47:10Z" 1 message: all static pod revision(s) have updated latency profile reason: ProfileUpdated status: "True" type: WorkerLatencyProfileComplete - lastTransitionTime: "2022-07-11T19:20:11Z" reason: AsExpected status: "False" type: WorkerLatencyProfileDegraded - lastTransitionTime: "2022-07-11T19:20:36Z" status: "False" # ... 1 Specifies that the profile is applied and active. To change the medium profile to default or change the default to medium, edit the node.config object and set the spec.workerLatencyProfile parameter to the appropriate value. 19.4. Example steps for displaying resulting values of workerLatencyProfile You can display the values in the workerLatencyProfile with the following commands. 
Verification Check the default-not-ready-toleration-seconds and default-unreachable-toleration-seconds fields output by the Kube API Server: USD oc get KubeAPIServer -o yaml | grep -A 1 default- Example output default-not-ready-toleration-seconds: - "300" default-unreachable-toleration-seconds: - "300" Check the values of the node-monitor-grace-period field from the Kube Controller Manager: USD oc get KubeControllerManager -o yaml | grep -A 1 node-monitor Example output node-monitor-grace-period: - 40s Check the nodeStatusUpdateFrequency value from the Kubelet. Set the directory /host as the root directory within the debug shell. By changing the root directory to /host , you can run binaries contained in the host's executable paths: USD oc debug node/<worker-node-name> USD chroot /host # cat /etc/kubernetes/kubelet.conf|grep nodeStatusUpdateFrequency Example output "nodeStatusUpdateFrequency": "10s" These outputs validate the set of timing variables for the Worker Latency Profile.
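The eviction delays set by a profile apply cluster-wide, but as noted in the profile descriptions, a pod that defines its own tolerationSeconds keeps its own eviction delay. The following is a minimal sketch of such a pod; the pod name and image are placeholders:

apiVersion: v1
kind: Pod
metadata:
  name: toleration-example
spec:
  containers:
  - name: app
    image: registry.example.com/app:latest
  tolerations:
  # Override the profile-driven eviction delay for this pod only
  - key: "node.kubernetes.io/not-ready"
    operator: "Exists"
    effect: "NoExecute"
    tolerationSeconds: 120
  - key: "node.kubernetes.io/unreachable"
    operator: "Exists"
    effect: "NoExecute"
    tolerationSeconds: 120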
[ "openshift-install create manifests --dir=<cluster-install-dir>", "vi <cluster-install-dir>/manifests/config-node-default-profile.yaml", "apiVersion: config.openshift.io/v1 kind: Node metadata: name: cluster spec: workerLatencyProfile: \"Default\"", "oc edit nodes.config/cluster", "apiVersion: config.openshift.io/v1 kind: Node metadata: annotations: include.release.openshift.io/ibm-cloud-managed: \"true\" include.release.openshift.io/self-managed-high-availability: \"true\" include.release.openshift.io/single-node-developer: \"true\" release.openshift.io/create-only: \"true\" creationTimestamp: \"2022-07-08T16:02:51Z\" generation: 1 name: cluster ownerReferences: - apiVersion: config.openshift.io/v1 kind: ClusterVersion name: version uid: 36282574-bf9f-409e-a6cd-3032939293eb resourceVersion: \"1865\" uid: 0c0f7a4c-4307-4187-b591-6155695ac85b spec: workerLatencyProfile: MediumUpdateAverageReaction 1", "oc edit nodes.config/cluster", "apiVersion: config.openshift.io/v1 kind: Node metadata: annotations: include.release.openshift.io/ibm-cloud-managed: \"true\" include.release.openshift.io/self-managed-high-availability: \"true\" include.release.openshift.io/single-node-developer: \"true\" release.openshift.io/create-only: \"true\" creationTimestamp: \"2022-07-08T16:02:51Z\" generation: 1 name: cluster ownerReferences: - apiVersion: config.openshift.io/v1 kind: ClusterVersion name: version uid: 36282574-bf9f-409e-a6cd-3032939293eb resourceVersion: \"1865\" uid: 0c0f7a4c-4307-4187-b591-6155695ac85b spec: workerLatencyProfile: LowUpdateSlowReaction 1", "oc get KubeControllerManager -o yaml | grep -i workerlatency -A 5 -B 5", "- lastTransitionTime: \"2022-07-11T19:47:10Z\" reason: ProfileUpdated status: \"False\" type: WorkerLatencyProfileProgressing - lastTransitionTime: \"2022-07-11T19:47:10Z\" 1 message: all static pod revision(s) have updated latency profile reason: ProfileUpdated status: \"True\" type: WorkerLatencyProfileComplete - lastTransitionTime: \"2022-07-11T19:20:11Z\" reason: AsExpected status: \"False\" type: WorkerLatencyProfileDegraded - lastTransitionTime: \"2022-07-11T19:20:36Z\" status: \"False\"", "oc get KubeAPIServer -o yaml | grep -A 1 default-", "default-not-ready-toleration-seconds: - \"300\" default-unreachable-toleration-seconds: - \"300\"", "oc get KubeControllerManager -o yaml | grep -A 1 node-monitor", "node-monitor-grace-period: - 40s", "oc debug node/<worker-node-name> chroot /host cat /etc/kubernetes/kubelet.conf|grep nodeStatusUpdateFrequency", "\"nodeStatusUpdateFrequency\": \"10s\"" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.15/html/scalability_and_performance/scaling-worker-latency-profiles
Chapter 10. Red Hat build of OptaPlanner on Red Hat build of Quarkus: a school timetable quick start guide
Chapter 10. Red Hat build of OptaPlanner on Red Hat build of Quarkus: a school timetable quick start guide This guide walks you through the process of creating a Red Hat build of Quarkus application with Red Hat build of OptaPlanner's constraint solving artificial intelligence (AI). You will build a REST application that optimizes a school timetable for students and teachers. Your service will assign Lesson instances to Timeslot and Room instances automatically by using AI to adhere to the following hard and soft scheduling constraints : A room can have at most one lesson at the same time. A teacher can teach at most one lesson at the same time. A student can attend at most one lesson at the same time. A teacher prefers to teach in a single room. A teacher prefers to teach sequential lessons and dislikes gaps between lessons. Mathematically speaking, school timetabling is an NP-hard problem. That means it is difficult to scale. Simply iterating through all possible combinations with brute force would take millions of years for a non-trivial data set, even on a supercomputer. Fortunately, AI constraint solvers such as Red Hat build of OptaPlanner have advanced algorithms that deliver a near-optimal solution in a reasonable amount of time. What is considered to be a reasonable amount of time is subjective and depends on the goals of your problem. Prerequisites OpenJDK 11 or later is installed. Red Hat build of OpenJDK is available from the Software Downloads page in the Red Hat Customer Portal (login required). Apache Maven 3.6 or higher is installed. Maven is available from the Apache Maven Project website. An IDE, such as IntelliJ IDEA, VS Code, Eclipse, or NetBeans, is available. 10.1. Creating an OptaPlanner Red Hat build of Quarkus Maven project using the Maven plug-in You can get up and running with a Red Hat build of OptaPlanner and Quarkus application using Apache Maven and the Quarkus Maven plug-in. Prerequisites OpenJDK 11 or later is installed. Red Hat build of OpenJDK is available from the Software Downloads page in the Red Hat Customer Portal (login required). Apache Maven 3.6 or higher is installed. Maven is available from the Apache Maven Project website. Procedure In a command terminal, enter the following command to verify that Maven is using JDK 11 and that the Maven version is 3.6 or higher: If the preceding command does not return JDK 11, add the path to JDK 11 to the PATH environment variable and enter the preceding command again. To generate a Quarkus OptaPlanner quickstart project, enter the following command: mvn com.redhat.quarkus.platform:quarkus-maven-plugin:2.13.Final-redhat-00006:create \ -DprojectGroupId=com.example \ -DprojectArtifactId=optaplanner-quickstart \ -Dextensions="resteasy,resteasy-jackson,optaplanner-quarkus,optaplanner-quarkus-jackson" \ -DplatformGroupId=com.redhat.quarkus.platform -DplatformVersion=2.13.Final-redhat-00006 \ -DnoExamples This command creates the following elements in the ./optaplanner-quickstart directory: The Maven structure An example Dockerfile in src/main/docker The application configuration file Table 10.1. Properties used in the mvn com.redhat.quarkus.platform:quarkus-maven-plugin:2.13.Final-redhat-00006:create command Property Description projectGroupId The group ID of the project. projectArtifactId The artifact ID of the project. extensions A comma-separated list of Quarkus extensions to use with this project. For a full list of Quarkus extensions, enter mvn quarkus:list-extensions on the command line.
noExamples Creates a project with the project structure but without tests or classes. The values of the projectGroupID and the projectArtifactID properties are used to generate the project version. The default project version is 1.0.0-SNAPSHOT . To view your OptaPlanner project, change directory to the OptaPlanner Quickstarts directory: Review the pom.xml file. The content should be similar to the following example: 10.2. Model the domain objects The goal of the Red Hat build of OptaPlanner timetable project is to assign each lesson to a time slot and a room. To do this, add three classes, Timeslot , Lesson , and Room , as shown in the following diagram: Timeslot The Timeslot class represents a time interval when lessons are taught, for example, Monday 10:30 - 11:30 or Tuesday 13:30 - 14:30 . In this example, all time slots have the same duration and there are no time slots during lunch or other breaks. A time slot has no date because a high school schedule just repeats every week. There is no need for continuous planning . A timeslot is called a problem fact because no Timeslot instances change during solving. Such classes do not require any OptaPlanner-specific annotations. Room The Room class represents a location where lessons are taught, for example, Room A or Room B . In this example, all rooms are without capacity limits and they can accommodate all lessons. Room instances do not change during solving so Room is also a problem fact . Lesson During a lesson, represented by the Lesson class, a teacher teaches a subject to a group of students, for example, Math by A.Turing for 9th grade or Chemistry by M.Curie for 10th grade . If a subject is taught multiple times each week by the same teacher to the same student group, there are multiple Lesson instances that are only distinguishable by id . For example, the 9th grade has six math lessons a week. During solving, OptaPlanner changes the timeslot and room fields of the Lesson class to assign each lesson to a time slot and a room. Because OptaPlanner changes these fields, Lesson is a planning entity : Most of the fields in the diagram contain input data, except for the orange fields. A lesson's timeslot and room fields are unassigned ( null ) in the input data and assigned (not null ) in the output data. OptaPlanner changes these fields during solving. Such fields are called planning variables. In order for OptaPlanner to recognize them, both the timeslot and room fields require an @PlanningVariable annotation. Their containing class, Lesson , requires an @PlanningEntity annotation. Procedure Create the src/main/java/com/example/domain/Timeslot.java class: package com.example.domain; import java.time.DayOfWeek; import java.time.LocalTime; public class Timeslot { private DayOfWeek dayOfWeek; private LocalTime startTime; private LocalTime endTime; private Timeslot() { } public Timeslot(DayOfWeek dayOfWeek, LocalTime startTime, LocalTime endTime) { this.dayOfWeek = dayOfWeek; this.startTime = startTime; this.endTime = endTime; } @Override public String toString() { return dayOfWeek + " " + startTime.toString(); } // ******************************** // Getters and setters // ******************************** public DayOfWeek getDayOfWeek() { return dayOfWeek; } public LocalTime getStartTime() { return startTime; } public LocalTime getEndTime() { return endTime; } } Notice the toString() method keeps the output short so it is easier to read OptaPlanner's DEBUG or TRACE log, as shown later. 
Create the src/main/java/com/example/domain/Room.java class: package com.example.domain; public class Room { private String name; private Room() { } public Room(String name) { this.name = name; } @Override public String toString() { return name; } // ******************************** // Getters and setters // ******************************** public String getName() { return name; } } Create the src/main/java/com/example/domain/Lesson.java class: package com.example.domain; import org.optaplanner.core.api.domain.entity.PlanningEntity; import org.optaplanner.core.api.domain.variable.PlanningVariable; @PlanningEntity public class Lesson { private Long id; private String subject; private String teacher; private String studentGroup; @PlanningVariable(valueRangeProviderRefs = "timeslotRange") private Timeslot timeslot; @PlanningVariable(valueRangeProviderRefs = "roomRange") private Room room; private Lesson() { } public Lesson(Long id, String subject, String teacher, String studentGroup) { this.id = id; this.subject = subject; this.teacher = teacher; this.studentGroup = studentGroup; } @Override public String toString() { return subject + "(" + id + ")"; } // ******************************** // Getters and setters // ******************************** public Long getId() { return id; } public String getSubject() { return subject; } public String getTeacher() { return teacher; } public String getStudentGroup() { return studentGroup; } public Timeslot getTimeslot() { return timeslot; } public void setTimeslot(Timeslot timeslot) { this.timeslot = timeslot; } public Room getRoom() { return room; } public void setRoom(Room room) { this.room = room; } } The Lesson class has an @PlanningEntity annotation, so OptaPlanner knows that this class changes during solving because it contains one or more planning variables. The timeslot field has an @PlanningVariable annotation, so OptaPlanner knows that it can change its value. In order to find potential Timeslot instances to assign to this field, OptaPlanner uses the valueRangeProviderRefs property to connect to a value range provider that provides a List<Timeslot> to pick from. See Section 10.4, "Gather the domain objects in a planning solution" for information about value range providers. The room field also has an @PlanningVariable annotation for the same reasons. 10.3. Define the constraints and calculate the score When solving a problem, a score represents the quality of a specific solution. The higher the score the better. Red Hat build of OptaPlanner looks for the best solution, which is the solution with the highest score found in the available time. It might be the optimal solution. Because the timetable example use case has hard and soft constraints, use the HardSoftScore class to represent the score: Hard constraints must not be broken. For example: A room can have at most one lesson at the same time. Soft constraints should not be broken. For example: A teacher prefers to teach in a single room. Hard constraints are weighted against other hard constraints. Soft constraints are weighted against other soft constraints. Hard constraints always outweigh soft constraints, regardless of their respective weights. 
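The following small sketch is not part of the quickstart code; it only illustrates that ordering with the HardSoftScore API (the class name is illustrative):

import org.optaplanner.core.api.score.buildin.hardsoft.HardSoftScore;

public class ScoreOrderingExample {
    public static void main(String[] args) {
        // A solution that breaks no hard constraints but many soft ones ...
        HardSoftScore feasible = HardSoftScore.of(0, -100);
        // ... is still better than one that breaks a single hard constraint.
        HardSoftScore infeasible = HardSoftScore.of(-1, 0);
        System.out.println(feasible.compareTo(infeasible) > 0); // true
        System.out.println(feasible.isFeasible());   // true
        System.out.println(infeasible.isFeasible()); // false
    }
}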
To calculate the score, you could implement an EasyScoreCalculator class: public class TimeTableEasyScoreCalculator implements EasyScoreCalculator<TimeTable> { @Override public HardSoftScore calculateScore(TimeTable timeTable) { List<Lesson> lessonList = timeTable.getLessonList(); int hardScore = 0; for (Lesson a : lessonList) { for (Lesson b : lessonList) { if (a.getTimeslot() != null && a.getTimeslot().equals(b.getTimeslot()) && a.getId() < b.getId()) { // A room can accommodate at most one lesson at the same time. if (a.getRoom() != null && a.getRoom().equals(b.getRoom())) { hardScore--; } // A teacher can teach at most one lesson at the same time. if (a.getTeacher().equals(b.getTeacher())) { hardScore--; } // A student can attend at most one lesson at the same time. if (a.getStudentGroup().equals(b.getStudentGroup())) { hardScore--; } } } } int softScore = 0; // Soft constraints are only implemented in the "complete" implementation return HardSoftScore.of(hardScore, softScore); } } Unfortunately, this solution does not scale well because it is non-incremental: every time a lesson is assigned to a different time slot or room, all lessons are re-evaluated to calculate the new score. A better solution is to create a src/main/java/com/example/solver/TimeTableConstraintProvider.java class to perform incremental score calculation. This class uses OptaPlanner's ConstraintStream API which is inspired by Java 8 Streams and SQL. The ConstraintProvider scales an order of magnitude better than the EasyScoreCalculator : O (n) instead of O (n2). Procedure Create the following src/main/java/com/example/solver/TimeTableConstraintProvider.java class: package com.example.solver; import com.example.domain.Lesson; import org.optaplanner.core.api.score.buildin.hardsoft.HardSoftScore; import org.optaplanner.core.api.score.stream.Constraint; import org.optaplanner.core.api.score.stream.ConstraintFactory; import org.optaplanner.core.api.score.stream.ConstraintProvider; import org.optaplanner.core.api.score.stream.Joiners; public class TimeTableConstraintProvider implements ConstraintProvider { @Override public Constraint[] defineConstraints(ConstraintFactory constraintFactory) { return new Constraint[] { // Hard constraints roomConflict(constraintFactory), teacherConflict(constraintFactory), studentGroupConflict(constraintFactory), // Soft constraints are only implemented in the "complete" implementation }; } private Constraint roomConflict(ConstraintFactory constraintFactory) { // A room can accommodate at most one lesson at the same time. // Select a lesson ... return constraintFactory.from(Lesson.class) // ... and pair it with another lesson ... .join(Lesson.class, // ... in the same timeslot ... Joiners.equal(Lesson::getTimeslot), // ... in the same room ... Joiners.equal(Lesson::getRoom), // ... and the pair is unique (different id, no reverse pairs) Joiners.lessThan(Lesson::getId)) // then penalize each pair with a hard weight. .penalize("Room conflict", HardSoftScore.ONE_HARD); } private Constraint teacherConflict(ConstraintFactory constraintFactory) { // A teacher can teach at most one lesson at the same time. return constraintFactory.from(Lesson.class) .join(Lesson.class, Joiners.equal(Lesson::getTimeslot), Joiners.equal(Lesson::getTeacher), Joiners.lessThan(Lesson::getId)) .penalize("Teacher conflict", HardSoftScore.ONE_HARD); } private Constraint studentGroupConflict(ConstraintFactory constraintFactory) { // A student can attend at most one lesson at the same time. 
return constraintFactory.from(Lesson.class) .join(Lesson.class, Joiners.equal(Lesson::getTimeslot), Joiners.equal(Lesson::getStudentGroup), Joiners.lessThan(Lesson::getId)) .penalize("Student group conflict", HardSoftScore.ONE_HARD); } } 10.4. Gather the domain objects in a planning solution A TimeTable instance wraps all Timeslot , Room , and Lesson instances of a single dataset. Furthermore, because it contains all lessons, each with a specific planning variable state, it is a planning solution and it has a score: If lessons are still unassigned, then it is an uninitialized solution, for example, a solution with the score -4init/0hard/0soft . If it breaks hard constraints, then it is an infeasible solution, for example, a solution with the score -2hard/-3soft . If it adheres to all hard constraints, then it is a feasible solution, for example, a solution with the score 0hard/-7soft . The TimeTable class has an @PlanningSolution annotation, so Red Hat build of OptaPlanner knows that this class contains all of the input and output data. Specifically, this class is the input of the problem: A timeslotList field with all time slots This is a list of problem facts, because they do not change during solving. A roomList field with all rooms This is a list of problem facts, because they do not change during solving. A lessonList field with all lessons This is a list of planning entities because they change during solving. Of each Lesson : The values of the timeslot and room fields are typically still null , so unassigned. They are planning variables. The other fields, such as subject , teacher and studentGroup , are filled in. These fields are problem properties. However, this class is also the output of the solution: A lessonList field for which each Lesson instance has non-null timeslot and room fields after solving A score field that represents the quality of the output solution, for example, 0hard/-5soft Procedure Create the src/main/java/com/example/domain/TimeTable.java class: package com.example.domain; import java.util.List; import org.optaplanner.core.api.domain.solution.PlanningEntityCollectionProperty; import org.optaplanner.core.api.domain.solution.PlanningScore; import org.optaplanner.core.api.domain.solution.PlanningSolution; import org.optaplanner.core.api.domain.solution.ProblemFactCollectionProperty; import org.optaplanner.core.api.domain.valuerange.ValueRangeProvider; import org.optaplanner.core.api.score.buildin.hardsoft.HardSoftScore; @PlanningSolution public class TimeTable { @ValueRangeProvider(id = "timeslotRange") @ProblemFactCollectionProperty private List<Timeslot> timeslotList; @ValueRangeProvider(id = "roomRange") @ProblemFactCollectionProperty private List<Room> roomList; @PlanningEntityCollectionProperty private List<Lesson> lessonList; @PlanningScore private HardSoftScore score; private TimeTable() { } public TimeTable(List<Timeslot> timeslotList, List<Room> roomList, List<Lesson> lessonList) { this.timeslotList = timeslotList; this.roomList = roomList; this.lessonList = lessonList; } // ******************************** // Getters and setters // ******************************** public List<Timeslot> getTimeslotList() { return timeslotList; } public List<Room> getRoomList() { return roomList; } public List<Lesson> getLessonList() { return lessonList; } public HardSoftScore getScore() { return score; } } The value range providers The timeslotList field is a value range provider. 
It holds the Timeslot instances which OptaPlanner can pick from to assign to the timeslot field of Lesson instances. The timeslotList field has an @ValueRangeProvider annotation to connect those two, by matching the id with the valueRangeProviderRefs of the @PlanningVariable in the Lesson . Following the same logic, the roomList field also has an @ValueRangeProvider annotation. The problem fact and planning entity properties Furthermore, OptaPlanner needs to know which Lesson instances it can change as well as how to retrieve the Timeslot and Room instances used for score calculation by your TimeTableConstraintProvider . The timeslotList and roomList fields have an @ProblemFactCollectionProperty annotation, so your TimeTableConstraintProvider can select from those instances. The lessonList has an @PlanningEntityCollectionProperty annotation, so OptaPlanner can change them during solving and your TimeTableConstraintProvider can select from those too. 10.5. Create the solver service Solving planning problems on REST threads causes HTTP timeout issues. Therefore, the Quarkus extension injects a SolverManager, which runs solvers in a separate thread pool and can solve multiple data sets in parallel. Procedure Create the src/main/java/org/acme/optaplanner/rest/TimeTableResource.java class: package org.acme.optaplanner.rest; import java.util.UUID; import java.util.concurrent.ExecutionException; import javax.inject.Inject; import javax.ws.rs.POST; import javax.ws.rs.Path; import org.acme.optaplanner.domain.TimeTable; import org.optaplanner.core.api.solver.SolverJob; import org.optaplanner.core.api.solver.SolverManager; @Path("/timeTable") public class TimeTableResource { @Inject SolverManager<TimeTable, UUID> solverManager; @POST @Path("/solve") public TimeTable solve(TimeTable problem) { UUID problemId = UUID.randomUUID(); // Submit the problem to start solving SolverJob<TimeTable, UUID> solverJob = solverManager.solve(problemId, problem); TimeTable solution; try { // Wait until the solving ends solution = solverJob.getFinalBestSolution(); } catch (InterruptedException | ExecutionException e) { throw new IllegalStateException("Solving failed.", e); } return solution; } } This initial implementation waits for the solver to finish, which can still cause an HTTP timeout. The complete implementation avoids HTTP timeouts much more elegantly. 10.6. Set the solver termination time If your planning application does not have a termination setting or a termination event, it theoretically runs forever and in reality eventually causes an HTTP timeout error. To prevent this from occurring, use the optaplanner.solver.termination.spent-limit parameter to specify the length of time after which the application terminates. In most applications, set the time to at least five minutes ( 5m ). However, in the Timetable example, limit the solving time to five seconds, which is short enough to avoid the HTTP timeout. Procedure Create the src/main/resources/application.properties file with the following content: quarkus.optaplanner.solver.termination.spent-limit=5s 10.7. Running the school timetable application After you have created the school timetable project, run it in development mode. In development mode, you can update the application sources and configurations while your application is running. Your changes will appear in the running application. Prerequisites You have created the school timetable project. 
Procedure To compile the application in development mode, enter the following command from the project directory: Test the REST service. You can use any REST client. The following example uses the Linux command curl to send a POST request: After the termination spent-limit time that you defined in your application.properties file elapses, the service returns output similar to the following example: Notice that your application assigned all four lessons to one of the two time slots and one of the two rooms. Also notice that it conforms to all hard constraints. For example, M. Curie's two lessons are in different time slots. To review what OptaPlanner did during the solving time, review the info log on the server side. The following is sample info log output: 10.8. Testing the application A good application includes test coverage. Test the constraints and the solver in your timetable project. 10.8.1. Test the school timetable constraints To test each constraint of the timetable project in isolation, use a ConstraintVerifier in unit tests. This tests each constraint's corner cases in isolation from the other tests, which lowers maintenance when adding a new constraint with proper test coverage. This test verifies that the constraint TimeTableConstraintProvider::roomConflict , when given three lessons in the same room and two of the lessons have the same timeslot, penalizes with a match weight of 1. So if the constraint weight is 10hard , it reduces the score by -10hard . Procedure Create the src/test/java/org/acme/optaplanner/solver/TimeTableConstraintProviderTest.java class: package org.acme.optaplanner.solver; import java.time.DayOfWeek; import java.time.LocalTime; import javax.inject.Inject; import io.quarkus.test.junit.QuarkusTest; import org.acme.optaplanner.domain.Lesson; import org.acme.optaplanner.domain.Room; import org.acme.optaplanner.domain.TimeTable; import org.acme.optaplanner.domain.Timeslot; import org.junit.jupiter.api.Test; import org.optaplanner.test.api.score.stream.ConstraintVerifier; @QuarkusTest class TimeTableConstraintProviderTest { private static final Room ROOM = new Room("Room1"); private static final Timeslot TIMESLOT1 = new Timeslot(DayOfWeek.MONDAY, LocalTime.of(9,0), LocalTime.NOON); private static final Timeslot TIMESLOT2 = new Timeslot(DayOfWeek.TUESDAY, LocalTime.of(9,0), LocalTime.NOON); @Inject ConstraintVerifier<TimeTableConstraintProvider, TimeTable> constraintVerifier; @Test void roomConflict() { Lesson firstLesson = new Lesson(1L, "Subject1", "Teacher1", "Group1"); Lesson conflictingLesson = new Lesson(2L, "Subject2", "Teacher2", "Group2"); Lesson nonConflictingLesson = new Lesson(3L, "Subject3", "Teacher3", "Group3"); firstLesson.setRoom(ROOM); firstLesson.setTimeslot(TIMESLOT1); conflictingLesson.setRoom(ROOM); conflictingLesson.setTimeslot(TIMESLOT1); nonConflictingLesson.setRoom(ROOM); nonConflictingLesson.setTimeslot(TIMESLOT2); constraintVerifier.verifyThat(TimeTableConstraintProvider::roomConflict) .given(firstLesson, conflictingLesson, nonConflictingLesson) .penalizesBy(1); } } Notice how ConstraintVerifier ignores the constraint weight during testing even if those constraint weights are hardcoded in the ConstraintProvider . This is because constraint weights change regularly before going into production. This way, constraint weight tweaking does not break the unit tests. 10.8.2. Test the school timetable solver This example tests the Red Hat build of OptaPlanner school timetable project on Red Hat build of Quarkus.
It uses a JUnit test to generate a test data set and send it to the TimeTableController to solve. Procedure Create the src/test/java/com/example/rest/TimeTableResourceTest.java class with the following content: package com.exmaple.optaplanner.rest; import java.time.DayOfWeek; import java.time.LocalTime; import java.util.ArrayList; import java.util.List; import javax.inject.Inject; import io.quarkus.test.junit.QuarkusTest; import com.exmaple.optaplanner.domain.Room; import com.exmaple.optaplanner.domain.Timeslot; import com.exmaple.optaplanner.domain.Lesson; import com.exmaple.optaplanner.domain.TimeTable; import com.exmaple.optaplanner.rest.TimeTableResource; import org.junit.jupiter.api.Test; import org.junit.jupiter.api.Timeout; import static org.junit.jupiter.api.Assertions.assertFalse; import static org.junit.jupiter.api.Assertions.assertNotNull; import static org.junit.jupiter.api.Assertions.assertTrue; @QuarkusTest public class TimeTableResourceTest { @Inject TimeTableResource timeTableResource; @Test @Timeout(600_000) public void solve() { TimeTable problem = generateProblem(); TimeTable solution = timeTableResource.solve(problem); assertFalse(solution.getLessonList().isEmpty()); for (Lesson lesson : solution.getLessonList()) { assertNotNull(lesson.getTimeslot()); assertNotNull(lesson.getRoom()); } assertTrue(solution.getScore().isFeasible()); } private TimeTable generateProblem() { List<Timeslot> timeslotList = new ArrayList<>(); timeslotList.add(new Timeslot(DayOfWeek.MONDAY, LocalTime.of(8, 30), LocalTime.of(9, 30))); timeslotList.add(new Timeslot(DayOfWeek.MONDAY, LocalTime.of(9, 30), LocalTime.of(10, 30))); timeslotList.add(new Timeslot(DayOfWeek.MONDAY, LocalTime.of(10, 30), LocalTime.of(11, 30))); timeslotList.add(new Timeslot(DayOfWeek.MONDAY, LocalTime.of(13, 30), LocalTime.of(14, 30))); timeslotList.add(new Timeslot(DayOfWeek.MONDAY, LocalTime.of(14, 30), LocalTime.of(15, 30))); List<Room> roomList = new ArrayList<>(); roomList.add(new Room("Room A")); roomList.add(new Room("Room B")); roomList.add(new Room("Room C")); List<Lesson> lessonList = new ArrayList<>(); lessonList.add(new Lesson(101L, "Math", "B. May", "9th grade")); lessonList.add(new Lesson(102L, "Physics", "M. Curie", "9th grade")); lessonList.add(new Lesson(103L, "Geography", "M. Polo", "9th grade")); lessonList.add(new Lesson(104L, "English", "I. Jones", "9th grade")); lessonList.add(new Lesson(105L, "Spanish", "P. Cruz", "9th grade")); lessonList.add(new Lesson(201L, "Math", "B. May", "10th grade")); lessonList.add(new Lesson(202L, "Chemistry", "M. Curie", "10th grade")); lessonList.add(new Lesson(203L, "History", "I. Jones", "10th grade")); lessonList.add(new Lesson(204L, "English", "P. Cruz", "10th grade")); lessonList.add(new Lesson(205L, "French", "M. Curie", "10th grade")); return new TimeTable(timeslotList, roomList, lessonList); } } This test verifies that after solving, all lessons are assigned to a time slot and a room. It also verifies that it found a feasible solution (no hard constraints broken). Add test properties to the src/main/resources/application.properties file: Normally, the solver finds a feasible solution in less than 200 milliseconds. Notice how the application.properties file overwrites the solver termination during tests to terminate as soon as a feasible solution (0hard/*soft) is found. This avoids hard coding a solver time, because the unit test might run on arbitrary hardware. This approach ensures that the test runs long enough to find a feasible solution, even on slow systems. 
But it does not run a millisecond longer than it strictly must, even on fast systems. 10.9. Logging After you complete the Red Hat build of OptaPlanner school timetable project, you can use logging information to help you fine-tune the constraints in the ConstraintProvider . Review the score calculation speed in the info log file to assess the impact of changes to your constraints. Run the application in debug mode to show every step that your application takes or use trace logging to log every step and every move. Procedure Run the school timetable application for a fixed amount of time, for example, five minutes. Review the score calculation speed in the log file as shown in the following example: Change a constraint, run the planning application again for the same amount of time, and review the score calculation speed recorded in the log file. Run the application in debug mode to log every step that the application makes: To run debug mode from the command line, use the -D system property. To permanently enable debug mode, add the following line to the application.properties file: quarkus.log.category."org.optaplanner".level=debug The following example shows output in the log file in debug mode: Use trace logging to show every step and every move for each step. 10.10. Integrating a database with your Quarkus OptaPlanner school timetable application After you create your Quarkus OptaPlanner school timetable application, you can integrate it with a database and create a web-based user interface to display the timetable. Prerequisites You have a Quarkus OptaPlanner school timetable application. Procedure Use Hibernate and Panache to store Timeslot , Room , and Lesson instances in a database. See Simplified Hibernate ORM with Panache for more information. Expose the instances through REST. For information, see Writing JSON REST Services . Update the TimeTableResource class to read and write a TimeTable instance in a single transaction: This example includes a TimeTable instance. However, you can enable multi-tenancy and handle TimeTable instances for multiple schools in parallel. The getTimeTable() method returns the latest timetable from the database. It uses the ScoreManager method, which is automatically injected, to calculate the score of that timetable and make it available to the UI. The solve() method starts a job to solve the current timetable and stores the time slot and room assignments in the database. It uses the SolverManager.solveAndListen() method to listen to intermediate best solutions and update the database accordingly. The UI uses this to show progress while the backend is still solving. Update the TimeTableResourceTest class to reflect that the solve() method returns immediately and to poll for the latest solution until the solver finishes solving: Build a web UI on top of these REST methods to provide a visual representation of the timetable. Review the quickstart source code . 10.11. Using Micrometer and Prometheus to monitor your school timetable OptaPlanner Quarkus application OptaPlanner exposes metrics through Micrometer , a metrics instrumentation library for Java applications. You can use Micrometer with Prometheus to monitor the OptaPlanner solver in the school timetable application. Prerequisites You have created the Quarkus OptaPlanner school timetable application. Prometheus is installed. For information about installing Prometheus, see the Prometheus website. 
Procedure Add the Micrometer Prometheus dependency to the school timetable pom.xml file: Start the school timetable application: Open http://localhost:8080/q/metrics in a web browser to view the exposed metrics.
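If Prometheus does not automatically discover the application, you can point it at the Quarkus metrics endpoint with a scrape job. The following prometheus.yml fragment is a minimal sketch: the job name and the target host and port are illustrative assumptions for a local development setup, and the metrics path is the default Quarkus Micrometer Prometheus endpoint.
# Minimal Prometheus scrape job for the school timetable application (illustrative values)
scrape_configs:
  - job_name: "optaplanner-school-timetable"   # assumed job name
    metrics_path: "/q/metrics"                 # default Quarkus metrics path
    static_configs:
      - targets: ["localhost:8080"]            # Quarkus dev mode default host and port
After Prometheus scrapes the endpoint, the solver metrics that OptaPlanner exposes through Micrometer can be graphed or alerted on in Prometheus.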
[ "mvn --version", "mvn com.redhat.quarkus.platform:quarkus-maven-plugin:2.13.Final-redhat-00006:create -DprojectGroupId=com.example -DprojectArtifactId=optaplanner-quickstart -Dextensions=\"resteasy,resteasy-jackson,optaplanner-quarkus,optaplanner-quarkus-jackson\" -DplatformGroupId=com.redhat.quarkus.platform -DplatformVersion=2.13.Final-redhat-00006 -DnoExamples", "cd optaplanner-quickstart", "<dependencyManagement> <dependencies> <dependency> <groupId>io.quarkus.platform</groupId> <artifactId>quarkus-bom</artifactId> <version>2.13.Final-redhat-00006</version> <type>pom</type> <scope>import</scope> </dependency> <dependency> <groupId>io.quarkus.platform</groupId> <artifactId>quarkus-optaplanner-bom</artifactId> <version>2.13.Final-redhat-00006</version> <type>pom</type> <scope>import</scope> </dependency> </dependencies> </dependencyManagement> <dependencies> <dependency> <groupId>io.quarkus</groupId> <artifactId>quarkus-resteasy</artifactId> </dependency> <dependency> <groupId>io.quarkus</groupId> <artifactId>quarkus-resteasy-jackson</artifactId> </dependency> <dependency> <groupId>org.optaplanner</groupId> <artifactId>optaplanner-quarkus</artifactId> </dependency> <dependency> <groupId>org.optaplanner</groupId> <artifactId>optaplanner-quarkus-jackson</artifactId> </dependency> <dependency> <groupId>io.quarkus</groupId> <artifactId>quarkus-junit5</artifactId> <scope>test</scope> </dependency> </dependencies>", "package com.example.domain; import java.time.DayOfWeek; import java.time.LocalTime; public class Timeslot { private DayOfWeek dayOfWeek; private LocalTime startTime; private LocalTime endTime; private Timeslot() { } public Timeslot(DayOfWeek dayOfWeek, LocalTime startTime, LocalTime endTime) { this.dayOfWeek = dayOfWeek; this.startTime = startTime; this.endTime = endTime; } @Override public String toString() { return dayOfWeek + \" \" + startTime.toString(); } // ******************************** // Getters and setters // ******************************** public DayOfWeek getDayOfWeek() { return dayOfWeek; } public LocalTime getStartTime() { return startTime; } public LocalTime getEndTime() { return endTime; } }", "package com.example.domain; public class Room { private String name; private Room() { } public Room(String name) { this.name = name; } @Override public String toString() { return name; } // ******************************** // Getters and setters // ******************************** public String getName() { return name; } }", "package com.example.domain; import org.optaplanner.core.api.domain.entity.PlanningEntity; import org.optaplanner.core.api.domain.variable.PlanningVariable; @PlanningEntity public class Lesson { private Long id; private String subject; private String teacher; private String studentGroup; @PlanningVariable(valueRangeProviderRefs = \"timeslotRange\") private Timeslot timeslot; @PlanningVariable(valueRangeProviderRefs = \"roomRange\") private Room room; private Lesson() { } public Lesson(Long id, String subject, String teacher, String studentGroup) { this.id = id; this.subject = subject; this.teacher = teacher; this.studentGroup = studentGroup; } @Override public String toString() { return subject + \"(\" + id + \")\"; } // ******************************** // Getters and setters // ******************************** public Long getId() { return id; } public String getSubject() { return subject; } public String getTeacher() { return teacher; } public String getStudentGroup() { return studentGroup; } public Timeslot getTimeslot() { return timeslot; } 
public void setTimeslot(Timeslot timeslot) { this.timeslot = timeslot; } public Room getRoom() { return room; } public void setRoom(Room room) { this.room = room; } }", "public class TimeTableEasyScoreCalculator implements EasyScoreCalculator<TimeTable> { @Override public HardSoftScore calculateScore(TimeTable timeTable) { List<Lesson> lessonList = timeTable.getLessonList(); int hardScore = 0; for (Lesson a : lessonList) { for (Lesson b : lessonList) { if (a.getTimeslot() != null && a.getTimeslot().equals(b.getTimeslot()) && a.getId() < b.getId()) { // A room can accommodate at most one lesson at the same time. if (a.getRoom() != null && a.getRoom().equals(b.getRoom())) { hardScore--; } // A teacher can teach at most one lesson at the same time. if (a.getTeacher().equals(b.getTeacher())) { hardScore--; } // A student can attend at most one lesson at the same time. if (a.getStudentGroup().equals(b.getStudentGroup())) { hardScore--; } } } } int softScore = 0; // Soft constraints are only implemented in the \"complete\" implementation return HardSoftScore.of(hardScore, softScore); } }", "package com.example.solver; import com.example.domain.Lesson; import org.optaplanner.core.api.score.buildin.hardsoft.HardSoftScore; import org.optaplanner.core.api.score.stream.Constraint; import org.optaplanner.core.api.score.stream.ConstraintFactory; import org.optaplanner.core.api.score.stream.ConstraintProvider; import org.optaplanner.core.api.score.stream.Joiners; public class TimeTableConstraintProvider implements ConstraintProvider { @Override public Constraint[] defineConstraints(ConstraintFactory constraintFactory) { return new Constraint[] { // Hard constraints roomConflict(constraintFactory), teacherConflict(constraintFactory), studentGroupConflict(constraintFactory), // Soft constraints are only implemented in the \"complete\" implementation }; } private Constraint roomConflict(ConstraintFactory constraintFactory) { // A room can accommodate at most one lesson at the same time. // Select a lesson return constraintFactory.from(Lesson.class) // ... and pair it with another lesson .join(Lesson.class, // ... in the same timeslot Joiners.equal(Lesson::getTimeslot), // ... in the same room Joiners.equal(Lesson::getRoom), // ... and the pair is unique (different id, no reverse pairs) Joiners.lessThan(Lesson::getId)) // then penalize each pair with a hard weight. .penalize(\"Room conflict\", HardSoftScore.ONE_HARD); } private Constraint teacherConflict(ConstraintFactory constraintFactory) { // A teacher can teach at most one lesson at the same time. return constraintFactory.from(Lesson.class) .join(Lesson.class, Joiners.equal(Lesson::getTimeslot), Joiners.equal(Lesson::getTeacher), Joiners.lessThan(Lesson::getId)) .penalize(\"Teacher conflict\", HardSoftScore.ONE_HARD); } private Constraint studentGroupConflict(ConstraintFactory constraintFactory) { // A student can attend at most one lesson at the same time. 
return constraintFactory.from(Lesson.class) .join(Lesson.class, Joiners.equal(Lesson::getTimeslot), Joiners.equal(Lesson::getStudentGroup), Joiners.lessThan(Lesson::getId)) .penalize(\"Student group conflict\", HardSoftScore.ONE_HARD); } }", "package com.example.domain; import java.util.List; import org.optaplanner.core.api.domain.solution.PlanningEntityCollectionProperty; import org.optaplanner.core.api.domain.solution.PlanningScore; import org.optaplanner.core.api.domain.solution.PlanningSolution; import org.optaplanner.core.api.domain.solution.ProblemFactCollectionProperty; import org.optaplanner.core.api.domain.valuerange.ValueRangeProvider; import org.optaplanner.core.api.score.buildin.hardsoft.HardSoftScore; @PlanningSolution public class TimeTable { @ValueRangeProvider(id = \"timeslotRange\") @ProblemFactCollectionProperty private List<Timeslot> timeslotList; @ValueRangeProvider(id = \"roomRange\") @ProblemFactCollectionProperty private List<Room> roomList; @PlanningEntityCollectionProperty private List<Lesson> lessonList; @PlanningScore private HardSoftScore score; private TimeTable() { } public TimeTable(List<Timeslot> timeslotList, List<Room> roomList, List<Lesson> lessonList) { this.timeslotList = timeslotList; this.roomList = roomList; this.lessonList = lessonList; } // ******************************** // Getters and setters // ******************************** public List<Timeslot> getTimeslotList() { return timeslotList; } public List<Room> getRoomList() { return roomList; } public List<Lesson> getLessonList() { return lessonList; } public HardSoftScore getScore() { return score; } }", "package org.acme.optaplanner.rest; import java.util.UUID; import java.util.concurrent.ExecutionException; import javax.inject.Inject; import javax.ws.rs.POST; import javax.ws.rs.Path; import org.acme.optaplanner.domain.TimeTable; import org.optaplanner.core.api.solver.SolverJob; import org.optaplanner.core.api.solver.SolverManager; @Path(\"/timeTable\") public class TimeTableResource { @Inject SolverManager<TimeTable, UUID> solverManager; @POST @Path(\"/solve\") public TimeTable solve(TimeTable problem) { UUID problemId = UUID.randomUUID(); // Submit the problem to start solving SolverJob<TimeTable, UUID> solverJob = solverManager.solve(problemId, problem); TimeTable solution; try { // Wait until the solving ends solution = solverJob.getFinalBestSolution(); } catch (InterruptedException | ExecutionException e) { throw new IllegalStateException(\"Solving failed.\", e); } return solution; } }", "quarkus.optaplanner.solver.termination.spent-limit=5s", "./mvnw compile quarkus:dev", "curl -i -X POST http://localhost:8080/timeTable/solve -H \"Content-Type:application/json\" -d '{\"timeslotList\":[{\"dayOfWeek\":\"MONDAY\",\"startTime\":\"08:30:00\",\"endTime\":\"09:30:00\"},{\"dayOfWeek\":\"MONDAY\",\"startTime\":\"09:30:00\",\"endTime\":\"10:30:00\"}],\"roomList\":[{\"name\":\"Room A\"},{\"name\":\"Room B\"}],\"lessonList\":[{\"id\":1,\"subject\":\"Math\",\"teacher\":\"A. Turing\",\"studentGroup\":\"9th grade\"},{\"id\":2,\"subject\":\"Chemistry\",\"teacher\":\"M. Curie\",\"studentGroup\":\"9th grade\"},{\"id\":3,\"subject\":\"French\",\"teacher\":\"M. Curie\",\"studentGroup\":\"10th grade\"},{\"id\":4,\"subject\":\"History\",\"teacher\":\"I. Jones\",\"studentGroup\":\"10th grade\"}]}'", "HTTP/1.1 200 Content-Type: application/json {\"timeslotList\":...,\"roomList\":...,\"lessonList\":[{\"id\":1,\"subject\":\"Math\",\"teacher\":\"A. 
Turing\",\"studentGroup\":\"9th grade\",\"timeslot\":{\"dayOfWeek\":\"MONDAY\",\"startTime\":\"08:30:00\",\"endTime\":\"09:30:00\"},\"room\":{\"name\":\"Room A\"}},{\"id\":2,\"subject\":\"Chemistry\",\"teacher\":\"M. Curie\",\"studentGroup\":\"9th grade\",\"timeslot\":{\"dayOfWeek\":\"MONDAY\",\"startTime\":\"09:30:00\",\"endTime\":\"10:30:00\"},\"room\":{\"name\":\"Room A\"}},{\"id\":3,\"subject\":\"French\",\"teacher\":\"M. Curie\",\"studentGroup\":\"10th grade\",\"timeslot\":{\"dayOfWeek\":\"MONDAY\",\"startTime\":\"08:30:00\",\"endTime\":\"09:30:00\"},\"room\":{\"name\":\"Room B\"}},{\"id\":4,\"subject\":\"History\",\"teacher\":\"I. Jones\",\"studentGroup\":\"10th grade\",\"timeslot\":{\"dayOfWeek\":\"MONDAY\",\"startTime\":\"09:30:00\",\"endTime\":\"10:30:00\"},\"room\":{\"name\":\"Room B\"}}],\"score\":\"0hard/0soft\"}", "... Solving started: time spent (33), best score (-8init/0hard/0soft), environment mode (REPRODUCIBLE), random (JDK with seed 0). ... Construction Heuristic phase (0) ended: time spent (73), best score (0hard/0soft), score calculation speed (459/sec), step total (4). ... Local Search phase (1) ended: time spent (5000), best score (0hard/0soft), score calculation speed (28949/sec), step total (28398). ... Solving ended: time spent (5000), best score (0hard/0soft), score calculation speed (28524/sec), phase total (2), environment mode (REPRODUCIBLE).", "package org.acme.optaplanner.solver; import java.time.DayOfWeek; import java.time.LocalTime; import javax.inject.Inject; import io.quarkus.test.junit.QuarkusTest; import org.acme.optaplanner.domain.Lesson; import org.acme.optaplanner.domain.Room; import org.acme.optaplanner.domain.TimeTable; import org.acme.optaplanner.domain.Timeslot; import org.junit.jupiter.api.Test; import org.optaplanner.test.api.score.stream.ConstraintVerifier; @QuarkusTest class TimeTableConstraintProviderTest { private static final Room ROOM = new Room(\"Room1\"); private static final Timeslot TIMESLOT1 = new Timeslot(DayOfWeek.MONDAY, LocalTime.of(9,0), LocalTime.NOON); private static final Timeslot TIMESLOT2 = new Timeslot(DayOfWeek.TUESDAY, LocalTime.of(9,0), LocalTime.NOON); @Inject ConstraintVerifier<TimeTableConstraintProvider, TimeTable> constraintVerifier; @Test void roomConflict() { Lesson firstLesson = new Lesson(1, \"Subject1\", \"Teacher1\", \"Group1\"); Lesson conflictingLesson = new Lesson(2, \"Subject2\", \"Teacher2\", \"Group2\"); Lesson nonConflictingLesson = new Lesson(3, \"Subject3\", \"Teacher3\", \"Group3\"); firstLesson.setRoom(ROOM); firstLesson.setTimeslot(TIMESLOT1); conflictingLesson.setRoom(ROOM); conflictingLesson.setTimeslot(TIMESLOT1); nonConflictingLesson.setRoom(ROOM); nonConflictingLesson.setTimeslot(TIMESLOT2); constraintVerifier.verifyThat(TimeTableConstraintProvider::roomConflict) .given(firstLesson, conflictingLesson, nonConflictingLesson) .penalizesBy(1); } }", "package com.exmaple.optaplanner.rest; import java.time.DayOfWeek; import java.time.LocalTime; import java.util.ArrayList; import java.util.List; import javax.inject.Inject; import io.quarkus.test.junit.QuarkusTest; import com.exmaple.optaplanner.domain.Room; import com.exmaple.optaplanner.domain.Timeslot; import com.exmaple.optaplanner.domain.Lesson; import com.exmaple.optaplanner.domain.TimeTable; import com.exmaple.optaplanner.rest.TimeTableResource; import org.junit.jupiter.api.Test; import org.junit.jupiter.api.Timeout; import static org.junit.jupiter.api.Assertions.assertFalse; import static org.junit.jupiter.api.Assertions.assertNotNull; 
import static org.junit.jupiter.api.Assertions.assertTrue; @QuarkusTest public class TimeTableResourceTest { @Inject TimeTableResource timeTableResource; @Test @Timeout(600_000) public void solve() { TimeTable problem = generateProblem(); TimeTable solution = timeTableResource.solve(problem); assertFalse(solution.getLessonList().isEmpty()); for (Lesson lesson : solution.getLessonList()) { assertNotNull(lesson.getTimeslot()); assertNotNull(lesson.getRoom()); } assertTrue(solution.getScore().isFeasible()); } private TimeTable generateProblem() { List<Timeslot> timeslotList = new ArrayList<>(); timeslotList.add(new Timeslot(DayOfWeek.MONDAY, LocalTime.of(8, 30), LocalTime.of(9, 30))); timeslotList.add(new Timeslot(DayOfWeek.MONDAY, LocalTime.of(9, 30), LocalTime.of(10, 30))); timeslotList.add(new Timeslot(DayOfWeek.MONDAY, LocalTime.of(10, 30), LocalTime.of(11, 30))); timeslotList.add(new Timeslot(DayOfWeek.MONDAY, LocalTime.of(13, 30), LocalTime.of(14, 30))); timeslotList.add(new Timeslot(DayOfWeek.MONDAY, LocalTime.of(14, 30), LocalTime.of(15, 30))); List<Room> roomList = new ArrayList<>(); roomList.add(new Room(\"Room A\")); roomList.add(new Room(\"Room B\")); roomList.add(new Room(\"Room C\")); List<Lesson> lessonList = new ArrayList<>(); lessonList.add(new Lesson(101L, \"Math\", \"B. May\", \"9th grade\")); lessonList.add(new Lesson(102L, \"Physics\", \"M. Curie\", \"9th grade\")); lessonList.add(new Lesson(103L, \"Geography\", \"M. Polo\", \"9th grade\")); lessonList.add(new Lesson(104L, \"English\", \"I. Jones\", \"9th grade\")); lessonList.add(new Lesson(105L, \"Spanish\", \"P. Cruz\", \"9th grade\")); lessonList.add(new Lesson(201L, \"Math\", \"B. May\", \"10th grade\")); lessonList.add(new Lesson(202L, \"Chemistry\", \"M. Curie\", \"10th grade\")); lessonList.add(new Lesson(203L, \"History\", \"I. Jones\", \"10th grade\")); lessonList.add(new Lesson(204L, \"English\", \"P. Cruz\", \"10th grade\")); lessonList.add(new Lesson(205L, \"French\", \"M. Curie\", \"10th grade\")); return new TimeTable(timeslotList, roomList, lessonList); } }", "The solver runs only for 5 seconds to avoid a HTTP timeout in this simple implementation. It's recommended to run for at least 5 minutes (\"5m\") otherwise. quarkus.optaplanner.solver.termination.spent-limit=5s Effectively disable this termination in favor of the best-score-limit %test.quarkus.optaplanner.solver.termination.spent-limit=1h %test.quarkus.optaplanner.solver.termination.best-score-limit=0hard/*soft", "... Solving ended: ..., score calculation speed (29455/sec),", "quarkus.log.category.\"org.optaplanner\".level=debug", "... Solving started: time spent (67), best score (-20init/0hard/0soft), environment mode (REPRODUCIBLE), random (JDK with seed 0). ... CH step (0), time spent (128), score (-18init/0hard/0soft), selected move count (15), picked move ([Math(101) {null -> Room A}, Math(101) {null -> MONDAY 08:30}]). ... 
CH step (1), time spent (145), score (-16init/0hard/0soft), selected move count (15), picked move ([Physics(102) {null -> Room A}, Physics(102) {null -> MONDAY 09:30}]).", "package org.acme.optaplanner.rest; import javax.inject.Inject; import javax.transaction.Transactional; import javax.ws.rs.GET; import javax.ws.rs.POST; import javax.ws.rs.Path; import io.quarkus.panache.common.Sort; import org.acme.optaplanner.domain.Lesson; import org.acme.optaplanner.domain.Room; import org.acme.optaplanner.domain.TimeTable; import org.acme.optaplanner.domain.Timeslot; import org.optaplanner.core.api.score.ScoreManager; import org.optaplanner.core.api.score.buildin.hardsoft.HardSoftScore; import org.optaplanner.core.api.solver.SolverManager; import org.optaplanner.core.api.solver.SolverStatus; @Path(\"/timeTable\") public class TimeTableResource { public static final Long SINGLETON_TIME_TABLE_ID = 1L; @Inject SolverManager<TimeTable, Long> solverManager; @Inject ScoreManager<TimeTable, HardSoftScore> scoreManager; // To try, open http://localhost:8080/timeTable @GET public TimeTable getTimeTable() { // Get the solver status before loading the solution // to avoid the race condition that the solver terminates between them SolverStatus solverStatus = getSolverStatus(); TimeTable solution = findById(SINGLETON_TIME_TABLE_ID); scoreManager.updateScore(solution); // Sets the score solution.setSolverStatus(solverStatus); return solution; } @POST @Path(\"/solve\") public void solve() { solverManager.solveAndListen(SINGLETON_TIME_TABLE_ID, this::findById, this::save); } public SolverStatus getSolverStatus() { return solverManager.getSolverStatus(SINGLETON_TIME_TABLE_ID); } @POST @Path(\"/stopSolving\") public void stopSolving() { solverManager.terminateEarly(SINGLETON_TIME_TABLE_ID); } @Transactional protected TimeTable findById(Long id) { if (!SINGLETON_TIME_TABLE_ID.equals(id)) { throw new IllegalStateException(\"There is no timeTable with id (\" + id + \").\"); } // Occurs in a single transaction, so each initialized lesson references the same timeslot/room instance // that is contained by the timeTable's timeslotList/roomList. 
return new TimeTable( Timeslot.listAll(Sort.by(\"dayOfWeek\").and(\"startTime\").and(\"endTime\").and(\"id\")), Room.listAll(Sort.by(\"name\").and(\"id\")), Lesson.listAll(Sort.by(\"subject\").and(\"teacher\").and(\"studentGroup\").and(\"id\"))); } @Transactional protected void save(TimeTable timeTable) { for (Lesson lesson : timeTable.getLessonList()) { // TODO this is awfully naive: optimistic locking causes issues if called by the SolverManager Lesson attachedLesson = Lesson.findById(lesson.getId()); attachedLesson.setTimeslot(lesson.getTimeslot()); attachedLesson.setRoom(lesson.getRoom()); } } }", "package org.acme.optaplanner.rest; import javax.inject.Inject; import io.quarkus.test.junit.QuarkusTest; import org.acme.optaplanner.domain.Lesson; import org.acme.optaplanner.domain.TimeTable; import org.junit.jupiter.api.Test; import org.junit.jupiter.api.Timeout; import org.optaplanner.core.api.solver.SolverStatus; import static org.junit.jupiter.api.Assertions.assertFalse; import static org.junit.jupiter.api.Assertions.assertNotNull; import static org.junit.jupiter.api.Assertions.assertTrue; @QuarkusTest public class TimeTableResourceTest { @Inject TimeTableResource timeTableResource; @Test @Timeout(600_000) public void solveDemoDataUntilFeasible() throws InterruptedException { timeTableResource.solve(); TimeTable timeTable = timeTableResource.getTimeTable(); while (timeTable.getSolverStatus() != SolverStatus.NOT_SOLVING) { // Quick polling (not a Test Thread Sleep anti-pattern) // Test is still fast on fast machines and doesn't randomly fail on slow machines. Thread.sleep(20L); timeTable = timeTableResource.getTimeTable(); } assertFalse(timeTable.getLessonList().isEmpty()); for (Lesson lesson : timeTable.getLessonList()) { assertNotNull(lesson.getTimeslot()); assertNotNull(lesson.getRoom()); } assertTrue(timeTable.getScore().isFeasible()); } }", "<dependency> <groupId>io.quarkus</groupId> <artifactId>quarkus-micrometer-registry-prometheus</artifactId> </dependency>", "mvn compile quarkus:dev" ]
https://docs.redhat.com/en/documentation/red_hat_decision_manager/7.13/html/developing_solvers_with_red_hat_build_of_optaplanner_in_red_hat_decision_manager/assembly-optaplanner-school-timetable-quarkus_optaplanner-quickstarts
Chapter 1. New features and enhancements
Chapter 1. New features and enhancements Red Hat JBoss Core Services 2.4.57 Service Pack 1 does not include any new features or enhancements.
null
https://docs.redhat.com/en/documentation/red_hat_jboss_core_services/2.4.57/html/red_hat_jboss_core_services_apache_http_server_2.4.57_service_pack_1_release_notes/new_features_and_enhancements
Providing feedback on Red Hat documentation
Providing feedback on Red Hat documentation We appreciate your feedback on our documentation. Let us know how we can improve it. Submitting feedback through Jira (account required) Log in to the Jira website. Click Create in the top navigation bar Enter a descriptive title in the Summary field. Enter your suggestion for improvement in the Description field. Include links to the relevant parts of the documentation. Click Create at the bottom of the dialogue.
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/9/html/using_idm_healthcheck_to_monitor_your_idm_environment/proc_providing-feedback-on-red-hat-documentation_using-idm-healthcheck-to-monitor-your-idm-environment
14.22.10. Starting a (Previously Defined) Inactive Network
14.22.10. Starting a (Previously Defined) Inactive Network This command starts a (previously defined) inactive network. To do this, run:
[ "virsh net-start network" ]
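As a supplementary sketch, where default is an example network name rather than one defined in this guide, you can confirm the result and optionally mark the network to start automatically at boot:
virsh net-start default      # start the previously defined, inactive network
virsh net-list --all         # verify that the network is now listed as active
virsh net-autostart default  # optionally start this network automatically at boot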
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/virtualization_administration_guide/sub-sect-virtual_networking_commands-starting_a_previously_defined_inactive_network
Chapter 14. Troubleshooting monitoring issues
Chapter 14. Troubleshooting monitoring issues 14.1. Investigating why user-defined metrics are unavailable ServiceMonitor resources enable you to determine how to use the metrics exposed by a service in user-defined projects. Follow the steps outlined in this procedure if you have created a ServiceMonitor resource but cannot see any corresponding metrics in the Metrics UI. Prerequisites You have access to the cluster as a user with the cluster-admin cluster role. You have installed the OpenShift CLI ( oc ). You have enabled and configured monitoring for user-defined workloads. You have created the user-workload-monitoring-config ConfigMap object. You have created a ServiceMonitor resource. Procedure Check that the corresponding labels match in the service and ServiceMonitor resource configurations. Obtain the label defined in the service. The following example queries the prometheus-example-app service in the ns1 project: USD oc -n ns1 get service prometheus-example-app -o yaml Example output labels: app: prometheus-example-app Check that the matchLabels app label in the ServiceMonitor resource configuration matches the label output in the preceding step: USD oc -n ns1 get servicemonitor prometheus-example-monitor -o yaml Example output Note You can check service and ServiceMonitor resource labels as a developer with view permissions for the project. Inspect the logs for the Prometheus Operator in the openshift-user-workload-monitoring project. List the pods in the openshift-user-workload-monitoring project: USD oc -n openshift-user-workload-monitoring get pods Example output NAME READY STATUS RESTARTS AGE prometheus-operator-776fcbbd56-2nbfm 2/2 Running 0 132m prometheus-user-workload-0 5/5 Running 1 132m prometheus-user-workload-1 5/5 Running 1 132m thanos-ruler-user-workload-0 3/3 Running 0 132m thanos-ruler-user-workload-1 3/3 Running 0 132m Obtain the logs from the prometheus-operator container in the prometheus-operator pod. In the following example, the pod is called prometheus-operator-776fcbbd56-2nbfm : USD oc -n openshift-user-workload-monitoring logs prometheus-operator-776fcbbd56-2nbfm -c prometheus-operator If there is an issue with the service monitor, the logs might include an error similar to this example: level=warn ts=2020-08-10T11:48:20.906739623Z caller=operator.go:1829 component=prometheusoperator msg="skipping servicemonitor" error="it accesses file system via bearer token file which Prometheus specification prohibits" servicemonitor=eagle/eagle namespace=openshift-user-workload-monitoring prometheus=user-workload Review the target status for your endpoint on the Metrics targets page in the OpenShift Container Platform web console UI. Log in to the OpenShift Container Platform web console and navigate to Observe Targets in the Administrator perspective. Locate the metrics endpoint in the list, and review the status of the target in the Status column. If the Status is Down , click the URL for the endpoint to view more information on the Target Details page for that metrics target. Configure debug level logging for the Prometheus Operator in the openshift-user-workload-monitoring project.
Edit the user-workload-monitoring-config ConfigMap object in the openshift-user-workload-monitoring project: USD oc -n openshift-user-workload-monitoring edit configmap user-workload-monitoring-config Add logLevel: debug for prometheusOperator under data/config.yaml to set the log level to debug : apiVersion: v1 kind: ConfigMap metadata: name: user-workload-monitoring-config namespace: openshift-user-workload-monitoring data: config.yaml: | prometheusOperator: logLevel: debug # ... Save the file to apply the changes. Note The prometheus-operator in the openshift-user-workload-monitoring project restarts automatically when you apply the log-level change. Confirm that the debug log-level has been applied to the prometheus-operator deployment in the openshift-user-workload-monitoring project: USD oc -n openshift-user-workload-monitoring get deploy prometheus-operator -o yaml | grep "log-level" Example output - --log-level=debug Debug level logging will show all calls made by the Prometheus Operator. Check that the prometheus-operator pod is running: USD oc -n openshift-user-workload-monitoring get pods Note If an unrecognized Prometheus Operator loglevel value is included in the config map, the prometheus-operator pod might not restart successfully. Review the debug logs to see if the Prometheus Operator is using the ServiceMonitor resource. Review the logs for other related errors. Additional resources Creating a user-defined workload monitoring config map See Specifying how a service is monitored for details on how to create a ServiceMonitor or PodMonitor resource See Accessing metrics targets in the Administrator perspective 14.2. Determining why Prometheus is consuming a lot of disk space Developers can create labels to define attributes for metrics in the form of key-value pairs. The number of potential key-value pairs corresponds to the number of possible values for an attribute. An attribute that has an unlimited number of potential values is called an unbound attribute. For example, a customer_id attribute is unbound because it has an infinite number of possible values. Every assigned key-value pair has a unique time series. The use of many unbound attributes in labels can result in an exponential increase in the number of time series created. This can impact Prometheus performance and can consume a lot of disk space. You can use the following measures when Prometheus consumes a lot of disk: Check the number of scrape samples that are being collected. Check the time series database (TSDB) status using the Prometheus HTTP API for more information about which labels are creating the most time series. Doing so requires cluster administrator privileges. Reduce the number of unique time series that are created by reducing the number of unbound attributes that are assigned to user-defined metrics. Note Using attributes that are bound to a limited set of possible values reduces the number of potential key-value pair combinations. Enforce limits on the number of samples that can be scraped across user-defined projects. This requires cluster administrator privileges. Prerequisites You have access to the cluster as a user with the cluster-admin cluster role. You have installed the OpenShift CLI ( oc ). Procedure In the Administrator perspective, navigate to Observe Metrics . Run the following Prometheus Query Language (PromQL) query in the Expression field. 
This returns the ten metrics that have the highest number of scrape samples: topk(10,count by (job)({__name__=~".+"})) Investigate the number of unbound label values assigned to metrics with higher than expected scrape sample counts. If the metrics relate to a user-defined project , review the metrics key-value pairs assigned to your workload. These are implemented through Prometheus client libraries at the application level. Try to limit the number of unbound attributes referenced in your labels. If the metrics relate to a core OpenShift Container Platform project , create a Red Hat support case on the Red Hat Customer Portal . Review the TSDB status using the Prometheus HTTP API by running the following commands as a cluster administrator: USD oc login -u <username> -p <password> USD host=USD(oc -n openshift-monitoring get route prometheus-k8s -ojsonpath={.spec.host}) USD token=USD(oc whoami -t) USD curl -H "Authorization: Bearer USDtoken" -k "https://USDhost/api/v1/status/tsdb" Example output "status": "success", Additional resources See Setting a scrape sample limit for user-defined projects for details on how to set a scrape sample limit and create related alerting rules Submitting a support case
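As a consolidated reference for the label-matching check in the first procedure above, a minimal ServiceMonitor for the prometheus-example-app service might look like the following sketch; the monitor name, port name, and scrape interval mirror the example output shown earlier and are otherwise illustrative.
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: prometheus-example-monitor
  namespace: ns1
spec:
  endpoints:
  - interval: 30s
    port: web        # must match a named port on the service
    scheme: http
  selector:
    matchLabels:
      app: prometheus-example-app   # must match the label defined on the service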
[ "oc -n ns1 get service prometheus-example-app -o yaml", "labels: app: prometheus-example-app", "oc -n ns1 get servicemonitor prometheus-example-monitor -o yaml", "apiVersion: v1 kind: Service spec: endpoints: - interval: 30s port: web scheme: http selector: matchLabels: app: prometheus-example-app", "oc -n openshift-user-workload-monitoring get pods", "NAME READY STATUS RESTARTS AGE prometheus-operator-776fcbbd56-2nbfm 2/2 Running 0 132m prometheus-user-workload-0 5/5 Running 1 132m prometheus-user-workload-1 5/5 Running 1 132m thanos-ruler-user-workload-0 3/3 Running 0 132m thanos-ruler-user-workload-1 3/3 Running 0 132m", "oc -n openshift-user-workload-monitoring logs prometheus-operator-776fcbbd56-2nbfm -c prometheus-operator", "level=warn ts=2020-08-10T11:48:20.906739623Z caller=operator.go:1829 component=prometheusoperator msg=\"skipping servicemonitor\" error=\"it accesses file system via bearer token file which Prometheus specification prohibits\" servicemonitor=eagle/eagle namespace=openshift-user-workload-monitoring prometheus=user-workload", "oc -n openshift-user-workload-monitoring edit configmap user-workload-monitoring-config", "apiVersion: v1 kind: ConfigMap metadata: name: user-workload-monitoring-config namespace: openshift-user-workload-monitoring data: config.yaml: | prometheusOperator: logLevel: debug", "oc -n openshift-user-workload-monitoring get deploy prometheus-operator -o yaml | grep \"log-level\"", "- --log-level=debug", "oc -n openshift-user-workload-monitoring get pods", "topk(10,count by (job)({__name__=~\".+\"}))", "oc login -u <username> -p <password>", "host=USD(oc -n openshift-monitoring get route prometheus-k8s -ojsonpath={.spec.host})", "token=USD(oc whoami -t)", "curl -H \"Authorization: Bearer USDtoken\" -k \"https://USDhost/api/v1/status/tsdb\"", "\"status\": \"success\"," ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.11/html/monitoring/troubleshooting-monitoring-issues
31.6. Setting Module Parameters
31.6. Setting Module Parameters Like the kernel itself, modules can also take parameters that change their behavior. Most of the time, the default ones work well, but occasionally it is necessary or desirable to set custom parameters for a module. Because parameters cannot be dynamically set for a module that is already loaded into a running kernel, there are two different methods for setting them. Load a kernel module by running the modprobe command along with a list of customized parameters on the command line. If the module is already loaded, you need to first unload all its dependencies and the module itself using the modprobe -r command. This method allows you to run a kernel module with specific settings without making the changes persistent. See Section 31.6.1, "Loading a Customized Module - Temporary Changes" for more information. Alternatively, specify a list of the customized parameters in an existing or newly-created file in the /etc/modprobe.d/ directory. This method ensures that the module customization is persistent by setting the specified parameters accordingly each time the module is loaded, such as after every reboot or modprobe command. See Section 31.6.2, "Loading a Customized Module - Persistent Changes" for more information. 31.6.1. Loading a Customized Module - Temporary Changes Sometimes it is useful or necessary to run a kernel module temporarily with specific settings. To load a kernel module with customized parameters for the current system session, or until the module is reloaded with different parameters, run modprobe in the following format as root: ~]# modprobe <module_name> [ parameter = value ] where [ parameter = value ] represents a list of customized parameters available to that module. When loading a module with custom parameters on the command line, be aware of the following: You can enter multiple parameters and values by separating them with spaces. Some module parameters expect a list of comma-separated values as their argument. When entering the list of values, do not insert a space after each comma, or modprobe will incorrectly interpret the values following spaces as additional parameters. The modprobe command silently succeeds with an exit status of 0 if it successfully loads the module, or the module is already loaded into the kernel. Thus, you must ensure that the module is not already loaded before attempting to load it with custom parameters. The modprobe command does not automatically reload the module, or alert you that it is already loaded. The following procedure illustrates the recommended steps to load a kernel module with custom parameters on the e1000e module, which is the network driver for Intel PRO/1000 network adapters, as an example: Procedure 31.1. Loading a Kernel Module with Custom Parameters Verify whether the module is not already loaded into the kernel by running the following command: Note that the output of the command in this example indicates that the e1000e module is already loaded into the kernel. It also shows that this module has one dependency, the ptp module. If the module is already loaded into the kernel, you must unload the module and all its dependencies before proceeding with the next step. See Section 31.4, "Unloading a Module" for instructions on how to safely unload it. Load the module and list all custom parameters after the module name.
For example, if you wanted to load the Intel PRO/1000 network driver with the interrupt throttle rate set to 3000 interrupts per second for the first, second and third instances of the driver, and Energy Efficient Ethernet (EEE) turned on [5] , you would run, as root: This example illustrates passing multiple values to a single parameter by separating them with commas and omitting any spaces between them. [5] Despite what the example might imply, Energy Efficient Ethernet is turned on by default in the e1000e driver.
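To make the same customization persistent, as outlined at the beginning of this section, place an options line in a file under the /etc/modprobe.d/ directory; the file name below is an illustrative choice:
~]# cat /etc/modprobe.d/e1000e.conf
options e1000e InterruptThrottleRate=3000,3000,3000 EEE=1
With this file in place, the parameters are applied each time the e1000e module is loaded, such as after a reboot or a later modprobe command.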
[ "~]# lsmod|grep e1000e e1000e 236338 0 ptp 9614 1 e1000e", "~]# modprobe e1000e InterruptThrottleRate=3000,3000,3000 EEE=1" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/deployment_guide/sec-Setting_Module_Parameters
Assisted Installer for OpenShift Container Platform
Assisted Installer for OpenShift Container Platform Assisted Installer for OpenShift Container Platform 2023 Assisted Installer User Guide Red Hat Customer Content Services
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.12/html/assisted_installer_for_openshift_container_platform/index
24.4. Host Log Files
24.4. Host Log Files Log File Description /var/log/messages The log file used by libvirt . Use journalctl to view the log. You must be a member of the adm , systemd-journal , or wheel groups to view the log. /var/log/vdsm/spm-lock.log Log file detailing the host's ability to obtain a lease on the Storage Pool Manager role. The log details when the host has acquired, released, renewed, or failed to renew the lease. /var/log/vdsm/vdsm.log Log file for VDSM, the Manager's agent on the host(s). /tmp/ovirt-host-deploy- Date .log A host deployment log that is copied to the Manager as /var/log/ovirt-engine/host-deploy/ovirt- Date-Host-Correlation_ID .log after the host has been successfully deployed. /var/log/vdsm/import/import- UUID-Date .log Log file detailing virtual machine imports from a KVM host, a VMWare provider, or a RHEL 5 Xen host, including import failure information. UUID is the UUID of the virtual machine that was imported and Date is the date and time that the import began. /var/log/vdsm/supervdsm.log Logs VDSM tasks that were executed with superuser permissions. /var/log/vdsm/upgrade.log VDSM uses this log file during host upgrades to log configuration changes. /var/log/vdsm/mom.log Logs the activities of the VDSM's memory overcommitment manager.
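As a brief illustration of how the first two log files in the table are typically viewed (the time range and filter shown are arbitrary examples, not settings required by this guide):
journalctl --since "1 hour ago" | grep -i libvirt   # view recent libvirt messages in the journal
tail -f /var/log/vdsm/vdsm.log                      # follow the VDSM agent log on the host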
null
https://docs.redhat.com/en/documentation/red_hat_virtualization/4.3/html/administration_guide/host_log_files
Chapter 2. Decision-authoring assets in Red Hat Decision Manager
Chapter 2. Decision-authoring assets in Red Hat Decision Manager Red Hat Decision Manager supports several assets that you can use to define business decisions for your decision service. Each decision-authoring asset has different advantages, and you might prefer to use one or a combination of multiple assets depending on your goals and needs. The following table highlights the main decision-authoring assets supported in Red Hat Decision Manager projects to help you decide or confirm the best method for defining decisions in your decision service. Table 2.1. Decision-authoring assets supported in Red Hat Decision Manager Asset Highlights Authoring tools Documentation Decision Model and Notation (DMN) models Are decision models based on a notation standard defined by the Object Management Group (OMG) Use graphical decision requirements diagrams (DRDs) that represent part or all of the overall decision requirements graph (DRG) to trace business decision flows Use an XML schema that allows the DMN models to be shared between DMN-compliant platforms Support Friendly Enough Expression Language (FEEL) to define decision logic in DMN decision tables and other DMN boxed expressions Are optimal for creating comprehensive, illustrative, and stable decision flows Business Central or other DMN-compliant editor Designing a decision service using DMN models Guided decision tables Are tables of rules that you create in a UI-based table designer in Business Central Are a wizard-led alternative to spreadsheet decision tables Provide fields and options for acceptable input Support template keys and values for creating rule templates Support hit policies, real-time validation, and other additional features not supported in other assets Are optimal for creating rules in a controlled tabular format to minimize compilation errors Business Central Designing a decision service using guided decision tables Spreadsheet decision tables Are XLS or XLSX spreadsheet decision tables that you can upload into Business Central Support template keys and values for creating rule templates Are optimal for creating rules in decision tables already managed outside of Business Central Have strict syntax requirements for rules to be compiled properly when uploaded Spreadsheet editor Designing a decision service using spreadsheet decision tables Guided rules Are individual rules that you create in a UI-based rule designer in Business Central Provide fields and options for acceptable input Are optimal for creating single rules in a controlled format to minimize compilation errors Business Central Designing a decision service using guided rules Guided rule templates Are reusable rule structures that you create in a UI-based template designer in Business Central Provide fields and options for acceptable input Support template keys and values for creating rule templates (fundamental to the purpose of this asset) Are optimal for creating many rules with the same rule structure but with different defined field values Business Central Designing a decision service using guided rule templates DRL rules Are individual rules that you define directly in .drl text files Provide the most flexibility for defining rules and other technicalities of rule behavior Can be created in certain standalone environments and integrated with Red Hat Decision Manager Are optimal for creating rules that require advanced DRL options Have strict syntax requirements for rules to be compiled properly Business Central or integrated development environment (IDE) 
Designing a decision service using DRL rules Predictive Model Markup Language (PMML) models Are predictive data-analytic models based on a notation standard defined by the Data Mining Group (DMG) Use an XML schema that allows the PMML models to be shared between PMML-compliant platforms Support Regression, Scorecard, Tree, Mining, and other model types Can be included with a standalone Red Hat Decision Manager project or imported into a project in Business Central Are optimal for incorporating predictive data into decision services in Red Hat Decision Manager PMML or XML editor Designing a decision service using PMML models When you define business decisions, you can also consider using Red Hat build of Kogito for your cloud-native decision services. For more information about getting started with Red Hat build of Kogito microservices, see Getting started with Red Hat build of Kogito in Red Hat Decision Manager .
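To make the DRL row in the table more concrete, a single rule in a .drl text file typically has the following shape; the package name, the Applicant fact type, and the age check are hypothetical and not part of any shipped example:
package org.example.rules;                 // hypothetical package
import org.example.model.Applicant;        // hypothetical fact class

rule "Disqualify underage applicant"
when
    applicant : Applicant( age < 18 )      // hypothetical fact type and field
then
    applicant.setValid( false );           // hypothetical setter on the fact
end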
null
https://docs.redhat.com/en/documentation/red_hat_decision_manager/7.13/html/designing_your_decision_management_architecture_for_red_hat_decision_manager/decision-authoring-assets-ref_decision-management-architecture
2.3. Configuring ACPI For Use with Integrated Fence Devices
2.3. Configuring ACPI For Use with Integrated Fence Devices If your cluster uses integrated fence devices, you must configure ACPI (Advanced Configuration and Power Interface) to ensure immediate and complete fencing. Note For the most current information about integrated fence devices supported by Red Hat Cluster Suite, refer to http://www.redhat.com/cluster_suite/hardware/ . If a cluster node is configured to be fenced by an integrated fence device, disable ACPI Soft-Off for that node. Disabling ACPI Soft-Off allows an integrated fence device to turn off a node immediately and completely rather than attempting a clean shutdown (for example, shutdown -h now ). Otherwise, if ACPI Soft-Off is enabled, an integrated fence device can take four or more seconds to turn off a node (refer to note that follows). In addition, if ACPI Soft-Off is enabled and a node panics or freezes during shutdown, an integrated fence device may not be able to turn off the node. Under those circumstances, fencing is delayed or unsuccessful. Consequently, when a node is fenced with an integrated fence device and ACPI Soft-Off is enabled, a cluster recovers slowly or requires administrative intervention to recover. Note The amount of time required to fence a node depends on the integrated fence device used. Some integrated fence devices perform the equivalent of pressing and holding the power button; therefore, the fence device turns off the node in four to five seconds. Other integrated fence devices perform the equivalent of pressing the power button momentarily, relying on the operating system to turn off the node; therefore, the fence device turns off the node in a time span much longer than four to five seconds. To disable ACPI Soft-Off, use chkconfig management and verify that the node turns off immediately when fenced. The preferred way to disable ACPI Soft-Off is with chkconfig management: however, if that method is not satisfactory for your cluster, you can disable ACPI Soft-Off with one of the following alternate methods: Changing the BIOS setting to "instant-off" or an equivalent setting that turns off the node without delay Note Disabling ACPI Soft-Off with the BIOS may not be possible with some computers. Appending acpi=off to the kernel boot command line of the /boot/grub/grub.conf file Important This method completely disables ACPI; some computers do not boot correctly if ACPI is completely disabled. Use this method only if the other methods are not effective for your cluster. The following sections provide procedures for the preferred method and alternate methods of disabling ACPI Soft-Off: Section 2.3.1, "Disabling ACPI Soft-Off with chkconfig Management" - Preferred method Section 2.3.2, "Disabling ACPI Soft-Off with the BIOS" - First alternate method Section 2.3.3, "Disabling ACPI Completely in the grub.conf File" - Second alternate method 2.3.1. Disabling ACPI Soft-Off with chkconfig Management You can use chkconfig management to disable ACPI Soft-Off either by removing the ACPI daemon ( acpid ) from chkconfig management or by turning off acpid . Note This is the preferred method of disabling ACPI Soft-Off. Disable ACPI Soft-Off with chkconfig management at each cluster node as follows: Run either of the following commands: chkconfig --del acpid - This command removes acpid from chkconfig management. - OR - chkconfig --level 2345 acpid off - This command turns off acpid . Reboot the node. When the cluster is configured and running, verify that the node turns off immediately when fenced. 
Note You can fence the node with the fence_node command or Conga .
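For the second alternate method, appending acpi=off to the kernel boot command line looks like the following /boot/grub/grub.conf excerpt; the kernel version and root device shown are placeholders for the values on your node:
title Red Hat Enterprise Linux
        root (hd0,0)
        kernel /vmlinuz-<kernel_version> ro root=/dev/VolGroup00/LogVol00 rhgb quiet acpi=off
        initrd /initrd-<kernel_version>.img
Remember that this disables ACPI completely, so use it only if the chkconfig and BIOS methods are not effective for your cluster.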
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/cluster_administration/s1-acpi-CA
Chapter 18. PersistentClaimStorage schema reference
Chapter 18. PersistentClaimStorage schema reference Used in: JbodStorage , KafkaClusterSpec , KafkaNodePoolSpec , ZookeeperClusterSpec The type property is a discriminator that distinguishes use of the PersistentClaimStorage type from EphemeralStorage . It must have the value persistent-claim for the type PersistentClaimStorage . Property Property type Description type string Must be persistent-claim . size string When type=persistent-claim , defines the size of the persistent volume claim, such as 100Gi. Mandatory when type=persistent-claim . selector map Specifies a specific persistent volume to use. It contains key:value pairs representing labels for selecting such a volume. deleteClaim boolean Specifies whether the persistent volume claim is deleted when the cluster is un-deployed. class string The storage class to use for dynamic volume allocation. id integer Storage identification number. It is mandatory only for storage volumes defined in a storage of type 'jbod'. overrides PersistentClaimStorageOverride array Overrides for individual brokers. The overrides field allows you to specify a different configuration for different brokers.
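As an illustrative fragment only (the size, storage class name, and volume id are assumptions, not recommended values), persistent-claim storage is typically declared in a Kafka custom resource like this:
# Fragment of a Kafka custom resource using persistent-claim storage inside a jbod array
storage:
  type: jbod
  volumes:
    - id: 0
      type: persistent-claim
      size: 100Gi
      class: standard      # assumed storage class name
      deleteClaim: false
The id field is included because the volume is defined inside a jbod storage type, as noted in the table above.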
null
https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.7/html/streams_for_apache_kafka_api_reference/type-PersistentClaimStorage-reference
8.236. xmlrpc-c
8.236. xmlrpc-c 8.236.1. RHBA-2013:1254 - xmlrpc-c bug fix update Updated xmlrpc-c packages that fix one bug are now available. XML-RPC is a remote procedure call (RPC) protocol that uses XML to encode its calls and HTTP as a transport mechanism. Bug Fix BZ# 809819 Previously, features listed when the "--help" command was run were not consistent with the list when the "--features" command was run. Also, running the reproducer script resulted in "Unrecognized token" errors. With this update, listed features are consistent, and "Unrecognized token" errors are no longer displayed. Users of xmlrpc-c are advised to upgrade to these updated packages, which fix this bug.
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.5_technical_notes/xmlrpc-c
Chapter 2. Release notes
Chapter 2. Release notes 2.1. OpenShift Virtualization release notes 2.1.1. Providing documentation feedback To report an error or to improve our documentation, log in to your Red Hat Jira account and submit a Jira issue . 2.1.2. About Red Hat OpenShift Virtualization With Red Hat OpenShift Virtualization, you can bring traditional virtual machines (VMs) into OpenShift Container Platform and run them alongside containers. In OpenShift Virtualization, VMs are native Kubernetes objects that you can manage by using the OpenShift Container Platform web console or the command line. You can use OpenShift Virtualization with either the OVN-Kubernetes or the OpenShiftSDN default Container Network Interface (CNI) network provider. Learn more about what you can do with OpenShift Virtualization . Learn more about OpenShift Virtualization architecture and deployments . Prepare your cluster for OpenShift Virtualization. 2.1.2.1. OpenShift Virtualization supported cluster version OpenShift Virtualization 4.16 is supported for use on OpenShift Container Platform 4.16 clusters. To use the latest z-stream release of OpenShift Virtualization, you must first upgrade to the latest version of OpenShift Container Platform. 2.1.2.2. Supported guest operating systems To view the supported guest operating systems for OpenShift Virtualization, see Certified Guest Operating Systems in Red Hat OpenStack Platform, Red Hat Virtualization, OpenShift Virtualization and Red Hat Enterprise Linux with KVM . 2.1.2.3. Microsoft Windows SVVP certification OpenShift Virtualization is certified in Microsoft's Windows Server Virtualization Validation Program (SVVP) to run Windows Server workloads. The SVVP certification applies to: Red Hat Enterprise Linux CoreOS workers. In the Microsoft SVVP Catalog, they are named Red Hat OpenShift Container Platform 4 on RHEL CoreOS 9 . Intel and AMD CPUs. 2.1.3. Quick starts Quick start tours are available for several OpenShift Virtualization features. To view the tours, click the Help icon ? in the menu bar on the header of the OpenShift Container Platform web console and then select Quick Starts . You can filter the available tours by entering the keyword virtualization in the Filter field. 2.1.4. New and changed features This release adds new features and enhancements related to the following components and concepts: 2.1.4.1. Installation and update After upgrading to OpenShift Virtualization 4.16, data volumes that were previously removed through garbage collection might be recreated. This behavior is expected. You can ignore the recreated data volumes because data volume garbage collection is now disabled. 2.1.4.2. Virtualization Windows 10 VMs now boot using UEFI with TPM. Enabling the AutoResourceLimits feature gate automatically manages CPU and memory limits of a VM. The KubeVirt Tekton tasks are now shipped as a part of the OpenShift Container Platform Pipelines catalog . 2.1.4.3. Networking You can now access a VM that is connected to the default internal pod network on a stable fully qualified domain name (FQDN) by using headless services. 2.1.4.4. Web console Hot plugging virtual CPUs (vCPUs) into virtual machines is now generally available. If a vCPU cannot be hot plugged, the condition RestartRequired is applied to the VM. You can view this condition in the Diagnostics tab of the web console. You can now select sysprep options when you create a Microsoft Windows VM from an instance type.
Previously, you had to set the sysprep options by customizing the VM after its creation. 2.1.4.5. Monitoring As an administrator, you can now expose a limited set of host and virtual machine (VM) metrics to a guest VM through a virtio-serial port for OpenShift Virtualization by enabling a downwardMetrics feature gate and configuring a downwardMetrics device. Users retrieve the metrics by using the vm-dump-metrics tool or from the command line. On Red Hat Enterprise Linux (RHEL) 9, use the command line to view downward metrics. The vm-dump-metrics tool is not supported on the Red Hat Enterprise Linux (RHEL) 9 platform. 2.1.4.6. Notable technical changes VMs require a minimum of 1 GiB of allocated memory to enable memory hotplug. If a VM has less than 1 GiB of allocated memory, then memory hotplug is disabled. Runbooks for OpenShift Virtualization alerts are now maintained only in the openshift/runbooks git repository . Links to the runbook source files are now available in place of the removed runbooks. 2.1.5. Deprecated and removed features 2.1.5.1. Deprecated features Deprecated features are included in the current release and remain supported. However, deprecated features will be removed in a future release and are not recommended for new deployments. The tekton-tasks-operator is deprecated and Tekton tasks and example pipelines are now deployed by the ssp-operator . The copy-template , modify-vm-template , and create-vm-from-template tasks are deprecated. Support for Windows Server 2012 R2 templates is deprecated. The alerts KubeVirtComponentExceedsRequestedMemory and KubeVirtComponentExceedsRequestedCPU are deprecated. You can safely silence them. 2.1.5.2. Removed features Removed features are not supported in the current release. CentOS 7 and CentOS Stream 8 are now in the End of Life phase. As a consequence, the container images for these operating systems have been removed from OpenShift Virtualization and are no longer community supported . 2.1.6. Technology Preview features Some features in this release are currently in Technology Preview. These experimental features are not intended for production use. Note the following scope of support on the Red Hat Customer Portal for these features: Technology Preview Features Support Scope You can now configure a VM eviction strategy for the entire cluster . You can now enable nested virtualization on OpenShift Virtualization hosts . Cluster admins can now enable CPU resource limits on a namespace in the OpenShift Container Platform web console under Overview Settings Preview features . Cluster admins can now use the wasp-agent tool to configure a higher VM workload density in their clusters by overcommitting the amount of memory, in RAM, and assigning swap resources to VM workloads. OpenShift Virtualization now supports compatibility with Red Hat OpenShift Data Foundation (ODF) Regional Disaster Recovery. 2.1.7. Known issues Monitoring The Pod Disruption Budget (PDB) prevents pod disruptions for migratable virtual machine images. If the PDB detects pod disruption, then openshift-monitoring sends a PodDisruptionBudgetAtLimit alert every 60 minutes for virtual machine images that use the LiveMigrate eviction strategy. ( CNV-33834 ) As a workaround, silence alerts . Nodes Uninstalling OpenShift Virtualization does not remove the feature.node.kubevirt.io node labels created by OpenShift Virtualization. You must remove the labels manually. 
( CNV-38543 ) In a heterogeneous cluster with different compute nodes, virtual machines that have HyperV reenlightenment enabled cannot be scheduled on nodes that do not support timestamp-counter scaling (TSC) or have the appropriate TSC frequency. ( BZ#2151169 ) Storage If you use Portworx as your storage solution on AWS and create a VM disk image, the created image might be smaller than expected due to the filesystem overhead being accounted for twice. ( CNV-40217 ) As a workaround, you can manually expand the persistent volume claim (PVC) to increase the available space after the initial provisioning process completes. In some instances, multiple virtual machines can mount the same PVC in read-write mode, which might result in data corruption. ( CNV-13500 ) As a workaround, avoid using a single PVC in read-write mode with multiple VMs. If you clone more than 100 VMs using the csi-clone cloning strategy, then the Ceph CSI might not purge the clones. Manually deleting the clones might also fail. ( CNV-23501 ) As a workaround, you can restart the ceph-mgr to purge the VM clones. Virtualization VM migrations might fail on clusters with mixed CPU types. ( CNV-43195 ) As a workaround, you can set the CPU model at the VM spec level or at the cluster level . When adding a virtual Trusted Platform Module (vTPM) device to a Windows VM, the BitLocker Drive Encryption system check passes even if the vTPM device is not persistent. This is because a vTPM device that is not persistent stores and recovers encryption keys using ephemeral storage for the lifetime of the virt-launcher pod. When the VM migrates or is shut down and restarts, the vTPM data is lost. ( CNV-36448 ) OpenShift Virtualization links a service account token in use by a pod to that specific pod. OpenShift Virtualization implements a service account volume by creating a disk image that contains a token. If you migrate a VM, then the service account volume becomes invalid. ( CNV-33835 ) As a workaround, use user accounts rather than service accounts because user account tokens are not bound to a specific pod. With the release of the RHSA-2023:3722 advisory, the TLS Extended Master Secret (EMS) extension ( RFC 7627 ) is mandatory for TLS 1.2 connections on FIPS-enabled Red Hat Enterprise Linux (RHEL) 9 systems. This is in accordance with FIPS-140-3 requirements. TLS 1.3 is not affected. Legacy OpenSSL clients that do not support EMS or TLS 1.3 now cannot connect to FIPS servers running on RHEL 9. Similarly, RHEL 9 clients in FIPS mode cannot connect to servers that only support TLS 1.2 without EMS. This in practice means that these clients cannot connect to servers on RHEL 6, RHEL 7 and non-RHEL legacy operating systems. This is because the legacy 1.0.x versions of OpenSSL do not support EMS or TLS 1.3. For more information, see TLS Extension "Extended Master Secret" enforced with Red Hat Enterprise Linux 9.2 . As a workaround, update legacy OpenSSL clients to a version that supports TLS 1.3 and configure OpenShift Virtualization to use TLS 1.3, with the Modern TLS security profile type, for FIPS mode. Web console When you first deploy an OpenShift Container Platform cluster, creating VMs from templates or instance types by using the web console fails if you do not have cluster-admin permissions. As a workaround, the cluster administrator must first create a config map to enable other users to use templates and instance types to create VMs.
( CNV-38284 ) When you create a persistent volume claim (PVC) by selecting With Data upload form from the Create PersistentVolumeClaim list in the web console, uploading data to the PVC by using the Upload Data field fails. ( CNV-37607 )
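For the mixed CPU type migration issue noted above, the workaround of setting the CPU model at the VM spec level can also be applied from the command line. The following is a minimal sketch, not the documented procedure: the VM name, namespace, and CPU model value are illustrative, and the model you choose must be supported by every node that the VM can be scheduled on.
# Pin the CPU model in the VirtualMachine spec (names and model value are examples)
oc patch vm vm-example -n my-vms --type merge \
  -p '{"spec":{"template":{"spec":{"domain":{"cpu":{"model":"Skylake-Client-IBRS"}}}}}}'
# Restart the VM so that the new CPU model takes effect
virtctl restart vm-example -n my-vms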
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.16/html/virtualization/release-notes
Data Grid Cross-Site Replication
Data Grid Cross-Site Replication Red Hat Data Grid 8.5 Back up data between Data Grid clusters Red Hat Customer Content Services
null
https://docs.redhat.com/en/documentation/red_hat_data_grid/8.5/html/data_grid_cross-site_replication/index
Chapter 18. Object storage
Chapter 18. Object storage The Object Storage (swift) service stores and retrieves data over HTTP. Objects (blobs of data) are stored in an organizational hierarchy that can be configured to offer anonymous read-only access, ACL defined access, or even temporary access. Swift supports multiple token-based authentication mechanisms implemented through middleware. Applications store and retrieve data in Object Storage using an industry-standard HTTP RESTful API. The back end swift components follow the same RESTful model, although some APIs (such as those managing durability) are kept private to the cluster. The components of swift fall into the following primary groups: Proxy services Auth services Storage services Account service Container service Object service Note An Object Storage installation does not have to be internet-facing and could instead be a private cloud, with the public switch forming part of the organization's internal network infrastructure. 18.1. Network security Security hardening for swift begins with securing the networking component. See the networking chapter for more information. For high availability, the rsync protocol is used to replicate data between storage service nodes. In addition, the proxy service communicates with the storage service when relaying data between the client end-point and the cloud environment. Note Swift does not use encryption or authentication with inter-node communications. This is because swift uses the native rsync protocol for performance reasons, and does not use SSH for rsync communications. This is why you see a private switch or private network ([V]LAN) in the architecture diagrams. This data zone should be separate from other OpenStack data networks as well. Note Use a private (V)LAN network segment for your storage nodes in the data zone. This requires that the proxy nodes have dual interfaces (physical or virtual): One interface as a public interface for consumers to reach. Another interface as a private interface with access to the storage nodes. One possible network architecture uses the Object Storage network architecture with a management node (OSAM). 18.2. Run services as non-root user It is recommended that you configure swift to run under a non-root (no UID 0) service account. One recommendation is the username swift with the primary group swift , as deployed by director. Object Storage services include, for example, proxy-server , container-server , account-server . 18.3. File permissions The /var/lib/config-data/puppet-generated/swift/etc/swift/ directory contains information about the ring topology and environment configuration. The following permissions are recommended: This restriction only allows root to modify configuration files, while still allowing the services to read them, due to their membership in the swift group. 18.4. Securing storage services The following are the default listening ports for the various storage services: Account service - TCP/6002 Container service - TCP/6001 Object Service - TCP/6000 Rsync - TCP/873 Note If ssync is used instead of rsync, the object service port is used for maintaining durability. Note Authentication does not occur at the storage nodes. If you are able to connect to a storage node on one of these ports, you can access or modify data without authentication. To help mitigate this issue, you should follow the recommendations given previously about using a private storage network. 18.5.
Object Storage account terminology A swift account is not a user account or credential. The following distinctions exist: Swift account - A collection of containers (not user accounts or authentication). The authentication system you use will determine which users are associated with the account and how they might access it. Swift containers - A collection of objects. Metadata on the container is available for ACLs. The usage of ACLs is dependent on the authentication system used. Swift objects - The actual data objects. ACLs at the object level are also available with metadata, and are dependent on the authentication system used. At each level, you have ACLs that control user access; ACLs are interpreted based on the authentication system in use. The most common type of authentication provider is the Identity Service (keystone); custom authentication providers are also available. 18.6. Securing proxy services A proxy node should have at least two interfaces (physical or virtual): one public and one private. You can use firewalls or service binding to help protect the public interface. The public-facing service is an HTTP web server that processes end-point client requests, authenticates them, and performs the appropriate action. The private interface does not require any listening services, but is instead used to establish outgoing connections to storage nodes on the private storage network. 18.7. HTTP listening port Director configures the web services to run under a non-root (no UID 0) user. Using port numbers higher than 1024 helps avoid running any part of the web container as root. Normally, clients that use the HTTP REST API (and perform automatic authentication) will retrieve the full REST API URL they require from the authentication response. The OpenStack REST API allows a client to authenticate to one URL and then be redirected to use a completely different URL for the actual service. For example, a client can authenticate to https://identity.cloud.example.org:55443/v1/auth and get a response with their authentication key and storage URL (the URL of the proxy nodes or load balancer) of https://swift.cloud.example.org:44443/v1/AUTH_8980 . 18.8. Load balancer If the option of using Apache is not feasible, or if you wish to offload your TLS work for performance, you might employ a dedicated network device load balancer. This is a common way to provide redundancy and load balancing when using multiple proxy nodes. If you choose to offload your TLS, ensure that the network link between the load balancer and your proxy nodes is on a private (V)LAN segment such that other nodes on the network (possibly compromised) cannot wiretap (sniff) the unencrypted traffic. If such a breach were to occur, the attacker could gain access to endpoint client or cloud administrator credentials and access the cloud data. The authentication service you use will determine how you configure a different URL in the responses to endpoint clients, allowing them to use your load balancer instead of an individual proxy node. 18.9. Object Storage authentication Object Storage (swift) uses a WSGI model to provide for a middleware capability that not only provides general extensibility, but is also used for authentication of endpoint clients. The authentication provider defines what roles and user types exist. Some use traditional username and password credentials, while others might leverage API key tokens or even client-side x.509 certificates. Custom providers can be integrated using custom middleware.
Object Storage comes with two authentication middleware modules by default, either of which can be used as sample code for developing a custom authentication middleware. 18.10. Encrypt at-rest swift objects Swift can integrate with Barbican to transparently encrypt and decrypt your stored (at-rest) objects. At-rest encryption is distinct from in-transit encryption, and refers to the objects being encrypted while being stored on disk. Swift performs these encryption tasks transparently, with the objects being automatically encrypted when uploaded to swift, then automatically decrypted when served to a user. This encryption and decryption is done using the same (symmetric) key, which is stored in Barbican. Additional resources Manage secrets with OpenStack Key Manager 18.11. Additional items In /var/lib/config-data/puppet-generated/swift/etc/swift/swift.conf on every node, there is a swift_hash_path_prefix setting and a swift_hash_path_suffix setting. These are provided to reduce the chance of hash collisions for objects being stored and to prevent one user from overwriting the data of another user. Both values should be set initially by using a cryptographically secure random number generator and must be consistent across all nodes. Ensure that they are protected with proper ACLs and that you have a backup copy to avoid data loss.
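As a rough, hedged illustration of the preceding recommendations, the following commands show one way to generate candidate values for swift_hash_path_prefix and swift_hash_path_suffix and to verify the ownership and mode of the configuration files. The openssl rand call is only an example of a cryptographically secure source; whatever values you use must be kept secret, backed up, and identical on every node.
# Generate candidate values for swift_hash_path_prefix and swift_hash_path_suffix
openssl rand -hex 32
openssl rand -hex 32
# Verify ownership and permissions (expected root:swift with 640 files and 750 directories)
stat -c '%U:%G %a %n' /var/lib/config-data/puppet-generated/swift/etc/swift/swift.conf
stat -c '%U:%G %a %n' /var/lib/config-data/puppet-generated/swift/etc/swift/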
[ "chown -R root:swift /var/lib/config-data/puppet-generated/swift/etc/swift/* find /var/lib/config-data/puppet-generated/swift/etc/swift/ -type f -exec chmod 640 {} \\; find /var/lib/config-data/puppet-generated/swift/etc/swift/ -type d -exec chmod 750 {} \\;" ]
https://docs.redhat.com/en/documentation/red_hat_openstack_platform/17.1/html/hardening_red_hat_openstack_platform/assembly_object-storage
Chapter 18. Post-installation security hardening
Chapter 18. Post-installation security hardening RHEL is designed with robust security features enabled by default. However, you can enhance its security further through additional hardening measures. For more information about: Installing security updates and displaying additional details about the updates to keep your RHEL systems secured against newly discovered threats and vulnerabilities, see Managing and monitoring security updates . Processes and practices for securing RHEL servers and workstations against local and remote intrusion, exploitation, and malicious activity, see Security hardening . Controlling how users and processes interact with the files on the system, or controlling which users can perform which actions by mapping them to specific SELinux confined users, see Using SELinux . Tools and techniques to improve the security of your networks and lower the risks of data breaches and intrusions, see Securing networks . Packet filters, such as firewalls, that use rules to control incoming, outgoing, and forwarded network traffic, see Configuring firewalls and packet filters .
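The linked guides cover these topics in depth. As a brief, hedged illustration of the kind of routine checks they describe, the following commands assume a RHEL 9 host with dnf and firewalld available:
# Apply only the updates that are flagged as security errata
dnf upgrade --security -y
# List security advisories that are still pending
dnf updateinfo list --security
# Review the active firewall zone configuration
firewall-cmd --list-all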
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/9/html/automatically_installing_rhel/post-installation-security-hardening_rhel-installer
Chapter 6. Managing applications with MTA
Chapter 6. Managing applications with MTA You can use the Migration Toolkit for Applications (MTA) user interface to perform the following tasks: Add applications Assign application credentials Import a list of applications Download a CSV template for importing application lists Create application migration waves Create Jira issues for migration waves MTA user interface applications have the following attributes: Name (free text) Description (optional, free text) Business service (optional, chosen from a list) Tags (optional, chosen from a list) Owner (optional, chosen from a list) Contributors (optional, chosen from a list) Source code (a path entered by the user) Binary (a path entered by the user) 6.1. Adding a new application You can add a new application to the Application Inventory for subsequent assessment and analysis. Tip Before creating an application, set up business services, check tags and tag categories, and create additions as needed. Prerequisites You are logged in to an MTA server. Procedure In the Migration view, click Application Inventory . Click Create new . Under Basic information , enter the following fields: Name : A unique name for the new application. Description : A short description of the application (optional). Business service : A purpose of the application (optional). Manual tags : Software tags that characterize the application (optional, one or more). Owner : A registered software owner from the drop-down list (optional). Contributors : Contributors from the drop-down list (optional, one or more). Comments : Relevant comments on the application (optional). Click Source Code and enter the following fields: Repository type : Git or Subversion . Source repository : A URL of the repository where the software code is saved. For Subversion: this must be either the URL to the root of the repository or a fully qualified URL which (optionally) includes the branch and nested directory. When fully qualified, the Branch and Root path must be blank. Branch : An application code branch in the repository (optional). For Git: this may be any reference; commit-hash , branch or tag . For Subversion: this may be a fully qualified path to a branch or tag, for example, branches/stable or tags/stable . This must be blank when the Source repository URL includes the branch. Root path : A root path inside the repository for the target application (optional). For Subversion: this must be blank when the Source Repository URL includes the root path. NOTE: If you enter any value in either the Branch or Root path fields, the Source repository field becomes mandatory. Optional: Click Binary and enter the following fields: Group : The Maven group for the application artifact. Artifact : The Maven artifact for the application. Version : A software version of the application. Packaging : The packaging for the application artifact, for example, JAR , WAR , or EAR . NOTE: If you enter any value in any of the Binary section fields, all fields automatically become mandatory. Click Create . The new application appears in the list of defined applications. Automated Tasks After adding a new application to the Application Inventory , you can set your cursor to hover over the application name to see the automated tasks spawned by adding the application. The language discovery task identifies the programming languages in the application. The technology discovery task identifies specific technologies in the application. 
The tasks automatically add appropriate tags to the application, reducing the effort involved in manually assigning tags to the application. After these tasks are complete, the number of tags added to the application will appear under the Tags column. To view the tags: Click on the application's row entry. A side pane opens. Click the Tags tab. The tags attached to the application are displayed. You can add additional tags manually as needed. When MTA analyzes the application, it can add additional tags to the application automatically. 6.2. Editing an application You can edit an existing application in the Application Inventory and re-run an assessment or analysis for this application. Prerequisites You are logged in to an MTA server. Procedure In the Migration view, click Application Inventory . Select the Migration working mode. Click Application Inventory in the left menu bar. A list of available applications appears in the main pane. Click Edit ( ) to open the application settings. Review the application settings. For a list of application settings, see Adding an application . If you changed any application settings, click Save . Note After editing an application, MTA re-spawns the language discovery and technology discovery tasks. 6.3. Assigning credentials to an application You can assign credentials to one or more applications. Procedure In the Migration view, click Application inventory . Click the Options menu ( ) to the right of Analyze and select Manage credentials . Select one credential from the Source credentials list and from the Maven settings list. Click Save . 6.4. Importing a list of applications You can import a .csv file that contains a list of applications and their attributes to the Migration Toolkit for Applications (MTA) user interface. Note Importing a list of applications does not overwrite any of the existing applications. Procedure Review the import file to ensure it contains all the required information in the required format. In the Migration view, click Application Inventory . Click the Options menu ( ). Click Import . Select the desired file and click Open . Optional: Select Enable automatic creation of missing entities . This option is selected by default. Verify that the import has completed and check the number of accepted or rejected rows. Review the imported applications by clicking the arrow to the left of the checkbox. Important Accepted rows might not match the number of applications in the Application inventory list because some rows are dependencies. To verify, check the Record Type column of the CSV file for applications defined as 1 and dependencies defined as 2 . 6.5. Downloading a CSV template You can download a CSV template for importing application lists by using the Migration Toolkit for Applications (MTA) user interface. Procedure In the Migration view, click Application inventory . Click the Options menu ( ) to the right of Review . Click Manage imports to open the Application imports page. Click the Options menu ( ) to the right of Import . Click Download CSV template . 6.6. Creating a migration wave A migration wave is a group of applications that you can migrate on a given schedule. You can track each migration by exporting a list of the wave's applications to the Jira issue management system. This automatically creates a separate Jira issue for each application of the migration wave. Procedure In the Migration view, click Migration waves . Click Create new . The New migration wave window opens.
Enter the following information: Name (optional). If the name is not given, you can use the start and end dates to identify migration waves. Potential start date . This date must be later than the current date. Potential end date . This date must be later than the start date. Stakeholders (optional) Stakeholder groups (optional) Click Create . The new migration wave appears in the list of existing migration waves. To assign applications to the migration wave, click the Options menu ( ) to the right of the migration wave and select Manage applications . The Manage applications window opens, displaying the list of applications that are not assigned to any other migration wave. Select the checkboxes of the applications that you want to assign to the migration wave. Click Save . Note The owner and the contributors of each application associated with the migration wave are automatically added to the migration wave's list of stakeholders. Optional: To update a migration wave, select Update from the migration wave's Options menu ( ). The Update migration wave window opens. 6.7. Creating Jira issues for a migration wave You can use a migration wave to create Jira issues automatically for each application assigned to the migration wave. A separate Jira issue is created for each application associated with the migration wave. The following fields of each issue are filled in automatically: Title: Migrate <application name> Reporter: Username of the token owner. Description: Created by Konveyor Note You cannot delete an application if it is linked to a Jira ticket or is associated with a migration wave. To unlink the application from the Jira ticket, click the Unlink from Jira icon in the details view of the application or in the details view of a migration wave. Prerequisites You configured a Jira connection. For more information, see Creating and configuring a Jira connection . Procedure In the Migration view, click Migration waves . Click the Options menu ( ) to the right of the migration wave for which you want to create Jira issues and select Export to Issue Manager . The Export to Issue Manager window opens. Select the Jira Cloud or Jira Server/Datacenter instance type. Select the instance, project, and issue type from the lists. Click Export . The status of the migration wave on the Migration waves page changes to Issues Created . Optional: To see the status of each individual application of a migration wave, click the Status column. Optional: To see if any particular application is associated with a migration wave, open the application's Details tab on the Application inventory page.
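If you want a quick sanity check of an import file before uploading it, you can count the record types from the command line. This is only a sketch: it assumes a simple comma-separated file named applications.csv in which the Record Type value is the first field and no fields contain embedded commas.
# Count rows by Record Type (1 = application, 2 = dependency), skipping the header row
awk -F',' 'NR > 1 { count[$1]++ } END { for (t in count) print "Record Type " t ": " count[t] " rows" }' applications.csv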
null
https://docs.redhat.com/en/documentation/migration_toolkit_for_applications/7.1/html/user_interface_guide/working-with-applications-in-the-ui
Clusters
Clusters Red Hat Advanced Cluster Management for Kubernetes 2.12 Cluster management
null
https://docs.redhat.com/en/documentation/red_hat_advanced_cluster_management_for_kubernetes/2.12/html/clusters/index
Chapter 4. Running the playbook
Chapter 4. Running the playbook After you define variable settings, you can run the playbook to begin the automated installation process. You can run a playbook by using the ansible-playbook command on the control node or by using the Red Hat Ansible automation controller . The JBoss Web Server collection then handles all installation and deployment tasks automatically. Note The following procedure assumes that you have created and updated a custom playbook. Prerequisites You have enabled an automated deployment of JBoss Web Server . You are familiar with general Ansible concepts and creating Ansible playbooks. For more information, see the Ansible documentation . Your playbook includes an appropriate link to the location where you have defined your variables. For example: The preceding example assumes that you have defined variables in a vars.yml file. Replace <path_to_vars_file> with the appropriate path. Your playbook also specifies the redhat.jws.jws role. For example: Note The redhat.jws.jws role is already preconfigured with become: true directives, which activate user privilege escalation for performing any automated tasks that require root privileges on your target hosts. Red Hat Enterprise Linux (RHEL) version 8 or 9 is already installed on your target hosts. Procedure Perform either of the following steps: On your Ansible control node, enter the following command: In the preceding command, replace <playbook_name> with the name you have assigned to your playbook. The preceding command assumes that your user account supports passwordless authentication. Note If your user account requires password authentication, you can run the preceding command with the --ask-sudo-pass option and specify the required password when prompted. For example: USD ansible-playbook <playbook_name> .yml --ask-sudo-pass Use the Red Hat Ansible automation controller to run your playbook. For more information about getting started with the automation controller, see the Red Hat Ansible Automation Platform documentation page.
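Before running the playbook against your target hosts, you might want to validate it first. The following commands are a sketch; the playbook and inventory file names are placeholders, and the flags shown are standard ansible-playbook options:
# Check the playbook syntax without contacting any hosts
ansible-playbook <playbook_name>.yml --syntax-check
# Perform a dry run against the hosts in an inventory file to preview changes
ansible-playbook -i inventory <playbook_name>.yml --check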
[ "--- [...] vars_files: - <path_to_vars_file> /vars.yml [...]", "--- [...] roles: - redhat.jws.jws [...]", "ansible-playbook <playbook_name> .yml" ]
https://docs.redhat.com/en/documentation/red_hat_jboss_web_server/6.0/html/installing_jboss_web_server_by_using_the_red_hat_ansible_certified_content_collection/run_playbook
Chapter 9. Visualizing logs
Chapter 9. Visualizing logs 9.1. About log visualization You can visualize your log data in the OpenShift Container Platform web console, or the Kibana web console, depending on your deployed log storage solution. The Kibana console can be used with Elasticsearch log stores, and the OpenShift Container Platform web console can be used with the Elasticsearch log store or the LokiStack. Note The Kibana web console is now deprecated and is planned to be removed in a future logging release. 9.1.1. Configuring the log visualizer You can configure which log visualizer type your logging uses by modifying the ClusterLogging custom resource (CR). Prerequisites You have administrator permissions. You have installed the OpenShift CLI ( oc ). You have installed the Red Hat OpenShift Logging Operator. You have created a ClusterLogging CR. Important If you want to use the OpenShift Container Platform web console for visualization, you must enable the logging Console Plugin. See the documentation about "Log visualization with the web console". Procedure Modify the ClusterLogging CR visualization spec: ClusterLogging CR example apiVersion: logging.openshift.io/v1 kind: ClusterLogging metadata: # ... spec: # ... visualization: type: <visualizer_type> 1 kibana: 2 resources: {} nodeSelector: {} proxy: {} replicas: {} tolerations: {} ocpConsole: 3 logsLimit: {} timeout: {} # ... 1 The type of visualizer you want to use for your logging. This can be either kibana or ocp-console . The Kibana console is only compatible with deployments that use Elasticsearch log storage, while the OpenShift Container Platform console is only compatible with LokiStack deployments. 2 Optional configurations for the Kibana console. 3 Optional configurations for the OpenShift Container Platform web console. Apply the ClusterLogging CR by running the following command: USD oc apply -f <filename>.yaml 9.1.2. Viewing logs for a resource Resource logs are a default feature that provides limited log viewing capability. You can view the logs for various resources, such as builds, deployments, and pods by using the OpenShift CLI ( oc ) and the web console. Tip To enhance your log retrieving and viewing experience, install the logging. The logging aggregates all the logs from your OpenShift Container Platform cluster, such as node system audit logs, application container logs, and infrastructure logs, into a dedicated log store. You can then query, discover, and visualize your log data through the Kibana console or the OpenShift Container Platform web console. Resource logs do not access the logging log store. 9.1.2.1. Viewing resource logs You can view the log for various resources in the OpenShift CLI ( oc ) and web console. Logs are read from the tail, or end, of the log. Prerequisites Access to the OpenShift CLI ( oc ). Procedure (UI) In the OpenShift Container Platform console, navigate to Workloads Pods or navigate to the pod through the resource you want to investigate. Note Some resources, such as builds, do not have pods to query directly. In such instances, you can locate the Logs link on the Details page for the resource. Select a project from the drop-down menu. Click the name of the pod you want to investigate. Click Logs . Procedure (CLI) View the log for a specific pod: USD oc logs -f <pod_name> -c <container_name> where: -f Optional: Specifies that the output follows what is being written into the logs. <pod_name> Specifies the name of the pod. <container_name> Optional: Specifies the name of a container.
When a pod has more than one container, you must specify the container name. For example: USD oc logs ruby-58cd97df55-mww7r USD oc logs -f ruby-57f7f4855b-znl92 -c ruby The contents of log files are printed out. View the log for a specific resource: USD oc logs <object_type>/<resource_name> 1 1 Specifies the resource type and name. For example: USD oc logs deployment/ruby The contents of log files are printed out. 9.2. Log visualization with the web console You can use the OpenShift Container Platform web console to visualize log data by configuring the logging Console Plugin. Options for configuration are available during installation of logging on the web console. If you have already installed logging and want to configure the plugin, use one of the following procedures. 9.2.1. Enabling the logging Console Plugin after you have installed the Red Hat OpenShift Logging Operator You can enable the logging Console Plugin as part of the Red Hat OpenShift Logging Operator installation, but you can also enable the plugin if you have already installed the Red Hat OpenShift Logging Operator with the plugin disabled. Prerequisites You have administrator permissions. You have installed the Red Hat OpenShift Logging Operator and selected Disabled for the Console plugin . You have access to the OpenShift Container Platform web console. Procedure In the OpenShift Container Platform web console Administrator perspective, navigate to Operators Installed Operators . Click Red Hat OpenShift Logging . This takes you to the Operator Details page. In the Details page, click Disabled for the Console plugin option. In the Console plugin enablement dialog, select Enable . Click Save . Verify that the Console plugin option now shows Enabled . The web console displays a pop-up window when changes have been applied. The window prompts you to reload the web console. Refresh the browser when you see the pop-up window to apply the changes. 9.2.2. Configuring the logging Console Plugin when you have the Elasticsearch log store and LokiStack installed In logging version 5.8 and later, if the Elasticsearch log store is your default log store but you have also installed the LokiStack, you can enable the logging Console Plugin by using the following procedure. Prerequisites You have administrator permissions. You have installed the Red Hat OpenShift Logging Operator, the OpenShift Elasticsearch Operator, and the Loki Operator. You have installed the OpenShift CLI ( oc ). You have created a ClusterLogging custom resource (CR). 
Procedure Ensure that the logging Console Plugin is enabled by running the following command: USD oc get consoles.operator.openshift.io cluster -o yaml |grep logging-view-plugin \ || oc patch consoles.operator.openshift.io cluster --type=merge \ --patch '{ "spec": { "plugins": ["logging-view-plugin"]}}' Add the .metadata.annotations.logging.openshift.io/ocp-console-migration-target: lokistack-dev annotation to the ClusterLogging CR, by running the following command: USD oc patch clusterlogging instance --type=merge --patch \ '{ "metadata": { "annotations": { "logging.openshift.io/ocp-console-migration-target": "lokistack-dev" }}}' \ -n openshift-logging Example output clusterlogging.logging.openshift.io/instance patched Verification Verify that the annotation was added successfully, by running the following command and observing the output: USD oc get clusterlogging instance \ -o=jsonpath='{.metadata.annotations.logging\.openshift\.io/ocp-console-migration-target}' \ -n openshift-logging Example output "lokistack-dev" The logging Console Plugin pod is now deployed. You can view logging data by navigating to the OpenShift Container Platform web console and viewing the Observe Logs page. 9.3. Viewing cluster dashboards The Logging/Elasticsearch Nodes and Openshift Logging dashboards in the OpenShift Container Platform web console contain in-depth details about your Elasticsearch instance and the individual Elasticsearch nodes that you can use to prevent and diagnose problems. The OpenShift Logging dashboard contains charts that show details about your Elasticsearch instance at a cluster level, including cluster resources, garbage collection, shards in the cluster, and Fluentd statistics. The Logging/Elasticsearch Nodes dashboard contains charts that show details about your Elasticsearch instance, many at node level, including details on indexing, shards, resources, and so forth. 9.3.1. Accessing the Elasticsearch and OpenShift Logging dashboards You can view the Logging/Elasticsearch Nodes and OpenShift Logging dashboards in the OpenShift Container Platform web console. Procedure To launch the dashboards: In the OpenShift Container Platform web console, click Observe Dashboards . On the Dashboards page, select Logging/Elasticsearch Nodes or OpenShift Logging from the Dashboard menu. For the Logging/Elasticsearch Nodes dashboard, you can select the Elasticsearch node you want to view and set the data resolution. The appropriate dashboard is displayed, showing multiple charts of data. Optional: Select a different time range to display or refresh rate for the data from the Time Range and Refresh Interval menus. For information on the dashboard charts, see About the OpenShift Logging dashboard and About the Logging/Elastisearch Nodes dashboard . 9.3.2. About the OpenShift Logging dashboard The OpenShift Logging dashboard contains charts that show details about your Elasticsearch instance at a cluster-level that you can use to diagnose and anticipate problems. Table 9.1. OpenShift Logging charts Metric Description Elastic Cluster Status The current Elasticsearch status: ONLINE - Indicates that the Elasticsearch instance is online. OFFLINE - Indicates that the Elasticsearch instance is offline. Elastic Nodes The total number of Elasticsearch nodes in the Elasticsearch instance. Elastic Shards The total number of Elasticsearch shards in the Elasticsearch instance. Elastic Documents The total number of Elasticsearch documents in the Elasticsearch instance. 
Total Index Size on Disk The total disk space that is being used for the Elasticsearch indices. Elastic Pending Tasks The total number of Elasticsearch changes that have not been completed, such as index creation, index mapping, shard allocation, or shard failure. Elastic JVM GC time The amount of time that the JVM spent executing Elasticsearch garbage collection operations in the cluster. Elastic JVM GC Rate The total number of times that JVM executed garbage activities per second. Elastic Query/Fetch Latency Sum Query latency: The average time each Elasticsearch search query takes to execute. Fetch latency: The average time each Elasticsearch search query spends fetching data. Fetch latency typically takes less time than query latency. If fetch latency is consistently increasing, it might indicate slow disks, data enrichment, or large requests with too many results. Elastic Query Rate The total queries executed against the Elasticsearch instance per second for each Elasticsearch node. CPU The amount of CPU used by Elasticsearch, Fluentd, and Kibana, shown for each component. Elastic JVM Heap Used The amount of JVM memory used. In a healthy cluster, the graph shows regular drops as memory is freed by JVM garbage collection. Elasticsearch Disk Usage The total disk space used by the Elasticsearch instance for each Elasticsearch node. File Descriptors In Use The total number of file descriptors used by Elasticsearch, Fluentd, and Kibana. FluentD emit count The total number of Fluentd messages per second for the Fluentd default output, and the retry count for the default output. FluentD Buffer Usage The percent of the Fluentd buffer that is being used for chunks. A full buffer might indicate that Fluentd is not able to process the number of logs received. Elastic rx bytes The total number of bytes that Elasticsearch has received from FluentD, the Elasticsearch nodes, and other sources. Elastic Index Failure Rate The total number of times per second that an Elasticsearch index fails. A high rate might indicate an issue with indexing. FluentD Output Error Rate The total number of times per second that FluentD is not able to output logs. 9.3.3. Charts on the Logging/Elasticsearch nodes dashboard The Logging/Elasticsearch Nodes dashboard contains charts that show details about your Elasticsearch instance, many at node-level, for further diagnostics. Elasticsearch status The Logging/Elasticsearch Nodes dashboard contains the following charts about the status of your Elasticsearch instance. Table 9.2. Elasticsearch status fields Metric Description Cluster status The cluster health status during the selected time period, using the Elasticsearch green, yellow, and red statuses: 0 - Indicates that the Elasticsearch instance is in green status, which means that all shards are allocated. 1 - Indicates that the Elasticsearch instance is in yellow status, which means that replica shards for at least one shard are not allocated. 2 - Indicates that the Elasticsearch instance is in red status, which means that at least one primary shard and its replicas are not allocated. Cluster nodes The total number of Elasticsearch nodes in the cluster. Cluster data nodes The number of Elasticsearch data nodes in the cluster. Cluster pending tasks The number of cluster state changes that are not finished and are waiting in a cluster queue, for example, index creation, index deletion, or shard allocation. A growing trend indicates that the cluster is not able to keep up with changes. 
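If you want to confirm these cluster-level numbers outside the dashboard, you can query Elasticsearch directly from one of its pods. This is a hedged sketch that assumes an OpenShift Logging deployment in the openshift-logging namespace where the Elasticsearch pods carry the component=elasticsearch label and provide the es_util helper; replace <es_pod_name> with one of the listed pods.
# List the Elasticsearch pods
oc get pods -n openshift-logging -l component=elasticsearch
# Query overall cluster health (status, node counts, unassigned shards, pending tasks)
oc exec -n openshift-logging -c elasticsearch <es_pod_name> -- es_util --query=_cluster/health?pretty=true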
Elasticsearch cluster index shard status Each Elasticsearch index is a logical group of one or more shards, which are basic units of persisted data. There are two types of index shards: primary shards, and replica shards. When a document is indexed into an index, it is stored in one of its primary shards and copied into every replica of that shard. The number of primary shards is specified when the index is created, and the number cannot change during index lifetime. You can change the number of replica shards at any time. The index shard can be in several states depending on its lifecycle phase or events occurring in the cluster. When the shard is able to perform search and indexing requests, the shard is active. If the shard cannot perform these requests, the shard is non-active. A shard might be non-active if the shard is initializing, reallocating, unassigned, and so forth. Index shards consist of a number of smaller internal blocks, called index segments, which are physical representations of the data. An index segment is a relatively small, immutable Lucene index that is created when Lucene commits newly-indexed data. Lucene, a search library used by Elasticsearch, merges index segments into larger segments in the background to keep the total number of segments low. If the process of merging segments is slower than the speed at which new segments are created, it could indicate a problem. When Lucene performs data operations, such as a search operation, Lucene performs the operation against the index segments in the relevant index. For that purpose, each segment contains specific data structures that are loaded in the memory and mapped. Index mapping can have a significant impact on the memory used by segment data structures. The Logging/Elasticsearch Nodes dashboard contains the following charts about the Elasticsearch index shards. Table 9.3. Elasticsearch cluster shard status charts Metric Description Cluster active shards The number of active primary shards and the total number of shards, including replicas, in the cluster. If the number of shards grows higher, the cluster performance can start degrading. Cluster initializing shards The number of non-active shards in the cluster. A non-active shard is one that is initializing, being reallocated to a different node, or is unassigned. A cluster typically has non-active shards for short periods. A growing number of non-active shards over longer periods could indicate a problem. Cluster relocating shards The number of shards that Elasticsearch is relocating to a new node. Elasticsearch relocates nodes for multiple reasons, such as high memory use on a node or after a new node is added to the cluster. Cluster unassigned shards The number of unassigned shards. Elasticsearch shards might be unassigned for reasons such as a new index being added or the failure of a node. Elasticsearch node metrics Each Elasticsearch node has a finite amount of resources that can be used to process tasks. When all the resources are being used and Elasticsearch attempts to perform a new task, Elasticsearch puts the tasks into a queue until some resources become available. The Logging/Elasticsearch Nodes dashboard contains the following charts about resource usage for a selected node and the number of tasks waiting in the Elasticsearch queue. Table 9.4. Elasticsearch node metric charts Metric Description ThreadPool tasks The number of waiting tasks in individual queues, shown by task type. 
A long-term accumulation of tasks in any queue could indicate node resource shortages or some other problem. CPU usage The amount of CPU being used by the selected Elasticsearch node as a percentage of the total CPU allocated to the host container. Memory usage The amount of memory being used by the selected Elasticsearch node. Disk usage The total disk space being used for index data and metadata on the selected Elasticsearch node. Documents indexing rate The rate that documents are indexed on the selected Elasticsearch node. Indexing latency The time taken to index the documents on the selected Elasticsearch node. Indexing latency can be affected by many factors, such as JVM Heap memory and overall load. A growing latency indicates a resource capacity shortage in the instance. Search rate The number of search requests run on the selected Elasticsearch node. Search latency The time taken to complete search requests on the selected Elasticsearch node. Search latency can be affected by many factors. A growing latency indicates a resource capacity shortage in the instance. Documents count (with replicas) The number of Elasticsearch documents stored on the selected Elasticsearch node, including documents stored in both the primary shards and replica shards that are allocated on the node. Documents deleting rate The number of Elasticsearch documents being deleted from any of the index shards that are allocated to the selected Elasticsearch node. Documents merging rate The number of Elasticsearch documents being merged in any of index shards that are allocated to the selected Elasticsearch node. Elasticsearch node fielddata Fielddata is an Elasticsearch data structure that holds lists of terms in an index and is kept in the JVM Heap. Because fielddata building is an expensive operation, Elasticsearch caches the fielddata structures. Elasticsearch can evict a fielddata cache when the underlying index segment is deleted or merged, or if there is not enough JVM HEAP memory for all the fielddata caches. The Logging/Elasticsearch Nodes dashboard contains the following charts about Elasticsearch fielddata. Table 9.5. Elasticsearch node fielddata charts Metric Description Fielddata memory size The amount of JVM Heap used for the fielddata cache on the selected Elasticsearch node. Fielddata evictions The number of fielddata structures that were deleted from the selected Elasticsearch node. Elasticsearch node query cache If the data stored in the index does not change, search query results are cached in a node-level query cache for reuse by Elasticsearch. The Logging/Elasticsearch Nodes dashboard contains the following charts about the Elasticsearch node query cache. Table 9.6. Elasticsearch node query charts Metric Description Query cache size The total amount of memory used for the query cache for all the shards allocated to the selected Elasticsearch node. Query cache evictions The number of query cache evictions on the selected Elasticsearch node. Query cache hits The number of query cache hits on the selected Elasticsearch node. Query cache misses The number of query cache misses on the selected Elasticsearch node. Elasticsearch index throttling When indexing documents, Elasticsearch stores the documents in index segments, which are physical representations of the data. At the same time, Elasticsearch periodically merges smaller segments into a larger segment as a way to optimize resource use. 
If the indexing is faster than the ability to merge segments, the merge process does not complete quickly enough, which can lead to issues with searches and performance. To prevent this situation, Elasticsearch throttles indexing, typically by reducing the number of threads allocated to indexing down to a single thread. The Logging/Elasticsearch Nodes dashboard contains the following charts about Elasticsearch index throttling. Table 9.7. Index throttling charts Metric Description Indexing throttling The amount of time that Elasticsearch has been throttling the indexing operations on the selected Elasticsearch node. Merging throttling The amount of time that Elasticsearch has been throttling the segment merge operations on the selected Elasticsearch node. Node JVM Heap statistics The Logging/Elasticsearch Nodes dashboard contains the following charts about JVM Heap operations. Table 9.8. JVM Heap statistic charts Metric Description Heap used The amount of the total allocated JVM Heap space that is used on the selected Elasticsearch node. GC count The number of garbage collection operations that have been run on the selected Elasticsearch node, by old and young garbage collection. GC time The amount of time that the JVM spent running garbage collection operations on the selected Elasticsearch node, by old and young garbage collection. 9.4. Log visualization with Kibana If you are using the Elasticsearch log store, you can use the Kibana console to visualize collected log data. Using Kibana, you can do the following with your data: Search and browse the data using the Discover tab. Chart and map the data using the Visualize tab. Create and view custom dashboards using the Dashboard tab. Use and configuration of the Kibana interface are beyond the scope of this documentation. For more information about using the interface, see the Kibana documentation . Note The audit logs are not stored in the internal OpenShift Container Platform Elasticsearch instance by default. To view the audit logs in Kibana, you must use the Log Forwarding API to configure a pipeline that uses the default output for audit logs. 9.4.1. Defining Kibana index patterns An index pattern defines the Elasticsearch indices that you want to visualize. To explore and visualize data in Kibana, you must create an index pattern. Prerequisites A user must have the cluster-admin role, the cluster-reader role, or both roles to view the infra and audit indices in Kibana. The default kubeadmin user has proper permissions to view these indices. If you can view the pods and logs in the default , kube- and openshift- projects, you should be able to access these indices. You can use the following command to check if the current user has appropriate permissions: USD oc auth can-i get pods --subresource log -n <project> Example output yes Note The audit logs are not stored in the internal OpenShift Container Platform Elasticsearch instance by default. To view the audit logs in Kibana, you must use the Log Forwarding API to configure a pipeline that uses the default output for audit logs. Elasticsearch documents must be indexed before you can create index patterns. This is done automatically, but it might take a few minutes in a new or updated cluster. Procedure To define index patterns and create visualizations in Kibana: In the OpenShift Container Platform console, click the Application Launcher and select Logging .
Create your Kibana index patterns by clicking Management Index Patterns Create index pattern : Each user must manually create index patterns when logging into Kibana the first time to see logs for their projects. Users must create an index pattern named app and use the @timestamp time field to view their container logs. Each admin user must create index patterns when logged into Kibana the first time for the app , infra , and audit indices using the @timestamp time field. Create Kibana Visualizations from the new index patterns. 9.4.2. Viewing cluster logs in Kibana You can view cluster logs in the Kibana web console. The methods for viewing and visualizing your data in Kibana are beyond the scope of this documentation. For more information, refer to the Kibana documentation . Prerequisites The Red Hat OpenShift Logging and Elasticsearch Operators must be installed. Kibana index patterns must exist. A user must have the cluster-admin role, the cluster-reader role, or both roles to view the infra and audit indices in Kibana. The default kubeadmin user has proper permissions to view these indices. If you can view the pods and logs in the default , kube- and openshift- projects, you should be able to access these indices. You can use the following command to check if the current user has appropriate permissions: USD oc auth can-i get pods --subresource log -n <project> Example output yes Note The audit logs are not stored in the internal OpenShift Container Platform Elasticsearch instance by default. To view the audit logs in Kibana, you must use the Log Forwarding API to configure a pipeline that uses the default output for audit logs. Procedure To view logs in Kibana: In the OpenShift Container Platform console, click the Application Launcher and select Logging . Log in using the same credentials you use to log in to the OpenShift Container Platform console. The Kibana interface launches. In Kibana, click Discover . Select the index pattern you created from the drop-down menu in the top-left corner: app , audit , or infra . The log data displays as time-stamped documents. Expand one of the time-stamped documents. Click the JSON tab to display the log entry for that document. Example 9.1.
Sample infrastructure log entry in Kibana { "_index": "infra-000001", "_type": "_doc", "_id": "YmJmYTBlNDkZTRmLTliMGQtMjE3NmFiOGUyOWM3", "_version": 1, "_score": null, "_source": { "docker": { "container_id": "f85fa55bbef7bb783f041066be1e7c267a6b88c4603dfce213e32c1" }, "kubernetes": { "container_name": "registry-server", "namespace_name": "openshift-marketplace", "pod_name": "redhat-marketplace-n64gc", "container_image": "registry.redhat.io/redhat/redhat-marketplace-index:v4.7", "container_image_id": "registry.redhat.io/redhat/redhat-marketplace-index@sha256:65fc0c45aabb95809e376feb065771ecda9e5e59cc8b3024c4545c168f", "pod_id": "8f594ea2-c866-4b5c-a1c8-a50756704b2a", "host": "ip-10-0-182-28.us-east-2.compute.internal", "master_url": "https://kubernetes.default.svc", "namespace_id": "3abab127-7669-4eb3-b9ef-44c04ad68d38", "namespace_labels": { "openshift_io/cluster-monitoring": "true" }, "flat_labels": [ "catalogsource_operators_coreos_com/update=redhat-marketplace" ] }, "message": "time=\"2020-09-23T20:47:03Z\" level=info msg=\"serving registry\" database=/database/index.db port=50051", "level": "unknown", "hostname": "ip-10-0-182-28.internal", "pipeline_metadata": { "collector": { "ipaddr4": "10.0.182.28", "inputname": "fluent-plugin-systemd", "name": "fluentd", "received_at": "2020-09-23T20:47:15.007583+00:00", "version": "1.7.4 1.6.0" } }, "@timestamp": "2020-09-23T20:47:03.422465+00:00", "viaq_msg_id": "YmJmYTBlNDktMDMGQtMjE3NmFiOGUyOWM3", "openshift": { "labels": { "logging": "infra" } } }, "fields": { "@timestamp": [ "2020-09-23T20:47:03.422Z" ], "pipeline_metadata.collector.received_at": [ "2020-09-23T20:47:15.007Z" ] }, "sort": [ 1600894023422 ] } 9.4.3. Configuring Kibana You can configure using the Kibana console by modifying the ClusterLogging custom resource (CR). 9.4.3.1. Configuring CPU and memory limits The logging components allow for adjustments to both the CPU and memory limits. Procedure Edit the ClusterLogging custom resource (CR) in the openshift-logging project: USD oc -n openshift-logging edit ClusterLogging instance apiVersion: "logging.openshift.io/v1" kind: "ClusterLogging" metadata: name: "instance" namespace: openshift-logging ... spec: managementState: "Managed" logStore: type: "elasticsearch" elasticsearch: nodeCount: 3 resources: 1 limits: memory: 16Gi requests: cpu: 200m memory: 16Gi storage: storageClassName: "gp2" size: "200G" redundancyPolicy: "SingleRedundancy" visualization: type: "kibana" kibana: resources: 2 limits: memory: 1Gi requests: cpu: 500m memory: 1Gi proxy: resources: 3 limits: memory: 100Mi requests: cpu: 100m memory: 100Mi replicas: 2 collection: resources: 4 limits: memory: 736Mi requests: cpu: 200m memory: 736Mi type: fluentd 1 Specify the CPU and memory limits and requests for the log store as needed. For Elasticsearch, you must adjust both the request value and the limit value. 2 3 Specify the CPU and memory limits and requests for the log visualizer as needed. 4 Specify the CPU and memory limits and requests for the log collector as needed. 9.4.3.2. Scaling redundancy for the log visualizer nodes You can scale the pod that hosts the log visualizer for redundancy. Procedure Edit the ClusterLogging custom resource (CR) in the openshift-logging project: USD oc -n openshift-logging edit ClusterLogging instance USD oc edit ClusterLogging instance apiVersion: "logging.openshift.io/v1" kind: "ClusterLogging" metadata: name: "instance" namespace: openshift-logging .... 
spec: visualization: type: "kibana" kibana: replicas: 1 1 1 Specify the number of Kibana nodes.
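The following ClusterLogForwarder sketch shows one way to configure the audit pipeline mentioned in the Note above. It is a minimal example and is not taken from this documentation: the pipeline name forward-audit is illustrative, and it assumes the logging.openshift.io/v1 Log Forwarding API with the built-in default output that writes to the internal log store.

apiVersion: logging.openshift.io/v1
kind: ClusterLogForwarder
metadata:
  name: instance
  namespace: openshift-logging
spec:
  pipelines:
    - name: forward-audit   # illustrative pipeline name
      inputRefs:
        - audit             # forward the audit log type
      outputRefs:
        - default           # store in the internal Elasticsearch log store

After applying a resource like this with USD oc apply -f <filename>.yaml , admin users can create the audit index pattern and view audit logs in Kibana as described above.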
[ "apiVersion: logging.openshift.io/v1 kind: ClusterLogging metadata: spec: visualization: type: <visualizer_type> 1 kibana: 2 resources: {} nodeSelector: {} proxy: {} replicas: {} tolerations: {} ocpConsole: 3 logsLimit: {} timeout: {}", "oc apply -f <filename>.yaml", "oc logs -f <pod_name> -c <container_name>", "oc logs ruby-58cd97df55-mww7r", "oc logs -f ruby-57f7f4855b-znl92 -c ruby", "oc logs <object_type>/<resource_name> 1", "oc logs deployment/ruby", "oc get consoles.operator.openshift.io cluster -o yaml |grep logging-view-plugin || oc patch consoles.operator.openshift.io cluster --type=merge --patch '{ \"spec\": { \"plugins\": [\"logging-view-plugin\"]}}'", "oc patch clusterlogging instance --type=merge --patch '{ \"metadata\": { \"annotations\": { \"logging.openshift.io/ocp-console-migration-target\": \"lokistack-dev\" }}}' -n openshift-logging", "clusterlogging.logging.openshift.io/instance patched", "oc get clusterlogging instance -o=jsonpath='{.metadata.annotations.logging\\.openshift\\.io/ocp-console-migration-target}' -n openshift-logging", "\"lokistack-dev\"", "oc auth can-i get pods --subresource log -n <project>", "yes", "oc auth can-i get pods --subresource log -n <project>", "yes", "{ \"_index\": \"infra-000001\", \"_type\": \"_doc\", \"_id\": \"YmJmYTBlNDkZTRmLTliMGQtMjE3NmFiOGUyOWM3\", \"_version\": 1, \"_score\": null, \"_source\": { \"docker\": { \"container_id\": \"f85fa55bbef7bb783f041066be1e7c267a6b88c4603dfce213e32c1\" }, \"kubernetes\": { \"container_name\": \"registry-server\", \"namespace_name\": \"openshift-marketplace\", \"pod_name\": \"redhat-marketplace-n64gc\", \"container_image\": \"registry.redhat.io/redhat/redhat-marketplace-index:v4.7\", \"container_image_id\": \"registry.redhat.io/redhat/redhat-marketplace-index@sha256:65fc0c45aabb95809e376feb065771ecda9e5e59cc8b3024c4545c168f\", \"pod_id\": \"8f594ea2-c866-4b5c-a1c8-a50756704b2a\", \"host\": \"ip-10-0-182-28.us-east-2.compute.internal\", \"master_url\": \"https://kubernetes.default.svc\", \"namespace_id\": \"3abab127-7669-4eb3-b9ef-44c04ad68d38\", \"namespace_labels\": { \"openshift_io/cluster-monitoring\": \"true\" }, \"flat_labels\": [ \"catalogsource_operators_coreos_com/update=redhat-marketplace\" ] }, \"message\": \"time=\\\"2020-09-23T20:47:03Z\\\" level=info msg=\\\"serving registry\\\" database=/database/index.db port=50051\", \"level\": \"unknown\", \"hostname\": \"ip-10-0-182-28.internal\", \"pipeline_metadata\": { \"collector\": { \"ipaddr4\": \"10.0.182.28\", \"inputname\": \"fluent-plugin-systemd\", \"name\": \"fluentd\", \"received_at\": \"2020-09-23T20:47:15.007583+00:00\", \"version\": \"1.7.4 1.6.0\" } }, \"@timestamp\": \"2020-09-23T20:47:03.422465+00:00\", \"viaq_msg_id\": \"YmJmYTBlNDktMDMGQtMjE3NmFiOGUyOWM3\", \"openshift\": { \"labels\": { \"logging\": \"infra\" } } }, \"fields\": { \"@timestamp\": [ \"2020-09-23T20:47:03.422Z\" ], \"pipeline_metadata.collector.received_at\": [ \"2020-09-23T20:47:15.007Z\" ] }, \"sort\": [ 1600894023422 ] }", "oc -n openshift-logging edit ClusterLogging instance", "apiVersion: \"logging.openshift.io/v1\" kind: \"ClusterLogging\" metadata: name: \"instance\" namespace: openshift-logging spec: managementState: \"Managed\" logStore: type: \"elasticsearch\" elasticsearch: nodeCount: 3 resources: 1 limits: memory: 16Gi requests: cpu: 200m memory: 16Gi storage: storageClassName: \"gp2\" size: \"200G\" redundancyPolicy: \"SingleRedundancy\" visualization: type: \"kibana\" kibana: resources: 2 limits: memory: 1Gi requests: cpu: 500m memory: 1Gi proxy: 
resources: 3 limits: memory: 100Mi requests: cpu: 100m memory: 100Mi replicas: 2 collection: resources: 4 limits: memory: 736Mi requests: cpu: 200m memory: 736Mi type: fluentd", "oc -n openshift-logging edit ClusterLogging instance", "oc edit ClusterLogging instance apiVersion: \"logging.openshift.io/v1\" kind: \"ClusterLogging\" metadata: name: \"instance\" namespace: openshift-logging . spec: visualization: type: \"kibana\" kibana: replicas: 1 1" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.15/html/logging/visualizing-logs
Providing feedback on Red Hat build of OpenJDK documentation
Providing feedback on Red Hat build of OpenJDK documentation To report an error or to improve our documentation, log in to your Red Hat Jira account and submit an issue. If you do not have a Red Hat Jira account, then you will be prompted to create an account. Procedure Click the following link to create a ticket . Enter a brief description of the issue in the Summary . Provide a detailed description of the issue or enhancement in the Description . Include a URL to where the issue occurs in the documentation. Clicking Submit creates and routes the issue to the appropriate documentation team.
null
https://docs.redhat.com/en/documentation/red_hat_build_of_openjdk/8/html/release_notes_for_red_hat_build_of_openjdk_8.0.382/providing-direct-documentation-feedback_openjdk
Providing feedback on Red Hat documentation
Providing feedback on Red Hat documentation We appreciate your feedback on our documentation. Let us know how we can improve it. Submitting feedback through Jira (account required) Log in to the Jira website. Click Create in the top navigation bar. Enter a descriptive title in the Summary field. Enter your suggestion for improvement in the Description field. Include links to the relevant parts of the documentation. Click Create at the bottom of the dialogue.
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/configuring_and_managing_identity_management/proc_providing-feedback-on-red-hat-documentation_configuring-and-managing-idm
9.4. Updating a Configuration
9.4. Updating a Configuration Updating the cluster configuration consists of editing the cluster configuration file ( /etc/cluster/cluster.conf ) and propagating it to each node in the cluster. You can update the configuration using either of the following procedures: Section 9.4.1, "Updating a Configuration Using cman_tool version -r " Section 9.4.2, "Updating a Configuration Using scp " 9.4.1. Updating a Configuration Using cman_tool version -r To update the configuration using the cman_tool version -r command, perform the following steps: At any node in the cluster, edit the /etc/cluster/cluster.conf file. Update the config_version attribute by incrementing its value (for example, changing from config_version="2" to config_version="3" ). Save /etc/cluster/cluster.conf . Run the cman_tool version -r command to propagate the configuration to the rest of the cluster nodes. It is necessary that ricci be running in each cluster node to be able to propagate updated cluster configuration information. Verify that the updated cluster.conf configuration file has been propagated. If not, use the scp command to propagate it to /etc/cluster/ in each cluster node. You may skip this step (restarting cluster software) if you have made only the following configuration changes: Deleting a node from the cluster configuration, except where the node count changes from greater than two nodes to two nodes. For information about deleting a node from a cluster and transitioning from greater than two nodes to two nodes, see Section 9.2, "Deleting or Adding a Node" . Adding a node to the cluster configuration, except where the node count changes from two nodes to greater than two nodes. For information about adding a node to a cluster and transitioning from two nodes to greater than two nodes, see Section 9.2.2, "Adding a Node to a Cluster" . Changes to how daemons log information. HA service/VM maintenance (adding, editing, or deleting). Resource maintenance (adding, editing, or deleting). Failover domain maintenance (adding, editing, or deleting). Otherwise, you must restart the cluster software as follows: At each node, stop the cluster software according to Section 9.1.2, "Stopping Cluster Software" . At each node, start the cluster software according to Section 9.1.1, "Starting Cluster Software" . Stopping and starting the cluster software ensures that any configuration changes that are checked only at startup time are included in the running configuration. At any cluster node, run cman_tool nodes to verify that the nodes are functioning as members in the cluster (signified as "M" in the status column, "Sts"). For example: At any node, using the clustat utility, verify that the HA services are running as expected. In addition, clustat displays status of the cluster nodes. For example: If the cluster is running as expected, you are done updating the configuration.
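As a minimal sketch of the edit-and-propagate cycle, the following reuses the mycluster name from the clustat output above; the node definitions are omitted and only the changed attribute is shown, so treat it as an illustration rather than a complete cluster.conf :

<cluster name="mycluster" config_version="3">   <!-- incremented from config_version="2" -->
    ...
</cluster>

USD cman_tool version -r    # propagate the updated cluster.conf to the other nodes
USD cman_tool nodes         # confirm that each node shows "M" in the Sts column
USD clustat                 # confirm that the HA services are running as expected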
[ "cman_tool nodes Node Sts Inc Joined Name 1 M 548 2010-09-28 10:52:21 node-01.example.com 2 M 548 2010-09-28 10:52:21 node-02.example.com 3 M 544 2010-09-28 10:52:21 node-03.example.com", "clustat Cluster Status for mycluster @ Wed Nov 17 05:40:00 2010 Member Status: Quorate Member Name ID Status ------ ---- ---- ------ node-03.example.com 3 Online, rgmanager node-02.example.com 2 Online, rgmanager node-01.example.com 1 Online, Local, rgmanager Service Name Owner (Last) State ------- ---- ----- ------ ----- service:example_apache node-01.example.com started service:example_apache2 (none) disabled" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/cluster_administration/s1-admin-updating-config-ca
Chapter 136. KafkaConnectorStatus schema reference
Chapter 136. KafkaConnectorStatus schema reference Used in: KafkaConnector Property Property type Description conditions Condition array List of status conditions. observedGeneration integer The generation of the CRD that was last reconciled by the operator. autoRestart AutoRestartStatus The auto restart status. connectorStatus map The connector status, as reported by the Kafka Connect REST API. tasksMax integer The maximum number of tasks for the Kafka Connector. topics string array The list of topics used by the Kafka Connector.
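As an illustration of how these properties fit together, the status section of a KafkaConnector resource might look like the following sketch; the topic name, condition, and connector state are placeholders rather than values taken from this reference:

status:
  conditions:
    - type: Ready           # example condition reported by the operator
      status: "True"
  observedGeneration: 1
  tasksMax: 1
  topics:
    - my-topic              # placeholder topic name
  connectorStatus:          # map reported by the Kafka Connect REST API
    connector:
      state: RUNNING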
null
https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.7/html/streams_for_apache_kafka_api_reference/type-kafkaconnectorstatus-reference
10.6. Transport Security Authentication Modes
10.6. Transport Security Authentication Modes The following authentication modes are available: anonymous No certificates are exchanged. Settings are not needed for the keystore and truststore properties. The client must have org.teiid.ssl.allowAnon set to true (the default) to connect to an anonymous server. Communications are encrypted using the TLS_DH_anon_WITH_AES_128_CBC_SHA SSL cipher suite. This is suitable for most secure intranets. 1-way Authenticates the server to the client. The server presents a certificate which is signed by the private key stored in the server's keystore. The server's corresponding public key must be in the client's truststore. 2-way Mutual client and server authentication. The server presents a certificate which is signed by the private key stored in the server's keystore. The server's corresponding public key must be in the client's truststore. Additionally, the client presents a certificate signed by its private key stored in the client's keystore. The client's corresponding public key must be in the server's truststore. Note You can use keytool to generate encryption keys; however, you should first consider your local requirements for managing public key cryptography.
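For 1-way authentication, a minimal keytool sketch such as the following creates a server key pair and adds its certificate to a client truststore; the aliases, file names, and password placeholder are illustrative and not taken from this guide:

USD keytool -genkeypair -alias server -keyalg RSA -keysize 2048 -validity 365 -keystore server.keystore -storepass <password>    # create the server's private key and certificate
USD keytool -exportcert -alias server -keystore server.keystore -storepass <password> -file server.cer                           # export the server certificate
USD keytool -importcert -alias server -file server.cer -keystore client.truststore -storepass <password>                         # trust it on the client

For 2-way authentication, repeat the export and import in the opposite direction so that the client's certificate is also present in the server's truststore.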
null
https://docs.redhat.com/en/documentation/red_hat_jboss_data_virtualization/6.4/html/administration_and_configuration_guide/ssl_authentication_modes1
Chapter 3. AlertmanagerConfig [monitoring.coreos.com/v1beta1]
Chapter 3. AlertmanagerConfig [monitoring.coreos.com/v1beta1] Description The AlertmanagerConfig custom resource definition (CRD) defines how Alertmanager objects process Prometheus alerts. It allows to specify alert grouping and routing, notification receivers and inhibition rules. Alertmanager objects select AlertmanagerConfig objects using label and namespace selectors. Type object Required spec 3.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata spec object AlertmanagerConfigSpec is a specification of the desired behavior of the Alertmanager configuration. By definition, the Alertmanager configuration only applies to alerts for which the namespace label is equal to the namespace of the AlertmanagerConfig resource. 3.1.1. .spec Description AlertmanagerConfigSpec is a specification of the desired behavior of the Alertmanager configuration. By definition, the Alertmanager configuration only applies to alerts for which the namespace label is equal to the namespace of the AlertmanagerConfig resource. Type object Property Type Description inhibitRules array List of inhibition rules. The rules will only apply to alerts matching the resource's namespace. inhibitRules[] object InhibitRule defines an inhibition rule that allows to mute alerts when other alerts are already firing. See https://prometheus.io/docs/alerting/latest/configuration/#inhibit_rule receivers array List of receivers. receivers[] object Receiver defines one or more notification integrations. route object The Alertmanager route definition for alerts matching the resource's namespace. If present, it will be added to the generated Alertmanager configuration as a first-level route. timeIntervals array List of TimeInterval specifying when the routes should be muted or active. timeIntervals[] object TimeInterval specifies the periods in time when notifications will be muted or active. 3.1.2. .spec.inhibitRules Description List of inhibition rules. The rules will only apply to alerts matching the resource's namespace. Type array 3.1.3. .spec.inhibitRules[] Description InhibitRule defines an inhibition rule that allows to mute alerts when other alerts are already firing. See https://prometheus.io/docs/alerting/latest/configuration/#inhibit_rule Type object Property Type Description equal array (string) Labels that must have an equal value in the source and target alert for the inhibition to take effect. sourceMatch array Matchers for which one or more alerts have to exist for the inhibition to take effect. The operator enforces that the alert matches the resource's namespace. sourceMatch[] object Matcher defines how to match on alert's labels. targetMatch array Matchers that have to be fulfilled in the alerts to be muted. The operator enforces that the alert matches the resource's namespace. 
targetMatch[] object Matcher defines how to match on alert's labels. 3.1.4. .spec.inhibitRules[].sourceMatch Description Matchers for which one or more alerts have to exist for the inhibition to take effect. The operator enforces that the alert matches the resource's namespace. Type array 3.1.5. .spec.inhibitRules[].sourceMatch[] Description Matcher defines how to match on alert's labels. Type object Required name Property Type Description matchType string Match operator, one of = (equal to), != (not equal to), =~ (regex match) or !~ (not regex match). Negative operators ( != and !~ ) require Alertmanager >= v0.22.0. name string Label to match. value string Label value to match. 3.1.6. .spec.inhibitRules[].targetMatch Description Matchers that have to be fulfilled in the alerts to be muted. The operator enforces that the alert matches the resource's namespace. Type array 3.1.7. .spec.inhibitRules[].targetMatch[] Description Matcher defines how to match on alert's labels. Type object Required name Property Type Description matchType string Match operator, one of = (equal to), != (not equal to), =~ (regex match) or !~ (not regex match). Negative operators ( != and !~ ) require Alertmanager >= v0.22.0. name string Label to match. value string Label value to match. 3.1.8. .spec.receivers Description List of receivers. Type array 3.1.9. .spec.receivers[] Description Receiver defines one or more notification integrations. Type object Required name Property Type Description discordConfigs array List of Slack configurations. discordConfigs[] object DiscordConfig configures notifications via Discord. See https://prometheus.io/docs/alerting/latest/configuration/#discord_config emailConfigs array List of Email configurations. emailConfigs[] object EmailConfig configures notifications via Email. msteamsConfigs array List of MSTeams configurations. It requires Alertmanager >= 0.26.0. msteamsConfigs[] object MSTeamsConfig configures notifications via Microsoft Teams. It requires Alertmanager >= 0.26.0. name string Name of the receiver. Must be unique across all items from the list. opsgenieConfigs array List of OpsGenie configurations. opsgenieConfigs[] object OpsGenieConfig configures notifications via OpsGenie. See https://prometheus.io/docs/alerting/latest/configuration/#opsgenie_config pagerdutyConfigs array List of PagerDuty configurations. pagerdutyConfigs[] object PagerDutyConfig configures notifications via PagerDuty. See https://prometheus.io/docs/alerting/latest/configuration/#pagerduty_config pushoverConfigs array List of Pushover configurations. pushoverConfigs[] object PushoverConfig configures notifications via Pushover. See https://prometheus.io/docs/alerting/latest/configuration/#pushover_config slackConfigs array List of Slack configurations. slackConfigs[] object SlackConfig configures notifications via Slack. See https://prometheus.io/docs/alerting/latest/configuration/#slack_config snsConfigs array List of SNS configurations snsConfigs[] object SNSConfig configures notifications via AWS SNS. See https://prometheus.io/docs/alerting/latest/configuration/#sns_configs telegramConfigs array List of Telegram configurations. telegramConfigs[] object TelegramConfig configures notifications via Telegram. See https://prometheus.io/docs/alerting/latest/configuration/#telegram_config victoropsConfigs array List of VictorOps configurations. victoropsConfigs[] object VictorOpsConfig configures notifications via VictorOps. 
See https://prometheus.io/docs/alerting/latest/configuration/#victorops_config webexConfigs array List of Webex configurations. webexConfigs[] object WebexConfig configures notification via Cisco Webex See https://prometheus.io/docs/alerting/latest/configuration/#webex_config webhookConfigs array List of webhook configurations. webhookConfigs[] object WebhookConfig configures notifications via a generic receiver supporting the webhook payload. See https://prometheus.io/docs/alerting/latest/configuration/#webhook_config wechatConfigs array List of WeChat configurations. wechatConfigs[] object WeChatConfig configures notifications via WeChat. See https://prometheus.io/docs/alerting/latest/configuration/#wechat_config 3.1.10. .spec.receivers[].discordConfigs Description List of Slack configurations. Type array 3.1.11. .spec.receivers[].discordConfigs[] Description DiscordConfig configures notifications via Discord. See https://prometheus.io/docs/alerting/latest/configuration/#discord_config Type object Required apiURL Property Type Description apiURL object The secret's key that contains the Discord webhook URL. The secret needs to be in the same namespace as the AlertmanagerConfig object and accessible by the Prometheus Operator. httpConfig object HTTP client configuration. message string The template of the message's body. sendResolved boolean Whether or not to notify about resolved alerts. title string The template of the message's title. 3.1.12. .spec.receivers[].discordConfigs[].apiURL Description The secret's key that contains the Discord webhook URL. The secret needs to be in the same namespace as the AlertmanagerConfig object and accessible by the Prometheus Operator. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names optional boolean Specify whether the Secret or its key must be defined 3.1.13. .spec.receivers[].discordConfigs[].httpConfig Description HTTP client configuration. Type object Property Type Description authorization object Authorization header configuration for the client. This is mutually exclusive with BasicAuth and is only available starting from Alertmanager v0.22+. basicAuth object BasicAuth for the client. This is mutually exclusive with Authorization. If both are defined, BasicAuth takes precedence. bearerTokenSecret object The secret's key that contains the bearer token to be used by the client for authentication. The secret needs to be in the same namespace as the AlertmanagerConfig object and accessible by the Prometheus Operator. followRedirects boolean FollowRedirects specifies whether the client should follow HTTP 3xx redirects. noProxy string noProxy is a comma-separated string that can contain IPs, CIDR notation, domain names that should be excluded from proxying. IP and domain names can contain port numbers. It requires Prometheus >= v2.43.0 or Alertmanager >= 0.25.0. oauth2 object OAuth2 client credentials used to fetch a token for the targets. proxyConnectHeader object ProxyConnectHeader optionally specifies headers to send to proxies during CONNECT requests. It requires Prometheus >= v2.43.0 or Alertmanager >= 0.25.0. 
proxyConnectHeader{} array proxyConnectHeader{}[] object SecretKeySelector selects a key of a Secret. proxyFromEnvironment boolean Whether to use the proxy configuration defined by environment variables (HTTP_PROXY, HTTPS_PROXY, and NO_PROXY). It requires Prometheus >= v2.43.0 or Alertmanager >= 0.25.0. proxyUrl string proxyURL defines the HTTP proxy server to use. tlsConfig object TLS configuration for the client. 3.1.14. .spec.receivers[].discordConfigs[].httpConfig.authorization Description Authorization header configuration for the client. This is mutually exclusive with BasicAuth and is only available starting from Alertmanager v0.22+. Type object Property Type Description credentials object Selects a key of a Secret in the namespace that contains the credentials for authentication. type string Defines the authentication type. The value is case-insensitive. "Basic" is not a supported value. Default: "Bearer" 3.1.15. .spec.receivers[].discordConfigs[].httpConfig.authorization.credentials Description Selects a key of a Secret in the namespace that contains the credentials for authentication. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names optional boolean Specify whether the Secret or its key must be defined 3.1.16. .spec.receivers[].discordConfigs[].httpConfig.basicAuth Description BasicAuth for the client. This is mutually exclusive with Authorization. If both are defined, BasicAuth takes precedence. Type object Property Type Description password object password specifies a key of a Secret containing the password for authentication. username object username specifies a key of a Secret containing the username for authentication. 3.1.17. .spec.receivers[].discordConfigs[].httpConfig.basicAuth.password Description password specifies a key of a Secret containing the password for authentication. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names optional boolean Specify whether the Secret or its key must be defined 3.1.18. .spec.receivers[].discordConfigs[].httpConfig.basicAuth.username Description username specifies a key of a Secret containing the username for authentication. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names optional boolean Specify whether the Secret or its key must be defined 3.1.19. .spec.receivers[].discordConfigs[].httpConfig.bearerTokenSecret Description The secret's key that contains the bearer token to be used by the client for authentication. 
The secret needs to be in the same namespace as the AlertmanagerConfig object and accessible by the Prometheus Operator. Type object Required key name Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string The name of the secret in the object's namespace to select from. 3.1.20. .spec.receivers[].discordConfigs[].httpConfig.oauth2 Description OAuth2 client credentials used to fetch a token for the targets. Type object Required clientId clientSecret tokenUrl Property Type Description clientId object clientId specifies a key of a Secret or ConfigMap containing the OAuth2 client's ID. clientSecret object clientSecret specifies a key of a Secret containing the OAuth2 client's secret. endpointParams object (string) endpointParams configures the HTTP parameters to append to the token URL. noProxy string noProxy is a comma-separated string that can contain IPs, CIDR notation, domain names that should be excluded from proxying. IP and domain names can contain port numbers. It requires Prometheus >= v2.43.0 or Alertmanager >= 0.25.0. proxyConnectHeader object ProxyConnectHeader optionally specifies headers to send to proxies during CONNECT requests. It requires Prometheus >= v2.43.0 or Alertmanager >= 0.25.0. proxyConnectHeader{} array proxyConnectHeader{}[] object SecretKeySelector selects a key of a Secret. proxyFromEnvironment boolean Whether to use the proxy configuration defined by environment variables (HTTP_PROXY, HTTPS_PROXY, and NO_PROXY). It requires Prometheus >= v2.43.0 or Alertmanager >= 0.25.0. proxyUrl string proxyURL defines the HTTP proxy server to use. scopes array (string) scopes defines the OAuth2 scopes used for the token request. tlsConfig object TLS configuration to use when connecting to the OAuth2 server. It requires Prometheus >= v2.43.0. tokenUrl string tokenURL configures the URL to fetch the token from. 3.1.21. .spec.receivers[].discordConfigs[].httpConfig.oauth2.clientId Description clientId specifies a key of a Secret or ConfigMap containing the OAuth2 client's ID. Type object Property Type Description configMap object ConfigMap containing data to use for the targets. secret object Secret containing data to use for the targets. 3.1.22. .spec.receivers[].discordConfigs[].httpConfig.oauth2.clientId.configMap Description ConfigMap containing data to use for the targets. Type object Required key Property Type Description key string The key to select. name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names optional boolean Specify whether the ConfigMap or its key must be defined 3.1.23. .spec.receivers[].discordConfigs[].httpConfig.oauth2.clientId.secret Description Secret containing data to use for the targets. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names optional boolean Specify whether the Secret or its key must be defined 3.1.24. 
.spec.receivers[].discordConfigs[].httpConfig.oauth2.clientSecret Description clientSecret specifies a key of a Secret containing the OAuth2 client's secret. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names optional boolean Specify whether the Secret or its key must be defined 3.1.25. .spec.receivers[].discordConfigs[].httpConfig.oauth2.proxyConnectHeader Description ProxyConnectHeader optionally specifies headers to send to proxies during CONNECT requests. It requires Prometheus >= v2.43.0 or Alertmanager >= 0.25.0. Type object 3.1.26. .spec.receivers[].discordConfigs[].httpConfig.oauth2.proxyConnectHeader{} Description Type array 3.1.27. .spec.receivers[].discordConfigs[].httpConfig.oauth2.proxyConnectHeader{}[] Description SecretKeySelector selects a key of a Secret. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names optional boolean Specify whether the Secret or its key must be defined 3.1.28. .spec.receivers[].discordConfigs[].httpConfig.oauth2.tlsConfig Description TLS configuration to use when connecting to the OAuth2 server. It requires Prometheus >= v2.43.0. Type object Property Type Description ca object Certificate authority used when verifying server certificates. cert object Client certificate to present when doing client-authentication. insecureSkipVerify boolean Disable target certificate validation. keySecret object Secret containing the client key file for the targets. maxVersion string Maximum acceptable TLS version. It requires Prometheus >= v2.41.0. minVersion string Minimum acceptable TLS version. It requires Prometheus >= v2.35.0. serverName string Used to verify the hostname for the targets. 3.1.29. .spec.receivers[].discordConfigs[].httpConfig.oauth2.tlsConfig.ca Description Certificate authority used when verifying server certificates. Type object Property Type Description configMap object ConfigMap containing data to use for the targets. secret object Secret containing data to use for the targets. 3.1.30. .spec.receivers[].discordConfigs[].httpConfig.oauth2.tlsConfig.ca.configMap Description ConfigMap containing data to use for the targets. Type object Required key Property Type Description key string The key to select. name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names optional boolean Specify whether the ConfigMap or its key must be defined 3.1.31. .spec.receivers[].discordConfigs[].httpConfig.oauth2.tlsConfig.ca.secret Description Secret containing data to use for the targets. Type object Required key Property Type Description key string The key of the secret to select from. 
Must be a valid secret key. name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names optional boolean Specify whether the Secret or its key must be defined 3.1.32. .spec.receivers[].discordConfigs[].httpConfig.oauth2.tlsConfig.cert Description Client certificate to present when doing client-authentication. Type object Property Type Description configMap object ConfigMap containing data to use for the targets. secret object Secret containing data to use for the targets. 3.1.33. .spec.receivers[].discordConfigs[].httpConfig.oauth2.tlsConfig.cert.configMap Description ConfigMap containing data to use for the targets. Type object Required key Property Type Description key string The key to select. name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names optional boolean Specify whether the ConfigMap or its key must be defined 3.1.34. .spec.receivers[].discordConfigs[].httpConfig.oauth2.tlsConfig.cert.secret Description Secret containing data to use for the targets. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names optional boolean Specify whether the Secret or its key must be defined 3.1.35. .spec.receivers[].discordConfigs[].httpConfig.oauth2.tlsConfig.keySecret Description Secret containing the client key file for the targets. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names optional boolean Specify whether the Secret or its key must be defined 3.1.36. .spec.receivers[].discordConfigs[].httpConfig.proxyConnectHeader Description ProxyConnectHeader optionally specifies headers to send to proxies during CONNECT requests. It requires Prometheus >= v2.43.0 or Alertmanager >= 0.25.0. Type object 3.1.37. .spec.receivers[].discordConfigs[].httpConfig.proxyConnectHeader{} Description Type array 3.1.38. .spec.receivers[].discordConfigs[].httpConfig.proxyConnectHeader{}[] Description SecretKeySelector selects a key of a Secret. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. 
More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names optional boolean Specify whether the Secret or its key must be defined 3.1.39. .spec.receivers[].discordConfigs[].httpConfig.tlsConfig Description TLS configuration for the client. Type object Property Type Description ca object Certificate authority used when verifying server certificates. cert object Client certificate to present when doing client-authentication. insecureSkipVerify boolean Disable target certificate validation. keySecret object Secret containing the client key file for the targets. maxVersion string Maximum acceptable TLS version. It requires Prometheus >= v2.41.0. minVersion string Minimum acceptable TLS version. It requires Prometheus >= v2.35.0. serverName string Used to verify the hostname for the targets. 3.1.40. .spec.receivers[].discordConfigs[].httpConfig.tlsConfig.ca Description Certificate authority used when verifying server certificates. Type object Property Type Description configMap object ConfigMap containing data to use for the targets. secret object Secret containing data to use for the targets. 3.1.41. .spec.receivers[].discordConfigs[].httpConfig.tlsConfig.ca.configMap Description ConfigMap containing data to use for the targets. Type object Required key Property Type Description key string The key to select. name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names optional boolean Specify whether the ConfigMap or its key must be defined 3.1.42. .spec.receivers[].discordConfigs[].httpConfig.tlsConfig.ca.secret Description Secret containing data to use for the targets. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names optional boolean Specify whether the Secret or its key must be defined 3.1.43. .spec.receivers[].discordConfigs[].httpConfig.tlsConfig.cert Description Client certificate to present when doing client-authentication. Type object Property Type Description configMap object ConfigMap containing data to use for the targets. secret object Secret containing data to use for the targets. 3.1.44. .spec.receivers[].discordConfigs[].httpConfig.tlsConfig.cert.configMap Description ConfigMap containing data to use for the targets. Type object Required key Property Type Description key string The key to select. name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names optional boolean Specify whether the ConfigMap or its key must be defined 3.1.45. .spec.receivers[].discordConfigs[].httpConfig.tlsConfig.cert.secret Description Secret containing data to use for the targets. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. 
This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names optional boolean Specify whether the Secret or its key must be defined 3.1.46. .spec.receivers[].discordConfigs[].httpConfig.tlsConfig.keySecret Description Secret containing the client key file for the targets. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names optional boolean Specify whether the Secret or its key must be defined 3.1.47. .spec.receivers[].emailConfigs Description List of Email configurations. Type array 3.1.48. .spec.receivers[].emailConfigs[] Description EmailConfig configures notifications via Email. Type object Property Type Description authIdentity string The identity to use for authentication. authPassword object The secret's key that contains the password to use for authentication. The secret needs to be in the same namespace as the AlertmanagerConfig object and accessible by the Prometheus Operator. authSecret object The secret's key that contains the CRAM-MD5 secret. The secret needs to be in the same namespace as the AlertmanagerConfig object and accessible by the Prometheus Operator. authUsername string The username to use for authentication. from string The sender address. headers array Further headers email header key/value pairs. Overrides any headers previously set by the notification implementation. headers[] object KeyValue defines a (key, value) tuple. hello string The hostname to identify to the SMTP server. html string The HTML body of the email notification. requireTLS boolean The SMTP TLS requirement. Note that Go does not support unencrypted connections to remote SMTP endpoints. sendResolved boolean Whether or not to notify about resolved alerts. smarthost string The SMTP host and port through which emails are sent. E.g. example.com:25 text string The text body of the email notification. tlsConfig object TLS configuration to string The email address to send notifications to. 3.1.49. .spec.receivers[].emailConfigs[].authPassword Description The secret's key that contains the password to use for authentication. The secret needs to be in the same namespace as the AlertmanagerConfig object and accessible by the Prometheus Operator. Type object Required key name Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string The name of the secret in the object's namespace to select from. 3.1.50. .spec.receivers[].emailConfigs[].authSecret Description The secret's key that contains the CRAM-MD5 secret. The secret needs to be in the same namespace as the AlertmanagerConfig object and accessible by the Prometheus Operator. Type object Required key name Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string The name of the secret in the object's namespace to select from. 3.1.51. .spec.receivers[].emailConfigs[].headers Description Further headers email header key/value pairs. 
Overrides any headers previously set by the notification implementation. Type array 3.1.52. .spec.receivers[].emailConfigs[].headers[] Description KeyValue defines a (key, value) tuple. Type object Required key value Property Type Description key string Key of the tuple. value string Value of the tuple. 3.1.53. .spec.receivers[].emailConfigs[].tlsConfig Description TLS configuration Type object Property Type Description ca object Certificate authority used when verifying server certificates. cert object Client certificate to present when doing client-authentication. insecureSkipVerify boolean Disable target certificate validation. keySecret object Secret containing the client key file for the targets. maxVersion string Maximum acceptable TLS version. It requires Prometheus >= v2.41.0. minVersion string Minimum acceptable TLS version. It requires Prometheus >= v2.35.0. serverName string Used to verify the hostname for the targets. 3.1.54. .spec.receivers[].emailConfigs[].tlsConfig.ca Description Certificate authority used when verifying server certificates. Type object Property Type Description configMap object ConfigMap containing data to use for the targets. secret object Secret containing data to use for the targets. 3.1.55. .spec.receivers[].emailConfigs[].tlsConfig.ca.configMap Description ConfigMap containing data to use for the targets. Type object Required key Property Type Description key string The key to select. name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names optional boolean Specify whether the ConfigMap or its key must be defined 3.1.56. .spec.receivers[].emailConfigs[].tlsConfig.ca.secret Description Secret containing data to use for the targets. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names optional boolean Specify whether the Secret or its key must be defined 3.1.57. .spec.receivers[].emailConfigs[].tlsConfig.cert Description Client certificate to present when doing client-authentication. Type object Property Type Description configMap object ConfigMap containing data to use for the targets. secret object Secret containing data to use for the targets. 3.1.58. .spec.receivers[].emailConfigs[].tlsConfig.cert.configMap Description ConfigMap containing data to use for the targets. Type object Required key Property Type Description key string The key to select. name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names optional boolean Specify whether the ConfigMap or its key must be defined 3.1.59. .spec.receivers[].emailConfigs[].tlsConfig.cert.secret Description Secret containing data to use for the targets. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. 
name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names optional boolean Specify whether the Secret or its key must be defined 3.1.60. .spec.receivers[].emailConfigs[].tlsConfig.keySecret Description Secret containing the client key file for the targets. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names optional boolean Specify whether the Secret or its key must be defined 3.1.61. .spec.receivers[].msteamsConfigs Description List of MSTeams configurations. It requires Alertmanager >= 0.26.0. Type array 3.1.62. .spec.receivers[].msteamsConfigs[] Description MSTeamsConfig configures notifications via Microsoft Teams. It requires Alertmanager >= 0.26.0. Type object Required webhookUrl Property Type Description httpConfig object HTTP client configuration. sendResolved boolean Whether to notify about resolved alerts. summary string Message summary template. It requires Alertmanager >= 0.27.0. text string Message body template. title string Message title template. webhookUrl object MSTeams webhook URL. 3.1.63. .spec.receivers[].msteamsConfigs[].httpConfig Description HTTP client configuration. Type object Property Type Description authorization object Authorization header configuration for the client. This is mutually exclusive with BasicAuth and is only available starting from Alertmanager v0.22+. basicAuth object BasicAuth for the client. This is mutually exclusive with Authorization. If both are defined, BasicAuth takes precedence. bearerTokenSecret object The secret's key that contains the bearer token to be used by the client for authentication. The secret needs to be in the same namespace as the AlertmanagerConfig object and accessible by the Prometheus Operator. followRedirects boolean FollowRedirects specifies whether the client should follow HTTP 3xx redirects. noProxy string noProxy is a comma-separated string that can contain IPs, CIDR notation, domain names that should be excluded from proxying. IP and domain names can contain port numbers. It requires Prometheus >= v2.43.0 or Alertmanager >= 0.25.0. oauth2 object OAuth2 client credentials used to fetch a token for the targets. proxyConnectHeader object ProxyConnectHeader optionally specifies headers to send to proxies during CONNECT requests. It requires Prometheus >= v2.43.0 or Alertmanager >= 0.25.0. proxyConnectHeader{} array proxyConnectHeader{}[] object SecretKeySelector selects a key of a Secret. proxyFromEnvironment boolean Whether to use the proxy configuration defined by environment variables (HTTP_PROXY, HTTPS_PROXY, and NO_PROXY). It requires Prometheus >= v2.43.0 or Alertmanager >= 0.25.0. proxyUrl string proxyURL defines the HTTP proxy server to use. tlsConfig object TLS configuration for the client. 3.1.64. .spec.receivers[].msteamsConfigs[].httpConfig.authorization Description Authorization header configuration for the client. 
This is mutually exclusive with BasicAuth and is only available starting from Alertmanager v0.22+. Type object Property Type Description credentials object Selects a key of a Secret in the namespace that contains the credentials for authentication. type string Defines the authentication type. The value is case-insensitive. "Basic" is not a supported value. Default: "Bearer" 3.1.65. .spec.receivers[].msteamsConfigs[].httpConfig.authorization.credentials Description Selects a key of a Secret in the namespace that contains the credentials for authentication. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names optional boolean Specify whether the Secret or its key must be defined 3.1.66. .spec.receivers[].msteamsConfigs[].httpConfig.basicAuth Description BasicAuth for the client. This is mutually exclusive with Authorization. If both are defined, BasicAuth takes precedence. Type object Property Type Description password object password specifies a key of a Secret containing the password for authentication. username object username specifies a key of a Secret containing the username for authentication. 3.1.67. .spec.receivers[].msteamsConfigs[].httpConfig.basicAuth.password Description password specifies a key of a Secret containing the password for authentication. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names optional boolean Specify whether the Secret or its key must be defined 3.1.68. .spec.receivers[].msteamsConfigs[].httpConfig.basicAuth.username Description username specifies a key of a Secret containing the username for authentication. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names optional boolean Specify whether the Secret or its key must be defined 3.1.69. .spec.receivers[].msteamsConfigs[].httpConfig.bearerTokenSecret Description The secret's key that contains the bearer token to be used by the client for authentication. The secret needs to be in the same namespace as the AlertmanagerConfig object and accessible by the Prometheus Operator. Type object Required key name Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string The name of the secret in the object's namespace to select from. 3.1.70. .spec.receivers[].msteamsConfigs[].httpConfig.oauth2 Description OAuth2 client credentials used to fetch a token for the targets. 
Type object Required clientId clientSecret tokenUrl Property Type Description clientId object clientId specifies a key of a Secret or ConfigMap containing the OAuth2 client's ID. clientSecret object clientSecret specifies a key of a Secret containing the OAuth2 client's secret. endpointParams object (string) endpointParams configures the HTTP parameters to append to the token URL. noProxy string noProxy is a comma-separated string that can contain IPs, CIDR notation, domain names that should be excluded from proxying. IP and domain names can contain port numbers. It requires Prometheus >= v2.43.0 or Alertmanager >= 0.25.0. proxyConnectHeader object ProxyConnectHeader optionally specifies headers to send to proxies during CONNECT requests. It requires Prometheus >= v2.43.0 or Alertmanager >= 0.25.0. proxyConnectHeader{} array proxyConnectHeader{}[] object SecretKeySelector selects a key of a Secret. proxyFromEnvironment boolean Whether to use the proxy configuration defined by environment variables (HTTP_PROXY, HTTPS_PROXY, and NO_PROXY). It requires Prometheus >= v2.43.0 or Alertmanager >= 0.25.0. proxyUrl string proxyURL defines the HTTP proxy server to use. scopes array (string) scopes defines the OAuth2 scopes used for the token request. tlsConfig object TLS configuration to use when connecting to the OAuth2 server. It requires Prometheus >= v2.43.0. tokenUrl string tokenURL configures the URL to fetch the token from. 3.1.71. .spec.receivers[].msteamsConfigs[].httpConfig.oauth2.clientId Description clientId specifies a key of a Secret or ConfigMap containing the OAuth2 client's ID. Type object Property Type Description configMap object ConfigMap containing data to use for the targets. secret object Secret containing data to use for the targets. 3.1.72. .spec.receivers[].msteamsConfigs[].httpConfig.oauth2.clientId.configMap Description ConfigMap containing data to use for the targets. Type object Required key Property Type Description key string The key to select. name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names optional boolean Specify whether the ConfigMap or its key must be defined 3.1.73. .spec.receivers[].msteamsConfigs[].httpConfig.oauth2.clientId.secret Description Secret containing data to use for the targets. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names optional boolean Specify whether the Secret or its key must be defined 3.1.74. .spec.receivers[].msteamsConfigs[].httpConfig.oauth2.clientSecret Description clientSecret specifies a key of a Secret containing the OAuth2 client's secret. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. 
More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names optional boolean Specify whether the Secret or its key must be defined 3.1.75. .spec.receivers[].msteamsConfigs[].httpConfig.oauth2.proxyConnectHeader Description ProxyConnectHeader optionally specifies headers to send to proxies during CONNECT requests. It requires Prometheus >= v2.43.0 or Alertmanager >= 0.25.0. Type object 3.1.76. .spec.receivers[].msteamsConfigs[].httpConfig.oauth2.proxyConnectHeader{} Description Type array 3.1.77. .spec.receivers[].msteamsConfigs[].httpConfig.oauth2.proxyConnectHeader{}[] Description SecretKeySelector selects a key of a Secret. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names optional boolean Specify whether the Secret or its key must be defined 3.1.78. .spec.receivers[].msteamsConfigs[].httpConfig.oauth2.tlsConfig Description TLS configuration to use when connecting to the OAuth2 server. It requires Prometheus >= v2.43.0. Type object Property Type Description ca object Certificate authority used when verifying server certificates. cert object Client certificate to present when doing client-authentication. insecureSkipVerify boolean Disable target certificate validation. keySecret object Secret containing the client key file for the targets. maxVersion string Maximum acceptable TLS version. It requires Prometheus >= v2.41.0. minVersion string Minimum acceptable TLS version. It requires Prometheus >= v2.35.0. serverName string Used to verify the hostname for the targets. 3.1.79. .spec.receivers[].msteamsConfigs[].httpConfig.oauth2.tlsConfig.ca Description Certificate authority used when verifying server certificates. Type object Property Type Description configMap object ConfigMap containing data to use for the targets. secret object Secret containing data to use for the targets. 3.1.80. .spec.receivers[].msteamsConfigs[].httpConfig.oauth2.tlsConfig.ca.configMap Description ConfigMap containing data to use for the targets. Type object Required key Property Type Description key string The key to select. name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names optional boolean Specify whether the ConfigMap or its key must be defined 3.1.81. .spec.receivers[].msteamsConfigs[].httpConfig.oauth2.tlsConfig.ca.secret Description Secret containing data to use for the targets. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names optional boolean Specify whether the Secret or its key must be defined 3.1.82. 
.spec.receivers[].msteamsConfigs[].httpConfig.oauth2.tlsConfig.cert Description Client certificate to present when doing client-authentication. Type object Property Type Description configMap object ConfigMap containing data to use for the targets. secret object Secret containing data to use for the targets. 3.1.83. .spec.receivers[].msteamsConfigs[].httpConfig.oauth2.tlsConfig.cert.configMap Description ConfigMap containing data to use for the targets. Type object Required key Property Type Description key string The key to select. name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names optional boolean Specify whether the ConfigMap or its key must be defined 3.1.84. .spec.receivers[].msteamsConfigs[].httpConfig.oauth2.tlsConfig.cert.secret Description Secret containing data to use for the targets. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names optional boolean Specify whether the Secret or its key must be defined 3.1.85. .spec.receivers[].msteamsConfigs[].httpConfig.oauth2.tlsConfig.keySecret Description Secret containing the client key file for the targets. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names optional boolean Specify whether the Secret or its key must be defined 3.1.86. .spec.receivers[].msteamsConfigs[].httpConfig.proxyConnectHeader Description ProxyConnectHeader optionally specifies headers to send to proxies during CONNECT requests. It requires Prometheus >= v2.43.0 or Alertmanager >= 0.25.0. Type object 3.1.87. .spec.receivers[].msteamsConfigs[].httpConfig.proxyConnectHeader{} Description Type array 3.1.88. .spec.receivers[].msteamsConfigs[].httpConfig.proxyConnectHeader{}[] Description SecretKeySelector selects a key of a Secret. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names optional boolean Specify whether the Secret or its key must be defined 3.1.89. .spec.receivers[].msteamsConfigs[].httpConfig.tlsConfig Description TLS configuration for the client. Type object Property Type Description ca object Certificate authority used when verifying server certificates. cert object Client certificate to present when doing client-authentication. insecureSkipVerify boolean Disable target certificate validation. 
keySecret object Secret containing the client key file for the targets. maxVersion string Maximum acceptable TLS version. It requires Prometheus >= v2.41.0. minVersion string Minimum acceptable TLS version. It requires Prometheus >= v2.35.0. serverName string Used to verify the hostname for the targets. 3.1.90. .spec.receivers[].msteamsConfigs[].httpConfig.tlsConfig.ca Description Certificate authority used when verifying server certificates. Type object Property Type Description configMap object ConfigMap containing data to use for the targets. secret object Secret containing data to use for the targets. 3.1.91. .spec.receivers[].msteamsConfigs[].httpConfig.tlsConfig.ca.configMap Description ConfigMap containing data to use for the targets. Type object Required key Property Type Description key string The key to select. name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names optional boolean Specify whether the ConfigMap or its key must be defined 3.1.92. .spec.receivers[].msteamsConfigs[].httpConfig.tlsConfig.ca.secret Description Secret containing data to use for the targets. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names optional boolean Specify whether the Secret or its key must be defined 3.1.93. .spec.receivers[].msteamsConfigs[].httpConfig.tlsConfig.cert Description Client certificate to present when doing client-authentication. Type object Property Type Description configMap object ConfigMap containing data to use for the targets. secret object Secret containing data to use for the targets. 3.1.94. .spec.receivers[].msteamsConfigs[].httpConfig.tlsConfig.cert.configMap Description ConfigMap containing data to use for the targets. Type object Required key Property Type Description key string The key to select. name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names optional boolean Specify whether the ConfigMap or its key must be defined 3.1.95. .spec.receivers[].msteamsConfigs[].httpConfig.tlsConfig.cert.secret Description Secret containing data to use for the targets. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names optional boolean Specify whether the Secret or its key must be defined 3.1.96. .spec.receivers[].msteamsConfigs[].httpConfig.tlsConfig.keySecret Description Secret containing the client key file for the targets. 
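For reference, a client tlsConfig block such as the one described in the preceding sections might be written as in the following sketch when nested under a receiver's httpConfig. The ConfigMap and Secret names are illustrative only and must exist in the AlertmanagerConfig object's namespace:
httpConfig:
  tlsConfig:
    ca:
      configMap:
        name: ca-bundle           # illustrative ConfigMap holding the CA certificate
        key: ca.crt
    cert:
      secret:
        name: client-tls          # illustrative Secret holding the client certificate
        key: tls.crt
    keySecret:
      name: client-tls            # the client key must come from a Secret
      key: tls.key
    serverName: alerts.example.com
    minVersion: TLS12
Setting insecureSkipVerify: true disables server certificate validation and is generally discouraged outside of testing.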
Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names optional boolean Specify whether the Secret or its key must be defined 3.1.97. .spec.receivers[].msteamsConfigs[].webhookUrl Description MSTeams webhook URL. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names optional boolean Specify whether the Secret or its key must be defined 3.1.98. .spec.receivers[].opsgenieConfigs Description List of OpsGenie configurations. Type array 3.1.99. .spec.receivers[].opsgenieConfigs[] Description OpsGenieConfig configures notifications via OpsGenie. See https://prometheus.io/docs/alerting/latest/configuration/#opsgenie_config Type object Property Type Description actions string Comma separated list of actions that will be available for the alert. apiKey object The secret's key that contains the OpsGenie API key. The secret needs to be in the same namespace as the AlertmanagerConfig object and accessible by the Prometheus Operator. apiURL string The URL to send OpsGenie API requests to. description string Description of the incident. details array A set of arbitrary key/value pairs that provide further detail about the incident. details[] object KeyValue defines a (key, value) tuple. entity string Optional field that can be used to specify which domain alert is related to. httpConfig object HTTP client configuration. message string Alert text limited to 130 characters. note string Additional alert note. priority string Priority level of alert. Possible values are P1, P2, P3, P4, and P5. responders array List of responders responsible for notifications. responders[] object OpsGenieConfigResponder defines a responder to an incident. One of id , name or username has to be defined. sendResolved boolean Whether or not to notify about resolved alerts. source string Backlink to the sender of the notification. tags string Comma separated list of tags attached to the notifications. 3.1.100. .spec.receivers[].opsgenieConfigs[].apiKey Description The secret's key that contains the OpsGenie API key. The secret needs to be in the same namespace as the AlertmanagerConfig object and accessible by the Prometheus Operator. Type object Required key name Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string The name of the secret in the object's namespace to select from. 3.1.101. .spec.receivers[].opsgenieConfigs[].details Description A set of arbitrary key/value pairs that provide further detail about the incident. Type array 3.1.102. .spec.receivers[].opsgenieConfigs[].details[] Description KeyValue defines a (key, value) tuple. Type object Required key value Property Type Description key string Key of the tuple. value string Value of the tuple. 3.1.103. 
.spec.receivers[].opsgenieConfigs[].httpConfig Description HTTP client configuration. Type object Property Type Description authorization object Authorization header configuration for the client. This is mutually exclusive with BasicAuth and is only available starting from Alertmanager v0.22+. basicAuth object BasicAuth for the client. This is mutually exclusive with Authorization. If both are defined, BasicAuth takes precedence. bearerTokenSecret object The secret's key that contains the bearer token to be used by the client for authentication. The secret needs to be in the same namespace as the AlertmanagerConfig object and accessible by the Prometheus Operator. followRedirects boolean FollowRedirects specifies whether the client should follow HTTP 3xx redirects. noProxy string noProxy is a comma-separated string that can contain IPs, CIDR notation, domain names that should be excluded from proxying. IP and domain names can contain port numbers. It requires Prometheus >= v2.43.0 or Alertmanager >= 0.25.0. oauth2 object OAuth2 client credentials used to fetch a token for the targets. proxyConnectHeader object ProxyConnectHeader optionally specifies headers to send to proxies during CONNECT requests. It requires Prometheus >= v2.43.0 or Alertmanager >= 0.25.0. proxyConnectHeader{} array proxyConnectHeader{}[] object SecretKeySelector selects a key of a Secret. proxyFromEnvironment boolean Whether to use the proxy configuration defined by environment variables (HTTP_PROXY, HTTPS_PROXY, and NO_PROXY). It requires Prometheus >= v2.43.0 or Alertmanager >= 0.25.0. proxyUrl string proxyURL defines the HTTP proxy server to use. tlsConfig object TLS configuration for the client. 3.1.104. .spec.receivers[].opsgenieConfigs[].httpConfig.authorization Description Authorization header configuration for the client. This is mutually exclusive with BasicAuth and is only available starting from Alertmanager v0.22+. Type object Property Type Description credentials object Selects a key of a Secret in the namespace that contains the credentials for authentication. type string Defines the authentication type. The value is case-insensitive. "Basic" is not a supported value. Default: "Bearer" 3.1.105. .spec.receivers[].opsgenieConfigs[].httpConfig.authorization.credentials Description Selects a key of a Secret in the namespace that contains the credentials for authentication. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names optional boolean Specify whether the Secret or its key must be defined 3.1.106. .spec.receivers[].opsgenieConfigs[].httpConfig.basicAuth Description BasicAuth for the client. This is mutually exclusive with Authorization. If both are defined, BasicAuth takes precedence. Type object Property Type Description password object password specifies a key of a Secret containing the password for authentication. username object username specifies a key of a Secret containing the username for authentication. 3.1.107. .spec.receivers[].opsgenieConfigs[].httpConfig.basicAuth.password Description password specifies a key of a Secret containing the password for authentication. 
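To show these OpsGenie and HTTP client authentication fields in context, the following hypothetical sketch defines an opsgenieConfigs receiver whose API key comes from a Secret and whose HTTP client additionally presents basic authentication credentials, for example to an authenticating proxy. All names are placeholders:
apiVersion: monitoring.coreos.com/v1beta1   # adjust to the API version served by your cluster
kind: AlertmanagerConfig
metadata:
  name: opsgenie-routing         # placeholder name
  namespace: my-project
spec:
  route:
    receiver: opsgenie
  receivers:
  - name: opsgenie
    opsgenieConfigs:
    - apiKey:
        name: opsgenie-secret    # Secret in the same namespace
        key: api-key
      priority: P3
      tags: openshift,monitoring
      responders:
      - name: example-team       # placeholder responder; one of id, name, or username is required
        type: team
      httpConfig:
        basicAuth:
          username:
            name: proxy-auth     # placeholder Secret with username and password keys
            key: username
          password:
            name: proxy-auth
            key: password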
Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names optional boolean Specify whether the Secret or its key must be defined 3.1.108. .spec.receivers[].opsgenieConfigs[].httpConfig.basicAuth.username Description username specifies a key of a Secret containing the username for authentication. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names optional boolean Specify whether the Secret or its key must be defined 3.1.109. .spec.receivers[].opsgenieConfigs[].httpConfig.bearerTokenSecret Description The secret's key that contains the bearer token to be used by the client for authentication. The secret needs to be in the same namespace as the AlertmanagerConfig object and accessible by the Prometheus Operator. Type object Required key name Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string The name of the secret in the object's namespace to select from. 3.1.110. .spec.receivers[].opsgenieConfigs[].httpConfig.oauth2 Description OAuth2 client credentials used to fetch a token for the targets. Type object Required clientId clientSecret tokenUrl Property Type Description clientId object clientId specifies a key of a Secret or ConfigMap containing the OAuth2 client's ID. clientSecret object clientSecret specifies a key of a Secret containing the OAuth2 client's secret. endpointParams object (string) endpointParams configures the HTTP parameters to append to the token URL. noProxy string noProxy is a comma-separated string that can contain IPs, CIDR notation, domain names that should be excluded from proxying. IP and domain names can contain port numbers. It requires Prometheus >= v2.43.0 or Alertmanager >= 0.25.0. proxyConnectHeader object ProxyConnectHeader optionally specifies headers to send to proxies during CONNECT requests. It requires Prometheus >= v2.43.0 or Alertmanager >= 0.25.0. proxyConnectHeader{} array proxyConnectHeader{}[] object SecretKeySelector selects a key of a Secret. proxyFromEnvironment boolean Whether to use the proxy configuration defined by environment variables (HTTP_PROXY, HTTPS_PROXY, and NO_PROXY). It requires Prometheus >= v2.43.0 or Alertmanager >= 0.25.0. proxyUrl string proxyURL defines the HTTP proxy server to use. scopes array (string) scopes defines the OAuth2 scopes used for the token request. tlsConfig object TLS configuration to use when connecting to the OAuth2 server. It requires Prometheus >= v2.43.0. tokenUrl string tokenURL configures the URL to fetch the token from. 3.1.111. .spec.receivers[].opsgenieConfigs[].httpConfig.oauth2.clientId Description clientId specifies a key of a Secret or ConfigMap containing the OAuth2 client's ID. Type object Property Type Description configMap object ConfigMap containing data to use for the targets. 
secret object Secret containing data to use for the targets. 3.1.112. .spec.receivers[].opsgenieConfigs[].httpConfig.oauth2.clientId.configMap Description ConfigMap containing data to use for the targets. Type object Required key Property Type Description key string The key to select. name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names optional boolean Specify whether the ConfigMap or its key must be defined 3.1.113. .spec.receivers[].opsgenieConfigs[].httpConfig.oauth2.clientId.secret Description Secret containing data to use for the targets. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names optional boolean Specify whether the Secret or its key must be defined 3.1.114. .spec.receivers[].opsgenieConfigs[].httpConfig.oauth2.clientSecret Description clientSecret specifies a key of a Secret containing the OAuth2 client's secret. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names optional boolean Specify whether the Secret or its key must be defined 3.1.115. .spec.receivers[].opsgenieConfigs[].httpConfig.oauth2.proxyConnectHeader Description ProxyConnectHeader optionally specifies headers to send to proxies during CONNECT requests. It requires Prometheus >= v2.43.0 or Alertmanager >= 0.25.0. Type object 3.1.116. .spec.receivers[].opsgenieConfigs[].httpConfig.oauth2.proxyConnectHeader{} Description Type array 3.1.117. .spec.receivers[].opsgenieConfigs[].httpConfig.oauth2.proxyConnectHeader{}[] Description SecretKeySelector selects a key of a Secret. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names optional boolean Specify whether the Secret or its key must be defined 3.1.118. .spec.receivers[].opsgenieConfigs[].httpConfig.oauth2.tlsConfig Description TLS configuration to use when connecting to the OAuth2 server. It requires Prometheus >= v2.43.0. Type object Property Type Description ca object Certificate authority used when verifying server certificates. cert object Client certificate to present when doing client-authentication. insecureSkipVerify boolean Disable target certificate validation. keySecret object Secret containing the client key file for the targets. maxVersion string Maximum acceptable TLS version. It requires Prometheus >= v2.41.0. 
minVersion string Minimum acceptable TLS version. It requires Prometheus >= v2.35.0. serverName string Used to verify the hostname for the targets. 3.1.119. .spec.receivers[].opsgenieConfigs[].httpConfig.oauth2.tlsConfig.ca Description Certificate authority used when verifying server certificates. Type object Property Type Description configMap object ConfigMap containing data to use for the targets. secret object Secret containing data to use for the targets. 3.1.120. .spec.receivers[].opsgenieConfigs[].httpConfig.oauth2.tlsConfig.ca.configMap Description ConfigMap containing data to use for the targets. Type object Required key Property Type Description key string The key to select. name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names optional boolean Specify whether the ConfigMap or its key must be defined 3.1.121. .spec.receivers[].opsgenieConfigs[].httpConfig.oauth2.tlsConfig.ca.secret Description Secret containing data to use for the targets. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names optional boolean Specify whether the Secret or its key must be defined 3.1.122. .spec.receivers[].opsgenieConfigs[].httpConfig.oauth2.tlsConfig.cert Description Client certificate to present when doing client-authentication. Type object Property Type Description configMap object ConfigMap containing data to use for the targets. secret object Secret containing data to use for the targets. 3.1.123. .spec.receivers[].opsgenieConfigs[].httpConfig.oauth2.tlsConfig.cert.configMap Description ConfigMap containing data to use for the targets. Type object Required key Property Type Description key string The key to select. name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names optional boolean Specify whether the ConfigMap or its key must be defined 3.1.124. .spec.receivers[].opsgenieConfigs[].httpConfig.oauth2.tlsConfig.cert.secret Description Secret containing data to use for the targets. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names optional boolean Specify whether the Secret or its key must be defined 3.1.125. .spec.receivers[].opsgenieConfigs[].httpConfig.oauth2.tlsConfig.keySecret Description Secret containing the client key file for the targets. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. 
name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names optional boolean Specify whether the Secret or its key must be defined 3.1.126. .spec.receivers[].opsgenieConfigs[].httpConfig.proxyConnectHeader Description ProxyConnectHeader optionally specifies headers to send to proxies during CONNECT requests. It requires Prometheus >= v2.43.0 or Alertmanager >= 0.25.0. Type object 3.1.127. .spec.receivers[].opsgenieConfigs[].httpConfig.proxyConnectHeader{} Description Type array 3.1.128. .spec.receivers[].opsgenieConfigs[].httpConfig.proxyConnectHeader{}[] Description SecretKeySelector selects a key of a Secret. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names optional boolean Specify whether the Secret or its key must be defined 3.1.129. .spec.receivers[].opsgenieConfigs[].httpConfig.tlsConfig Description TLS configuration for the client. Type object Property Type Description ca object Certificate authority used when verifying server certificates. cert object Client certificate to present when doing client-authentication. insecureSkipVerify boolean Disable target certificate validation. keySecret object Secret containing the client key file for the targets. maxVersion string Maximum acceptable TLS version. It requires Prometheus >= v2.41.0. minVersion string Minimum acceptable TLS version. It requires Prometheus >= v2.35.0. serverName string Used to verify the hostname for the targets. 3.1.130. .spec.receivers[].opsgenieConfigs[].httpConfig.tlsConfig.ca Description Certificate authority used when verifying server certificates. Type object Property Type Description configMap object ConfigMap containing data to use for the targets. secret object Secret containing data to use for the targets. 3.1.131. .spec.receivers[].opsgenieConfigs[].httpConfig.tlsConfig.ca.configMap Description ConfigMap containing data to use for the targets. Type object Required key Property Type Description key string The key to select. name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names optional boolean Specify whether the ConfigMap or its key must be defined 3.1.132. .spec.receivers[].opsgenieConfigs[].httpConfig.tlsConfig.ca.secret Description Secret containing data to use for the targets. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. 
More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names optional boolean Specify whether the Secret or its key must be defined 3.1.133. .spec.receivers[].opsgenieConfigs[].httpConfig.tlsConfig.cert Description Client certificate to present when doing client-authentication. Type object Property Type Description configMap object ConfigMap containing data to use for the targets. secret object Secret containing data to use for the targets. 3.1.134. .spec.receivers[].opsgenieConfigs[].httpConfig.tlsConfig.cert.configMap Description ConfigMap containing data to use for the targets. Type object Required key Property Type Description key string The key to select. name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names optional boolean Specify whether the ConfigMap or its key must be defined 3.1.135. .spec.receivers[].opsgenieConfigs[].httpConfig.tlsConfig.cert.secret Description Secret containing data to use for the targets. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names optional boolean Specify whether the Secret or its key must be defined 3.1.136. .spec.receivers[].opsgenieConfigs[].httpConfig.tlsConfig.keySecret Description Secret containing the client key file for the targets. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names optional boolean Specify whether the Secret or its key must be defined 3.1.137. .spec.receivers[].opsgenieConfigs[].responders Description List of responders responsible for notifications. Type array 3.1.138. .spec.receivers[].opsgenieConfigs[].responders[] Description OpsGenieConfigResponder defines a responder to an incident. One of id , name or username has to be defined. Type object Required type Property Type Description id string ID of the responder. name string Name of the responder. type string Type of responder. username string Username of the responder. 3.1.139. .spec.receivers[].pagerdutyConfigs Description List of PagerDuty configurations. Type array 3.1.140. .spec.receivers[].pagerdutyConfigs[] Description PagerDutyConfig configures notifications via PagerDuty. See https://prometheus.io/docs/alerting/latest/configuration/#pagerduty_config Type object Property Type Description class string The class/type of the event. client string Client identification. clientURL string Backlink to the sender of notification. component string The part or component of the affected system that is broken. description string Description of the incident. details array Arbitrary key/value pairs that provide further detail about the incident. 
details[] object KeyValue defines a (key, value) tuple. group string A cluster or grouping of sources. httpConfig object HTTP client configuration. pagerDutyImageConfigs array A list of image details to attach that provide further detail about an incident. pagerDutyImageConfigs[] object PagerDutyImageConfig attaches images to an incident pagerDutyLinkConfigs array A list of link details to attach that provide further detail about an incident. pagerDutyLinkConfigs[] object PagerDutyLinkConfig attaches text links to an incident routingKey object The secret's key that contains the PagerDuty integration key (when using Events API v2). Either this field or serviceKey needs to be defined. The secret needs to be in the same namespace as the AlertmanagerConfig object and accessible by the Prometheus Operator. sendResolved boolean Whether or not to notify about resolved alerts. serviceKey object The secret's key that contains the PagerDuty service key (when using integration type "Prometheus"). Either this field or routingKey needs to be defined. The secret needs to be in the same namespace as the AlertmanagerConfig object and accessible by the Prometheus Operator. severity string Severity of the incident. source string Unique location of the affected system. url string The URL to send requests to. 3.1.141. .spec.receivers[].pagerdutyConfigs[].details Description Arbitrary key/value pairs that provide further detail about the incident. Type array 3.1.142. .spec.receivers[].pagerdutyConfigs[].details[] Description KeyValue defines a (key, value) tuple. Type object Required key value Property Type Description key string Key of the tuple. value string Value of the tuple. 3.1.143. .spec.receivers[].pagerdutyConfigs[].httpConfig Description HTTP client configuration. Type object Property Type Description authorization object Authorization header configuration for the client. This is mutually exclusive with BasicAuth and is only available starting from Alertmanager v0.22+. basicAuth object BasicAuth for the client. This is mutually exclusive with Authorization. If both are defined, BasicAuth takes precedence. bearerTokenSecret object The secret's key that contains the bearer token to be used by the client for authentication. The secret needs to be in the same namespace as the AlertmanagerConfig object and accessible by the Prometheus Operator. followRedirects boolean FollowRedirects specifies whether the client should follow HTTP 3xx redirects. noProxy string noProxy is a comma-separated string that can contain IPs, CIDR notation, domain names that should be excluded from proxying. IP and domain names can contain port numbers. It requires Prometheus >= v2.43.0 or Alertmanager >= 0.25.0. oauth2 object OAuth2 client credentials used to fetch a token for the targets. proxyConnectHeader object ProxyConnectHeader optionally specifies headers to send to proxies during CONNECT requests. It requires Prometheus >= v2.43.0 or Alertmanager >= 0.25.0. proxyConnectHeader{} array proxyConnectHeader{}[] object SecretKeySelector selects a key of a Secret. proxyFromEnvironment boolean Whether to use the proxy configuration defined by environment variables (HTTP_PROXY, HTTPS_PROXY, and NO_PROXY). It requires Prometheus >= v2.43.0 or Alertmanager >= 0.25.0. proxyUrl string proxyURL defines the HTTP proxy server to use. tlsConfig object TLS configuration for the client. 3.1.144. .spec.receivers[].pagerdutyConfigs[].httpConfig.authorization Description Authorization header configuration for the client. 
This is mutually exclusive with BasicAuth and is only available starting from Alertmanager v0.22+. Type object Property Type Description credentials object Selects a key of a Secret in the namespace that contains the credentials for authentication. type string Defines the authentication type. The value is case-insensitive. "Basic" is not a supported value. Default: "Bearer" 3.1.145. .spec.receivers[].pagerdutyConfigs[].httpConfig.authorization.credentials Description Selects a key of a Secret in the namespace that contains the credentials for authentication. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names optional boolean Specify whether the Secret or its key must be defined 3.1.146. .spec.receivers[].pagerdutyConfigs[].httpConfig.basicAuth Description BasicAuth for the client. This is mutually exclusive with Authorization. If both are defined, BasicAuth takes precedence. Type object Property Type Description password object password specifies a key of a Secret containing the password for authentication. username object username specifies a key of a Secret containing the username for authentication. 3.1.147. .spec.receivers[].pagerdutyConfigs[].httpConfig.basicAuth.password Description password specifies a key of a Secret containing the password for authentication. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names optional boolean Specify whether the Secret or its key must be defined 3.1.148. .spec.receivers[].pagerdutyConfigs[].httpConfig.basicAuth.username Description username specifies a key of a Secret containing the username for authentication. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names optional boolean Specify whether the Secret or its key must be defined 3.1.149. .spec.receivers[].pagerdutyConfigs[].httpConfig.bearerTokenSecret Description The secret's key that contains the bearer token to be used by the client for authentication. The secret needs to be in the same namespace as the AlertmanagerConfig object and accessible by the Prometheus Operator. Type object Required key name Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string The name of the secret in the object's namespace to select from. 3.1.150. .spec.receivers[].pagerdutyConfigs[].httpConfig.oauth2 Description OAuth2 client credentials used to fetch a token for the targets. 
Type object Required clientId clientSecret tokenUrl Property Type Description clientId object clientId specifies a key of a Secret or ConfigMap containing the OAuth2 client's ID. clientSecret object clientSecret specifies a key of a Secret containing the OAuth2 client's secret. endpointParams object (string) endpointParams configures the HTTP parameters to append to the token URL. noProxy string noProxy is a comma-separated string that can contain IPs, CIDR notation, domain names that should be excluded from proxying. IP and domain names can contain port numbers. It requires Prometheus >= v2.43.0 or Alertmanager >= 0.25.0. proxyConnectHeader object ProxyConnectHeader optionally specifies headers to send to proxies during CONNECT requests. It requires Prometheus >= v2.43.0 or Alertmanager >= 0.25.0. proxyConnectHeader{} array proxyConnectHeader{}[] object SecretKeySelector selects a key of a Secret. proxyFromEnvironment boolean Whether to use the proxy configuration defined by environment variables (HTTP_PROXY, HTTPS_PROXY, and NO_PROXY). It requires Prometheus >= v2.43.0 or Alertmanager >= 0.25.0. proxyUrl string proxyURL defines the HTTP proxy server to use. scopes array (string) scopes defines the OAuth2 scopes used for the token request. tlsConfig object TLS configuration to use when connecting to the OAuth2 server. It requires Prometheus >= v2.43.0. tokenUrl string tokenURL configures the URL to fetch the token from. 3.1.151. .spec.receivers[].pagerdutyConfigs[].httpConfig.oauth2.clientId Description clientId specifies a key of a Secret or ConfigMap containing the OAuth2 client's ID. Type object Property Type Description configMap object ConfigMap containing data to use for the targets. secret object Secret containing data to use for the targets. 3.1.152. .spec.receivers[].pagerdutyConfigs[].httpConfig.oauth2.clientId.configMap Description ConfigMap containing data to use for the targets. Type object Required key Property Type Description key string The key to select. name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names optional boolean Specify whether the ConfigMap or its key must be defined 3.1.153. .spec.receivers[].pagerdutyConfigs[].httpConfig.oauth2.clientId.secret Description Secret containing data to use for the targets. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names optional boolean Specify whether the Secret or its key must be defined 3.1.154. .spec.receivers[].pagerdutyConfigs[].httpConfig.oauth2.clientSecret Description clientSecret specifies a key of a Secret containing the OAuth2 client's secret. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. 
More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names optional boolean Specify whether the Secret or its key must be defined 3.1.155. .spec.receivers[].pagerdutyConfigs[].httpConfig.oauth2.proxyConnectHeader Description ProxyConnectHeader optionally specifies headers to send to proxies during CONNECT requests. It requires Prometheus >= v2.43.0 or Alertmanager >= 0.25.0. Type object 3.1.156. .spec.receivers[].pagerdutyConfigs[].httpConfig.oauth2.proxyConnectHeader{} Description Type array 3.1.157. .spec.receivers[].pagerdutyConfigs[].httpConfig.oauth2.proxyConnectHeader{}[] Description SecretKeySelector selects a key of a Secret. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names optional boolean Specify whether the Secret or its key must be defined 3.1.158. .spec.receivers[].pagerdutyConfigs[].httpConfig.oauth2.tlsConfig Description TLS configuration to use when connecting to the OAuth2 server. It requires Prometheus >= v2.43.0. Type object Property Type Description ca object Certificate authority used when verifying server certificates. cert object Client certificate to present when doing client-authentication. insecureSkipVerify boolean Disable target certificate validation. keySecret object Secret containing the client key file for the targets. maxVersion string Maximum acceptable TLS version. It requires Prometheus >= v2.41.0. minVersion string Minimum acceptable TLS version. It requires Prometheus >= v2.35.0. serverName string Used to verify the hostname for the targets. 3.1.159. .spec.receivers[].pagerdutyConfigs[].httpConfig.oauth2.tlsConfig.ca Description Certificate authority used when verifying server certificates. Type object Property Type Description configMap object ConfigMap containing data to use for the targets. secret object Secret containing data to use for the targets. 3.1.160. .spec.receivers[].pagerdutyConfigs[].httpConfig.oauth2.tlsConfig.ca.configMap Description ConfigMap containing data to use for the targets. Type object Required key Property Type Description key string The key to select. name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names optional boolean Specify whether the ConfigMap or its key must be defined 3.1.161. .spec.receivers[].pagerdutyConfigs[].httpConfig.oauth2.tlsConfig.ca.secret Description Secret containing data to use for the targets. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names optional boolean Specify whether the Secret or its key must be defined 3.1.162. 
.spec.receivers[].pagerdutyConfigs[].httpConfig.oauth2.tlsConfig.cert Description Client certificate to present when doing client-authentication. Type object Property Type Description configMap object ConfigMap containing data to use for the targets. secret object Secret containing data to use for the targets. 3.1.163. .spec.receivers[].pagerdutyConfigs[].httpConfig.oauth2.tlsConfig.cert.configMap Description ConfigMap containing data to use for the targets. Type object Required key Property Type Description key string The key to select. name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names optional boolean Specify whether the ConfigMap or its key must be defined 3.1.164. .spec.receivers[].pagerdutyConfigs[].httpConfig.oauth2.tlsConfig.cert.secret Description Secret containing data to use for the targets. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names optional boolean Specify whether the Secret or its key must be defined 3.1.165. .spec.receivers[].pagerdutyConfigs[].httpConfig.oauth2.tlsConfig.keySecret Description Secret containing the client key file for the targets. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names optional boolean Specify whether the Secret or its key must be defined 3.1.166. .spec.receivers[].pagerdutyConfigs[].httpConfig.proxyConnectHeader Description ProxyConnectHeader optionally specifies headers to send to proxies during CONNECT requests. It requires Prometheus >= v2.43.0 or Alertmanager >= 0.25.0. Type object 3.1.167. .spec.receivers[].pagerdutyConfigs[].httpConfig.proxyConnectHeader{} Description Type array 3.1.168. .spec.receivers[].pagerdutyConfigs[].httpConfig.proxyConnectHeader{}[] Description SecretKeySelector selects a key of a Secret. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names optional boolean Specify whether the Secret or its key must be defined 3.1.169. .spec.receivers[].pagerdutyConfigs[].httpConfig.tlsConfig Description TLS configuration for the client. Type object Property Type Description ca object Certificate authority used when verifying server certificates. cert object Client certificate to present when doing client-authentication. insecureSkipVerify boolean Disable target certificate validation. 
keySecret object Secret containing the client key file for the targets. maxVersion string Maximum acceptable TLS version. It requires Prometheus >= v2.41.0. minVersion string Minimum acceptable TLS version. It requires Prometheus >= v2.35.0. serverName string Used to verify the hostname for the targets. 3.1.170. .spec.receivers[].pagerdutyConfigs[].httpConfig.tlsConfig.ca Description Certificate authority used when verifying server certificates. Type object Property Type Description configMap object ConfigMap containing data to use for the targets. secret object Secret containing data to use for the targets. 3.1.171. .spec.receivers[].pagerdutyConfigs[].httpConfig.tlsConfig.ca.configMap Description ConfigMap containing data to use for the targets. Type object Required key Property Type Description key string The key to select. name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names optional boolean Specify whether the ConfigMap or its key must be defined 3.1.172. .spec.receivers[].pagerdutyConfigs[].httpConfig.tlsConfig.ca.secret Description Secret containing data to use for the targets. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names optional boolean Specify whether the Secret or its key must be defined 3.1.173. .spec.receivers[].pagerdutyConfigs[].httpConfig.tlsConfig.cert Description Client certificate to present when doing client-authentication. Type object Property Type Description configMap object ConfigMap containing data to use for the targets. secret object Secret containing data to use for the targets. 3.1.174. .spec.receivers[].pagerdutyConfigs[].httpConfig.tlsConfig.cert.configMap Description ConfigMap containing data to use for the targets. Type object Required key Property Type Description key string The key to select. name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names optional boolean Specify whether the ConfigMap or its key must be defined 3.1.175. .spec.receivers[].pagerdutyConfigs[].httpConfig.tlsConfig.cert.secret Description Secret containing data to use for the targets. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names optional boolean Specify whether the Secret or its key must be defined 3.1.176. .spec.receivers[].pagerdutyConfigs[].httpConfig.tlsConfig.keySecret Description Secret containing the client key file for the targets. 
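Drawing the preceding PagerDuty properties together, a hypothetical pagerdutyConfigs receiver that uses an Events API v2 routing key and a client TLS key pair might be sketched as follows. The Secret, ConfigMap, and key names are placeholders:
apiVersion: monitoring.coreos.com/v1beta1   # adjust to the API version served by your cluster
kind: AlertmanagerConfig
metadata:
  name: pagerduty-routing        # placeholder name
  namespace: my-project
spec:
  route:
    receiver: pagerduty
  receivers:
  - name: pagerduty
    pagerdutyConfigs:
    - routingKey:
        name: pagerduty-secret   # Secret in the same namespace
        key: routing-key
      severity: critical
      details:
      - key: environment         # illustrative key/value detail
        value: production
      httpConfig:
        tlsConfig:
          ca:
            configMap:
              name: ca-bundle
              key: ca.crt
          cert:
            secret:
              name: pagerduty-client-tls
              key: tls.crt
          keySecret:
            name: pagerduty-client-tls
            key: tls.key
Either routingKey or serviceKey must be set, depending on whether the PagerDuty integration uses the Events API v2 or the Prometheus integration type.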
Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names optional boolean Specify whether the Secret or its key must be defined 3.1.177. .spec.receivers[].pagerdutyConfigs[].pagerDutyImageConfigs Description A list of image details to attach that provide further detail about an incident. Type array 3.1.178. .spec.receivers[].pagerdutyConfigs[].pagerDutyImageConfigs[] Description PagerDutyImageConfig attaches images to an incident Type object Property Type Description alt string Alt is the optional alternative text for the image. href string Optional URL; makes the image a clickable link. src string Src of the image being attached to the incident 3.1.179. .spec.receivers[].pagerdutyConfigs[].pagerDutyLinkConfigs Description A list of link details to attach that provide further detail about an incident. Type array 3.1.180. .spec.receivers[].pagerdutyConfigs[].pagerDutyLinkConfigs[] Description PagerDutyLinkConfig attaches text links to an incident Type object Property Type Description alt string Text that describes the purpose of the link, and can be used as the link's text. href string Href is the URL of the link to be attached 3.1.181. .spec.receivers[].pagerdutyConfigs[].routingKey Description The secret's key that contains the PagerDuty integration key (when using Events API v2). Either this field or serviceKey needs to be defined. The secret needs to be in the same namespace as the AlertmanagerConfig object and accessible by the Prometheus Operator. Type object Required key name Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string The name of the secret in the object's namespace to select from. 3.1.182. .spec.receivers[].pagerdutyConfigs[].serviceKey Description The secret's key that contains the PagerDuty service key (when using integration type "Prometheus"). Either this field or routingKey needs to be defined. The secret needs to be in the same namespace as the AlertmanagerConfig object and accessible by the Prometheus Operator. Type object Required key name Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string The name of the secret in the object's namespace to select from. 3.1.183. .spec.receivers[].pushoverConfigs Description List of Pushover configurations. Type array 3.1.184. .spec.receivers[].pushoverConfigs[] Description PushoverConfig configures notifications via Pushover. See https://prometheus.io/docs/alerting/latest/configuration/#pushover_config Type object Property Type Description device string The name of a device to send the notification to expire string How long your notification will continue to be retried for, unless the user acknowledges the notification. html boolean Whether notification message is HTML or plain text. httpConfig object HTTP client configuration. message string Notification message. priority string Priority, see https://pushover.net/api#priority retry string How often the Pushover servers will send the same notification to the user. Must be at least 30 seconds. sendResolved boolean Whether or not to notify about resolved alerts. 
sound string The name of one of the sounds supported by device clients to override the user's default sound choice. title string Notification title. token object The secret's key that contains the registered application's API token, see https://pushover.net/apps . The secret needs to be in the same namespace as the AlertmanagerConfig object and accessible by the Prometheus Operator. Either token or tokenFile is required. tokenFile string The token file that contains the registered application's API token, see https://pushover.net/apps . Either token or tokenFile is required. It requires Alertmanager >= v0.26.0. ttl string The time to live definition for the alert notification. url string A supplementary URL shown alongside the message. urlTitle string A title for the supplementary URL; otherwise, just the URL is shown. userKey object The secret's key that contains the recipient user's user key. The secret needs to be in the same namespace as the AlertmanagerConfig object and accessible by the Prometheus Operator. Either userKey or userKeyFile is required. userKeyFile string The user key file that contains the recipient user's user key. Either userKey or userKeyFile is required. It requires Alertmanager >= v0.26.0. 3.1.185. .spec.receivers[].pushoverConfigs[].httpConfig Description HTTP client configuration. Type object Property Type Description authorization object Authorization header configuration for the client. This is mutually exclusive with BasicAuth and is only available starting from Alertmanager v0.22+. basicAuth object BasicAuth for the client. This is mutually exclusive with Authorization. If both are defined, BasicAuth takes precedence. bearerTokenSecret object The secret's key that contains the bearer token to be used by the client for authentication. The secret needs to be in the same namespace as the AlertmanagerConfig object and accessible by the Prometheus Operator. followRedirects boolean FollowRedirects specifies whether the client should follow HTTP 3xx redirects. noProxy string noProxy is a comma-separated string that can contain IPs, CIDR notation, domain names that should be excluded from proxying. IP and domain names can contain port numbers. It requires Prometheus >= v2.43.0 or Alertmanager >= 0.25.0. oauth2 object OAuth2 client credentials used to fetch a token for the targets. proxyConnectHeader object ProxyConnectHeader optionally specifies headers to send to proxies during CONNECT requests. It requires Prometheus >= v2.43.0 or Alertmanager >= 0.25.0. proxyConnectHeader{} array proxyConnectHeader{}[] object SecretKeySelector selects a key of a Secret. proxyFromEnvironment boolean Whether to use the proxy configuration defined by environment variables (HTTP_PROXY, HTTPS_PROXY, and NO_PROXY). It requires Prometheus >= v2.43.0 or Alertmanager >= 0.25.0. proxyUrl string proxyURL defines the HTTP proxy server to use. tlsConfig object TLS configuration for the client. 3.1.186. .spec.receivers[].pushoverConfigs[].httpConfig.authorization Description Authorization header configuration for the client. This is mutually exclusive with BasicAuth and is only available starting from Alertmanager v0.22+. Type object Property Type Description credentials object Selects a key of a Secret in the namespace that contains the credentials for authentication. type string Defines the authentication type. The value is case-insensitive. "Basic" is not a supported value. Default: "Bearer"
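The following minimal sketch shows how the Pushover fields and the httpConfig.authorization selector described above might fit together in an AlertmanagerConfig manifest. The apiVersion, namespace, Secret names, and key names are illustrative placeholders, not values defined by this reference; check which AlertmanagerConfig API version your cluster serves:

apiVersion: monitoring.coreos.com/v1beta1
kind: AlertmanagerConfig
metadata:
  name: pushover-example
  namespace: example-namespace
spec:
  receivers:
  - name: pushover
    pushoverConfigs:
    - userKey:
        name: pushover-credentials   # hypothetical Secret in the same namespace
        key: user-key
      token:
        name: pushover-credentials
        key: api-token
      title: Cluster alert
      priority: "1"
      retry: 30s                     # must be at least 30 seconds
      expire: 1h
      httpConfig:
        authorization:
          type: Bearer               # default authentication type
          credentials:
            name: proxy-auth         # hypothetical Secret holding the bearer token
            key: token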
3.1.187. .spec.receivers[].pushoverConfigs[].httpConfig.authorization.credentials Description Selects a key of a Secret in the namespace that contains the credentials for authentication. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names optional boolean Specify whether the Secret or its key must be defined 3.1.188. .spec.receivers[].pushoverConfigs[].httpConfig.basicAuth Description BasicAuth for the client. This is mutually exclusive with Authorization. If both are defined, BasicAuth takes precedence. Type object Property Type Description password object password specifies a key of a Secret containing the password for authentication. username object username specifies a key of a Secret containing the username for authentication. 3.1.189. .spec.receivers[].pushoverConfigs[].httpConfig.basicAuth.password Description password specifies a key of a Secret containing the password for authentication. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names optional boolean Specify whether the Secret or its key must be defined 3.1.190. .spec.receivers[].pushoverConfigs[].httpConfig.basicAuth.username Description username specifies a key of a Secret containing the username for authentication. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names optional boolean Specify whether the Secret or its key must be defined 3.1.191. .spec.receivers[].pushoverConfigs[].httpConfig.bearerTokenSecret Description The secret's key that contains the bearer token to be used by the client for authentication. The secret needs to be in the same namespace as the AlertmanagerConfig object and accessible by the Prometheus Operator. Type object Required key name Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string The name of the secret in the object's namespace to select from. 3.1.192. .spec.receivers[].pushoverConfigs[].httpConfig.oauth2 Description OAuth2 client credentials used to fetch a token for the targets. Type object Required clientId clientSecret tokenUrl Property Type Description clientId object clientId specifies a key of a Secret or ConfigMap containing the OAuth2 client's ID. clientSecret object clientSecret specifies a key of a Secret containing the OAuth2 client's secret. endpointParams object (string) endpointParams configures the HTTP parameters to append to the token URL.
noProxy string noProxy is a comma-separated string that can contain IPs, CIDR notation, domain names that should be excluded from proxying. IP and domain names can contain port numbers. It requires Prometheus >= v2.43.0 or Alertmanager >= 0.25.0. proxyConnectHeader object ProxyConnectHeader optionally specifies headers to send to proxies during CONNECT requests. It requires Prometheus >= v2.43.0 or Alertmanager >= 0.25.0. proxyConnectHeader{} array proxyConnectHeader{}[] object SecretKeySelector selects a key of a Secret. proxyFromEnvironment boolean Whether to use the proxy configuration defined by environment variables (HTTP_PROXY, HTTPS_PROXY, and NO_PROXY). It requires Prometheus >= v2.43.0 or Alertmanager >= 0.25.0. proxyUrl string proxyURL defines the HTTP proxy server to use. scopes array (string) scopes defines the OAuth2 scopes used for the token request. tlsConfig object TLS configuration to use when connecting to the OAuth2 server. It requires Prometheus >= v2.43.0. tokenUrl string tokenURL configures the URL to fetch the token from. 3.1.193. .spec.receivers[].pushoverConfigs[].httpConfig.oauth2.clientId Description clientId specifies a key of a Secret or ConfigMap containing the OAuth2 client's ID. Type object Property Type Description configMap object ConfigMap containing data to use for the targets. secret object Secret containing data to use for the targets. 3.1.194. .spec.receivers[].pushoverConfigs[].httpConfig.oauth2.clientId.configMap Description ConfigMap containing data to use for the targets. Type object Required key Property Type Description key string The key to select. name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names optional boolean Specify whether the ConfigMap or its key must be defined 3.1.195. .spec.receivers[].pushoverConfigs[].httpConfig.oauth2.clientId.secret Description Secret containing data to use for the targets. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names optional boolean Specify whether the Secret or its key must be defined 3.1.196. .spec.receivers[].pushoverConfigs[].httpConfig.oauth2.clientSecret Description clientSecret specifies a key of a Secret containing the OAuth2 client's secret. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names optional boolean Specify whether the Secret or its key must be defined 3.1.197. .spec.receivers[].pushoverConfigs[].httpConfig.oauth2.proxyConnectHeader Description ProxyConnectHeader optionally specifies headers to send to proxies during CONNECT requests. It requires Prometheus >= v2.43.0 or Alertmanager >= 0.25.0. Type object
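The oauth2 block described in sections 3.1.192 through 3.1.197 nests under httpConfig for any receiver type. The following is a minimal sketch of the required fields only; the Secret name, key names, and token endpoint URL are hypothetical placeholders and the block would sit under a pushoverConfigs entry such as the one shown earlier:

httpConfig:
  oauth2:
    clientId:
      secret:
        name: oauth-client          # hypothetical Secret with the client ID
        key: client-id
    clientSecret:
      name: oauth-client            # same hypothetical Secret, different key
      key: client-secret
    tokenUrl: https://sso.example.com/oauth2/token   # placeholder token endpoint
    scopes:
    - alerts.write
    endpointParams:
      audience: alertmanager        # optional extra parameter appended to the token URL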
3.1.198. .spec.receivers[].pushoverConfigs[].httpConfig.oauth2.proxyConnectHeader{} Description Type array 3.1.199. .spec.receivers[].pushoverConfigs[].httpConfig.oauth2.proxyConnectHeader{}[] Description SecretKeySelector selects a key of a Secret. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names optional boolean Specify whether the Secret or its key must be defined 3.1.200. .spec.receivers[].pushoverConfigs[].httpConfig.oauth2.tlsConfig Description TLS configuration to use when connecting to the OAuth2 server. It requires Prometheus >= v2.43.0. Type object Property Type Description ca object Certificate authority used when verifying server certificates. cert object Client certificate to present when doing client-authentication. insecureSkipVerify boolean Disable target certificate validation. keySecret object Secret containing the client key file for the targets. maxVersion string Maximum acceptable TLS version. It requires Prometheus >= v2.41.0. minVersion string Minimum acceptable TLS version. It requires Prometheus >= v2.35.0. serverName string Used to verify the hostname for the targets. 3.1.201. .spec.receivers[].pushoverConfigs[].httpConfig.oauth2.tlsConfig.ca Description Certificate authority used when verifying server certificates. Type object Property Type Description configMap object ConfigMap containing data to use for the targets. secret object Secret containing data to use for the targets. 3.1.202. .spec.receivers[].pushoverConfigs[].httpConfig.oauth2.tlsConfig.ca.configMap Description ConfigMap containing data to use for the targets. Type object Required key Property Type Description key string The key to select. name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names optional boolean Specify whether the ConfigMap or its key must be defined 3.1.203. .spec.receivers[].pushoverConfigs[].httpConfig.oauth2.tlsConfig.ca.secret Description Secret containing data to use for the targets. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names optional boolean Specify whether the Secret or its key must be defined 3.1.204. .spec.receivers[].pushoverConfigs[].httpConfig.oauth2.tlsConfig.cert Description Client certificate to present when doing client-authentication. Type object Property Type Description configMap object ConfigMap containing data to use for the targets. secret object Secret containing data to use for the targets. 3.1.205. .spec.receivers[].pushoverConfigs[].httpConfig.oauth2.tlsConfig.cert.configMap Description ConfigMap containing data to use for the targets.
Type object Required key Property Type Description key string The key to select. name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names optional boolean Specify whether the ConfigMap or its key must be defined 3.1.206. .spec.receivers[].pushoverConfigs[].httpConfig.oauth2.tlsConfig.cert.secret Description Secret containing data to use for the targets. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names optional boolean Specify whether the Secret or its key must be defined 3.1.207. .spec.receivers[].pushoverConfigs[].httpConfig.oauth2.tlsConfig.keySecret Description Secret containing the client key file for the targets. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names optional boolean Specify whether the Secret or its key must be defined 3.1.208. .spec.receivers[].pushoverConfigs[].httpConfig.proxyConnectHeader Description ProxyConnectHeader optionally specifies headers to send to proxies during CONNECT requests. It requires Prometheus >= v2.43.0 or Alertmanager >= 0.25.0. Type object 3.1.209. .spec.receivers[].pushoverConfigs[].httpConfig.proxyConnectHeader{} Description Type array 3.1.210. .spec.receivers[].pushoverConfigs[].httpConfig.proxyConnectHeader{}[] Description SecretKeySelector selects a key of a Secret. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names optional boolean Specify whether the Secret or its key must be defined 3.1.211. .spec.receivers[].pushoverConfigs[].httpConfig.tlsConfig Description TLS configuration for the client. Type object Property Type Description ca object Certificate authority used when verifying server certificates. cert object Client certificate to present when doing client-authentication. insecureSkipVerify boolean Disable target certificate validation. keySecret object Secret containing the client key file for the targets. maxVersion string Maximum acceptable TLS version. It requires Prometheus >= v2.41.0. minVersion string Minimum acceptable TLS version. It requires Prometheus >= v2.35.0. serverName string Used to verify the hostname for the targets. 3.1.212. .spec.receivers[].pushoverConfigs[].httpConfig.tlsConfig.ca Description Certificate authority used when verifying server certificates. 
Type object Property Type Description configMap object ConfigMap containing data to use for the targets. secret object Secret containing data to use for the targets. 3.1.213. .spec.receivers[].pushoverConfigs[].httpConfig.tlsConfig.ca.configMap Description ConfigMap containing data to use for the targets. Type object Required key Property Type Description key string The key to select. name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names optional boolean Specify whether the ConfigMap or its key must be defined 3.1.214. .spec.receivers[].pushoverConfigs[].httpConfig.tlsConfig.ca.secret Description Secret containing data to use for the targets. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names optional boolean Specify whether the Secret or its key must be defined 3.1.215. .spec.receivers[].pushoverConfigs[].httpConfig.tlsConfig.cert Description Client certificate to present when doing client-authentication. Type object Property Type Description configMap object ConfigMap containing data to use for the targets. secret object Secret containing data to use for the targets. 3.1.216. .spec.receivers[].pushoverConfigs[].httpConfig.tlsConfig.cert.configMap Description ConfigMap containing data to use for the targets. Type object Required key Property Type Description key string The key to select. name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names optional boolean Specify whether the ConfigMap or its key must be defined 3.1.217. .spec.receivers[].pushoverConfigs[].httpConfig.tlsConfig.cert.secret Description Secret containing data to use for the targets. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names optional boolean Specify whether the Secret or its key must be defined 3.1.218. .spec.receivers[].pushoverConfigs[].httpConfig.tlsConfig.keySecret Description Secret containing the client key file for the targets. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. 
More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names optional boolean Specify whether the Secret or its key must be defined 3.1.219. .spec.receivers[].pushoverConfigs[].token Description The secret's key that contains the registered application's API token, see https://pushover.net/apps . The secret needs to be in the same namespace as the AlertmanagerConfig object and accessible by the Prometheus Operator. Either token or tokenFile is required. Type object Required key name Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string The name of the secret in the object's namespace to select from. 3.1.220. .spec.receivers[].pushoverConfigs[].userKey Description The secret's key that contains the recipient user's user key. The secret needs to be in the same namespace as the AlertmanagerConfig object and accessible by the Prometheus Operator. Either userKey or userKeyFile is required. Type object Required key name Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string The name of the secret in the object's namespace to select from. 3.1.221. .spec.receivers[].slackConfigs Description List of Slack configurations. Type array 3.1.222. .spec.receivers[].slackConfigs[] Description SlackConfig configures notifications via Slack. See https://prometheus.io/docs/alerting/latest/configuration/#slack_config Type object Property Type Description actions array A list of Slack actions that are sent with each notification. actions[] object SlackAction configures a single Slack action that is sent with each notification. See https://api.slack.com/docs/message-attachments#action_fields and https://api.slack.com/docs/message-buttons for more information. apiURL object The secret's key that contains the Slack webhook URL. The secret needs to be in the same namespace as the AlertmanagerConfig object and accessible by the Prometheus Operator. callbackId string channel string The channel or user to send notifications to. color string fallback string fields array A list of Slack fields that are sent with each notification. fields[] object SlackField configures a single Slack field that is sent with each notification. Each field must contain a title, value, and optionally, a boolean value to indicate if the field is short enough to be displayed to other fields designated as short. See https://api.slack.com/docs/message-attachments#fields for more information. footer string httpConfig object HTTP client configuration. iconEmoji string iconURL string imageURL string linkNames boolean mrkdwnIn array (string) pretext string sendResolved boolean Whether or not to notify about resolved alerts. shortFields boolean text string thumbURL string title string titleLink string username string 3.1.223. .spec.receivers[].slackConfigs[].actions Description A list of Slack actions that are sent with each notification. Type array 3.1.224. .spec.receivers[].slackConfigs[].actions[] Description SlackAction configures a single Slack action that is sent with each notification. See https://api.slack.com/docs/message-attachments#action_fields and https://api.slack.com/docs/message-buttons for more information. Type object Required text type Property Type Description confirm object SlackConfirmationField protect users from destructive actions or particularly distinguished decisions by asking them to confirm their button click one more time. 
See https://api.slack.com/docs/interactive-message-field-guide#confirmation_fields for more information. name string style string text string type string url string value string 3.1.225. .spec.receivers[].slackConfigs[].actions[].confirm Description SlackConfirmationField protect users from destructive actions or particularly distinguished decisions by asking them to confirm their button click one more time. See https://api.slack.com/docs/interactive-message-field-guide#confirmation_fields for more information. Type object Required text Property Type Description dismissText string okText string text string title string 3.1.226. .spec.receivers[].slackConfigs[].apiURL Description The secret's key that contains the Slack webhook URL. The secret needs to be in the same namespace as the AlertmanagerConfig object and accessible by the Prometheus Operator. Type object Required key name Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string The name of the secret in the object's namespace to select from. 3.1.227. .spec.receivers[].slackConfigs[].fields Description A list of Slack fields that are sent with each notification. Type array 3.1.228. .spec.receivers[].slackConfigs[].fields[] Description SlackField configures a single Slack field that is sent with each notification. Each field must contain a title, value, and optionally, a boolean value to indicate if the field is short enough to be displayed to other fields designated as short. See https://api.slack.com/docs/message-attachments#fields for more information. Type object Required title value Property Type Description short boolean title string value string 3.1.229. .spec.receivers[].slackConfigs[].httpConfig Description HTTP client configuration. Type object Property Type Description authorization object Authorization header configuration for the client. This is mutually exclusive with BasicAuth and is only available starting from Alertmanager v0.22+. basicAuth object BasicAuth for the client. This is mutually exclusive with Authorization. If both are defined, BasicAuth takes precedence. bearerTokenSecret object The secret's key that contains the bearer token to be used by the client for authentication. The secret needs to be in the same namespace as the AlertmanagerConfig object and accessible by the Prometheus Operator. followRedirects boolean FollowRedirects specifies whether the client should follow HTTP 3xx redirects. noProxy string noProxy is a comma-separated string that can contain IPs, CIDR notation, domain names that should be excluded from proxying. IP and domain names can contain port numbers. It requires Prometheus >= v2.43.0 or Alertmanager >= 0.25.0. oauth2 object OAuth2 client credentials used to fetch a token for the targets. proxyConnectHeader object ProxyConnectHeader optionally specifies headers to send to proxies during CONNECT requests. It requires Prometheus >= v2.43.0 or Alertmanager >= 0.25.0. proxyConnectHeader{} array proxyConnectHeader{}[] object SecretKeySelector selects a key of a Secret. proxyFromEnvironment boolean Whether to use the proxy configuration defined by environment variables (HTTP_PROXY, HTTPS_PROXY, and NO_PROXY). It requires Prometheus >= v2.43.0 or Alertmanager >= 0.25.0. proxyUrl string proxyURL defines the HTTP proxy server to use. tlsConfig object TLS configuration for the client. 3.1.230. .spec.receivers[].slackConfigs[].httpConfig.authorization Description Authorization header configuration for the client. 
This is mutually exclusive with BasicAuth and is only available starting from Alertmanager v0.22+. Type object Property Type Description credentials object Selects a key of a Secret in the namespace that contains the credentials for authentication. type string Defines the authentication type. The value is case-insensitive. "Basic" is not a supported value. Default: "Bearer" 3.1.231. .spec.receivers[].slackConfigs[].httpConfig.authorization.credentials Description Selects a key of a Secret in the namespace that contains the credentials for authentication. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names optional boolean Specify whether the Secret or its key must be defined 3.1.232. .spec.receivers[].slackConfigs[].httpConfig.basicAuth Description BasicAuth for the client. This is mutually exclusive with Authorization. If both are defined, BasicAuth takes precedence. Type object Property Type Description password object password specifies a key of a Secret containing the password for authentication. username object username specifies a key of a Secret containing the username for authentication. 3.1.233. .spec.receivers[].slackConfigs[].httpConfig.basicAuth.password Description password specifies a key of a Secret containing the password for authentication. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names optional boolean Specify whether the Secret or its key must be defined 3.1.234. .spec.receivers[].slackConfigs[].httpConfig.basicAuth.username Description username specifies a key of a Secret containing the username for authentication. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names optional boolean Specify whether the Secret or its key must be defined 3.1.235. .spec.receivers[].slackConfigs[].httpConfig.bearerTokenSecret Description The secret's key that contains the bearer token to be used by the client for authentication. The secret needs to be in the same namespace as the AlertmanagerConfig object and accessible by the Prometheus Operator. Type object Required key name Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string The name of the secret in the object's namespace to select from. 3.1.236. .spec.receivers[].slackConfigs[].httpConfig.oauth2 Description OAuth2 client credentials used to fetch a token for the targets. 
Type object Required clientId clientSecret tokenUrl Property Type Description clientId object clientId specifies a key of a Secret or ConfigMap containing the OAuth2 client's ID. clientSecret object clientSecret specifies a key of a Secret containing the OAuth2 client's secret. endpointParams object (string) endpointParams configures the HTTP parameters to append to the token URL. noProxy string noProxy is a comma-separated string that can contain IPs, CIDR notation, domain names that should be excluded from proxying. IP and domain names can contain port numbers. It requires Prometheus >= v2.43.0 or Alertmanager >= 0.25.0. proxyConnectHeader object ProxyConnectHeader optionally specifies headers to send to proxies during CONNECT requests. It requires Prometheus >= v2.43.0 or Alertmanager >= 0.25.0. proxyConnectHeader{} array proxyConnectHeader{}[] object SecretKeySelector selects a key of a Secret. proxyFromEnvironment boolean Whether to use the proxy configuration defined by environment variables (HTTP_PROXY, HTTPS_PROXY, and NO_PROXY). It requires Prometheus >= v2.43.0 or Alertmanager >= 0.25.0. proxyUrl string proxyURL defines the HTTP proxy server to use. scopes array (string) scopes defines the OAuth2 scopes used for the token request. tlsConfig object TLS configuration to use when connecting to the OAuth2 server. It requires Prometheus >= v2.43.0. tokenUrl string tokenURL configures the URL to fetch the token from. 3.1.237. .spec.receivers[].slackConfigs[].httpConfig.oauth2.clientId Description clientId specifies a key of a Secret or ConfigMap containing the OAuth2 client's ID. Type object Property Type Description configMap object ConfigMap containing data to use for the targets. secret object Secret containing data to use for the targets. 3.1.238. .spec.receivers[].slackConfigs[].httpConfig.oauth2.clientId.configMap Description ConfigMap containing data to use for the targets. Type object Required key Property Type Description key string The key to select. name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names optional boolean Specify whether the ConfigMap or its key must be defined 3.1.239. .spec.receivers[].slackConfigs[].httpConfig.oauth2.clientId.secret Description Secret containing data to use for the targets. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names optional boolean Specify whether the Secret or its key must be defined 3.1.240. .spec.receivers[].slackConfigs[].httpConfig.oauth2.clientSecret Description clientSecret specifies a key of a Secret containing the OAuth2 client's secret. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. 
More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names optional boolean Specify whether the Secret or its key must be defined 3.1.241. .spec.receivers[].slackConfigs[].httpConfig.oauth2.proxyConnectHeader Description ProxyConnectHeader optionally specifies headers to send to proxies during CONNECT requests. It requires Prometheus >= v2.43.0 or Alertmanager >= 0.25.0. Type object 3.1.242. .spec.receivers[].slackConfigs[].httpConfig.oauth2.proxyConnectHeader{} Description Type array 3.1.243. .spec.receivers[].slackConfigs[].httpConfig.oauth2.proxyConnectHeader{}[] Description SecretKeySelector selects a key of a Secret. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names optional boolean Specify whether the Secret or its key must be defined 3.1.244. .spec.receivers[].slackConfigs[].httpConfig.oauth2.tlsConfig Description TLS configuration to use when connecting to the OAuth2 server. It requires Prometheus >= v2.43.0. Type object Property Type Description ca object Certificate authority used when verifying server certificates. cert object Client certificate to present when doing client-authentication. insecureSkipVerify boolean Disable target certificate validation. keySecret object Secret containing the client key file for the targets. maxVersion string Maximum acceptable TLS version. It requires Prometheus >= v2.41.0. minVersion string Minimum acceptable TLS version. It requires Prometheus >= v2.35.0. serverName string Used to verify the hostname for the targets. 3.1.245. .spec.receivers[].slackConfigs[].httpConfig.oauth2.tlsConfig.ca Description Certificate authority used when verifying server certificates. Type object Property Type Description configMap object ConfigMap containing data to use for the targets. secret object Secret containing data to use for the targets. 3.1.246. .spec.receivers[].slackConfigs[].httpConfig.oauth2.tlsConfig.ca.configMap Description ConfigMap containing data to use for the targets. Type object Required key Property Type Description key string The key to select. name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names optional boolean Specify whether the ConfigMap or its key must be defined 3.1.247. .spec.receivers[].slackConfigs[].httpConfig.oauth2.tlsConfig.ca.secret Description Secret containing data to use for the targets. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names optional boolean Specify whether the Secret or its key must be defined
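The ca, cert, and keySecret selectors shown in the preceding subsections have the same shape wherever a tlsConfig block appears, including the httpConfig.tlsConfig described later in this reference. The following minimal sketch of a slackConfigs entry is illustrative only; the apiVersion, Secret and ConfigMap names, key names, and the minVersion value are assumptions, not values defined by this reference:

apiVersion: monitoring.coreos.com/v1beta1
kind: AlertmanagerConfig
metadata:
  name: slack-example
  namespace: example-namespace
spec:
  receivers:
  - name: slack
    slackConfigs:
    - apiURL:
        name: slack-webhook              # hypothetical Secret holding the webhook URL
        key: url
      channel: '#alerts'
      sendResolved: true
      httpConfig:
        tlsConfig:
          ca:
            configMap:
              name: corporate-proxy-ca   # hypothetical ConfigMap with the CA bundle
              key: ca.crt
          cert:
            secret:
              name: client-tls           # hypothetical Secret with the client certificate
              key: tls.crt
          keySecret:
            name: client-tls
            key: tls.key
          minVersion: TLS12              # assumed format for the minimum TLS version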
3.1.248. .spec.receivers[].slackConfigs[].httpConfig.oauth2.tlsConfig.cert Description Client certificate to present when doing client-authentication. Type object Property Type Description configMap object ConfigMap containing data to use for the targets. secret object Secret containing data to use for the targets. 3.1.249. .spec.receivers[].slackConfigs[].httpConfig.oauth2.tlsConfig.cert.configMap Description ConfigMap containing data to use for the targets. Type object Required key Property Type Description key string The key to select. name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names optional boolean Specify whether the ConfigMap or its key must be defined 3.1.250. .spec.receivers[].slackConfigs[].httpConfig.oauth2.tlsConfig.cert.secret Description Secret containing data to use for the targets. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names optional boolean Specify whether the Secret or its key must be defined 3.1.251. .spec.receivers[].slackConfigs[].httpConfig.oauth2.tlsConfig.keySecret Description Secret containing the client key file for the targets. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names optional boolean Specify whether the Secret or its key must be defined 3.1.252. .spec.receivers[].slackConfigs[].httpConfig.proxyConnectHeader Description ProxyConnectHeader optionally specifies headers to send to proxies during CONNECT requests. It requires Prometheus >= v2.43.0 or Alertmanager >= 0.25.0. Type object 3.1.253. .spec.receivers[].slackConfigs[].httpConfig.proxyConnectHeader{} Description Type array 3.1.254. .spec.receivers[].slackConfigs[].httpConfig.proxyConnectHeader{}[] Description SecretKeySelector selects a key of a Secret. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names optional boolean Specify whether the Secret or its key must be defined 3.1.255. .spec.receivers[].slackConfigs[].httpConfig.tlsConfig Description TLS configuration for the client. Type object Property Type Description ca object Certificate authority used when verifying server certificates. cert object Client certificate to present when doing client-authentication. insecureSkipVerify boolean Disable target certificate validation.
keySecret object Secret containing the client key file for the targets. maxVersion string Maximum acceptable TLS version. It requires Prometheus >= v2.41.0. minVersion string Minimum acceptable TLS version. It requires Prometheus >= v2.35.0. serverName string Used to verify the hostname for the targets. 3.1.256. .spec.receivers[].slackConfigs[].httpConfig.tlsConfig.ca Description Certificate authority used when verifying server certificates. Type object Property Type Description configMap object ConfigMap containing data to use for the targets. secret object Secret containing data to use for the targets. 3.1.257. .spec.receivers[].slackConfigs[].httpConfig.tlsConfig.ca.configMap Description ConfigMap containing data to use for the targets. Type object Required key Property Type Description key string The key to select. name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names optional boolean Specify whether the ConfigMap or its key must be defined 3.1.258. .spec.receivers[].slackConfigs[].httpConfig.tlsConfig.ca.secret Description Secret containing data to use for the targets. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names optional boolean Specify whether the Secret or its key must be defined 3.1.259. .spec.receivers[].slackConfigs[].httpConfig.tlsConfig.cert Description Client certificate to present when doing client-authentication. Type object Property Type Description configMap object ConfigMap containing data to use for the targets. secret object Secret containing data to use for the targets. 3.1.260. .spec.receivers[].slackConfigs[].httpConfig.tlsConfig.cert.configMap Description ConfigMap containing data to use for the targets. Type object Required key Property Type Description key string The key to select. name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names optional boolean Specify whether the ConfigMap or its key must be defined 3.1.261. .spec.receivers[].slackConfigs[].httpConfig.tlsConfig.cert.secret Description Secret containing data to use for the targets. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names optional boolean Specify whether the Secret or its key must be defined 3.1.262. .spec.receivers[].slackConfigs[].httpConfig.tlsConfig.keySecret Description Secret containing the client key file for the targets. 
Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names optional boolean Specify whether the Secret or its key must be defined 3.1.263. .spec.receivers[].snsConfigs Description List of SNS configurations Type array 3.1.264. .spec.receivers[].snsConfigs[] Description SNSConfig configures notifications via AWS SNS. See https://prometheus.io/docs/alerting/latest/configuration/#sns_configs Type object Property Type Description apiURL string The SNS API URL i.e. https://sns.us-east-2.amazonaws.com . If not specified, the SNS API URL from the SNS SDK will be used. attributes object (string) SNS message attributes. httpConfig object HTTP client configuration. message string The message content of the SNS notification. phoneNumber string Phone number if message is delivered via SMS in E.164 format. If you don't specify this value, you must specify a value for the TopicARN or TargetARN. sendResolved boolean Whether or not to notify about resolved alerts. sigv4 object Configures AWS's Signature Verification 4 signing process to sign requests. subject string Subject line when the message is delivered to email endpoints. targetARN string The mobile platform endpoint ARN if message is delivered via mobile notifications. If you don't specify this value, you must specify a value for the topic_arn or PhoneNumber. topicARN string SNS topic ARN, i.e. arn:aws:sns:us-east-2:698519295917:My-Topic If you don't specify this value, you must specify a value for the PhoneNumber or TargetARN. 3.1.265. .spec.receivers[].snsConfigs[].httpConfig Description HTTP client configuration. Type object Property Type Description authorization object Authorization header configuration for the client. This is mutually exclusive with BasicAuth and is only available starting from Alertmanager v0.22+. basicAuth object BasicAuth for the client. This is mutually exclusive with Authorization. If both are defined, BasicAuth takes precedence. bearerTokenSecret object The secret's key that contains the bearer token to be used by the client for authentication. The secret needs to be in the same namespace as the AlertmanagerConfig object and accessible by the Prometheus Operator. followRedirects boolean FollowRedirects specifies whether the client should follow HTTP 3xx redirects. noProxy string noProxy is a comma-separated string that can contain IPs, CIDR notation, domain names that should be excluded from proxying. IP and domain names can contain port numbers. It requires Prometheus >= v2.43.0 or Alertmanager >= 0.25.0. oauth2 object OAuth2 client credentials used to fetch a token for the targets. proxyConnectHeader object ProxyConnectHeader optionally specifies headers to send to proxies during CONNECT requests. It requires Prometheus >= v2.43.0 or Alertmanager >= 0.25.0. proxyConnectHeader{} array proxyConnectHeader{}[] object SecretKeySelector selects a key of a Secret. proxyFromEnvironment boolean Whether to use the proxy configuration defined by environment variables (HTTP_PROXY, HTTPS_PROXY, and NO_PROXY). It requires Prometheus >= v2.43.0 or Alertmanager >= 0.25.0. proxyUrl string proxyURL defines the HTTP proxy server to use. 
tlsConfig object TLS configuration for the client. 3.1.266. .spec.receivers[].snsConfigs[].httpConfig.authorization Description Authorization header configuration for the client. This is mutually exclusive with BasicAuth and is only available starting from Alertmanager v0.22+. Type object Property Type Description credentials object Selects a key of a Secret in the namespace that contains the credentials for authentication. type string Defines the authentication type. The value is case-insensitive. "Basic" is not a supported value. Default: "Bearer" 3.1.267. .spec.receivers[].snsConfigs[].httpConfig.authorization.credentials Description Selects a key of a Secret in the namespace that contains the credentials for authentication. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names optional boolean Specify whether the Secret or its key must be defined 3.1.268. .spec.receivers[].snsConfigs[].httpConfig.basicAuth Description BasicAuth for the client. This is mutually exclusive with Authorization. If both are defined, BasicAuth takes precedence. Type object Property Type Description password object password specifies a key of a Secret containing the password for authentication. username object username specifies a key of a Secret containing the username for authentication. 3.1.269. .spec.receivers[].snsConfigs[].httpConfig.basicAuth.password Description password specifies a key of a Secret containing the password for authentication. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names optional boolean Specify whether the Secret or its key must be defined 3.1.270. .spec.receivers[].snsConfigs[].httpConfig.basicAuth.username Description username specifies a key of a Secret containing the username for authentication. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names optional boolean Specify whether the Secret or its key must be defined 3.1.271. .spec.receivers[].snsConfigs[].httpConfig.bearerTokenSecret Description The secret's key that contains the bearer token to be used by the client for authentication. The secret needs to be in the same namespace as the AlertmanagerConfig object and accessible by the Prometheus Operator. Type object Required key name Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string The name of the secret in the object's namespace to select from.
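The following minimal sketch shows an snsConfigs entry using fields described in section 3.1.264. The apiVersion, namespace, topic ARN, and attribute values are illustrative placeholders only:

apiVersion: monitoring.coreos.com/v1beta1
kind: AlertmanagerConfig
metadata:
  name: sns-example
  namespace: example-namespace
spec:
  receivers:
  - name: sns
    snsConfigs:
    - apiURL: https://sns.us-east-2.amazonaws.com
      topicARN: arn:aws:sns:us-east-2:123456789012:example-topic   # placeholder topic ARN
      subject: Alertmanager notification
      sendResolved: true
      attributes:
        severity: critical          # example SNS message attribute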
3.1.272. .spec.receivers[].snsConfigs[].httpConfig.oauth2 Description OAuth2 client credentials used to fetch a token for the targets. Type object Required clientId clientSecret tokenUrl Property Type Description clientId object clientId specifies a key of a Secret or ConfigMap containing the OAuth2 client's ID. clientSecret object clientSecret specifies a key of a Secret containing the OAuth2 client's secret. endpointParams object (string) endpointParams configures the HTTP parameters to append to the token URL. noProxy string noProxy is a comma-separated string that can contain IPs, CIDR notation, domain names that should be excluded from proxying. IP and domain names can contain port numbers. It requires Prometheus >= v2.43.0 or Alertmanager >= 0.25.0. proxyConnectHeader object ProxyConnectHeader optionally specifies headers to send to proxies during CONNECT requests. It requires Prometheus >= v2.43.0 or Alertmanager >= 0.25.0. proxyConnectHeader{} array proxyConnectHeader{}[] object SecretKeySelector selects a key of a Secret. proxyFromEnvironment boolean Whether to use the proxy configuration defined by environment variables (HTTP_PROXY, HTTPS_PROXY, and NO_PROXY). It requires Prometheus >= v2.43.0 or Alertmanager >= 0.25.0. proxyUrl string proxyURL defines the HTTP proxy server to use. scopes array (string) scopes defines the OAuth2 scopes used for the token request. tlsConfig object TLS configuration to use when connecting to the OAuth2 server. It requires Prometheus >= v2.43.0. tokenUrl string tokenURL configures the URL to fetch the token from. 3.1.273. .spec.receivers[].snsConfigs[].httpConfig.oauth2.clientId Description clientId specifies a key of a Secret or ConfigMap containing the OAuth2 client's ID. Type object Property Type Description configMap object ConfigMap containing data to use for the targets. secret object Secret containing data to use for the targets. 3.1.274. .spec.receivers[].snsConfigs[].httpConfig.oauth2.clientId.configMap Description ConfigMap containing data to use for the targets. Type object Required key Property Type Description key string The key to select. name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names optional boolean Specify whether the ConfigMap or its key must be defined 3.1.275. .spec.receivers[].snsConfigs[].httpConfig.oauth2.clientId.secret Description Secret containing data to use for the targets. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names optional boolean Specify whether the Secret or its key must be defined 3.1.276. .spec.receivers[].snsConfigs[].httpConfig.oauth2.clientSecret Description clientSecret specifies a key of a Secret containing the OAuth2 client's secret. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty.
Instances of this type with an empty value here are almost certainly wrong. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names optional boolean Specify whether the Secret or its key must be defined 3.1.277. .spec.receivers[].snsConfigs[].httpConfig.oauth2.proxyConnectHeader Description ProxyConnectHeader optionally specifies headers to send to proxies during CONNECT requests. It requires Prometheus >= v2.43.0 or Alertmanager >= 0.25.0. Type object 3.1.278. .spec.receivers[].snsConfigs[].httpConfig.oauth2.proxyConnectHeader{} Description Type array 3.1.279. .spec.receivers[].snsConfigs[].httpConfig.oauth2.proxyConnectHeader{}[] Description SecretKeySelector selects a key of a Secret. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names optional boolean Specify whether the Secret or its key must be defined 3.1.280. .spec.receivers[].snsConfigs[].httpConfig.oauth2.tlsConfig Description TLS configuration to use when connecting to the OAuth2 server. It requires Prometheus >= v2.43.0. Type object Property Type Description ca object Certificate authority used when verifying server certificates. cert object Client certificate to present when doing client-authentication. insecureSkipVerify boolean Disable target certificate validation. keySecret object Secret containing the client key file for the targets. maxVersion string Maximum acceptable TLS version. It requires Prometheus >= v2.41.0. minVersion string Minimum acceptable TLS version. It requires Prometheus >= v2.35.0. serverName string Used to verify the hostname for the targets. 3.1.281. .spec.receivers[].snsConfigs[].httpConfig.oauth2.tlsConfig.ca Description Certificate authority used when verifying server certificates. Type object Property Type Description configMap object ConfigMap containing data to use for the targets. secret object Secret containing data to use for the targets. 3.1.282. .spec.receivers[].snsConfigs[].httpConfig.oauth2.tlsConfig.ca.configMap Description ConfigMap containing data to use for the targets. Type object Required key Property Type Description key string The key to select. name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names optional boolean Specify whether the ConfigMap or its key must be defined 3.1.283. .spec.receivers[].snsConfigs[].httpConfig.oauth2.tlsConfig.ca.secret Description Secret containing data to use for the targets. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names optional boolean Specify whether the Secret or its key must be defined 3.1.284. 
.spec.receivers[].snsConfigs[].httpConfig.oauth2.tlsConfig.cert Description Client certificate to present when doing client-authentication. Type object Property Type Description configMap object ConfigMap containing data to use for the targets. secret object Secret containing data to use for the targets. 3.1.285. .spec.receivers[].snsConfigs[].httpConfig.oauth2.tlsConfig.cert.configMap Description ConfigMap containing data to use for the targets. Type object Required key Property Type Description key string The key to select. name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names optional boolean Specify whether the ConfigMap or its key must be defined 3.1.286. .spec.receivers[].snsConfigs[].httpConfig.oauth2.tlsConfig.cert.secret Description Secret containing data to use for the targets. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names optional boolean Specify whether the Secret or its key must be defined 3.1.287. .spec.receivers[].snsConfigs[].httpConfig.oauth2.tlsConfig.keySecret Description Secret containing the client key file for the targets. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names optional boolean Specify whether the Secret or its key must be defined 3.1.288. .spec.receivers[].snsConfigs[].httpConfig.proxyConnectHeader Description ProxyConnectHeader optionally specifies headers to send to proxies during CONNECT requests. It requires Prometheus >= v2.43.0 or Alertmanager >= 0.25.0. Type object 3.1.289. .spec.receivers[].snsConfigs[].httpConfig.proxyConnectHeader{} Description Type array 3.1.290. .spec.receivers[].snsConfigs[].httpConfig.proxyConnectHeader{}[] Description SecretKeySelector selects a key of a Secret. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names optional boolean Specify whether the Secret or its key must be defined 3.1.291. .spec.receivers[].snsConfigs[].httpConfig.tlsConfig Description TLS configuration for the client. Type object Property Type Description ca object Certificate authority used when verifying server certificates. cert object Client certificate to present when doing client-authentication. insecureSkipVerify boolean Disable target certificate validation. 
keySecret object Secret containing the client key file for the targets. maxVersion string Maximum acceptable TLS version. It requires Prometheus >= v2.41.0. minVersion string Minimum acceptable TLS version. It requires Prometheus >= v2.35.0. serverName string Used to verify the hostname for the targets. 3.1.292. .spec.receivers[].snsConfigs[].httpConfig.tlsConfig.ca Description Certificate authority used when verifying server certificates. Type object Property Type Description configMap object ConfigMap containing data to use for the targets. secret object Secret containing data to use for the targets. 3.1.293. .spec.receivers[].snsConfigs[].httpConfig.tlsConfig.ca.configMap Description ConfigMap containing data to use for the targets. Type object Required key Property Type Description key string The key to select. name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names optional boolean Specify whether the ConfigMap or its key must be defined 3.1.294. .spec.receivers[].snsConfigs[].httpConfig.tlsConfig.ca.secret Description Secret containing data to use for the targets. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names optional boolean Specify whether the Secret or its key must be defined 3.1.295. .spec.receivers[].snsConfigs[].httpConfig.tlsConfig.cert Description Client certificate to present when doing client-authentication. Type object Property Type Description configMap object ConfigMap containing data to use for the targets. secret object Secret containing data to use for the targets. 3.1.296. .spec.receivers[].snsConfigs[].httpConfig.tlsConfig.cert.configMap Description ConfigMap containing data to use for the targets. Type object Required key Property Type Description key string The key to select. name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names optional boolean Specify whether the ConfigMap or its key must be defined 3.1.297. .spec.receivers[].snsConfigs[].httpConfig.tlsConfig.cert.secret Description Secret containing data to use for the targets. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names optional boolean Specify whether the Secret or its key must be defined 3.1.298. .spec.receivers[].snsConfigs[].httpConfig.tlsConfig.keySecret Description Secret containing the client key file for the targets. 
Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names optional boolean Specify whether the Secret or its key must be defined 3.1.299. .spec.receivers[].snsConfigs[].sigv4 Description Configures AWS's Signature Version 4 signing process to sign requests. Type object Property Type Description accessKey object AccessKey is the AWS API key. If not specified, the environment variable AWS_ACCESS_KEY_ID is used. profile string Profile is the named AWS profile used to authenticate. region string Region is the AWS region. If blank, the region from the default credentials chain is used. roleArn string RoleArn is the AWS role ARN used for authentication, as an alternative to using AWS API keys. secretKey object SecretKey is the AWS API secret. If not specified, the environment variable AWS_SECRET_ACCESS_KEY is used. 3.1.300. .spec.receivers[].snsConfigs[].sigv4.accessKey Description AccessKey is the AWS API key. If not specified, the environment variable AWS_ACCESS_KEY_ID is used. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names optional boolean Specify whether the Secret or its key must be defined 3.1.301. .spec.receivers[].snsConfigs[].sigv4.secretKey Description SecretKey is the AWS API secret. If not specified, the environment variable AWS_SECRET_ACCESS_KEY is used. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names optional boolean Specify whether the Secret or its key must be defined 3.1.302. .spec.receivers[].telegramConfigs Description List of Telegram configurations. Type array 3.1.303. .spec.receivers[].telegramConfigs[] Description TelegramConfig configures notifications via Telegram. See https://prometheus.io/docs/alerting/latest/configuration/#telegram_config Type object Required chatID Property Type Description apiURL string The Telegram API URL, i.e. https://api.telegram.org . If not specified, the default API URL is used. botToken object Telegram bot token. It is mutually exclusive with botTokenFile . The secret needs to be in the same namespace as the AlertmanagerConfig object and accessible by the Prometheus Operator. Either botToken or botTokenFile is required. botTokenFile string File to read the Telegram bot token from. It is mutually exclusive with botToken . Either botToken or botTokenFile is required. It requires Alertmanager >= v0.26.0. chatID integer The Telegram chat ID. disableNotifications boolean Disable Telegram notifications. httpConfig object HTTP client configuration.
message string Message template parseMode string Parse mode for telegram message sendResolved boolean Whether to notify about resolved alerts. 3.1.304. .spec.receivers[].telegramConfigs[].botToken Description Telegram bot token. It is mutually exclusive with botTokenFile . The secret needs to be in the same namespace as the AlertmanagerConfig object and accessible by the Prometheus Operator. Either botToken or botTokenFile is required. Type object Required key name Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string The name of the secret in the object's namespace to select from. 3.1.305. .spec.receivers[].telegramConfigs[].httpConfig Description HTTP client configuration. Type object Property Type Description authorization object Authorization header configuration for the client. This is mutually exclusive with BasicAuth and is only available starting from Alertmanager v0.22+. basicAuth object BasicAuth for the client. This is mutually exclusive with Authorization. If both are defined, BasicAuth takes precedence. bearerTokenSecret object The secret's key that contains the bearer token to be used by the client for authentication. The secret needs to be in the same namespace as the AlertmanagerConfig object and accessible by the Prometheus Operator. followRedirects boolean FollowRedirects specifies whether the client should follow HTTP 3xx redirects. noProxy string noProxy is a comma-separated string that can contain IPs, CIDR notation, domain names that should be excluded from proxying. IP and domain names can contain port numbers. It requires Prometheus >= v2.43.0 or Alertmanager >= 0.25.0. oauth2 object OAuth2 client credentials used to fetch a token for the targets. proxyConnectHeader object ProxyConnectHeader optionally specifies headers to send to proxies during CONNECT requests. It requires Prometheus >= v2.43.0 or Alertmanager >= 0.25.0. proxyConnectHeader{} array proxyConnectHeader{}[] object SecretKeySelector selects a key of a Secret. proxyFromEnvironment boolean Whether to use the proxy configuration defined by environment variables (HTTP_PROXY, HTTPS_PROXY, and NO_PROXY). It requires Prometheus >= v2.43.0 or Alertmanager >= 0.25.0. proxyUrl string proxyURL defines the HTTP proxy server to use. tlsConfig object TLS configuration for the client. 3.1.306. .spec.receivers[].telegramConfigs[].httpConfig.authorization Description Authorization header configuration for the client. This is mutually exclusive with BasicAuth and is only available starting from Alertmanager v0.22+. Type object Property Type Description credentials object Selects a key of a Secret in the namespace that contains the credentials for authentication. type string Defines the authentication type. The value is case-insensitive. "Basic" is not a supported value. Default: "Bearer" 3.1.307. .spec.receivers[].telegramConfigs[].httpConfig.authorization.credentials Description Selects a key of a Secret in the namespace that contains the credentials for authentication. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. 
More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names optional boolean Specify whether the Secret or its key must be defined 3.1.308. .spec.receivers[].telegramConfigs[].httpConfig.basicAuth Description BasicAuth for the client. This is mutually exclusive with Authorization. If both are defined, BasicAuth takes precedence. Type object Property Type Description password object password specifies a key of a Secret containing the password for authentication. username object username specifies a key of a Secret containing the username for authentication. 3.1.309. .spec.receivers[].telegramConfigs[].httpConfig.basicAuth.password Description password specifies a key of a Secret containing the password for authentication. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names optional boolean Specify whether the Secret or its key must be defined 3.1.310. .spec.receivers[].telegramConfigs[].httpConfig.basicAuth.username Description username specifies a key of a Secret containing the username for authentication. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names optional boolean Specify whether the Secret or its key must be defined 3.1.311. .spec.receivers[].telegramConfigs[].httpConfig.bearerTokenSecret Description The secret's key that contains the bearer token to be used by the client for authentication. The secret needs to be in the same namespace as the AlertmanagerConfig object and accessible by the Prometheus Operator. Type object Required key name Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string The name of the secret in the object's namespace to select from. 3.1.312. .spec.receivers[].telegramConfigs[].httpConfig.oauth2 Description OAuth2 client credentials used to fetch a token for the targets. Type object Required clientId clientSecret tokenUrl Property Type Description clientId object clientId specifies a key of a Secret or ConfigMap containing the OAuth2 client's ID. clientSecret object clientSecret specifies a key of a Secret containing the OAuth2 client's secret. endpointParams object (string) endpointParams configures the HTTP parameters to append to the token URL. noProxy string noProxy is a comma-separated string that can contain IPs, CIDR notation, domain names that should be excluded from proxying. IP and domain names can contain port numbers. It requires Prometheus >= v2.43.0 or Alertmanager >= 0.25.0. proxyConnectHeader object ProxyConnectHeader optionally specifies headers to send to proxies during CONNECT requests. It requires Prometheus >= v2.43.0 or Alertmanager >= 0.25.0. proxyConnectHeader{} array proxyConnectHeader{}[] object SecretKeySelector selects a key of a Secret. 
proxyFromEnvironment boolean Whether to use the proxy configuration defined by environment variables (HTTP_PROXY, HTTPS_PROXY, and NO_PROXY). It requires Prometheus >= v2.43.0 or Alertmanager >= 0.25.0. proxyUrl string proxyURL defines the HTTP proxy server to use. scopes array (string) scopes defines the OAuth2 scopes used for the token request. tlsConfig object TLS configuration to use when connecting to the OAuth2 server. It requires Prometheus >= v2.43.0. tokenUrl string tokenURL configures the URL to fetch the token from. 3.1.313. .spec.receivers[].telegramConfigs[].httpConfig.oauth2.clientId Description clientId specifies a key of a Secret or ConfigMap containing the OAuth2 client's ID. Type object Property Type Description configMap object ConfigMap containing data to use for the targets. secret object Secret containing data to use for the targets. 3.1.314. .spec.receivers[].telegramConfigs[].httpConfig.oauth2.clientId.configMap Description ConfigMap containing data to use for the targets. Type object Required key Property Type Description key string The key to select. name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names optional boolean Specify whether the ConfigMap or its key must be defined 3.1.315. .spec.receivers[].telegramConfigs[].httpConfig.oauth2.clientId.secret Description Secret containing data to use for the targets. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names optional boolean Specify whether the Secret or its key must be defined 3.1.316. .spec.receivers[].telegramConfigs[].httpConfig.oauth2.clientSecret Description clientSecret specifies a key of a Secret containing the OAuth2 client's secret. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names optional boolean Specify whether the Secret or its key must be defined 3.1.317. .spec.receivers[].telegramConfigs[].httpConfig.oauth2.proxyConnectHeader Description ProxyConnectHeader optionally specifies headers to send to proxies during CONNECT requests. It requires Prometheus >= v2.43.0 or Alertmanager >= 0.25.0. Type object 3.1.318. .spec.receivers[].telegramConfigs[].httpConfig.oauth2.proxyConnectHeader{} Description Type array 3.1.319. .spec.receivers[].telegramConfigs[].httpConfig.oauth2.proxyConnectHeader{}[] Description SecretKeySelector selects a key of a Secret. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. 
Instances of this type with an empty value here are almost certainly wrong. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names optional boolean Specify whether the Secret or its key must be defined 3.1.320. .spec.receivers[].telegramConfigs[].httpConfig.oauth2.tlsConfig Description TLS configuration to use when connecting to the OAuth2 server. It requires Prometheus >= v2.43.0. Type object Property Type Description ca object Certificate authority used when verifying server certificates. cert object Client certificate to present when doing client-authentication. insecureSkipVerify boolean Disable target certificate validation. keySecret object Secret containing the client key file for the targets. maxVersion string Maximum acceptable TLS version. It requires Prometheus >= v2.41.0. minVersion string Minimum acceptable TLS version. It requires Prometheus >= v2.35.0. serverName string Used to verify the hostname for the targets. 3.1.321. .spec.receivers[].telegramConfigs[].httpConfig.oauth2.tlsConfig.ca Description Certificate authority used when verifying server certificates. Type object Property Type Description configMap object ConfigMap containing data to use for the targets. secret object Secret containing data to use for the targets. 3.1.322. .spec.receivers[].telegramConfigs[].httpConfig.oauth2.tlsConfig.ca.configMap Description ConfigMap containing data to use for the targets. Type object Required key Property Type Description key string The key to select. name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names optional boolean Specify whether the ConfigMap or its key must be defined 3.1.323. .spec.receivers[].telegramConfigs[].httpConfig.oauth2.tlsConfig.ca.secret Description Secret containing data to use for the targets. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names optional boolean Specify whether the Secret or its key must be defined 3.1.324. .spec.receivers[].telegramConfigs[].httpConfig.oauth2.tlsConfig.cert Description Client certificate to present when doing client-authentication. Type object Property Type Description configMap object ConfigMap containing data to use for the targets. secret object Secret containing data to use for the targets. 3.1.325. .spec.receivers[].telegramConfigs[].httpConfig.oauth2.tlsConfig.cert.configMap Description ConfigMap containing data to use for the targets. Type object Required key Property Type Description key string The key to select. name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names optional boolean Specify whether the ConfigMap or its key must be defined 3.1.326. 
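Taken together, the telegramConfigs fields documented in the preceding subsections form a receiver entry such as the following sketch (a spec.receivers fragment only; the telegram-bot-token Secret name, the chat ID, and the message template are placeholders):

receivers:
- name: telegram-receiver
  telegramConfigs:
  - botToken:                  # key and name are both required
      name: telegram-bot-token
      key: token
    chatID: -1001234567890     # required; placeholder chat ID
    parseMode: HTML
    message: 'Alert {{ .GroupLabels.alertname }} is {{ .Status }}'
    sendResolved: true

Because botToken and botTokenFile are mutually exclusive, the sketch sets only botToken.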
.spec.receivers[].telegramConfigs[].httpConfig.oauth2.tlsConfig.cert.secret Description Secret containing data to use for the targets. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names optional boolean Specify whether the Secret or its key must be defined 3.1.327. .spec.receivers[].telegramConfigs[].httpConfig.oauth2.tlsConfig.keySecret Description Secret containing the client key file for the targets. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names optional boolean Specify whether the Secret or its key must be defined 3.1.328. .spec.receivers[].telegramConfigs[].httpConfig.proxyConnectHeader Description ProxyConnectHeader optionally specifies headers to send to proxies during CONNECT requests. It requires Prometheus >= v2.43.0 or Alertmanager >= 0.25.0. Type object 3.1.329. .spec.receivers[].telegramConfigs[].httpConfig.proxyConnectHeader{} Description Type array 3.1.330. .spec.receivers[].telegramConfigs[].httpConfig.proxyConnectHeader{}[] Description SecretKeySelector selects a key of a Secret. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names optional boolean Specify whether the Secret or its key must be defined 3.1.331. .spec.receivers[].telegramConfigs[].httpConfig.tlsConfig Description TLS configuration for the client. Type object Property Type Description ca object Certificate authority used when verifying server certificates. cert object Client certificate to present when doing client-authentication. insecureSkipVerify boolean Disable target certificate validation. keySecret object Secret containing the client key file for the targets. maxVersion string Maximum acceptable TLS version. It requires Prometheus >= v2.41.0. minVersion string Minimum acceptable TLS version. It requires Prometheus >= v2.35.0. serverName string Used to verify the hostname for the targets. 3.1.332. .spec.receivers[].telegramConfigs[].httpConfig.tlsConfig.ca Description Certificate authority used when verifying server certificates. Type object Property Type Description configMap object ConfigMap containing data to use for the targets. secret object Secret containing data to use for the targets. 3.1.333. .spec.receivers[].telegramConfigs[].httpConfig.tlsConfig.ca.configMap Description ConfigMap containing data to use for the targets. Type object Required key Property Type Description key string The key to select. name string Name of the referent. 
This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names optional boolean Specify whether the ConfigMap or its key must be defined 3.1.334. .spec.receivers[].telegramConfigs[].httpConfig.tlsConfig.ca.secret Description Secret containing data to use for the targets. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names optional boolean Specify whether the Secret or its key must be defined 3.1.335. .spec.receivers[].telegramConfigs[].httpConfig.tlsConfig.cert Description Client certificate to present when doing client-authentication. Type object Property Type Description configMap object ConfigMap containing data to use for the targets. secret object Secret containing data to use for the targets. 3.1.336. .spec.receivers[].telegramConfigs[].httpConfig.tlsConfig.cert.configMap Description ConfigMap containing data to use for the targets. Type object Required key Property Type Description key string The key to select. name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names optional boolean Specify whether the ConfigMap or its key must be defined 3.1.337. .spec.receivers[].telegramConfigs[].httpConfig.tlsConfig.cert.secret Description Secret containing data to use for the targets. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names optional boolean Specify whether the Secret or its key must be defined 3.1.338. .spec.receivers[].telegramConfigs[].httpConfig.tlsConfig.keySecret Description Secret containing the client key file for the targets. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names optional boolean Specify whether the Secret or its key must be defined 3.1.339. .spec.receivers[].victoropsConfigs Description List of VictorOps configurations. Type array 3.1.340. .spec.receivers[].victoropsConfigs[] Description VictorOpsConfig configures notifications via VictorOps. 
See https://prometheus.io/docs/alerting/latest/configuration/#victorops_config Type object Property Type Description apiKey object The secret's key that contains the API key to use when talking to the VictorOps API. The secret needs to be in the same namespace as the AlertmanagerConfig object and accessible by the Prometheus Operator. apiUrl string The VictorOps API URL. customFields array Additional custom fields for notification. customFields[] object KeyValue defines a (key, value) tuple. entityDisplayName string Contains summary of the alerted problem. httpConfig object The HTTP client's configuration. messageType string Describes the behavior of the alert (CRITICAL, WARNING, INFO). monitoringTool string The monitoring tool the state message is from. routingKey string A key used to map the alert to a team. sendResolved boolean Whether or not to notify about resolved alerts. stateMessage string Contains long explanation of the alerted problem. 3.1.341. .spec.receivers[].victoropsConfigs[].apiKey Description The secret's key that contains the API key to use when talking to the VictorOps API. The secret needs to be in the same namespace as the AlertmanagerConfig object and accessible by the Prometheus Operator. Type object Required key name Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string The name of the secret in the object's namespace to select from. 3.1.342. .spec.receivers[].victoropsConfigs[].customFields Description Additional custom fields for notification. Type array 3.1.343. .spec.receivers[].victoropsConfigs[].customFields[] Description KeyValue defines a (key, value) tuple. Type object Required key value Property Type Description key string Key of the tuple. value string Value of the tuple. 3.1.344. .spec.receivers[].victoropsConfigs[].httpConfig Description The HTTP client's configuration. Type object Property Type Description authorization object Authorization header configuration for the client. This is mutually exclusive with BasicAuth and is only available starting from Alertmanager v0.22+. basicAuth object BasicAuth for the client. This is mutually exclusive with Authorization. If both are defined, BasicAuth takes precedence. bearerTokenSecret object The secret's key that contains the bearer token to be used by the client for authentication. The secret needs to be in the same namespace as the AlertmanagerConfig object and accessible by the Prometheus Operator. followRedirects boolean FollowRedirects specifies whether the client should follow HTTP 3xx redirects. noProxy string noProxy is a comma-separated string that can contain IPs, CIDR notation, domain names that should be excluded from proxying. IP and domain names can contain port numbers. It requires Prometheus >= v2.43.0 or Alertmanager >= 0.25.0. oauth2 object OAuth2 client credentials used to fetch a token for the targets. proxyConnectHeader object ProxyConnectHeader optionally specifies headers to send to proxies during CONNECT requests. It requires Prometheus >= v2.43.0 or Alertmanager >= 0.25.0. proxyConnectHeader{} array proxyConnectHeader{}[] object SecretKeySelector selects a key of a Secret. proxyFromEnvironment boolean Whether to use the proxy configuration defined by environment variables (HTTP_PROXY, HTTPS_PROXY, and NO_PROXY). It requires Prometheus >= v2.43.0 or Alertmanager >= 0.25.0. proxyUrl string proxyURL defines the HTTP proxy server to use. tlsConfig object TLS configuration for the client. 3.1.345. 
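As an illustration of how the victoropsConfigs fields above fit together, the following spec.receivers fragment is a minimal sketch; the victorops-api-key Secret, routing key, and message templates are placeholders:

receivers:
- name: victorops-receiver
  victoropsConfigs:
  - apiKey:                    # key and name are both required
      name: victorops-api-key
      key: api-key
    routingKey: example-team   # maps the alert to a team
    messageType: CRITICAL
    entityDisplayName: '{{ .CommonLabels.alertname }}'
    stateMessage: '{{ .CommonAnnotations.summary }}'
    sendResolved: true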
.spec.receivers[].victoropsConfigs[].httpConfig.authorization Description Authorization header configuration for the client. This is mutually exclusive with BasicAuth and is only available starting from Alertmanager v0.22+. Type object Property Type Description credentials object Selects a key of a Secret in the namespace that contains the credentials for authentication. type string Defines the authentication type. The value is case-insensitive. "Basic" is not a supported value. Default: "Bearer" 3.1.346. .spec.receivers[].victoropsConfigs[].httpConfig.authorization.credentials Description Selects a key of a Secret in the namespace that contains the credentials for authentication. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names optional boolean Specify whether the Secret or its key must be defined 3.1.347. .spec.receivers[].victoropsConfigs[].httpConfig.basicAuth Description BasicAuth for the client. This is mutually exclusive with Authorization. If both are defined, BasicAuth takes precedence. Type object Property Type Description password object password specifies a key of a Secret containing the password for authentication. username object username specifies a key of a Secret containing the username for authentication. 3.1.348. .spec.receivers[].victoropsConfigs[].httpConfig.basicAuth.password Description password specifies a key of a Secret containing the password for authentication. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names optional boolean Specify whether the Secret or its key must be defined 3.1.349. .spec.receivers[].victoropsConfigs[].httpConfig.basicAuth.username Description username specifies a key of a Secret containing the username for authentication. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names optional boolean Specify whether the Secret or its key must be defined 3.1.350. .spec.receivers[].victoropsConfigs[].httpConfig.bearerTokenSecret Description The secret's key that contains the bearer token to be used by the client for authentication. The secret needs to be in the same namespace as the AlertmanagerConfig object and accessible by the Prometheus Operator. Type object Required key name Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string The name of the secret in the object's namespace to select from. 3.1.351. 
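The authorization, basicAuth, and bearerTokenSecret fields described above are alternative ways to authenticate the HTTP client. The following httpConfig fragment is a minimal sketch of the authorization variant, with the mutually exclusive basicAuth alternative left commented out; the example-credentials Secret and its keys are placeholders:

httpConfig:
  authorization:                  # mutually exclusive with basicAuth
    type: Bearer                  # default; "Basic" is not a supported value
    credentials:
      name: example-credentials   # Secret in the object's namespace
      key: token
  followRedirects: true
  # Alternative: basicAuth. If both are defined, basicAuth takes precedence.
  # basicAuth:
  #   username:
  #     name: example-credentials
  #     key: username
  #   password:
  #     name: example-credentials
  #     key: password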
.spec.receivers[].victoropsConfigs[].httpConfig.oauth2 Description OAuth2 client credentials used to fetch a token for the targets. Type object Required clientId clientSecret tokenUrl Property Type Description clientId object clientId specifies a key of a Secret or ConfigMap containing the OAuth2 client's ID. clientSecret object clientSecret specifies a key of a Secret containing the OAuth2 client's secret. endpointParams object (string) endpointParams configures the HTTP parameters to append to the token URL. noProxy string noProxy is a comma-separated string that can contain IPs, CIDR notation, domain names that should be excluded from proxying. IP and domain names can contain port numbers. It requires Prometheus >= v2.43.0 or Alertmanager >= 0.25.0. proxyConnectHeader object ProxyConnectHeader optionally specifies headers to send to proxies during CONNECT requests. It requires Prometheus >= v2.43.0 or Alertmanager >= 0.25.0. proxyConnectHeader{} array proxyConnectHeader{}[] object SecretKeySelector selects a key of a Secret. proxyFromEnvironment boolean Whether to use the proxy configuration defined by environment variables (HTTP_PROXY, HTTPS_PROXY, and NO_PROXY). It requires Prometheus >= v2.43.0 or Alertmanager >= 0.25.0. proxyUrl string proxyURL defines the HTTP proxy server to use. scopes array (string) scopes defines the OAuth2 scopes used for the token request. tlsConfig object TLS configuration to use when connecting to the OAuth2 server. It requires Prometheus >= v2.43.0. tokenUrl string tokenURL configures the URL to fetch the token from. 3.1.352. .spec.receivers[].victoropsConfigs[].httpConfig.oauth2.clientId Description clientId specifies a key of a Secret or ConfigMap containing the OAuth2 client's ID. Type object Property Type Description configMap object ConfigMap containing data to use for the targets. secret object Secret containing data to use for the targets. 3.1.353. .spec.receivers[].victoropsConfigs[].httpConfig.oauth2.clientId.configMap Description ConfigMap containing data to use for the targets. Type object Required key Property Type Description key string The key to select. name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names optional boolean Specify whether the ConfigMap or its key must be defined 3.1.354. .spec.receivers[].victoropsConfigs[].httpConfig.oauth2.clientId.secret Description Secret containing data to use for the targets. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names optional boolean Specify whether the Secret or its key must be defined 3.1.355. .spec.receivers[].victoropsConfigs[].httpConfig.oauth2.clientSecret Description clientSecret specifies a key of a Secret containing the OAuth2 client's secret. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. 
This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names optional boolean Specify whether the Secret or its key must be defined 3.1.356. .spec.receivers[].victoropsConfigs[].httpConfig.oauth2.proxyConnectHeader Description ProxyConnectHeader optionally specifies headers to send to proxies during CONNECT requests. It requires Prometheus >= v2.43.0 or Alertmanager >= 0.25.0. Type object 3.1.357. .spec.receivers[].victoropsConfigs[].httpConfig.oauth2.proxyConnectHeader{} Description Type array 3.1.358. .spec.receivers[].victoropsConfigs[].httpConfig.oauth2.proxyConnectHeader{}[] Description SecretKeySelector selects a key of a Secret. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names optional boolean Specify whether the Secret or its key must be defined 3.1.359. .spec.receivers[].victoropsConfigs[].httpConfig.oauth2.tlsConfig Description TLS configuration to use when connecting to the OAuth2 server. It requires Prometheus >= v2.43.0. Type object Property Type Description ca object Certificate authority used when verifying server certificates. cert object Client certificate to present when doing client-authentication. insecureSkipVerify boolean Disable target certificate validation. keySecret object Secret containing the client key file for the targets. maxVersion string Maximum acceptable TLS version. It requires Prometheus >= v2.41.0. minVersion string Minimum acceptable TLS version. It requires Prometheus >= v2.35.0. serverName string Used to verify the hostname for the targets. 3.1.360. .spec.receivers[].victoropsConfigs[].httpConfig.oauth2.tlsConfig.ca Description Certificate authority used when verifying server certificates. Type object Property Type Description configMap object ConfigMap containing data to use for the targets. secret object Secret containing data to use for the targets. 3.1.361. .spec.receivers[].victoropsConfigs[].httpConfig.oauth2.tlsConfig.ca.configMap Description ConfigMap containing data to use for the targets. Type object Required key Property Type Description key string The key to select. name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names optional boolean Specify whether the ConfigMap or its key must be defined 3.1.362. .spec.receivers[].victoropsConfigs[].httpConfig.oauth2.tlsConfig.ca.secret Description Secret containing data to use for the targets. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. 
More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names optional boolean Specify whether the Secret or its key must be defined 3.1.363. .spec.receivers[].victoropsConfigs[].httpConfig.oauth2.tlsConfig.cert Description Client certificate to present when doing client-authentication. Type object Property Type Description configMap object ConfigMap containing data to use for the targets. secret object Secret containing data to use for the targets. 3.1.364. .spec.receivers[].victoropsConfigs[].httpConfig.oauth2.tlsConfig.cert.configMap Description ConfigMap containing data to use for the targets. Type object Required key Property Type Description key string The key to select. name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names optional boolean Specify whether the ConfigMap or its key must be defined 3.1.365. .spec.receivers[].victoropsConfigs[].httpConfig.oauth2.tlsConfig.cert.secret Description Secret containing data to use for the targets. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names optional boolean Specify whether the Secret or its key must be defined 3.1.366. .spec.receivers[].victoropsConfigs[].httpConfig.oauth2.tlsConfig.keySecret Description Secret containing the client key file for the targets. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names optional boolean Specify whether the Secret or its key must be defined 3.1.367. .spec.receivers[].victoropsConfigs[].httpConfig.proxyConnectHeader Description ProxyConnectHeader optionally specifies headers to send to proxies during CONNECT requests. It requires Prometheus >= v2.43.0 or Alertmanager >= 0.25.0. Type object 3.1.368. .spec.receivers[].victoropsConfigs[].httpConfig.proxyConnectHeader{} Description Type array 3.1.369. .spec.receivers[].victoropsConfigs[].httpConfig.proxyConnectHeader{}[] Description SecretKeySelector selects a key of a Secret. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names optional boolean Specify whether the Secret or its key must be defined 3.1.370. .spec.receivers[].victoropsConfigs[].httpConfig.tlsConfig Description TLS configuration for the client. 
Type object Property Type Description ca object Certificate authority used when verifying server certificates. cert object Client certificate to present when doing client-authentication. insecureSkipVerify boolean Disable target certificate validation. keySecret object Secret containing the client key file for the targets. maxVersion string Maximum acceptable TLS version. It requires Prometheus >= v2.41.0. minVersion string Minimum acceptable TLS version. It requires Prometheus >= v2.35.0. serverName string Used to verify the hostname for the targets. 3.1.371. .spec.receivers[].victoropsConfigs[].httpConfig.tlsConfig.ca Description Certificate authority used when verifying server certificates. Type object Property Type Description configMap object ConfigMap containing data to use for the targets. secret object Secret containing data to use for the targets. 3.1.372. .spec.receivers[].victoropsConfigs[].httpConfig.tlsConfig.ca.configMap Description ConfigMap containing data to use for the targets. Type object Required key Property Type Description key string The key to select. name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names optional boolean Specify whether the ConfigMap or its key must be defined 3.1.373. .spec.receivers[].victoropsConfigs[].httpConfig.tlsConfig.ca.secret Description Secret containing data to use for the targets. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names optional boolean Specify whether the Secret or its key must be defined 3.1.374. .spec.receivers[].victoropsConfigs[].httpConfig.tlsConfig.cert Description Client certificate to present when doing client-authentication. Type object Property Type Description configMap object ConfigMap containing data to use for the targets. secret object Secret containing data to use for the targets. 3.1.375. .spec.receivers[].victoropsConfigs[].httpConfig.tlsConfig.cert.configMap Description ConfigMap containing data to use for the targets. Type object Required key Property Type Description key string The key to select. name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names optional boolean Specify whether the ConfigMap or its key must be defined 3.1.376. .spec.receivers[].victoropsConfigs[].httpConfig.tlsConfig.cert.secret Description Secret containing data to use for the targets. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. 
More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names optional boolean Specify whether the Secret or its key must be defined 3.1.377. .spec.receivers[].victoropsConfigs[].httpConfig.tlsConfig.keySecret Description Secret containing the client key file for the targets. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names optional boolean Specify whether the Secret or its key must be defined 3.1.378. .spec.receivers[].webexConfigs Description List of Webex configurations. Type array 3.1.379. .spec.receivers[].webexConfigs[] Description WebexConfig configures notifications via Cisco Webex. See https://prometheus.io/docs/alerting/latest/configuration/#webex_config Type object Required roomID Property Type Description apiURL string The Webex Teams API URL, i.e. https://webexapis.com/v1/messages . httpConfig object The HTTP client's configuration. You must use this configuration to supply the bot token as part of the HTTP Authorization header. message string Message template roomID string ID of the Webex Teams room to which the messages are sent. sendResolved boolean Whether to notify about resolved alerts. 3.1.380. .spec.receivers[].webexConfigs[].httpConfig Description The HTTP client's configuration. You must use this configuration to supply the bot token as part of the HTTP Authorization header. Type object Property Type Description authorization object Authorization header configuration for the client. This is mutually exclusive with BasicAuth and is only available starting from Alertmanager v0.22+. basicAuth object BasicAuth for the client. This is mutually exclusive with Authorization. If both are defined, BasicAuth takes precedence. bearerTokenSecret object The secret's key that contains the bearer token to be used by the client for authentication. The secret needs to be in the same namespace as the AlertmanagerConfig object and accessible by the Prometheus Operator. followRedirects boolean FollowRedirects specifies whether the client should follow HTTP 3xx redirects. noProxy string noProxy is a comma-separated string that can contain IPs, CIDR notation, domain names that should be excluded from proxying. IP and domain names can contain port numbers. It requires Prometheus >= v2.43.0 or Alertmanager >= 0.25.0. oauth2 object OAuth2 client credentials used to fetch a token for the targets. proxyConnectHeader object ProxyConnectHeader optionally specifies headers to send to proxies during CONNECT requests. It requires Prometheus >= v2.43.0 or Alertmanager >= 0.25.0. proxyConnectHeader{} array proxyConnectHeader{}[] object SecretKeySelector selects a key of a Secret. proxyFromEnvironment boolean Whether to use the proxy configuration defined by environment variables (HTTP_PROXY, HTTPS_PROXY, and NO_PROXY). It requires Prometheus >= v2.43.0 or Alertmanager >= 0.25.0. proxyUrl string proxyURL defines the HTTP proxy server to use. tlsConfig object TLS configuration for the client. 3.1.381. .spec.receivers[].webexConfigs[].httpConfig.authorization Description Authorization header configuration for the client. This is mutually exclusive with BasicAuth and is only available starting from Alertmanager v0.22+.
Type object Property Type Description credentials object Selects a key of a Secret in the namespace that contains the credentials for authentication. type string Defines the authentication type. The value is case-insensitive. "Basic" is not a supported value. Default: "Bearer" 3.1.382. .spec.receivers[].webexConfigs[].httpConfig.authorization.credentials Description Selects a key of a Secret in the namespace that contains the credentials for authentication. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names optional boolean Specify whether the Secret or its key must be defined 3.1.383. .spec.receivers[].webexConfigs[].httpConfig.basicAuth Description BasicAuth for the client. This is mutually exclusive with Authorization. If both are defined, BasicAuth takes precedence. Type object Property Type Description password object password specifies a key of a Secret containing the password for authentication. username object username specifies a key of a Secret containing the username for authentication. 3.1.384. .spec.receivers[].webexConfigs[].httpConfig.basicAuth.password Description password specifies a key of a Secret containing the password for authentication. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names optional boolean Specify whether the Secret or its key must be defined 3.1.385. .spec.receivers[].webexConfigs[].httpConfig.basicAuth.username Description username specifies a key of a Secret containing the username for authentication. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names optional boolean Specify whether the Secret or its key must be defined 3.1.386. .spec.receivers[].webexConfigs[].httpConfig.bearerTokenSecret Description The secret's key that contains the bearer token to be used by the client for authentication. The secret needs to be in the same namespace as the AlertmanagerConfig object and accessible by the Prometheus Operator. Type object Required key name Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string The name of the secret in the object's namespace to select from. 3.1.387. .spec.receivers[].webexConfigs[].httpConfig.oauth2 Description OAuth2 client credentials used to fetch a token for the targets. Type object Required clientId clientSecret tokenUrl Property Type Description clientId object clientId specifies a key of a Secret or ConfigMap containing the OAuth2 client's ID. 
clientSecret object clientSecret specifies a key of a Secret containing the OAuth2 client's secret. endpointParams object (string) endpointParams configures the HTTP parameters to append to the token URL. noProxy string noProxy is a comma-separated string that can contain IPs, CIDR notation, domain names that should be excluded from proxying. IP and domain names can contain port numbers. It requires Prometheus >= v2.43.0 or Alertmanager >= 0.25.0. proxyConnectHeader object ProxyConnectHeader optionally specifies headers to send to proxies during CONNECT requests. It requires Prometheus >= v2.43.0 or Alertmanager >= 0.25.0. proxyConnectHeader{} array proxyConnectHeader{}[] object SecretKeySelector selects a key of a Secret. proxyFromEnvironment boolean Whether to use the proxy configuration defined by environment variables (HTTP_PROXY, HTTPS_PROXY, and NO_PROXY). It requires Prometheus >= v2.43.0 or Alertmanager >= 0.25.0. proxyUrl string proxyURL defines the HTTP proxy server to use. scopes array (string) scopes defines the OAuth2 scopes used for the token request. tlsConfig object TLS configuration to use when connecting to the OAuth2 server. It requires Prometheus >= v2.43.0. tokenUrl string tokenURL configures the URL to fetch the token from.

3.1.388. .spec.receivers[].webexConfigs[].httpConfig.oauth2.clientId
Description clientId specifies a key of a Secret or ConfigMap containing the OAuth2 client's ID. Type object Property Type Description configMap object ConfigMap containing data to use for the targets. secret object Secret containing data to use for the targets.

3.1.389. .spec.receivers[].webexConfigs[].httpConfig.oauth2.clientId.configMap
Description ConfigMap containing data to use for the targets. Type object Required key Property Type Description key string The key to select. name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names optional boolean Specify whether the ConfigMap or its key must be defined

3.1.390. .spec.receivers[].webexConfigs[].httpConfig.oauth2.clientId.secret
Description Secret containing data to use for the targets. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names optional boolean Specify whether the Secret or its key must be defined

3.1.391. .spec.receivers[].webexConfigs[].httpConfig.oauth2.clientSecret
Description clientSecret specifies a key of a Secret containing the OAuth2 client's secret. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names optional boolean Specify whether the Secret or its key must be defined
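The following is a minimal sketch that ties the webexConfigs and httpConfig fields above together: an AlertmanagerConfig that sends notifications to a Webex room and supplies the bot token through httpConfig.authorization, as the webexConfigs description requires. The namespace, receiver name, room ID, and the Secret name and key are placeholders, not values defined by this API.

apiVersion: monitoring.coreos.com/v1beta1
kind: AlertmanagerConfig
metadata:
  name: webex-example               # placeholder name
  namespace: example-namespace      # placeholder namespace
spec:
  route:
    receiver: webex-room
  receivers:
  - name: webex-room
    webexConfigs:
    - roomID: "<webex-room-id>"     # placeholder Webex Teams room ID
      sendResolved: true
      httpConfig:
        authorization:
          type: Bearer              # the default authentication type
          credentials:              # bot token stored in a Secret (placeholder name and key)
            name: webex-bot-token
            key: token

The referenced Secret must exist in the same namespace as the AlertmanagerConfig object, as noted in the credentials description above.

3.1.392.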
.spec.receivers[].webexConfigs[].httpConfig.oauth2.proxyConnectHeader Description ProxyConnectHeader optionally specifies headers to send to proxies during CONNECT requests. It requires Prometheus >= v2.43.0 or Alertmanager >= 0.25.0. Type object 3.1.393. .spec.receivers[].webexConfigs[].httpConfig.oauth2.proxyConnectHeader{} Description Type array 3.1.394. .spec.receivers[].webexConfigs[].httpConfig.oauth2.proxyConnectHeader{}[] Description SecretKeySelector selects a key of a Secret. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names optional boolean Specify whether the Secret or its key must be defined 3.1.395. .spec.receivers[].webexConfigs[].httpConfig.oauth2.tlsConfig Description TLS configuration to use when connecting to the OAuth2 server. It requires Prometheus >= v2.43.0. Type object Property Type Description ca object Certificate authority used when verifying server certificates. cert object Client certificate to present when doing client-authentication. insecureSkipVerify boolean Disable target certificate validation. keySecret object Secret containing the client key file for the targets. maxVersion string Maximum acceptable TLS version. It requires Prometheus >= v2.41.0. minVersion string Minimum acceptable TLS version. It requires Prometheus >= v2.35.0. serverName string Used to verify the hostname for the targets. 3.1.396. .spec.receivers[].webexConfigs[].httpConfig.oauth2.tlsConfig.ca Description Certificate authority used when verifying server certificates. Type object Property Type Description configMap object ConfigMap containing data to use for the targets. secret object Secret containing data to use for the targets. 3.1.397. .spec.receivers[].webexConfigs[].httpConfig.oauth2.tlsConfig.ca.configMap Description ConfigMap containing data to use for the targets. Type object Required key Property Type Description key string The key to select. name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names optional boolean Specify whether the ConfigMap or its key must be defined 3.1.398. .spec.receivers[].webexConfigs[].httpConfig.oauth2.tlsConfig.ca.secret Description Secret containing data to use for the targets. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names optional boolean Specify whether the Secret or its key must be defined 3.1.399. .spec.receivers[].webexConfigs[].httpConfig.oauth2.tlsConfig.cert Description Client certificate to present when doing client-authentication. Type object Property Type Description configMap object ConfigMap containing data to use for the targets. 
secret object Secret containing data to use for the targets. 3.1.400. .spec.receivers[].webexConfigs[].httpConfig.oauth2.tlsConfig.cert.configMap Description ConfigMap containing data to use for the targets. Type object Required key Property Type Description key string The key to select. name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names optional boolean Specify whether the ConfigMap or its key must be defined 3.1.401. .spec.receivers[].webexConfigs[].httpConfig.oauth2.tlsConfig.cert.secret Description Secret containing data to use for the targets. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names optional boolean Specify whether the Secret or its key must be defined 3.1.402. .spec.receivers[].webexConfigs[].httpConfig.oauth2.tlsConfig.keySecret Description Secret containing the client key file for the targets. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names optional boolean Specify whether the Secret or its key must be defined 3.1.403. .spec.receivers[].webexConfigs[].httpConfig.proxyConnectHeader Description ProxyConnectHeader optionally specifies headers to send to proxies during CONNECT requests. It requires Prometheus >= v2.43.0 or Alertmanager >= 0.25.0. Type object 3.1.404. .spec.receivers[].webexConfigs[].httpConfig.proxyConnectHeader{} Description Type array 3.1.405. .spec.receivers[].webexConfigs[].httpConfig.proxyConnectHeader{}[] Description SecretKeySelector selects a key of a Secret. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names optional boolean Specify whether the Secret or its key must be defined 3.1.406. .spec.receivers[].webexConfigs[].httpConfig.tlsConfig Description TLS configuration for the client. Type object Property Type Description ca object Certificate authority used when verifying server certificates. cert object Client certificate to present when doing client-authentication. insecureSkipVerify boolean Disable target certificate validation. keySecret object Secret containing the client key file for the targets. maxVersion string Maximum acceptable TLS version. It requires Prometheus >= v2.41.0. minVersion string Minimum acceptable TLS version. It requires Prometheus >= v2.35.0. 
serverName string Used to verify the hostname for the targets. 3.1.407. .spec.receivers[].webexConfigs[].httpConfig.tlsConfig.ca Description Certificate authority used when verifying server certificates. Type object Property Type Description configMap object ConfigMap containing data to use for the targets. secret object Secret containing data to use for the targets. 3.1.408. .spec.receivers[].webexConfigs[].httpConfig.tlsConfig.ca.configMap Description ConfigMap containing data to use for the targets. Type object Required key Property Type Description key string The key to select. name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names optional boolean Specify whether the ConfigMap or its key must be defined 3.1.409. .spec.receivers[].webexConfigs[].httpConfig.tlsConfig.ca.secret Description Secret containing data to use for the targets. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names optional boolean Specify whether the Secret or its key must be defined 3.1.410. .spec.receivers[].webexConfigs[].httpConfig.tlsConfig.cert Description Client certificate to present when doing client-authentication. Type object Property Type Description configMap object ConfigMap containing data to use for the targets. secret object Secret containing data to use for the targets. 3.1.411. .spec.receivers[].webexConfigs[].httpConfig.tlsConfig.cert.configMap Description ConfigMap containing data to use for the targets. Type object Required key Property Type Description key string The key to select. name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names optional boolean Specify whether the ConfigMap or its key must be defined 3.1.412. .spec.receivers[].webexConfigs[].httpConfig.tlsConfig.cert.secret Description Secret containing data to use for the targets. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names optional boolean Specify whether the Secret or its key must be defined 3.1.413. .spec.receivers[].webexConfigs[].httpConfig.tlsConfig.keySecret Description Secret containing the client key file for the targets. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. 
Instances of this type with an empty value here are almost certainly wrong. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names optional boolean Specify whether the Secret or its key must be defined

3.1.414. .spec.receivers[].webhookConfigs
Description List of webhook configurations. Type array

3.1.415. .spec.receivers[].webhookConfigs[]
Description WebhookConfig configures notifications via a generic receiver supporting the webhook payload. See https://prometheus.io/docs/alerting/latest/configuration/#webhook_config Type object Property Type Description httpConfig object HTTP client configuration. maxAlerts integer Maximum number of alerts to be sent per webhook message. When 0, all alerts are included. sendResolved boolean Whether or not to notify about resolved alerts. url string The URL to send HTTP POST requests to. urlSecret takes precedence over url. One of urlSecret and url should be defined. urlSecret object The secret's key that contains the webhook URL to send HTTP requests to. urlSecret takes precedence over url. One of urlSecret and url should be defined. The secret needs to be in the same namespace as the AlertmanagerConfig object and accessible by the Prometheus Operator.

3.1.416. .spec.receivers[].webhookConfigs[].httpConfig
Description HTTP client configuration. Type object Property Type Description authorization object Authorization header configuration for the client. This is mutually exclusive with BasicAuth and is only available starting from Alertmanager v0.22+. basicAuth object BasicAuth for the client. This is mutually exclusive with Authorization. If both are defined, BasicAuth takes precedence. bearerTokenSecret object The secret's key that contains the bearer token to be used by the client for authentication. The secret needs to be in the same namespace as the AlertmanagerConfig object and accessible by the Prometheus Operator. followRedirects boolean FollowRedirects specifies whether the client should follow HTTP 3xx redirects. noProxy string noProxy is a comma-separated string that can contain IPs, CIDR notation, domain names that should be excluded from proxying. IP and domain names can contain port numbers. It requires Prometheus >= v2.43.0 or Alertmanager >= 0.25.0. oauth2 object OAuth2 client credentials used to fetch a token for the targets. proxyConnectHeader object ProxyConnectHeader optionally specifies headers to send to proxies during CONNECT requests. It requires Prometheus >= v2.43.0 or Alertmanager >= 0.25.0. proxyConnectHeader{} array proxyConnectHeader{}[] object SecretKeySelector selects a key of a Secret. proxyFromEnvironment boolean Whether to use the proxy configuration defined by environment variables (HTTP_PROXY, HTTPS_PROXY, and NO_PROXY). It requires Prometheus >= v2.43.0 or Alertmanager >= 0.25.0. proxyUrl string proxyURL defines the HTTP proxy server to use. tlsConfig object TLS configuration for the client.

3.1.417. .spec.receivers[].webhookConfigs[].httpConfig.authorization
Description Authorization header configuration for the client. This is mutually exclusive with BasicAuth and is only available starting from Alertmanager v0.22+. Type object Property Type Description credentials object Selects a key of a Secret in the namespace that contains the credentials for authentication. type string Defines the authentication type. The value is case-insensitive. "Basic" is not a supported value. Default: "Bearer"
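As a hedged sketch of the webhookConfigs fields above, the following receiver reads the webhook URL from a Secret through urlSecret, sends a bearer token through httpConfig.authorization, and trusts a custom CA through httpConfig.tlsConfig. The Secret and ConfigMap names and keys are placeholders.

apiVersion: monitoring.coreos.com/v1beta1
kind: AlertmanagerConfig
metadata:
  name: webhook-example             # placeholder name
  namespace: example-namespace      # placeholder namespace
spec:
  route:
    receiver: team-webhook
  receivers:
  - name: team-webhook
    webhookConfigs:
    - sendResolved: true
      maxAlerts: 10                 # 0 would include all alerts
      urlSecret:                    # takes precedence over url
        name: webhook-url           # placeholder Secret name
        key: address                # placeholder key
      httpConfig:
        followRedirects: true
        authorization:
          type: Bearer
          credentials:
            name: webhook-token     # placeholder Secret name
            key: token
        tlsConfig:
          insecureSkipVerify: false
          ca:
            configMap:
              name: webhook-ca      # placeholder ConfigMap name
              key: ca.crt           # placeholder key

The urlSecret and the authorization credentials must reference Secrets in the same namespace as the AlertmanagerConfig object, as noted in the field descriptions above.

3.1.418.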
.spec.receivers[].webhookConfigs[].httpConfig.authorization.credentials Description Selects a key of a Secret in the namespace that contains the credentials for authentication. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names optional boolean Specify whether the Secret or its key must be defined 3.1.419. .spec.receivers[].webhookConfigs[].httpConfig.basicAuth Description BasicAuth for the client. This is mutually exclusive with Authorization. If both are defined, BasicAuth takes precedence. Type object Property Type Description password object password specifies a key of a Secret containing the password for authentication. username object username specifies a key of a Secret containing the username for authentication. 3.1.420. .spec.receivers[].webhookConfigs[].httpConfig.basicAuth.password Description password specifies a key of a Secret containing the password for authentication. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names optional boolean Specify whether the Secret or its key must be defined 3.1.421. .spec.receivers[].webhookConfigs[].httpConfig.basicAuth.username Description username specifies a key of a Secret containing the username for authentication. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names optional boolean Specify whether the Secret or its key must be defined 3.1.422. .spec.receivers[].webhookConfigs[].httpConfig.bearerTokenSecret Description The secret's key that contains the bearer token to be used by the client for authentication. The secret needs to be in the same namespace as the AlertmanagerConfig object and accessible by the Prometheus Operator. Type object Required key name Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string The name of the secret in the object's namespace to select from. 3.1.423. .spec.receivers[].webhookConfigs[].httpConfig.oauth2 Description OAuth2 client credentials used to fetch a token for the targets. Type object Required clientId clientSecret tokenUrl Property Type Description clientId object clientId specifies a key of a Secret or ConfigMap containing the OAuth2 client's ID. clientSecret object clientSecret specifies a key of a Secret containing the OAuth2 client's secret. endpointParams object (string) endpointParams configures the HTTP parameters to append to the token URL. 
noProxy string noProxy is a comma-separated string that can contain IPs, CIDR notation, domain names that should be excluded from proxying. IP and domain names can contain port numbers. It requires Prometheus >= v2.43.0 or Alertmanager >= 0.25.0. proxyConnectHeader object ProxyConnectHeader optionally specifies headers to send to proxies during CONNECT requests. It requires Prometheus >= v2.43.0 or Alertmanager >= 0.25.0. proxyConnectHeader{} array proxyConnectHeader{}[] object SecretKeySelector selects a key of a Secret. proxyFromEnvironment boolean Whether to use the proxy configuration defined by environment variables (HTTP_PROXY, HTTPS_PROXY, and NO_PROXY). It requires Prometheus >= v2.43.0 or Alertmanager >= 0.25.0. proxyUrl string proxyURL defines the HTTP proxy server to use. scopes array (string) scopes defines the OAuth2 scopes used for the token request. tlsConfig object TLS configuration to use when connecting to the OAuth2 server. It requires Prometheus >= v2.43.0. tokenUrl string tokenURL configures the URL to fetch the token from. 3.1.424. .spec.receivers[].webhookConfigs[].httpConfig.oauth2.clientId Description clientId specifies a key of a Secret or ConfigMap containing the OAuth2 client's ID. Type object Property Type Description configMap object ConfigMap containing data to use for the targets. secret object Secret containing data to use for the targets. 3.1.425. .spec.receivers[].webhookConfigs[].httpConfig.oauth2.clientId.configMap Description ConfigMap containing data to use for the targets. Type object Required key Property Type Description key string The key to select. name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names optional boolean Specify whether the ConfigMap or its key must be defined 3.1.426. .spec.receivers[].webhookConfigs[].httpConfig.oauth2.clientId.secret Description Secret containing data to use for the targets. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names optional boolean Specify whether the Secret or its key must be defined 3.1.427. .spec.receivers[].webhookConfigs[].httpConfig.oauth2.clientSecret Description clientSecret specifies a key of a Secret containing the OAuth2 client's secret. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names optional boolean Specify whether the Secret or its key must be defined 3.1.428. .spec.receivers[].webhookConfigs[].httpConfig.oauth2.proxyConnectHeader Description ProxyConnectHeader optionally specifies headers to send to proxies during CONNECT requests. It requires Prometheus >= v2.43.0 or Alertmanager >= 0.25.0. Type object 3.1.429. 
.spec.receivers[].webhookConfigs[].httpConfig.oauth2.proxyConnectHeader{} Description Type array 3.1.430. .spec.receivers[].webhookConfigs[].httpConfig.oauth2.proxyConnectHeader{}[] Description SecretKeySelector selects a key of a Secret. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names optional boolean Specify whether the Secret or its key must be defined 3.1.431. .spec.receivers[].webhookConfigs[].httpConfig.oauth2.tlsConfig Description TLS configuration to use when connecting to the OAuth2 server. It requires Prometheus >= v2.43.0. Type object Property Type Description ca object Certificate authority used when verifying server certificates. cert object Client certificate to present when doing client-authentication. insecureSkipVerify boolean Disable target certificate validation. keySecret object Secret containing the client key file for the targets. maxVersion string Maximum acceptable TLS version. It requires Prometheus >= v2.41.0. minVersion string Minimum acceptable TLS version. It requires Prometheus >= v2.35.0. serverName string Used to verify the hostname for the targets. 3.1.432. .spec.receivers[].webhookConfigs[].httpConfig.oauth2.tlsConfig.ca Description Certificate authority used when verifying server certificates. Type object Property Type Description configMap object ConfigMap containing data to use for the targets. secret object Secret containing data to use for the targets. 3.1.433. .spec.receivers[].webhookConfigs[].httpConfig.oauth2.tlsConfig.ca.configMap Description ConfigMap containing data to use for the targets. Type object Required key Property Type Description key string The key to select. name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names optional boolean Specify whether the ConfigMap or its key must be defined 3.1.434. .spec.receivers[].webhookConfigs[].httpConfig.oauth2.tlsConfig.ca.secret Description Secret containing data to use for the targets. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names optional boolean Specify whether the Secret or its key must be defined 3.1.435. .spec.receivers[].webhookConfigs[].httpConfig.oauth2.tlsConfig.cert Description Client certificate to present when doing client-authentication. Type object Property Type Description configMap object ConfigMap containing data to use for the targets. secret object Secret containing data to use for the targets. 3.1.436. .spec.receivers[].webhookConfigs[].httpConfig.oauth2.tlsConfig.cert.configMap Description ConfigMap containing data to use for the targets. 
Type object Required key Property Type Description key string The key to select. name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names optional boolean Specify whether the ConfigMap or its key must be defined 3.1.437. .spec.receivers[].webhookConfigs[].httpConfig.oauth2.tlsConfig.cert.secret Description Secret containing data to use for the targets. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names optional boolean Specify whether the Secret or its key must be defined 3.1.438. .spec.receivers[].webhookConfigs[].httpConfig.oauth2.tlsConfig.keySecret Description Secret containing the client key file for the targets. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names optional boolean Specify whether the Secret or its key must be defined 3.1.439. .spec.receivers[].webhookConfigs[].httpConfig.proxyConnectHeader Description ProxyConnectHeader optionally specifies headers to send to proxies during CONNECT requests. It requires Prometheus >= v2.43.0 or Alertmanager >= 0.25.0. Type object 3.1.440. .spec.receivers[].webhookConfigs[].httpConfig.proxyConnectHeader{} Description Type array 3.1.441. .spec.receivers[].webhookConfigs[].httpConfig.proxyConnectHeader{}[] Description SecretKeySelector selects a key of a Secret. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names optional boolean Specify whether the Secret or its key must be defined 3.1.442. .spec.receivers[].webhookConfigs[].httpConfig.tlsConfig Description TLS configuration for the client. Type object Property Type Description ca object Certificate authority used when verifying server certificates. cert object Client certificate to present when doing client-authentication. insecureSkipVerify boolean Disable target certificate validation. keySecret object Secret containing the client key file for the targets. maxVersion string Maximum acceptable TLS version. It requires Prometheus >= v2.41.0. minVersion string Minimum acceptable TLS version. It requires Prometheus >= v2.35.0. serverName string Used to verify the hostname for the targets. 3.1.443. .spec.receivers[].webhookConfigs[].httpConfig.tlsConfig.ca Description Certificate authority used when verifying server certificates. 
Type object Property Type Description configMap object ConfigMap containing data to use for the targets. secret object Secret containing data to use for the targets. 3.1.444. .spec.receivers[].webhookConfigs[].httpConfig.tlsConfig.ca.configMap Description ConfigMap containing data to use for the targets. Type object Required key Property Type Description key string The key to select. name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names optional boolean Specify whether the ConfigMap or its key must be defined 3.1.445. .spec.receivers[].webhookConfigs[].httpConfig.tlsConfig.ca.secret Description Secret containing data to use for the targets. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names optional boolean Specify whether the Secret or its key must be defined 3.1.446. .spec.receivers[].webhookConfigs[].httpConfig.tlsConfig.cert Description Client certificate to present when doing client-authentication. Type object Property Type Description configMap object ConfigMap containing data to use for the targets. secret object Secret containing data to use for the targets. 3.1.447. .spec.receivers[].webhookConfigs[].httpConfig.tlsConfig.cert.configMap Description ConfigMap containing data to use for the targets. Type object Required key Property Type Description key string The key to select. name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names optional boolean Specify whether the ConfigMap or its key must be defined 3.1.448. .spec.receivers[].webhookConfigs[].httpConfig.tlsConfig.cert.secret Description Secret containing data to use for the targets. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names optional boolean Specify whether the Secret or its key must be defined 3.1.449. .spec.receivers[].webhookConfigs[].httpConfig.tlsConfig.keySecret Description Secret containing the client key file for the targets. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. 
More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names optional boolean Specify whether the Secret or its key must be defined 3.1.450. .spec.receivers[].webhookConfigs[].urlSecret Description The secret's key that contains the webhook URL to send HTTP requests to. urlSecret takes precedence over url . One of urlSecret and url should be defined. The secret needs to be in the same namespace as the AlertmanagerConfig object and accessible by the Prometheus Operator. Type object Required key name Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string The name of the secret in the object's namespace to select from. 3.1.451. .spec.receivers[].wechatConfigs Description List of WeChat configurations. Type array 3.1.452. .spec.receivers[].wechatConfigs[] Description WeChatConfig configures notifications via WeChat. See https://prometheus.io/docs/alerting/latest/configuration/#wechat_config Type object Property Type Description agentID string apiSecret object The secret's key that contains the WeChat API key. The secret needs to be in the same namespace as the AlertmanagerConfig object and accessible by the Prometheus Operator. apiURL string The WeChat API URL. corpID string The corp id for authentication. httpConfig object HTTP client configuration. message string API request data as defined by the WeChat API. messageType string sendResolved boolean Whether or not to notify about resolved alerts. toParty string toTag string toUser string 3.1.453. .spec.receivers[].wechatConfigs[].apiSecret Description The secret's key that contains the WeChat API key. The secret needs to be in the same namespace as the AlertmanagerConfig object and accessible by the Prometheus Operator. Type object Required key name Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string The name of the secret in the object's namespace to select from. 3.1.454. .spec.receivers[].wechatConfigs[].httpConfig Description HTTP client configuration. Type object Property Type Description authorization object Authorization header configuration for the client. This is mutually exclusive with BasicAuth and is only available starting from Alertmanager v0.22+. basicAuth object BasicAuth for the client. This is mutually exclusive with Authorization. If both are defined, BasicAuth takes precedence. bearerTokenSecret object The secret's key that contains the bearer token to be used by the client for authentication. The secret needs to be in the same namespace as the AlertmanagerConfig object and accessible by the Prometheus Operator. followRedirects boolean FollowRedirects specifies whether the client should follow HTTP 3xx redirects. noProxy string noProxy is a comma-separated string that can contain IPs, CIDR notation, domain names that should be excluded from proxying. IP and domain names can contain port numbers. It requires Prometheus >= v2.43.0 or Alertmanager >= 0.25.0. oauth2 object OAuth2 client credentials used to fetch a token for the targets. proxyConnectHeader object ProxyConnectHeader optionally specifies headers to send to proxies during CONNECT requests. It requires Prometheus >= v2.43.0 or Alertmanager >= 0.25.0. proxyConnectHeader{} array proxyConnectHeader{}[] object SecretKeySelector selects a key of a Secret. proxyFromEnvironment boolean Whether to use the proxy configuration defined by environment variables (HTTP_PROXY, HTTPS_PROXY, and NO_PROXY). 
It requires Prometheus >= v2.43.0 or Alertmanager >= 0.25.0. proxyUrl string proxyURL defines the HTTP proxy server to use. tlsConfig object TLS configuration for the client. 3.1.455. .spec.receivers[].wechatConfigs[].httpConfig.authorization Description Authorization header configuration for the client. This is mutually exclusive with BasicAuth and is only available starting from Alertmanager v0.22+. Type object Property Type Description credentials object Selects a key of a Secret in the namespace that contains the credentials for authentication. type string Defines the authentication type. The value is case-insensitive. "Basic" is not a supported value. Default: "Bearer" 3.1.456. .spec.receivers[].wechatConfigs[].httpConfig.authorization.credentials Description Selects a key of a Secret in the namespace that contains the credentials for authentication. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names optional boolean Specify whether the Secret or its key must be defined 3.1.457. .spec.receivers[].wechatConfigs[].httpConfig.basicAuth Description BasicAuth for the client. This is mutually exclusive with Authorization. If both are defined, BasicAuth takes precedence. Type object Property Type Description password object password specifies a key of a Secret containing the password for authentication. username object username specifies a key of a Secret containing the username for authentication. 3.1.458. .spec.receivers[].wechatConfigs[].httpConfig.basicAuth.password Description password specifies a key of a Secret containing the password for authentication. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names optional boolean Specify whether the Secret or its key must be defined 3.1.459. .spec.receivers[].wechatConfigs[].httpConfig.basicAuth.username Description username specifies a key of a Secret containing the username for authentication. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names optional boolean Specify whether the Secret or its key must be defined 3.1.460. .spec.receivers[].wechatConfigs[].httpConfig.bearerTokenSecret Description The secret's key that contains the bearer token to be used by the client for authentication. The secret needs to be in the same namespace as the AlertmanagerConfig object and accessible by the Prometheus Operator. Type object Required key name Property Type Description key string The key of the secret to select from. Must be a valid secret key. 
name string The name of the secret in the object's namespace to select from. 3.1.461. .spec.receivers[].wechatConfigs[].httpConfig.oauth2 Description OAuth2 client credentials used to fetch a token for the targets. Type object Required clientId clientSecret tokenUrl Property Type Description clientId object clientId specifies a key of a Secret or ConfigMap containing the OAuth2 client's ID. clientSecret object clientSecret specifies a key of a Secret containing the OAuth2 client's secret. endpointParams object (string) endpointParams configures the HTTP parameters to append to the token URL. noProxy string noProxy is a comma-separated string that can contain IPs, CIDR notation, domain names that should be excluded from proxying. IP and domain names can contain port numbers. It requires Prometheus >= v2.43.0 or Alertmanager >= 0.25.0. proxyConnectHeader object ProxyConnectHeader optionally specifies headers to send to proxies during CONNECT requests. It requires Prometheus >= v2.43.0 or Alertmanager >= 0.25.0. proxyConnectHeader{} array proxyConnectHeader{}[] object SecretKeySelector selects a key of a Secret. proxyFromEnvironment boolean Whether to use the proxy configuration defined by environment variables (HTTP_PROXY, HTTPS_PROXY, and NO_PROXY). It requires Prometheus >= v2.43.0 or Alertmanager >= 0.25.0. proxyUrl string proxyURL defines the HTTP proxy server to use. scopes array (string) scopes defines the OAuth2 scopes used for the token request. tlsConfig object TLS configuration to use when connecting to the OAuth2 server. It requires Prometheus >= v2.43.0. tokenUrl string tokenURL configures the URL to fetch the token from. 3.1.462. .spec.receivers[].wechatConfigs[].httpConfig.oauth2.clientId Description clientId specifies a key of a Secret or ConfigMap containing the OAuth2 client's ID. Type object Property Type Description configMap object ConfigMap containing data to use for the targets. secret object Secret containing data to use for the targets. 3.1.463. .spec.receivers[].wechatConfigs[].httpConfig.oauth2.clientId.configMap Description ConfigMap containing data to use for the targets. Type object Required key Property Type Description key string The key to select. name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names optional boolean Specify whether the ConfigMap or its key must be defined 3.1.464. .spec.receivers[].wechatConfigs[].httpConfig.oauth2.clientId.secret Description Secret containing data to use for the targets. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names optional boolean Specify whether the Secret or its key must be defined 3.1.465. .spec.receivers[].wechatConfigs[].httpConfig.oauth2.clientSecret Description clientSecret specifies a key of a Secret containing the OAuth2 client's secret. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. 
This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names optional boolean Specify whether the Secret or its key must be defined 3.1.466. .spec.receivers[].wechatConfigs[].httpConfig.oauth2.proxyConnectHeader Description ProxyConnectHeader optionally specifies headers to send to proxies during CONNECT requests. It requires Prometheus >= v2.43.0 or Alertmanager >= 0.25.0. Type object 3.1.467. .spec.receivers[].wechatConfigs[].httpConfig.oauth2.proxyConnectHeader{} Description Type array 3.1.468. .spec.receivers[].wechatConfigs[].httpConfig.oauth2.proxyConnectHeader{}[] Description SecretKeySelector selects a key of a Secret. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names optional boolean Specify whether the Secret or its key must be defined 3.1.469. .spec.receivers[].wechatConfigs[].httpConfig.oauth2.tlsConfig Description TLS configuration to use when connecting to the OAuth2 server. It requires Prometheus >= v2.43.0. Type object Property Type Description ca object Certificate authority used when verifying server certificates. cert object Client certificate to present when doing client-authentication. insecureSkipVerify boolean Disable target certificate validation. keySecret object Secret containing the client key file for the targets. maxVersion string Maximum acceptable TLS version. It requires Prometheus >= v2.41.0. minVersion string Minimum acceptable TLS version. It requires Prometheus >= v2.35.0. serverName string Used to verify the hostname for the targets. 3.1.470. .spec.receivers[].wechatConfigs[].httpConfig.oauth2.tlsConfig.ca Description Certificate authority used when verifying server certificates. Type object Property Type Description configMap object ConfigMap containing data to use for the targets. secret object Secret containing data to use for the targets. 3.1.471. .spec.receivers[].wechatConfigs[].httpConfig.oauth2.tlsConfig.ca.configMap Description ConfigMap containing data to use for the targets. Type object Required key Property Type Description key string The key to select. name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names optional boolean Specify whether the ConfigMap or its key must be defined 3.1.472. .spec.receivers[].wechatConfigs[].httpConfig.oauth2.tlsConfig.ca.secret Description Secret containing data to use for the targets. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. 
More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names optional boolean Specify whether the Secret or its key must be defined 3.1.473. .spec.receivers[].wechatConfigs[].httpConfig.oauth2.tlsConfig.cert Description Client certificate to present when doing client-authentication. Type object Property Type Description configMap object ConfigMap containing data to use for the targets. secret object Secret containing data to use for the targets. 3.1.474. .spec.receivers[].wechatConfigs[].httpConfig.oauth2.tlsConfig.cert.configMap Description ConfigMap containing data to use for the targets. Type object Required key Property Type Description key string The key to select. name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names optional boolean Specify whether the ConfigMap or its key must be defined 3.1.475. .spec.receivers[].wechatConfigs[].httpConfig.oauth2.tlsConfig.cert.secret Description Secret containing data to use for the targets. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names optional boolean Specify whether the Secret or its key must be defined 3.1.476. .spec.receivers[].wechatConfigs[].httpConfig.oauth2.tlsConfig.keySecret Description Secret containing the client key file for the targets. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names optional boolean Specify whether the Secret or its key must be defined 3.1.477. .spec.receivers[].wechatConfigs[].httpConfig.proxyConnectHeader Description ProxyConnectHeader optionally specifies headers to send to proxies during CONNECT requests. It requires Prometheus >= v2.43.0 or Alertmanager >= 0.25.0. Type object 3.1.478. .spec.receivers[].wechatConfigs[].httpConfig.proxyConnectHeader{} Description Type array 3.1.479. .spec.receivers[].wechatConfigs[].httpConfig.proxyConnectHeader{}[] Description SecretKeySelector selects a key of a Secret. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names optional boolean Specify whether the Secret or its key must be defined 3.1.480. .spec.receivers[].wechatConfigs[].httpConfig.tlsConfig Description TLS configuration for the client. Type object Property Type Description ca object Certificate authority used when verifying server certificates. 
cert object Client certificate to present when doing client-authentication. insecureSkipVerify boolean Disable target certificate validation. keySecret object Secret containing the client key file for the targets. maxVersion string Maximum acceptable TLS version. It requires Prometheus >= v2.41.0. minVersion string Minimum acceptable TLS version. It requires Prometheus >= v2.35.0. serverName string Used to verify the hostname for the targets. 3.1.481. .spec.receivers[].wechatConfigs[].httpConfig.tlsConfig.ca Description Certificate authority used when verifying server certificates. Type object Property Type Description configMap object ConfigMap containing data to use for the targets. secret object Secret containing data to use for the targets. 3.1.482. .spec.receivers[].wechatConfigs[].httpConfig.tlsConfig.ca.configMap Description ConfigMap containing data to use for the targets. Type object Required key Property Type Description key string The key to select. name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names optional boolean Specify whether the ConfigMap or its key must be defined 3.1.483. .spec.receivers[].wechatConfigs[].httpConfig.tlsConfig.ca.secret Description Secret containing data to use for the targets. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names optional boolean Specify whether the Secret or its key must be defined 3.1.484. .spec.receivers[].wechatConfigs[].httpConfig.tlsConfig.cert Description Client certificate to present when doing client-authentication. Type object Property Type Description configMap object ConfigMap containing data to use for the targets. secret object Secret containing data to use for the targets. 3.1.485. .spec.receivers[].wechatConfigs[].httpConfig.tlsConfig.cert.configMap Description ConfigMap containing data to use for the targets. Type object Required key Property Type Description key string The key to select. name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names optional boolean Specify whether the ConfigMap or its key must be defined 3.1.486. .spec.receivers[].wechatConfigs[].httpConfig.tlsConfig.cert.secret Description Secret containing data to use for the targets. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names optional boolean Specify whether the Secret or its key must be defined 3.1.487. 
.spec.receivers[].wechatConfigs[].httpConfig.tlsConfig.keySecret Description Secret containing the client key file for the targets. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names optional boolean Specify whether the Secret or its key must be defined

3.1.488. .spec.route
Description The Alertmanager route definition for alerts matching the resource's namespace. If present, it will be added to the generated Alertmanager configuration as a first-level route. Type object Property Type Description activeTimeIntervals array (string) ActiveTimeIntervals is a list of TimeInterval names when this route should be active. continue boolean Boolean indicating whether an alert should continue matching subsequent sibling nodes. It will always be overridden to true for the first-level route by the Prometheus operator. groupBy array (string) List of labels to group by. Labels must not be repeated (unique list). Special label "..." (aggregate by all possible labels), if provided, must be the only element in the list. groupInterval string How long to wait before sending an updated notification. Must match the regular expression `^(([0-9]+)y)?(([0-9]+)w)?(([0-9]+)d)?(([0-9]+)h)?(([0-9]+)m)?(([0-9]+)s)?(([0-9]+)ms)?$` Example: "5m" groupWait string How long to wait before sending the initial notification. Must match the regular expression `^(([0-9]+)y)?(([0-9]+)w)?(([0-9]+)d)?(([0-9]+)h)?(([0-9]+)m)?(([0-9]+)s)?(([0-9]+)ms)?$` Example: "30s" matchers array List of matchers that the alert's labels should match. For the first level route, the operator removes any existing equality and regexp matcher on the namespace label and adds a namespace: <object namespace> matcher. matchers[] object Matcher defines how to match on alert's labels. muteTimeIntervals array (string) Note: this comment applies to the field definition above but appears below otherwise it gets included in the generated manifest. CRD schema doesn't support self-referential types for now (see https://github.com/kubernetes/kubernetes/issues/62872 ). We have to use an alternative type to circumvent the limitation. The downside is that the Kube API can't validate the data beyond the fact that it is a valid JSON representation. MuteTimeIntervals is a list of TimeInterval names that will mute this route when matched. receiver string Name of the receiver for this route. If not empty, it should be listed in the receivers field. repeatInterval string How long to wait before repeating the last notification. Must match the regular expression `^(([0-9]+)y)?(([0-9]+)w)?(([0-9]+)d)?(([0-9]+)h)?(([0-9]+)m)?(([0-9]+)s)?(([0-9]+)ms)?$` Example: "4h" routes array (undefined) Child routes.
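To make the route fields above concrete, here is a hedged sketch of the spec.route portion of an AlertmanagerConfig. The receiver name and label values are placeholders, and the matcher fields ( name , value , matchType ) are detailed in the next subsections.

spec:
  route:
    receiver: team-webhook          # must be listed in the receivers field
    groupBy:
    - alertname
    - job
    groupWait: 30s                  # durations follow the format above, for example "30s"
    groupInterval: 5m
    repeatInterval: 4h
    matchers:
    - name: severity                # placeholder label name
      value: critical               # placeholder label value
      matchType: "="
    routes:                         # child routes use the same structure
    - receiver: team-webhook
      matchers:
      - name: team
        value: frontend
        matchType: "="

The operator automatically adds a namespace matcher to this first-level route, so the matchers only need to select on the labels that distinguish the alerts.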
Negative operators ( != and !~ ) require Alertmanager >= v0.22.0. name string Label to match. value string Label value to match. 3.1.491. .spec.timeIntervals Description List of TimeInterval specifying when the routes should be muted or active. Type array 3.1.492. .spec.timeIntervals[] Description TimeInterval specifies the periods in time when notifications will be muted or active. Type object Required name Property Type Description name string Name of the time interval. timeIntervals array TimeIntervals is a list of TimePeriod. timeIntervals[] object TimePeriod describes periods of time. 3.1.493. .spec.timeIntervals[].timeIntervals Description TimeIntervals is a list of TimePeriod. Type array 3.1.494. .spec.timeIntervals[].timeIntervals[] Description TimePeriod describes periods of time. Type object Property Type Description daysOfMonth array DaysOfMonth is a list of DayOfMonthRange daysOfMonth[] object DayOfMonthRange is an inclusive range of days of the month beginning at 1 months array (string) Months is a list of MonthRange times array Times is a list of TimeRange times[] object TimeRange defines a start and end time in 24hr format weekdays array (string) Weekdays is a list of WeekdayRange years array (string) Years is a list of YearRange 3.1.495. .spec.timeIntervals[].timeIntervals[].daysOfMonth Description DaysOfMonth is a list of DayOfMonthRange Type array 3.1.496. .spec.timeIntervals[].timeIntervals[].daysOfMonth[] Description DayOfMonthRange is an inclusive range of days of the month beginning at 1 Type object Property Type Description end integer End of the inclusive range start integer Start of the inclusive range 3.1.497. .spec.timeIntervals[].timeIntervals[].times Description Times is a list of TimeRange Type array 3.1.498. .spec.timeIntervals[].timeIntervals[].times[] Description TimeRange defines a start and end time in 24hr format Type object Property Type Description endTime string EndTime is the end time in 24hr format. startTime string StartTime is the start time in 24hr format. 3.2. API endpoints The following API endpoints are available: /apis/monitoring.coreos.com/v1beta1/alertmanagerconfigs GET : list objects of kind AlertmanagerConfig /apis/monitoring.coreos.com/v1beta1/namespaces/{namespace}/alertmanagerconfigs DELETE : delete collection of AlertmanagerConfig GET : list objects of kind AlertmanagerConfig POST : create an AlertmanagerConfig /apis/monitoring.coreos.com/v1beta1/namespaces/{namespace}/alertmanagerconfigs/{name} DELETE : delete an AlertmanagerConfig GET : read the specified AlertmanagerConfig PATCH : partially update the specified AlertmanagerConfig PUT : replace the specified AlertmanagerConfig 3.2.1. /apis/monitoring.coreos.com/v1beta1/alertmanagerconfigs HTTP method GET Description list objects of kind AlertmanagerConfig Table 3.1. HTTP responses HTTP code Response body 200 - OK AlertmanagerConfigList schema 401 - Unauthorized Empty 3.2.2. /apis/monitoring.coreos.com/v1beta1/namespaces/{namespace}/alertmanagerconfigs HTTP method DELETE Description delete collection of AlertmanagerConfig Table 3.2. HTTP responses HTTP code Response body 200 - OK Status schema 401 - Unauthorized Empty HTTP method GET Description list objects of kind AlertmanagerConfig Table 3.3. HTTP responses HTTP code Response body 200 - OK AlertmanagerConfigList schema 401 - Unauthorized Empty HTTP method POST Description create an AlertmanagerConfig Table 3.4. 
Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 3.5. Body parameters Parameter Type Description body AlertmanagerConfig schema Table 3.6. HTTP responses HTTP code Response body 200 - OK AlertmanagerConfig schema 201 - Created AlertmanagerConfig schema 202 - Accepted AlertmanagerConfig schema 401 - Unauthorized Empty 3.2.3. /apis/monitoring.coreos.com/v1beta1/namespaces/{namespace}/alertmanagerconfigs/{name} Table 3.7. Global path parameters Parameter Type Description name string name of the AlertmanagerConfig HTTP method DELETE Description delete an AlertmanagerConfig Table 3.8. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed Table 3.9. HTTP responses HTTP code Response body 200 - OK Status schema 202 - Accepted Status schema 401 - Unauthorized Empty HTTP method GET Description read the specified AlertmanagerConfig Table 3.10. HTTP responses HTTP code Response body 200 - OK AlertmanagerConfig schema 401 - Unauthorized Empty HTTP method PATCH Description partially update the specified AlertmanagerConfig Table 3.11. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. 
This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 3.12. HTTP responses HTTP code Response body 200 - OK AlertmanagerConfig schema 401 - Unauthorized Empty HTTP method PUT Description replace the specified AlertmanagerConfig Table 3.13. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 3.14. Body parameters Parameter Type Description body AlertmanagerConfig schema Table 3.15. HTTP responses HTTP code Response body 200 - OK AlertmanagerConfig schema 201 - Created AlertmanagerConfig schema 401 - Unauthorized Empty
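The following minimal AlertmanagerConfig manifest is a sketch that ties the preceding schema sections together: a first-level route with matchers and grouping intervals, a mute time interval, and a WeChat receiver whose HTTP client uses the TLS settings described above. The namespace, the ConfigMap and Secret names ( wechat-tls-ca , wechat-tls-client ), and the label values are illustrative assumptions only; they are not values required by the API.
apiVersion: monitoring.coreos.com/v1beta1
kind: AlertmanagerConfig
metadata:
  name: example-config
  namespace: example-namespace        # assumed namespace; the operator adds a namespace matcher to the first-level route
spec:
  route:
    receiver: wechat-receiver          # must be listed in spec.receivers
    groupBy:
      - alertname
    groupWait: 30s                     # durations must match the pattern shown above, for example "30s"
    groupInterval: 5m
    repeatInterval: 4h
    matchers:
      - name: severity                 # label to match
        value: critical
        matchType: "="                 # one of =, !=, =~, !~
    muteTimeIntervals:
      - maintenance-window             # must refer to a name in spec.timeIntervals
  timeIntervals:
    - name: maintenance-window
      timeIntervals:
        - times:
            - startTime: "01:00"       # 24hr format
              endTime: "03:00"
          weekdays:
            - saturday
            - sunday
          daysOfMonth:
            - start: 1                 # inclusive range of days of the month
              end: 7
  receivers:
    - name: wechat-receiver
      wechatConfigs:
        - httpConfig:
            tlsConfig:
              minVersion: TLS12                  # requires Prometheus >= v2.35.0
              serverName: qyapi.weixin.qq.com    # assumed hostname used to verify the targets
              ca:
                configMap:
                  name: wechat-tls-ca            # assumed ConfigMap name
                  key: ca.crt
              cert:
                secret:
                  name: wechat-tls-client        # assumed Secret name
                  key: tls.crt
              keySecret:
                name: wechat-tls-client
                key: tls.key
When the Prometheus operator merges this resource into the generated Alertmanager configuration, it overrides continue to true on the first-level route and adds a namespace matcher for example-namespace , as described in the spec.route and spec.route.matchers sections above.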
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.18/html/monitoring_apis/alertmanagerconfig-monitoring-coreos-com-v1beta1
Provisioning APIs
Provisioning APIs OpenShift Container Platform 4.13 Reference guide for provisioning APIs Red Hat OpenShift Documentation Team
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.13/html/provisioning_apis/index
Chapter 14. Installation configuration parameters for GCP
Chapter 14. Installation configuration parameters for GCP Before you deploy an OpenShift Container Platform cluster on Google Cloud Platform (GCP), you provide parameters to customize your cluster and the platform that hosts it. When you create the install-config.yaml file, you provide values for the required parameters through the command line. You can then modify the install-config.yaml file to customize your cluster further. 14.1. Available installation configuration parameters for GCP The following tables specify the required, optional, and GCP-specific installation configuration parameters that you can set as part of the installation process. Note After installation, you cannot modify these parameters in the install-config.yaml file. 14.1.1. Required configuration parameters Required installation configuration parameters are described in the following table: Table 14.1. Required parameters Parameter Description Values The API version for the install-config.yaml content. The current version is v1 . The installation program may also support older API versions. String The base domain of your cloud provider. The base domain is used to create routes to your OpenShift Container Platform cluster components. The full DNS name for your cluster is a combination of the baseDomain and metadata.name parameter values that uses the <metadata.name>.<baseDomain> format. A fully-qualified domain or subdomain name, such as example.com . Kubernetes resource ObjectMeta , from which only the name parameter is consumed. Object The name of the cluster. DNS records for the cluster are all subdomains of {{.metadata.name}}.{{.baseDomain}} . String of lowercase letters, hyphens ( - ), and periods ( . ), such as dev . The configuration for the specific platform upon which to perform the installation: alibabacloud , aws , baremetal , azure , gcp , ibmcloud , nutanix , openstack , powervs , vsphere , or {} . For additional information about platform.<platform> parameters, consult the table for your specific platform that follows. Object Get a pull secret from Red Hat OpenShift Cluster Manager to authenticate downloading container images for OpenShift Container Platform components from services such as Quay.io. { "auths":{ "cloud.openshift.com":{ "auth":"b3Blb=", "email":"[email protected]" }, "quay.io":{ "auth":"b3Blb=", "email":"[email protected]" } } } 14.1.2. Network configuration parameters You can customize your installation configuration based on the requirements of your existing network infrastructure. For example, you can expand the IP address block for the cluster network or provide different IP address blocks than the defaults. Only IPv4 addresses are supported. Note Globalnet is not supported with Red Hat OpenShift Data Foundation disaster recovery solutions. For regional disaster recovery scenarios, ensure that you use a nonoverlapping range of private IP addresses for the cluster and service networks in each cluster. Table 14.2. Network parameters Parameter Description Values The configuration for the cluster network. Object Note You cannot modify parameters specified by the networking object after installation. The Red Hat OpenShift Networking network plugin to install. Either OpenShiftSDN or OVNKubernetes . OpenShiftSDN is a CNI plugin for all-Linux networks. OVNKubernetes is a CNI plugin for Linux networks and hybrid networks that contain both Linux and Windows servers. The default value is OVNKubernetes . The IP address blocks for pods. The default value is 10.128.0.0/14 with a host prefix of /23 . 
If you specify multiple IP address blocks, the blocks must not overlap. An array of objects. For example: networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 Required if you use networking.clusterNetwork . An IP address block. An IPv4 network. An IP address block in Classless Inter-Domain Routing (CIDR) notation. The prefix length for an IPv4 block is between 0 and 32 . The subnet prefix length to assign to each individual node. For example, if hostPrefix is set to 23 then each node is assigned a /23 subnet out of the given cidr . A hostPrefix value of 23 provides 510 (2^(32 - 23) - 2) pod IP addresses. A subnet prefix. The default value is 23 . The IP address block for services. The default value is 172.30.0.0/16 . The OpenShift SDN and OVN-Kubernetes network plugins support only a single IP address block for the service network. An array with an IP address block in CIDR format. For example: networking: serviceNetwork: - 172.30.0.0/16 The IP address blocks for machines. If you specify multiple IP address blocks, the blocks must not overlap. An array of objects. For example: networking: machineNetwork: - cidr: 10.0.0.0/16 Required if you use networking.machineNetwork . An IP address block. The default value is 10.0.0.0/16 for all platforms other than libvirt and IBM Power(R) Virtual Server. For libvirt, the default value is 192.168.126.0/24 . For IBM Power(R) Virtual Server, the default value is 192.168.0.0/24 . An IP network block in CIDR notation. For example, 10.0.0.0/16 . Note Set the networking.machineNetwork to match the CIDR that the preferred NIC resides in. 14.1.3. Optional configuration parameters Optional installation configuration parameters are described in the following table: Table 14.3. Optional parameters Parameter Description Values A PEM-encoded X.509 certificate bundle that is added to the nodes' trusted certificate store. This trust bundle may also be used when a proxy has been configured. String Controls the installation of optional core cluster components. You can reduce the footprint of your OpenShift Container Platform cluster by disabling optional components. For more information, see the "Cluster capabilities" page in Installing . String array Selects an initial set of optional capabilities to enable. Valid values are None , v4.11 , v4.12 and vCurrent . The default value is vCurrent . String Extends the set of optional capabilities beyond what you specify in baselineCapabilitySet . You may specify multiple capabilities in this parameter. String array Enables workload partitioning, which isolates OpenShift Container Platform services, cluster management workloads, and infrastructure pods to run on a reserved set of CPUs. Workload partitioning can only be enabled during installation and cannot be disabled after installation. While this field enables workload partitioning, it does not configure workloads to use specific CPUs. For more information, see the Workload partitioning page in the Scalability and Performance section. None or AllNodes . None is the default value. The configuration for the machines that comprise the compute nodes. Array of MachinePool objects. Determines the instruction set architecture of the machines in the pool. Currently, clusters with varied architectures are not supported. All pools must specify the same architecture. Valid values are amd64 and arm64 . String Whether to enable or disable simultaneous multithreading, or hyperthreading , on compute machines. 
By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. Important If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance. Enabled or Disabled Required if you use compute . The name of the machine pool. worker Required if you use compute . Use this parameter to specify the cloud provider to host the worker machines. This parameter value must match the controlPlane.platform parameter value. alibabacloud , aws , azure , gcp , ibmcloud , nutanix , openstack , powervs , vsphere , or {} The number of compute machines, which are also known as worker machines, to provision. A positive integer greater than or equal to 2 . The default value is 3 . Enables the cluster for a feature set. A feature set is a collection of OpenShift Container Platform features that are not enabled by default. For more information about enabling a feature set during installation, see "Enabling features using feature gates". String. The name of the feature set to enable, such as TechPreviewNoUpgrade . The configuration for the machines that comprise the control plane. Array of MachinePool objects. Determines the instruction set architecture of the machines in the pool. Currently, clusters with varied architectures are not supported. All pools must specify the same architecture. Valid values are amd64 and arm64 . String Whether to enable or disable simultaneous multithreading, or hyperthreading , on control plane machines. By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. Important If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance. Enabled or Disabled Required if you use controlPlane . The name of the machine pool. master Required if you use controlPlane . Use this parameter to specify the cloud provider that hosts the control plane machines. This parameter value must match the compute.platform parameter value. alibabacloud , aws , azure , gcp , ibmcloud , nutanix , openstack , powervs , vsphere , or {} The number of control plane machines to provision. Supported values are 3 , or 1 when deploying single-node OpenShift. The Cloud Credential Operator (CCO) mode. If no mode is specified, the CCO dynamically tries to determine the capabilities of the provided credentials, with a preference for mint mode on the platforms where multiple modes are supported. Mint , Passthrough , Manual or an empty string ( "" ). [1] Enable or disable FIPS mode. The default is false (disabled). If FIPS mode is enabled, the Red Hat Enterprise Linux CoreOS (RHCOS) machines that OpenShift Container Platform runs on bypass the default Kubernetes cryptography suite and use the cryptography modules that are provided with RHCOS instead. Important To enable FIPS mode for your cluster, you must run the installation program from a Red Hat Enterprise Linux (RHEL) computer configured to operate in FIPS mode. For more information about configuring FIPS mode on RHEL, see Installing the system in FIPS mode . When running Red Hat Enterprise Linux (RHEL) or Red Hat Enterprise Linux CoreOS (RHCOS) booted in FIPS mode, OpenShift Container Platform core components use the RHEL cryptographic libraries that have been submitted to NIST for FIPS 140-2/140-3 Validation on only the x86_64, ppc64le, and s390x architectures. Note If you are using Azure File storage, you cannot enable FIPS mode. 
false or true Sources and repositories for the release-image content. Array of objects. Includes a source and, optionally, mirrors , as described in the following rows of this table. Required if you use imageContentSources . Specify the repository that users refer to, for example, in image pull specifications. String Specify one or more repositories that may also contain the same images. Array of strings How to publish or expose the user-facing endpoints of your cluster, such as the Kubernetes API, OpenShift routes. Internal or External . To deploy a private cluster, which cannot be accessed from the internet, set publish to Internal . The default value is External . The SSH key to authenticate access to your cluster machines. Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. For example, sshKey: ssh-ed25519 AAAA.. . Not all CCO modes are supported for all cloud providers. For more information about CCO modes, see the "Managing cloud provider credentials" entry in the Authentication and authorization content. Note If you are installing on GCP into a shared virtual private cloud (VPC), credentialsMode must be set to Passthrough or Manual . Important Setting this parameter to Manual enables alternatives to storing administrator-level secrets in the kube-system project, which require additional configuration steps. For more information, see "Alternatives to storing administrator-level secrets in the kube-system project". 14.1.4. Additional Google Cloud Platform (GCP) configuration parameters Additional GCP configuration parameters are described in the following table: Table 14.4. Additional GCP parameters Parameter Description Values Optional. By default, the installation program downloads and installs the Red Hat Enterprise Linux CoreOS (RHCOS) image that is used to boot control plane machines. You can override the default behavior by specifying the location of a custom RHCOS image that the installation program is to use for control plane machines only. String. The name of GCP project where the image is located. The name of the custom RHCOS image that the installation program is to use to boot control plane machines. If you use controlPlane.platform.gcp.osImage.project , this field is required. String. The name of the RHCOS image. Optional. By default, the installation program downloads and installs the RHCOS image that is used to boot compute machines. You can override the default behavior by specifying the location of a custom RHCOS image that the installation program is to use for compute machines only. String. The name of GCP project where the image is located. The name of the custom RHCOS image that the installation program is to use to boot compute machines. If you use compute.platform.gcp.osImage.project , this field is required. String. The name of the RHCOS image. The name of the existing Virtual Private Cloud (VPC) where you want to deploy your cluster. If you want to deploy your cluster into a shared VPC, you must set platform.gcp.networkProjectID with the name of the GCP project that contains the shared VPC. String. Optional. The name of the GCP project that contains the shared VPC where you want to deploy your cluster. String. The name of the GCP project where the installation program installs the cluster. String. The name of the GCP region that hosts your cluster. Any valid region name, such as us-central1 . 
The name of the existing subnet where you want to deploy your control plane machines. The subnet name. The name of the existing subnet where you want to deploy your compute machines. The subnet name. The availability zones where the installation program creates machines. A list of valid GCP availability zones , such as us-central1-a , in a YAML sequence . Important When running your cluster on GCP 64-bit ARM infrastructures, ensure that you use a zone where Ampere Altra Arm CPU's are available. You can find which zones are compatible with 64-bit ARM processors in the "GCP availability zones" link. The size of the disk in gigabytes (GB). Any size between 16 GB and 65536 GB. The GCP disk type . The default disk type for all machines. Control plane nodes must use the pd-ssd disk type. Compute nodes can use the pd-ssd , pd-balanced , or pd-standard disk types. Optional. By default, the installation program downloads and installs the RHCOS image that is used to boot control plane and compute machines. You can override the default behavior by specifying the location of a custom RHCOS image that the installation program is to use for both types of machines. String. The name of GCP project where the image is located. The name of the custom RHCOS image that the installation program is to use to boot control plane and compute machines. If you use platform.gcp.defaultMachinePlatform.osImage.project , this field is required. String. The name of the RHCOS image. Optional. Additional network tags to add to the control plane and compute machines. One or more strings, for example network-tag1 . The GCP machine type for control plane and compute machines. The GCP machine type, for example n1-standard-4 . The name of the customer managed encryption key to be used for machine disk encryption. The encryption key name. The name of the Key Management Service (KMS) key ring to which the KMS key belongs. The KMS key ring name. The GCP location in which the KMS key ring exists. The GCP location. The ID of the project in which the KMS key ring exists. This value defaults to the value of the platform.gcp.projectID parameter if it is not set. The GCP project ID. The GCP service account used for the encryption request for control plane and compute machines. If absent, the Compute Engine default service account is used. For more information about GCP service accounts, see Google's documentation on service accounts . The GCP service account email, for example <service_account_name>@<project_id>.iam.gserviceaccount.com . Whether to enable Shielded VM secure boot for all machines in the cluster. Shielded VMs have additional security protocols such as secure boot, firmware and integrity monitoring, and rootkit protection. For more information on Shielded VMs, see Google's documentation on Shielded VMs . Enabled or Disabled . The default value is Disabled . Whether to use Confidential VMs for all machines in the cluster. Confidential VMs provide encryption for data during processing. For more information on Confidential computing, see Google's documentation on Confidential computing . Enabled or Disabled . The default value is Disabled . Specifies the behavior of all VMs during a host maintenance event, such as a software or hardware update. For Confidential VMs, this parameter must be set to Terminate . Confidential VMs do not support live VM migration. Terminate or Migrate . The default value is Migrate . The name of the customer managed encryption key to be used for control plane machine disk encryption. 
The encryption key name. For control plane machines, the name of the KMS key ring to which the KMS key belongs. The KMS key ring name. For control plane machines, the GCP location in which the key ring exists. For more information about KMS locations, see Google's documentation on Cloud KMS locations . The GCP location for the key ring. For control plane machines, the ID of the project in which the KMS key ring exists. This value defaults to the VM project ID if not set. The GCP project ID. The GCP service account used for the encryption request for control plane machines. If absent, the Compute Engine default service account is used. For more information about GCP service accounts, see Google's documentation on service accounts . The GCP service account email, for example <service_account_name>@<project_id>.iam.gserviceaccount.com . The size of the disk in gigabytes (GB). This value applies to control plane machines. Any integer between 16 and 65536. The GCP disk type for control plane machines. Control plane machines must use the pd-ssd disk type, which is the default. Optional. Additional network tags to add to the control plane machines. If set, this parameter overrides the platform.gcp.defaultMachinePlatform.tags parameter for control plane machines. One or more strings, for example control-plane-tag1 . The GCP machine type for control plane machines. If set, this parameter overrides the platform.gcp.defaultMachinePlatform.type parameter. The GCP machine type, for example n1-standard-4 . The availability zones where the installation program creates control plane machines. A list of valid GCP availability zones , such as us-central1-a , in a YAML sequence . Important When running your cluster on GCP 64-bit ARM infrastructures, ensure that you use a zone where Ampere Altra Arm CPU's are available. You can find which zones are compatible with 64-bit ARM processors in the "GCP availability zones" link. Whether to enable Shielded VM secure boot for control plane machines. Shielded VMs have additional security protocols such as secure boot, firmware and integrity monitoring, and rootkit protection. For more information on Shielded VMs, see Google's documentation on Shielded VMs . Enabled or Disabled . The default value is Disabled . Whether to enable Confidential VMs for control plane machines. Confidential VMs provide encryption for data while it is being processed. For more information on Confidential VMs, see Google's documentation on Confidential Computing . Enabled or Disabled . The default value is Disabled . Specifies the behavior of control plane VMs during a host maintenance event, such as a software or hardware update. For Confidential VMs, this parameter must be set to Terminate . Confidential VMs do not support live VM migration. Terminate or Migrate . The default value is Migrate . The name of the customer managed encryption key to be used for compute machine disk encryption. The encryption key name. For compute machines, the name of the KMS key ring to which the KMS key belongs. The KMS key ring name. For compute machines, the GCP location in which the key ring exists. For more information about KMS locations, see Google's documentation on Cloud KMS locations . The GCP location for the key ring. For compute machines, the ID of the project in which the KMS key ring exists. This value defaults to the VM project ID if not set. The GCP project ID. The GCP service account used for the encryption request for compute machines. 
If this value is not set, the Compute Engine default service account is used. For more information about GCP service accounts, see Google's documentation on service accounts . The GCP service account email, for example <service_account_name>@<project_id>.iam.gserviceaccount.com . The size of the disk in gigabytes (GB). This value applies to compute machines. Any integer between 16 and 65536. The GCP disk type for compute machines. pd-ssd , pd-standard , or pd-balanced . The default is pd-ssd . Optional. Additional network tags to add to the compute machines. If set, this parameter overrides the platform.gcp.defaultMachinePlatform.tags parameter for compute machines. One or more strings, for example compute-network-tag1 . The GCP machine type for compute machines. If set, this parameter overrides the platform.gcp.defaultMachinePlatform.type parameter. The GCP machine type, for example n1-standard-4 . The availability zones where the installation program creates compute machines. A list of valid GCP availability zones , such as us-central1-a , in a YAML sequence . Important When running your cluster on GCP 64-bit ARM infrastructures, ensure that you use a zone where Ampere Altra Arm CPU's are available. You can find which zones are compatible with 64-bit ARM processors in the "GCP availability zones" link. Whether to enable Shielded VM secure boot for compute machines. Shielded VMs have additional security protocols such as secure boot, firmware and integrity monitoring, and rootkit protection. For more information on Shielded VMs, see Google's documentation on Shielded VMs . Enabled or Disabled . The default value is Disabled . Whether to enable Confidential VMs for compute machines. Confidential VMs provide encryption for data while it is being processed. For more information on Confidential VMs, see Google's documentation on Confidential Computing . Enabled or Disabled . The default value is Disabled . Specifies the behavior of compute VMs during a host maintenance event, such as a software or hardware update. For Confidential VMs, this parameter must be set to Terminate . Confidential VMs do not support live VM migration. Terminate or Migrate . The default value is Migrate .
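To show how the required, network, and GCP-specific parameters described in the preceding tables fit together, the following install-config.yaml is a minimal sketch for a cluster deployed into an existing VPC. The project ID, VPC and subnet names, and the example.com base domain are illustrative assumptions, and the pullSecret and sshKey values are placeholders that you must replace with your own. Generate the initial file with the installation program, as described at the start of this chapter, and then edit it as needed.
apiVersion: v1
baseDomain: example.com
metadata:
  name: dev                                 # cluster DNS records become subdomains of dev.example.com
controlPlane:
  name: master
  architecture: amd64
  hyperthreading: Enabled
  replicas: 3
  platform:
    gcp:
      type: n1-standard-4                   # GCP machine type
      zones:
        - us-central1-a
        - us-central1-b
      osDisk:
        diskType: pd-ssd                    # control plane nodes must use pd-ssd
        diskSizeGB: 128
compute:
- name: worker
  architecture: amd64
  hyperthreading: Enabled
  replicas: 3
  platform:
    gcp:
      type: n1-standard-4
      osDisk:
        diskType: pd-balanced               # compute nodes can use pd-ssd, pd-balanced, or pd-standard
        diskSizeGB: 128
networking:
  networkType: OVNKubernetes
  clusterNetwork:
  - cidr: 10.128.0.0/14
    hostPrefix: 23
  machineNetwork:
  - cidr: 10.0.0.0/16
  serviceNetwork:
  - 172.30.0.0/16
platform:
  gcp:
    projectID: openshift-dev-project        # assumed GCP project ID
    region: us-central1
    network: dev-vpc                        # assumed existing VPC name
    controlPlaneSubnet: dev-master-subnet   # assumed existing subnet names
    computeSubnet: dev-worker-subnet
publish: External
pullSecret: '{"auths": ...}'                # placeholder; paste your pull secret
sshKey: ssh-ed25519 AAAA...                 # placeholder public key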
[ "apiVersion:", "baseDomain:", "metadata:", "metadata: name:", "platform:", "pullSecret:", "{ \"auths\":{ \"cloud.openshift.com\":{ \"auth\":\"b3Blb=\", \"email\":\"[email protected]\" }, \"quay.io\":{ \"auth\":\"b3Blb=\", \"email\":\"[email protected]\" } } }", "networking:", "networking: networkType:", "networking: clusterNetwork:", "networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23", "networking: clusterNetwork: cidr:", "networking: clusterNetwork: hostPrefix:", "networking: serviceNetwork:", "networking: serviceNetwork: - 172.30.0.0/16", "networking: machineNetwork:", "networking: machineNetwork: - cidr: 10.0.0.0/16", "networking: machineNetwork: cidr:", "additionalTrustBundle:", "capabilities:", "capabilities: baselineCapabilitySet:", "capabilities: additionalEnabledCapabilities:", "cpuPartitioningMode:", "compute:", "compute: architecture:", "compute: hyperthreading:", "compute: name:", "compute: platform:", "compute: replicas:", "featureSet:", "controlPlane:", "controlPlane: architecture:", "controlPlane: hyperthreading:", "controlPlane: name:", "controlPlane: platform:", "controlPlane: replicas:", "credentialsMode:", "fips:", "imageContentSources:", "imageContentSources: source:", "imageContentSources: mirrors:", "publish:", "sshKey:", "controlPlane: platform: gcp: osImage: project:", "controlPlane: platform: gcp: osImage: name:", "compute: platform: gcp: osImage: project:", "compute: platform: gcp: osImage: name:", "platform: gcp: network:", "platform: gcp: networkProjectID:", "platform: gcp: projectID:", "platform: gcp: region:", "platform: gcp: controlPlaneSubnet:", "platform: gcp: computeSubnet:", "platform: gcp: defaultMachinePlatform: zones:", "platform: gcp: defaultMachinePlatform: osDisk: diskSizeGB:", "platform: gcp: defaultMachinePlatform: osDisk: diskType:", "platform: gcp: defaultMachinePlatform: osImage: project:", "platform: gcp: defaultMachinePlatform: osImage: name:", "platform: gcp: defaultMachinePlatform: tags:", "platform: gcp: defaultMachinePlatform: type:", "platform: gcp: defaultMachinePlatform: osDisk: encryptionKey: kmsKey: name:", "platform: gcp: defaultMachinePlatform: osDisk: encryptionKey: kmsKey: keyRing:", "platform: gcp: defaultMachinePlatform: osDisk: encryptionKey: kmsKey: location:", "platform: gcp: defaultMachinePlatform: osDisk: encryptionKey: kmsKey: projectID:", "platform: gcp: defaultMachinePlatform: osDisk: encryptionKey: kmsKeyServiceAccount:", "platform: gcp: defaultMachinePlatform: secureBoot:", "platform: gcp: defaultMachinePlatform: confidentialCompute:", "platform: gcp: defaultMachinePlatform: onHostMaintenance:", "controlPlane: platform: gcp: osDisk: encryptionKey: kmsKey: name:", "controlPlane: platform: gcp: osDisk: encryptionKey: kmsKey: keyRing:", "controlPlane: platform: gcp: osDisk: encryptionKey: kmsKey: location:", "controlPlane: platform: gcp: osDisk: encryptionKey: kmsKey: projectID:", "controlPlane: platform: gcp: osDisk: encryptionKey: kmsKeyServiceAccount:", "controlPlane: platform: gcp: osDisk: diskSizeGB:", "controlPlane: platform: gcp: osDisk: diskType:", "controlPlane: platform: gcp: tags:", "controlPlane: platform: gcp: type:", "controlPlane: platform: gcp: zones:", "controlPlane: platform: gcp: secureBoot:", "controlPlane: platform: gcp: confidentialCompute:", "controlPlane: platform: gcp: onHostMaintenance:", "compute: platform: gcp: osDisk: encryptionKey: kmsKey: name:", "compute: platform: gcp: osDisk: encryptionKey: kmsKey: keyRing:", "compute: platform: gcp: osDisk: encryptionKey: kmsKey: 
location:", "compute: platform: gcp: osDisk: encryptionKey: kmsKey: projectID:", "compute: platform: gcp: osDisk: encryptionKey: kmsKeyServiceAccount:", "compute: platform: gcp: osDisk: diskSizeGB:", "compute: platform: gcp: osDisk: diskType:", "compute: platform: gcp: tags:", "compute: platform: gcp: type:", "compute: platform: gcp: zones:", "compute: platform: gcp: secureBoot:", "compute: platform: gcp: confidentialCompute:", "compute: platform: gcp: onHostMaintenance:" ]
https://docs.redhat.com/en/documentation/openshift_container_platform_installation/4.14/html/installing_on_gcp/installation-config-parameters-gcp
16.7. Detaching a Tier from a Volume (Deprecated)
16.7. Detaching a Tier from a Volume (Deprecated) Warning Tiering is considered deprecated as of Red Hat Gluster Storage 3.5. Red Hat no longer recommends its use, and does not support tiering in new deployments and existing deployments that upgrade to Red Hat Gluster Storage 3.5.3. To detach a tier, perform the following steps: Start the detach tier by executing the following command: # gluster volume tier VOLNAME detach start For example, Monitor the status of detach tier until the status displays the status as complete. # gluster volume tier VOLNAME detach status For example, Note It is possible that some files are not migrated to the cold tier on a detach operation for various reasons like POSIX locks being held on them. Check for files on the hot tier bricks and you can either manually move the files, or turn off applications (which would presumably unlock the files) and stop/start detach tier, to retry. When the tier is detached successfully as shown in the status command, run the following command to commit the tier detach: # gluster volume tier VOLNAME detach commit For example, Note When you run tier detach commit or tier detach force , ongoing I/O operations may fail with a Transport endpoint is not connected error. After the detach tier commit is completed, you can verify that the volume is no longer a tier volume by running gluster volume info command. 16.7.1. Detaching a Tier of a Geo-replicated Volume (Deprecated) Warning Tiering is considered deprecated as of Red Hat Gluster Storage 3.5. Red Hat no longer recommends its use, and does not support tiering in new deployments and existing deployments that upgrade to Red Hat Gluster Storage 3.5.3. Start the detach tier by executing the following command: # gluster volume tier VOLNAME detach start For example, Monitor the status of detach tier until the status displays the status as complete. # gluster volume tier VOLNAME detach status For example, Note There could be some number of files that were not moved. Such files may have been locked by the user, and that prevented them from moving to the cold tier on the detach operation. You must check for such files. If you find any such files, you can either manually move the files, or turn off applications (which would presumably unlock the files) and stop/start detach tier, to retry. Set a checkpoint on a geo-replication session to ensure that all the data in that cold-tier is synced to the slave. For more information on geo-replication checkpoints, see Section 10.4.4.1, "Geo-replication Checkpoints" . # gluster volume geo-replication MASTER_VOL SLAVE_HOST::SLAVE_VOL config checkpoint now For example, Use the following command to verify the checkpoint completion for the geo-replication session # gluster volume geo-replication MASTER_VOL SLAVE_HOST::SLAVE_VOL status detail Stop geo-replication between the master and slave, using the following command: # gluster volume geo-replication MASTER_VOL SLAVE_HOST::SLAVE_VOL stop For example: Commit the detach tier operation using the following command: # gluster volume tier VOLNAME detach commit For example, After the detach tier commit is completed, you can verify that the volume is no longer a tier volume by running gluster volume info command. Restart the geo-replication sessions, using the following command: # gluster volume geo-replication MASTER_VOL SLAVE_HOST::SLAVE_VOL start For example,
[ "gluster volume tier test-volume detach start", "gluster volume tier test-volume detach status Node Rebalanced-files size scanned failures skipped status run time in secs -------- ----------- ----------- ----------- ----------- ----------- ------------ -------------- localhost 0 0Bytes 0 0 0 completed 0.00 server1 0 0Bytes 0 0 0 completed 1.00 server2 0 0Bytes 0 0 0 completed 0.00 server1 0 0Bytes 0 0 0 completed server2 0 0Bytes 0 0 0 completed", "gluster volume tier test-volume detach commit Removing tier can result in data loss. Do you want to Continue? (y/n) y volume detach-tier commit: success Check the detached bricks to ensure all files are migrated. If files with data are found on the brick path, copy them via a gluster mount point before re-purposing the removed brick.", "gluster volume tier test-volume detach start", "gluster volume tier test-volume detach status Node Rebalanced-files size scanned failures skipped status run time in secs -------- ----------- ----------- ----------- ----------- ----------- ------------ -------------- localhost 0 0Bytes 0 0 0 completed 0.00 server1 0 0Bytes 0 0 0 completed 1.00 server2 0 0Bytes 0 0 0 completed 0.00 server1 0 0Bytes 0 0 0 completed server2 0 0Bytes 0 0 0 completed", "gluster volume geo-replication Volume1 example.com::slave-vol config checkpoint now", "gluster volume geo-replication Volume1 example.com::slave-vol stop", "gluster volume tier test-volume detach commit Removing tier can result in data loss. Do you want to Continue? (y/n) y volume detach-tier commit: success Check the detached bricks to ensure all files are migrated. If files with data are found on the brick path, copy them via a gluster mount point before re-purposing the removed brick.", "gluster volume geo-replication Volume1 example.com::slave-vol start" ]
https://docs.redhat.com/en/documentation/red_hat_gluster_storage/3.5/html/administration_guide/chap-Managing_Data_Tiering-Detach_Tier
Chapter 8. Installing a cluster on AWS into an existing VPC
Chapter 8. Installing a cluster on AWS into an existing VPC In OpenShift Container Platform version 4.12, you can install a cluster into an existing Amazon Virtual Private Cloud (VPC) on Amazon Web Services (AWS). The installation program provisions the rest of the required infrastructure, which you can further customize. To customize the installation, you modify parameters in the install-config.yaml file before you install the cluster. 8.1. Prerequisites You reviewed details about the OpenShift Container Platform installation and update processes. You read the documentation on selecting a cluster installation method and preparing it for users . You configured an AWS account to host the cluster. Important If you have an AWS profile stored on your computer, it must not use a temporary session token that you generated while using a multi-factor authentication device. The cluster continues to use your current AWS credentials to create AWS resources for the entire life of the cluster, so you must use long-lived credentials. To generate appropriate keys, see Managing Access Keys for IAM Users in the AWS documentation. You can supply the keys when you run the installation program. If you use a firewall, you configured it to allow the sites that your cluster requires access to. If the cloud identity and access management (IAM) APIs are not accessible in your environment, or if you do not want to store an administrator-level credential secret in the kube-system namespace, you can manually create and maintain IAM credentials . 8.2. About using a custom VPC In OpenShift Container Platform 4.12, you can deploy a cluster into existing subnets in an existing Amazon Virtual Private Cloud (VPC) in Amazon Web Services (AWS). By deploying OpenShift Container Platform into an existing AWS VPC, you might be able to avoid limit constraints in new accounts or more easily abide by the operational constraints that your company's guidelines set. If you cannot obtain the infrastructure creation permissions that are required to create the VPC yourself, use this installation option. Because the installation program cannot know what other components are also in your existing subnets, it cannot choose subnet CIDRs and so forth on your behalf. You must configure networking for the subnets that you install your cluster to yourself. 8.2.1. Requirements for using your VPC The installation program no longer creates the following components: Internet gateways NAT gateways Subnets Route tables VPCs VPC DHCP options VPC endpoints Note The installation program requires that you use the cloud-provided DNS server. Using a custom DNS server is not supported and causes the installation to fail. If you use a custom VPC, you must correctly configure it and its subnets for the installation program and the cluster to use. See Amazon VPC console wizard configurations and Work with VPCs and subnets in the AWS documentation for more information on creating and managing an AWS VPC. The installation program cannot: Subdivide network ranges for the cluster to use. Set route tables for the subnets. Set VPC options like DHCP. You must complete these tasks before you install the cluster. See VPC networking components and Route tables for your VPC for more information on configuring networking in an AWS VPC. Your VPC must meet the following characteristics: Create a public and private subnet for each availability zone that your cluster uses. Each availability zone can contain no more than one public and one private subnet. 
For an example of this type of configuration, see VPC with public and private subnets (NAT) in the AWS documentation. Record each subnet ID. Completing the installation requires that you enter these values in the platform section of the install-config.yaml file. See Finding a subnet ID in the AWS documentation. The VPC's CIDR block must contain the Networking.MachineCIDR range, which is the IP address pool for cluster machines. The subnet CIDR blocks must belong to the machine CIDR that you specify. The VPC must have a public internet gateway attached to it. For each availability zone: The public subnet requires a route to the internet gateway. The public subnet requires a NAT gateway with an EIP address. The private subnet requires a route to the NAT gateway in public subnet. The VPC must not use the kubernetes.io/cluster/.*: owned , Name , and openshift.io/cluster tags. The installation program modifies your subnets to add the kubernetes.io/cluster/.*: shared tag, so your subnets must have at least one free tag slot available for it. See Tag Restrictions in the AWS documentation to confirm that the installation program can add a tag to each subnet that you specify. You cannot use a Name tag, because it overlaps with the EC2 Name field and the installation fails. You must enable the enableDnsSupport and enableDnsHostnames attributes in your VPC, so that the cluster can use the Route 53 zones that are attached to the VPC to resolve cluster's internal DNS records. See DNS Support in Your VPC in the AWS documentation. If you prefer to use your own Route 53 hosted private zone, you must associate the existing hosted zone with your VPC prior to installing a cluster. You can define your hosted zone using the platform.aws.hostedZone field in the install-config.yaml file. If you are working in a disconnected environment, you are unable to reach the public IP addresses for EC2, ELB, and S3 endpoints. Depending on the level to which you want to restrict internet traffic during the installation, the following configuration options are available: Option 1: Create VPC endpoints Create a VPC endpoint and attach it to the subnets that the clusters are using. Name the endpoints as follows: ec2.<aws_region>.amazonaws.com elasticloadbalancing.<aws_region>.amazonaws.com s3.<aws_region>.amazonaws.com With this option, network traffic remains private between your VPC and the required AWS services. Option 2: Create a proxy without VPC endpoints As part of the installation process, you can configure an HTTP or HTTPS proxy. With this option, internet traffic goes through the proxy to reach the required AWS services. Option 3: Create a proxy with VPC endpoints As part of the installation process, you can configure an HTTP or HTTPS proxy with VPC endpoints. Create a VPC endpoint and attach it to the subnets that the clusters are using. Name the endpoints as follows: ec2.<aws_region>.amazonaws.com elasticloadbalancing.<aws_region>.amazonaws.com s3.<aws_region>.amazonaws.com When configuring the proxy in the install-config.yaml file, add these endpoints to the noProxy field. With this option, the proxy prevents the cluster from accessing the internet directly. However, network traffic remains private between your VPC and the required AWS services. Required VPC components You must provide a suitable VPC and subnets that allow communication to your machines. Component AWS type Description VPC AWS::EC2::VPC AWS::EC2::VPCEndpoint You must provide a public VPC for the cluster to use. 
The VPC uses an endpoint that references the route tables for each subnet to improve communication with the registry that is hosted in S3. Public subnets AWS::EC2::Subnet AWS::EC2::SubnetNetworkAclAssociation Your VPC must have public subnets for between 1 and 3 availability zones and associate them with appropriate Ingress rules. Internet gateway AWS::EC2::InternetGateway AWS::EC2::VPCGatewayAttachment AWS::EC2::RouteTable AWS::EC2::Route AWS::EC2::SubnetRouteTableAssociation AWS::EC2::NatGateway AWS::EC2::EIP You must have a public internet gateway, with public routes, attached to the VPC. In the provided templates, each public subnet has a NAT gateway with an EIP address. These NAT gateways allow cluster resources, like private subnet instances, to reach the internet and are not required for some restricted network or proxy scenarios. Network access control AWS::EC2::NetworkAcl AWS::EC2::NetworkAclEntry You must allow the VPC to access the following ports: Port Reason 80 Inbound HTTP traffic 443 Inbound HTTPS traffic 22 Inbound SSH traffic 1024 - 65535 Inbound ephemeral traffic 0 - 65535 Outbound ephemeral traffic Private subnets AWS::EC2::Subnet AWS::EC2::RouteTable AWS::EC2::SubnetRouteTableAssociation Your VPC can have private subnets. The provided CloudFormation templates can create private subnets for between 1 and 3 availability zones. If you use private subnets, you must provide appropriate routes and tables for them. 8.2.2. VPC validation To ensure that the subnets that you provide are suitable, the installation program confirms the following data: All the subnets that you specify exist. You provide private subnets. The subnet CIDRs belong to the machine CIDR that you specified. You provide subnets for each availability zone. Each availability zone contains no more than one public and one private subnet. If you use a private cluster, provide only a private subnet for each availability zone. Otherwise, provide exactly one public and private subnet for each availability zone. You provide a public subnet for each private subnet availability zone. Machines are not provisioned in availability zones that you do not provide private subnets for. If you destroy a cluster that uses an existing VPC, the VPC is not deleted. When you remove the OpenShift Container Platform cluster from a VPC, the kubernetes.io/cluster/.*: shared tag is removed from the subnets that it used. 8.2.3. Division of permissions Starting with OpenShift Container Platform 4.3, you do not need all of the permissions that are required for an installation program-provisioned infrastructure cluster to deploy a cluster. This change mimics the division of permissions that you might have at your company: some individuals can create different resource in your clouds than others. For example, you might be able to create application-specific items, like instances, buckets, and load balancers, but not networking-related components such as VPCs, subnets, or ingress rules. The AWS credentials that you use when you create your cluster do not need the networking permissions that are required to make VPCs and core networking components within the VPC, such as subnets, routing tables, internet gateways, NAT, and VPN. You still need permission to make the application resources that the machines within the cluster require, such as ELBs, security groups, S3 buckets, and nodes. 8.2.4. 
Isolation between clusters If you deploy OpenShift Container Platform to an existing network, the isolation of cluster services is reduced in the following ways: You can install multiple OpenShift Container Platform clusters in the same VPC. ICMP ingress is allowed from the entire network. TCP 22 ingress (SSH) is allowed to the entire network. Control plane TCP 6443 ingress (Kubernetes API) is allowed to the entire network. Control plane TCP 22623 ingress (MCS) is allowed to the entire network. 8.3. Internet access for OpenShift Container Platform In OpenShift Container Platform 4.12, you require access to the internet to install your cluster. You must have internet access to: Access OpenShift Cluster Manager Hybrid Cloud Console to download the installation program and perform subscription management. If the cluster has internet access and you do not disable Telemetry, that service automatically entitles your cluster. Access Quay.io to obtain the packages that are required to install your cluster. Obtain the packages that are required to perform cluster updates. Important If your cluster cannot have direct internet access, you can perform a restricted network installation on some types of infrastructure that you provision. During that process, you download the required content and use it to populate a mirror registry with the installation packages. With some installation types, the environment that you install your cluster in will not require internet access. Before you update the cluster, you update the content of the mirror registry. 8.4. Generating a key pair for cluster node SSH access During an OpenShift Container Platform installation, you can provide an SSH public key to the installation program. The key is passed to the Red Hat Enterprise Linux CoreOS (RHCOS) nodes through their Ignition config files and is used to authenticate SSH access to the nodes. The key is added to the ~/.ssh/authorized_keys list for the core user on each node, which enables password-less authentication. After the key is passed to the nodes, you can use the key pair to SSH in to the RHCOS nodes as the user core . To access the nodes through SSH, the private key identity must be managed by SSH for your local user. If you want to SSH in to your cluster nodes to perform installation debugging or disaster recovery, you must provide the SSH public key during the installation process. The ./openshift-install gather command also requires the SSH public key to be in place on the cluster nodes. Important Do not skip this procedure in production environments, where disaster recovery and debugging are required. Note You must use a local key, not one that you configured with platform-specific approaches such as AWS key pairs . Procedure If you do not have an existing SSH key pair on your local machine to use for authentication onto your cluster nodes, create one. For example, on a computer that uses a Linux operating system, run the following command: USD ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1 1 Specify the path and file name, such as ~/.ssh/id_ed25519 , of the new SSH key. If you have an existing key pair, ensure your public key is in your ~/.ssh directory. Note If you plan to install an OpenShift Container Platform cluster that uses FIPS validated or Modules In Process cryptographic libraries on the x86_64 , ppc64le , and s390x architectures, do not create a key that uses the ed25519 algorithm. Instead, create a key that uses the rsa or ecdsa algorithm. 
View the public SSH key: USD cat <path>/<file_name>.pub For example, run the following to view the ~/.ssh/id_ed25519.pub public key: USD cat ~/.ssh/id_ed25519.pub Add the SSH private key identity to the SSH agent for your local user, if it has not already been added. SSH agent management of the key is required for password-less SSH authentication onto your cluster nodes, or if you want to use the ./openshift-install gather command. Note On some distributions, default SSH private key identities such as ~/.ssh/id_rsa and ~/.ssh/id_dsa are managed automatically. If the ssh-agent process is not already running for your local user, start it as a background task: USD eval "USD(ssh-agent -s)" Example output Agent pid 31874 Note If your cluster is in FIPS mode, only use FIPS-compliant algorithms to generate the SSH key. The key must be either RSA or ECDSA. Add your SSH private key to the ssh-agent : USD ssh-add <path>/<file_name> 1 1 Specify the path and file name for your SSH private key, such as ~/.ssh/id_ed25519 Example output Identity added: /home/<you>/<path>/<file_name> (<computer_name>) steps When you install OpenShift Container Platform, provide the SSH public key to the installation program. 8.5. Obtaining the installation program Before you install OpenShift Container Platform, download the installation file on the host you are using for installation. Prerequisites You have a computer that runs Linux or macOS, with 500 MB of local disk space. Procedure Access the Infrastructure Provider page on the OpenShift Cluster Manager site. If you have a Red Hat account, log in with your credentials. If you do not, create an account. Select your infrastructure provider. Navigate to the page for your installation type, download the installation program that corresponds with your host operating system and architecture, and place the file in the directory where you will store the installation configuration files. Important The installation program creates several files on the computer that you use to install your cluster. You must keep the installation program and the files that the installation program creates after you finish installing the cluster. Both files are required to delete the cluster. Important Deleting the files created by the installation program does not remove your cluster, even if the cluster failed during installation. To remove your cluster, complete the OpenShift Container Platform uninstallation procedures for your specific cloud provider. Extract the installation program. For example, on a computer that uses a Linux operating system, run the following command: USD tar -xvf openshift-install-linux.tar.gz Download your installation pull secret from the Red Hat OpenShift Cluster Manager . This pull secret allows you to authenticate with the services that are provided by the included authorities, including Quay.io, which serves the container images for OpenShift Container Platform components. 8.6. Creating the installation configuration file You can customize the OpenShift Container Platform cluster you install on Amazon Web Services (AWS). Prerequisites Obtain the OpenShift Container Platform installation program and the pull secret for your cluster. Obtain service principal permissions at the subscription level. Procedure Create the install-config.yaml file. 
Change to the directory that contains the installation program and run the following command: USD ./openshift-install create install-config --dir <installation_directory> 1 1 For <installation_directory> , specify the directory name to store the files that the installation program creates. When specifying the directory: Verify that the directory has the execute permission. This permission is required to run Terraform binaries under the installation directory. Use an empty directory. Some installation assets, such as bootstrap X.509 certificates, have short expiration intervals, therefore you must not reuse an installation directory. If you want to reuse individual files from another cluster installation, you can copy them into your directory. However, the file names for the installation assets might change between releases. Use caution when copying installation files from an earlier OpenShift Container Platform version. At the prompts, provide the configuration details for your cloud: Optional: Select an SSH key to use to access your cluster machines. Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. Select AWS as the platform to target. If you do not have an Amazon Web Services (AWS) profile stored on your computer, enter the AWS access key ID and secret access key for the user that you configured to run the installation program. Select the AWS region to deploy the cluster to. Select the base domain for the Route 53 service that you configured for your cluster. Enter a descriptive name for your cluster. Paste the pull secret from the Red Hat OpenShift Cluster Manager . Modify the install-config.yaml file. You can find more information about the available parameters in the "Installation configuration parameters" section. Back up the install-config.yaml file so that you can use it to install multiple clusters. Important The install-config.yaml file is consumed during the installation process. If you want to reuse the file, you must back it up now. 8.6.1. Installation configuration parameters Before you deploy an OpenShift Container Platform cluster, you provide parameter values to describe your account on the cloud platform that hosts your cluster and optionally customize your cluster's platform. When you create the install-config.yaml installation configuration file, you provide values for the required parameters through the command line. If you customize your cluster, you can modify the install-config.yaml file to provide more details about the platform. Note After installation, you cannot modify these parameters in the install-config.yaml file. 8.6.1.1. Required configuration parameters Required installation configuration parameters are described in the following table: Table 8.1. Required parameters Parameter Description Values apiVersion The API version for the install-config.yaml content. The current version is v1 . The installation program may also support older API versions. String baseDomain The base domain of your cloud provider. The base domain is used to create routes to your OpenShift Container Platform cluster components. The full DNS name for your cluster is a combination of the baseDomain and metadata.name parameter values that uses the <metadata.name>.<baseDomain> format. A fully-qualified domain or subdomain name, such as example.com . metadata Kubernetes resource ObjectMeta , from which only the name parameter is consumed. 
Object metadata.name The name of the cluster. DNS records for the cluster are all subdomains of {{.metadata.name}}.{{.baseDomain}} . String of lowercase letters, hyphens ( - ), and periods ( . ), such as dev . platform The configuration for the specific platform upon which to perform the installation: alibabacloud , aws , baremetal , azure , gcp , ibmcloud , nutanix , openstack , ovirt , vsphere , or {} . For additional information about platform.<platform> parameters, consult the table for your specific platform that follows. Object pullSecret Get a pull secret from the Red Hat OpenShift Cluster Manager to authenticate downloading container images for OpenShift Container Platform components from services such as Quay.io. { "auths":{ "cloud.openshift.com":{ "auth":"b3Blb=", "email":"[email protected]" }, "quay.io":{ "auth":"b3Blb=", "email":"[email protected]" } } } 8.6.1.2. Network configuration parameters You can customize your installation configuration based on the requirements of your existing network infrastructure. For example, you can expand the IP address block for the cluster network or provide different IP address blocks than the defaults. Only IPv4 addresses are supported. Note Globalnet is not supported with Red Hat OpenShift Data Foundation disaster recovery solutions. For regional disaster recovery scenarios, ensure that you use a nonoverlapping range of private IP addresses for the cluster and service networks in each cluster. Table 8.2. Network parameters Parameter Description Values networking The configuration for the cluster network. Object Note You cannot modify parameters specified by the networking object after installation. networking.networkType The Red Hat OpenShift Networking network plugin to install. Either OpenShiftSDN or OVNKubernetes . OpenShiftSDN is a CNI plugin for all-Linux networks. OVNKubernetes is a CNI plugin for Linux networks and hybrid networks that contain both Linux and Windows servers. The default value is OVNKubernetes . networking.clusterNetwork The IP address blocks for pods. The default value is 10.128.0.0/14 with a host prefix of /23 . If you specify multiple IP address blocks, the blocks must not overlap. An array of objects. For example: networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 networking.clusterNetwork.cidr Required if you use networking.clusterNetwork . An IP address block. An IPv4 network. An IP address block in Classless Inter-Domain Routing (CIDR) notation. The prefix length for an IPv4 block is between 0 and 32 . networking.clusterNetwork.hostPrefix The subnet prefix length to assign to each individual node. For example, if hostPrefix is set to 23 then each node is assigned a /23 subnet out of the given cidr . A hostPrefix value of 23 provides 510 (2^(32 - 23) - 2) pod IP addresses. A subnet prefix. The default value is 23 . networking.serviceNetwork The IP address block for services. The default value is 172.30.0.0/16 . The OpenShift SDN and OVN-Kubernetes network plugins support only a single IP address block for the service network. An array with an IP address block in CIDR format. For example: networking: serviceNetwork: - 172.30.0.0/16 networking.machineNetwork The IP address blocks for machines. If you specify multiple IP address blocks, the blocks must not overlap. An array of objects. For example: networking: machineNetwork: - cidr: 10.0.0.0/16 networking.machineNetwork.cidr Required if you use networking.machineNetwork . An IP address block. 
The default value is 10.0.0.0/16 for all platforms other than libvirt. For libvirt, the default value is 192.168.126.0/24 . An IP network block in CIDR notation. For example, 10.0.0.0/16 . Note Set the networking.machineNetwork to match the CIDR that the preferred NIC resides in. 8.6.1.3. Optional configuration parameters Optional installation configuration parameters are described in the following table: Table 8.3. Optional parameters Parameter Description Values additionalTrustBundle A PEM-encoded X.509 certificate bundle that is added to the nodes' trusted certificate store. This trust bundle may also be used when a proxy has been configured. String capabilities Controls the installation of optional core cluster components. You can reduce the footprint of your OpenShift Container Platform cluster by disabling optional components. For more information, see the "Cluster capabilities" page in Installing . String array capabilities.baselineCapabilitySet Selects an initial set of optional capabilities to enable. Valid values are None , v4.11 , v4.12 and vCurrent . The default value is vCurrent . String capabilities.additionalEnabledCapabilities Extends the set of optional capabilities beyond what you specify in baselineCapabilitySet . You may specify multiple capabilities in this parameter. String array compute The configuration for the machines that comprise the compute nodes. Array of MachinePool objects. compute.architecture Determines the instruction set architecture of the machines in the pool. Currently, clusters with varied architectures are not supported. All pools must specify the same architecture. Valid values are amd64 and arm64 . Not all installation options support the 64-bit ARM architecture. To verify if your installation option is supported on your platform, see Supported installation methods for different platforms in Selecting a cluster installation method and preparing it for users . String compute.hyperthreading Whether to enable or disable simultaneous multithreading, or hyperthreading , on compute machines. By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. Important If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance. Enabled or Disabled compute.name Required if you use compute . The name of the machine pool. worker compute.platform Required if you use compute . Use this parameter to specify the cloud provider to host the worker machines. This parameter value must match the controlPlane.platform parameter value. alibabacloud , aws , azure , gcp , ibmcloud , nutanix , openstack , ovirt , vsphere , or {} compute.replicas The number of compute machines, which are also known as worker machines, to provision. A positive integer greater than or equal to 2 . The default value is 3 . featureSet Enables the cluster for a feature set. A feature set is a collection of OpenShift Container Platform features that are not enabled by default. For more information about enabling a feature set during installation, see "Enabling features using feature gates". String. The name of the feature set to enable, such as TechPreviewNoUpgrade . controlPlane The configuration for the machines that comprise the control plane. Array of MachinePool objects. controlPlane.architecture Determines the instruction set architecture of the machines in the pool. Currently, clusters with varied architectures are not supported. 
All pools must specify the same architecture. Valid values are amd64 and arm64 . Not all installation options support the 64-bit ARM architecture. To verify if your installation option is supported on your platform, see Supported installation methods for different platforms in Selecting a cluster installation method and preparing it for users . String controlPlane.hyperthreading Whether to enable or disable simultaneous multithreading, or hyperthreading , on control plane machines. By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. Important If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance. Enabled or Disabled controlPlane.name Required if you use controlPlane . The name of the machine pool. master controlPlane.platform Required if you use controlPlane . Use this parameter to specify the cloud provider that hosts the control plane machines. This parameter value must match the compute.platform parameter value. alibabacloud , aws , azure , gcp , ibmcloud , nutanix , openstack , ovirt , vsphere , or {} controlPlane.replicas The number of control plane machines to provision. The only supported value is 3 , which is the default value. credentialsMode The Cloud Credential Operator (CCO) mode. If no mode is specified, the CCO dynamically tries to determine the capabilities of the provided credentials, with a preference for mint mode on the platforms where multiple modes are supported. Note Not all CCO modes are supported for all cloud providers. For more information about CCO modes, see the Cloud Credential Operator entry in the Cluster Operators reference content. Note If your AWS account has service control policies (SCP) enabled, you must configure the credentialsMode parameter to Mint , Passthrough or Manual . Mint , Passthrough , Manual or an empty string ( "" ). fips Enable or disable FIPS mode. The default is false (disabled). If FIPS mode is enabled, the Red Hat Enterprise Linux CoreOS (RHCOS) machines that OpenShift Container Platform runs on bypass the default Kubernetes cryptography suite and use the cryptography modules that are provided with RHCOS instead. Important To enable FIPS mode for your cluster, you must run the installation program from a Red Hat Enterprise Linux (RHEL) computer configured to operate in FIPS mode. For more information about configuring FIPS mode on RHEL, see Installing the system in FIPS mode . The use of FIPS validated or Modules In Process cryptographic libraries is only supported on OpenShift Container Platform deployments on the x86_64 , ppc64le , and s390x architectures. Note If you are using Azure File storage, you cannot enable FIPS mode. false or true imageContentSources Sources and repositories for the release-image content. Array of objects. Includes a source and, optionally, mirrors , as described in the following rows of this table. imageContentSources.source Required if you use imageContentSources . Specify the repository that users refer to, for example, in image pull specifications. String imageContentSources.mirrors Specify one or more repositories that may also contain the same images. Array of strings platform.aws.lbType Required to set the NLB load balancer type in AWS. Valid values are Classic or NLB . If no value is specified, the installation program defaults to Classic . The installation program sets the value provided here in the ingress cluster configuration object. 
If you do not specify a load balancer type for other Ingress Controllers, they use the type set in this parameter. Classic or NLB . The default value is Classic . publish How to publish or expose the user-facing endpoints of your cluster, such as the Kubernetes API, OpenShift routes. Internal or External . To deploy a private cluster, which cannot be accessed from the internet, set publish to Internal . The default value is External . sshKey The SSH key to authenticate access to your cluster machines. Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. For example, sshKey: ssh-ed25519 AAAA.. . 8.6.1.4. Optional AWS configuration parameters Optional AWS configuration parameters are described in the following table: Table 8.4. Optional AWS parameters Parameter Description Values compute.platform.aws.amiID The AWS AMI used to boot compute machines for the cluster. This is required for regions that require a custom RHCOS AMI. Any published or custom RHCOS AMI that belongs to the set AWS region. See RHCOS AMIs for AWS infrastructure for available AMI IDs. compute.platform.aws.iamRole A pre-existing AWS IAM role applied to the compute machine pool instance profiles. You can use these fields to match naming schemes and include predefined permissions boundaries for your IAM roles. If undefined, the installation program creates a new IAM role. The name of a valid AWS IAM role. compute.platform.aws.rootVolume.iops The Input/Output Operations Per Second (IOPS) that is reserved for the root volume. Integer, for example 4000 . compute.platform.aws.rootVolume.size The size in GiB of the root volume. Integer, for example 500 . compute.platform.aws.rootVolume.type The type of the root volume. Valid AWS EBS volume type , such as io1 . compute.platform.aws.rootVolume.kmsKeyARN The Amazon Resource Name (key ARN) of a KMS key. This is required to encrypt operating system volumes of worker nodes with a specific KMS key. Valid key ID or the key ARN . compute.platform.aws.type The EC2 instance type for the compute machines. Valid AWS instance type, such as m4.2xlarge . See the Supported AWS machine types table that follows. compute.platform.aws.zones The availability zones where the installation program creates machines for the compute machine pool. If you provide your own VPC, you must provide a subnet in that availability zone. A list of valid AWS availability zones, such as us-east-1c , in a YAML sequence . compute.aws.region The AWS region that the installation program creates compute resources in. Any valid AWS region , such as us-east-1 . You can use the AWS CLI to access the regions available based on your selected instance type. For example: aws ec2 describe-instance-type-offerings --filters Name=instance-type,Values=c7g.xlarge Important When running on ARM based AWS instances, ensure that you enter a region where AWS Graviton processors are available. See Global availability map in the AWS documentation. Currently, AWS Graviton3 processors are only available in some regions. controlPlane.platform.aws.amiID The AWS AMI used to boot control plane machines for the cluster. This is required for regions that require a custom RHCOS AMI. Any published or custom RHCOS AMI that belongs to the set AWS region. See RHCOS AMIs for AWS infrastructure for available AMI IDs. 
controlPlane.platform.aws.iamRole A pre-existing AWS IAM role applied to the control plane machine pool instance profiles. You can use these fields to match naming schemes and include predefined permissions boundaries for your IAM roles. If undefined, the installation program creates a new IAM role. The name of a valid AWS IAM role. controlPlane.platform.aws.rootVolume.kmsKeyARN The Amazon Resource Name (key ARN) of a KMS key. This is required to encrypt operating system volumes of control plane nodes with a specific KMS key. Valid key ID and the key ARN . controlPlane.platform.aws.type The EC2 instance type for the control plane machines. Valid AWS instance type, such as m6i.xlarge . See the Supported AWS machine types table that follows. controlPlane.platform.aws.zones The availability zones where the installation program creates machines for the control plane machine pool. A list of valid AWS availability zones, such as us-east-1c , in a YAML sequence . controlPlane.aws.region The AWS region that the installation program creates control plane resources in. Valid AWS region , such as us-east-1 . platform.aws.amiID The AWS AMI used to boot all machines for the cluster. If set, the AMI must belong to the same region as the cluster. This is required for regions that require a custom RHCOS AMI. Any published or custom RHCOS AMI that belongs to the set AWS region. See RHCOS AMIs for AWS infrastructure for available AMI IDs. platform.aws.hostedZone An existing Route 53 private hosted zone for the cluster. You can only use a pre-existing hosted zone when also supplying your own VPC. The hosted zone must already be associated with the user-provided VPC before installation. Also, the domain of the hosted zone must be the cluster domain or a parent of the cluster domain. If undefined, the installation program creates a new hosted zone. String, for example Z3URY6TWQ91KVV . platform.aws.serviceEndpoints.name The AWS service endpoint name. Custom endpoints are only required for cases where alternative AWS endpoints, like FIPS, must be used. Custom API endpoints can be specified for EC2, S3, IAM, Elastic Load Balancing, Tagging, Route 53, and STS AWS services. Valid AWS service endpoint name. platform.aws.serviceEndpoints.url The AWS service endpoint URL. The URL must use the https protocol and the host must trust the certificate. Valid AWS service endpoint URL. platform.aws.userTags A map of keys and values that the installation program adds as tags to all resources that it creates. Any valid YAML map, such as key value pairs in the <key>: <value> format. For more information about AWS tags, see Tagging Your Amazon EC2 Resources in the AWS documentation. Note You can add up to 25 user defined tags during installation. The remaining 25 tags are reserved for OpenShift Container Platform. platform.aws.propagateUserTags A flag that directs in-cluster Operators to include the specified user tags in the tags of the AWS resources that the Operators create. Boolean values, for example true or false . platform.aws.subnets If you provide the VPC instead of allowing the installation program to create the VPC for you, specify the subnet for the cluster to use. The subnet must be part of the same machineNetwork[].cidr ranges that you specify. For a standard cluster, specify a public and a private subnet for each availability zone. For a private cluster, specify a private subnet for each availability zone. Valid subnet IDs. 8.6.2. 
Minimum resource requirements for cluster installation Each cluster machine must meet the following minimum requirements: Table 8.5. Minimum resource requirements Machine Operating System vCPU [1] Virtual RAM Storage Input/Output Per Second (IOPS) [2] Bootstrap RHCOS 4 16 GB 100 GB 300 Control plane RHCOS 4 16 GB 100 GB 300 Compute RHCOS, RHEL 8.6 and later [3] 2 8 GB 100 GB 300 One vCPU is equivalent to one physical core when simultaneous multithreading (SMT), or hyperthreading, is not enabled. When enabled, use the following formula to calculate the corresponding ratio: (threads per core x cores) x sockets = vCPUs. OpenShift Container Platform and Kubernetes are sensitive to disk performance, and faster storage is recommended, particularly for etcd on the control plane nodes which require a 10 ms p99 fsync duration. Note that on many cloud platforms, storage size and IOPS scale together, so you might need to over-allocate storage volume to obtain sufficient performance. As with all user-provisioned installations, if you choose to use RHEL compute machines in your cluster, you take responsibility for all operating system life cycle management and maintenance, including performing system updates, applying patches, and completing all other required tasks. Use of RHEL 7 compute machines is deprecated and has been removed in OpenShift Container Platform 4.10 and later. If an instance type for your platform meets the minimum requirements for cluster machines, it is supported to use in OpenShift Container Platform. Additional resources Optimizing storage 8.6.3. Tested instance types for AWS The following Amazon Web Services (AWS) instance types have been tested with OpenShift Container Platform. Note Use the machine types included in the following charts for your AWS instances. If you use an instance type that is not listed in the chart, ensure that the instance size you use matches the minimum resource requirements that are listed in "Minimum resource requirements for cluster installation". Example 8.1. Machine types based on 64-bit x86 architecture c4.* c5.* c5a.* i3.* m4.* m5.* m5a.* m6a.* m6i.* r4.* r5.* r5a.* r6i.* t3.* t3a.* 8.6.4. Tested instance types for AWS on 64-bit ARM infrastructures The following Amazon Web Services (AWS) ARM64 instance types have been tested with OpenShift Container Platform. Note Use the machine types included in the following charts for your AWS ARM instances. If you use an instance type that is not listed in the chart, ensure that the instance size you use matches the minimum resource requirements that are listed in "Minimum resource requirements for cluster installation". Example 8.2. Machine types based on 64-bit ARM architecture c6g.* c7g.* m6g.* m7g.* r8g.* 8.6.5. Sample customized install-config.yaml file for AWS You can customize the installation configuration file ( install-config.yaml ) to specify more details about your OpenShift Container Platform cluster's platform or modify the values of the required parameters. Important This sample YAML file is provided for reference only. You must obtain your install-config.yaml file by using the installation program and modify it. 
apiVersion: v1 baseDomain: example.com 1 credentialsMode: Mint 2 controlPlane: 3 4 hyperthreading: Enabled 5 name: master platform: aws: zones: - us-west-2a - us-west-2b rootVolume: iops: 4000 size: 500 type: io1 6 metadataService: authentication: Optional 7 type: m6i.xlarge replicas: 3 compute: 8 - hyperthreading: Enabled 9 name: worker platform: aws: rootVolume: iops: 2000 size: 500 type: io1 10 metadataService: authentication: Optional 11 type: c5.4xlarge zones: - us-west-2c replicas: 3 metadata: name: test-cluster 12 networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 10.0.0.0/16 networkType: OVNKubernetes 13 serviceNetwork: - 172.30.0.0/16 platform: aws: region: us-west-2 14 propagateUserTags: true 15 userTags: adminContact: jdoe costCenter: 7536 subnets: 16 - subnet-1 - subnet-2 - subnet-3 amiID: ami-96c6f8f7 17 serviceEndpoints: 18 - name: ec2 url: https://vpce-id.ec2.us-west-2.vpce.amazonaws.com hostedZone: Z3URY6TWQ91KVV 19 fips: false 20 sshKey: ssh-ed25519 AAAA... 21 pullSecret: '{"auths": ...}' 22 1 12 14 22 Required. The installation program prompts you for this value. 2 Optional: Add this parameter to force the Cloud Credential Operator (CCO) to use the specified mode, instead of having the CCO dynamically try to determine the capabilities of the credentials. For details about CCO modes, see the Cloud Credential Operator entry in the Red Hat Operators reference content. 3 8 15 If you do not provide these parameters and values, the installation program provides the default value. 4 The controlPlane section is a single mapping, but the compute section is a sequence of mappings. To meet the requirements of the different data structures, the first line of the compute section must begin with a hyphen, - , and the first line of the controlPlane section must not. Only one control plane pool is used. 5 9 Whether to enable or disable simultaneous multithreading, or hyperthreading . By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. You can disable it by setting the parameter value to Disabled . If you disable simultaneous multithreading in some cluster machines, you must disable it in all cluster machines. Important If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance. Use larger instance types, such as m4.2xlarge or m5.2xlarge , for your machines if you disable simultaneous multithreading. 6 10 To configure faster storage for etcd, especially for larger clusters, set the storage type as io1 and set iops to 2000 . 7 11 Whether to require the Amazon EC2 Instance Metadata Service v2 (IMDSv2). To require IMDSv2, set the parameter value to Required . To allow the use of both IMDSv1 and IMDSv2, set the parameter value to Optional . If no value is specified, both IMDSv1 and IMDSv2 are allowed. Note The IMDS configuration for control plane machines that is set during cluster installation can only be changed by using the AWS CLI. The IMDS configuration for compute machines can be changed by using compute machine sets. 13 The cluster network plugin to install. The supported values are OVNKubernetes and OpenShiftSDN . The default value is OVNKubernetes . 16 If you provide your own VPC, specify subnets for each availability zone that your cluster uses. 17 The ID of the AMI used to boot machines for the cluster. If set, the AMI must belong to the same region as the cluster. 18 The AWS service endpoints. 
Custom endpoints are required when installing to an unknown AWS region. The endpoint URL must use the https protocol and the host must trust the certificate. 19 The ID of your existing Route 53 private hosted zone. Providing an existing hosted zone requires that you supply your own VPC and the hosted zone is already associated with the VPC prior to installing your cluster. If undefined, the installation program creates a new hosted zone. 20 Whether to enable or disable FIPS mode. By default, FIPS mode is not enabled. If FIPS mode is enabled, the Red Hat Enterprise Linux CoreOS (RHCOS) machines that OpenShift Container Platform runs on bypass the default Kubernetes cryptography suite and use the cryptography modules that are provided with RHCOS instead. Important To enable FIPS mode for your cluster, you must run the installation program from a Red Hat Enterprise Linux (RHEL) computer configured to operate in FIPS mode. For more information about configuring FIPS mode on RHEL, see Installing the system in FIPS mode . The use of FIPS validated or Modules In Process cryptographic libraries is only supported on OpenShift Container Platform deployments on the x86_64 , ppc64le , and s390x architectures. 21 You can optionally provide the sshKey value that you use to access the machines in your cluster. Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. 8.6.6. Configuring the cluster-wide proxy during installation Production environments can deny direct access to the internet and instead have an HTTP or HTTPS proxy available. You can configure a new OpenShift Container Platform cluster to use a proxy by configuring the proxy settings in the install-config.yaml file. Prerequisites You have an existing install-config.yaml file. You reviewed the sites that your cluster requires access to and determined whether any of them need to bypass the proxy. By default, all cluster egress traffic is proxied, including calls to hosting cloud provider APIs. You added sites to the Proxy object's spec.noProxy field to bypass the proxy if necessary. Note The Proxy object status.noProxy field is populated with the values of the networking.machineNetwork[].cidr , networking.clusterNetwork[].cidr , and networking.serviceNetwork[] fields from your installation configuration. For installations on Amazon Web Services (AWS), Google Cloud Platform (GCP), Microsoft Azure, and Red Hat OpenStack Platform (RHOSP), the Proxy object status.noProxy field is also populated with the instance metadata endpoint ( 169.254.169.254 ). Procedure Edit your install-config.yaml file and add the proxy settings. For example: apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: ec2.<aws_region>.amazonaws.com,elasticloadbalancing.<aws_region>.amazonaws.com,s3.<aws_region>.amazonaws.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5 1 A proxy URL to use for creating HTTP connections outside the cluster. The URL scheme must be http . 2 A proxy URL to use for creating HTTPS connections outside the cluster. 3 A comma-separated list of destination domain names, IP addresses, or other network CIDRs to exclude from proxying. Preface a domain with . to match subdomains only. 
For example, .y.com matches x.y.com , but not y.com . Use * to bypass the proxy for all destinations. If you have added the Amazon EC2 , Elastic Load Balancing , and S3 VPC endpoints to your VPC, you must add these endpoints to the noProxy field. 4 If provided, the installation program generates a config map that is named user-ca-bundle in the openshift-config namespace that contains one or more additional CA certificates that are required for proxying HTTPS connections. The Cluster Network Operator then creates a trusted-ca-bundle config map that merges these contents with the Red Hat Enterprise Linux CoreOS (RHCOS) trust bundle, and this config map is referenced in the trustedCA field of the Proxy object. The additionalTrustBundle field is required unless the proxy's identity certificate is signed by an authority from the RHCOS trust bundle. 5 Optional: The policy to determine the configuration of the Proxy object to reference the user-ca-bundle config map in the trustedCA field. The allowed values are Proxyonly and Always . Use Proxyonly to reference the user-ca-bundle config map only when http/https proxy is configured. Use Always to always reference the user-ca-bundle config map. The default value is Proxyonly . Note The installation program does not support the proxy readinessEndpoints field. Note If the installer times out, restart and then complete the deployment by using the wait-for command of the installer. For example: USD ./openshift-install wait-for install-complete --log-level debug Save the file and reference it when installing OpenShift Container Platform. The installation program creates a cluster-wide proxy that is named cluster that uses the proxy settings in the provided install-config.yaml file. If no proxy settings are provided, a cluster Proxy object is still created, but it will have a nil spec . Note Only the Proxy object named cluster is supported, and no additional proxies can be created. 8.7. Deploying the cluster You can install OpenShift Container Platform on a compatible cloud platform. Important You can run the create cluster command of the installation program only once, during initial installation. Prerequisites Configure an account with the cloud platform that hosts your cluster. Obtain the OpenShift Container Platform installation program and the pull secret for your cluster. Verify the cloud provider account on your host has the correct permissions to deploy the cluster. An account with incorrect permissions causes the installation process to fail with an error message that displays the missing permissions. Procedure Change to the directory that contains the installation program and initialize the cluster deployment: USD ./openshift-install create cluster --dir <installation_directory> \ 1 --log-level=info 2 1 For <installation_directory> , specify the location of your customized ./install-config.yaml file. 2 To view different installation details, specify warn , debug , or error instead of info . Note If the cloud provider account that you configured on your host does not have sufficient permissions to deploy the cluster, the installation process stops, and the missing permissions are displayed. Optional: Remove or disable the AdministratorAccess policy from the IAM account that you used to install the cluster. Note The elevated permissions provided by the AdministratorAccess policy are required only during installation. 
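For example, if the AdministratorAccess policy is attached directly to the IAM user that you used for the installation, you might detach it with the AWS CLI after the cluster is deployed; the user name shown here is a placeholder: USD aws iam detach-user-policy --user-name <iam_user_name> --policy-arn arn:aws:iam::aws:policy/AdministratorAccess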
Verification When the cluster deployment completes successfully: The terminal displays directions for accessing your cluster, including a link to the web console and credentials for the kubeadmin user. Credential information also outputs to <installation_directory>/.openshift_install.log . Important Do not delete the installation program or the files that the installation program creates. Both are required to delete the cluster. Example output ... INFO Install complete! INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: "kubeadmin", and password: "password" INFO Time elapsed: 36m22s Important The Ignition config files that the installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information. It is recommended that you use Ignition config files within 12 hours after they are generated because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation. 8.8. Installing the OpenShift CLI by downloading the binary You can install the OpenShift CLI ( oc ) to interact with OpenShift Container Platform from a command-line interface. You can install oc on Linux, Windows, or macOS. Important If you installed an earlier version of oc , you cannot use it to complete all of the commands in OpenShift Container Platform 4.12. Download and install the new version of oc . Installing the OpenShift CLI on Linux You can install the OpenShift CLI ( oc ) binary on Linux by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the architecture from the Product Variant drop-down list. Select the appropriate version from the Version drop-down list. Click Download Now to the OpenShift v4.12 Linux Client entry and save the file. Unpack the archive: USD tar xvf <file> Place the oc binary in a directory that is on your PATH . To check your PATH , execute the following command: USD echo USDPATH Verification After you install the OpenShift CLI, it is available using the oc command: USD oc <command> Installing the OpenShift CLI on Windows You can install the OpenShift CLI ( oc ) binary on Windows by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version from the Version drop-down list. Click Download Now to the OpenShift v4.12 Windows Client entry and save the file. Unzip the archive with a ZIP program. Move the oc binary to a directory that is on your PATH . 
To check your PATH , open the command prompt and execute the following command: C:\> path Verification After you install the OpenShift CLI, it is available using the oc command: C:\> oc <command> Installing the OpenShift CLI on macOS You can install the OpenShift CLI ( oc ) binary on macOS by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version from the Version drop-down list. Click Download Now to the OpenShift v4.12 macOS Client entry and save the file. Note For macOS arm64, choose the OpenShift v4.12 macOS arm64 Client entry. Unpack and unzip the archive. Move the oc binary to a directory on your PATH. To check your PATH , open a terminal and execute the following command: USD echo USDPATH Verification After you install the OpenShift CLI, it is available using the oc command: USD oc <command> 8.9. Logging in to the cluster by using the CLI You can log in to your cluster as a default system user by exporting the cluster kubeconfig file. The kubeconfig file contains information about the cluster that is used by the CLI to connect a client to the correct cluster and API server. The file is specific to a cluster and is created during OpenShift Container Platform installation. Prerequisites You deployed an OpenShift Container Platform cluster. You installed the oc CLI. Procedure Export the kubeadmin credentials: USD export KUBECONFIG=<installation_directory>/auth/kubeconfig 1 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. Verify you can run oc commands successfully using the exported configuration: USD oc whoami Example output system:admin 8.10. Logging in to the cluster by using the web console The kubeadmin user exists by default after an OpenShift Container Platform installation. You can log in to your cluster as the kubeadmin user by using the OpenShift Container Platform web console. Prerequisites You have access to the installation host. You completed a cluster installation and all cluster Operators are available. Procedure Obtain the password for the kubeadmin user from the kubeadmin-password file on the installation host: USD cat <installation_directory>/auth/kubeadmin-password Note Alternatively, you can obtain the kubeadmin password from the <installation_directory>/.openshift_install.log log file on the installation host. List the OpenShift Container Platform web console route: USD oc get routes -n openshift-console | grep 'console-openshift' Note Alternatively, you can obtain the OpenShift Container Platform route from the <installation_directory>/.openshift_install.log log file on the installation host. Example output console console-openshift-console.apps.<cluster_name>.<base_domain> console https reencrypt/Redirect None Navigate to the route detailed in the output of the preceding command in a web browser and log in as the kubeadmin user. Additional resources See Accessing the web console for more details about accessing and understanding the OpenShift Container Platform web console. 8.11. Telemetry access for OpenShift Container Platform In OpenShift Container Platform 4.12, the Telemetry service, which runs by default to provide metrics about cluster health and the success of updates, requires internet access. If your cluster is connected to the internet, Telemetry runs automatically, and your cluster is registered to OpenShift Cluster Manager Hybrid Cloud Console . 
After you confirm that your OpenShift Cluster Manager Hybrid Cloud Console inventory is correct, either maintained automatically by Telemetry or manually by using OpenShift Cluster Manager, use subscription watch to track your OpenShift Container Platform subscriptions at the account or multi-cluster level. Additional resources See About remote health monitoring for more information about the Telemetry service. 8.12. Next steps Validating an installation . Customize your cluster . If necessary, you can opt out of remote health reporting . If necessary, you can remove cloud provider credentials .
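Before you work through the full Validating an installation procedure, a quick check with the exported kubeconfig can confirm that the cluster reports a version and that its nodes are available; the exact output varies by cluster: USD oc get clusterversion USD oc get nodes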
[ "ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1", "cat <path>/<file_name>.pub", "cat ~/.ssh/id_ed25519.pub", "eval \"USD(ssh-agent -s)\"", "Agent pid 31874", "ssh-add <path>/<file_name> 1", "Identity added: /home/<you>/<path>/<file_name> (<computer_name>)", "tar -xvf openshift-install-linux.tar.gz", "./openshift-install create install-config --dir <installation_directory> 1", "{ \"auths\":{ \"cloud.openshift.com\":{ \"auth\":\"b3Blb=\", \"email\":\"[email protected]\" }, \"quay.io\":{ \"auth\":\"b3Blb=\", \"email\":\"[email protected]\" } } }", "networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23", "networking: serviceNetwork: - 172.30.0.0/16", "networking: machineNetwork: - cidr: 10.0.0.0/16", "aws ec2 describe-instance-type-offerings --filters Name=instance-type,Values=c7g.xlarge", "apiVersion: v1 baseDomain: example.com 1 credentialsMode: Mint 2 controlPlane: 3 4 hyperthreading: Enabled 5 name: master platform: aws: zones: - us-west-2a - us-west-2b rootVolume: iops: 4000 size: 500 type: io1 6 metadataService: authentication: Optional 7 type: m6i.xlarge replicas: 3 compute: 8 - hyperthreading: Enabled 9 name: worker platform: aws: rootVolume: iops: 2000 size: 500 type: io1 10 metadataService: authentication: Optional 11 type: c5.4xlarge zones: - us-west-2c replicas: 3 metadata: name: test-cluster 12 networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 10.0.0.0/16 networkType: OVNKubernetes 13 serviceNetwork: - 172.30.0.0/16 platform: aws: region: us-west-2 14 propagateUserTags: true 15 userTags: adminContact: jdoe costCenter: 7536 subnets: 16 - subnet-1 - subnet-2 - subnet-3 amiID: ami-96c6f8f7 17 serviceEndpoints: 18 - name: ec2 url: https://vpce-id.ec2.us-west-2.vpce.amazonaws.com hostedZone: Z3URY6TWQ91KVV 19 fips: false 20 sshKey: ssh-ed25519 AAAA... 21 pullSecret: '{\"auths\": ...}' 22", "apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: ec2.<aws_region>.amazonaws.com,elasticloadbalancing.<aws_region>.amazonaws.com,s3.<aws_region>.amazonaws.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5", "./openshift-install wait-for install-complete --log-level debug", "./openshift-install create cluster --dir <installation_directory> \\ 1 --log-level=info 2", "INFO Install complete! INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: \"kubeadmin\", and password: \"password\" INFO Time elapsed: 36m22s", "tar xvf <file>", "echo USDPATH", "oc <command>", "C:\\> path", "C:\\> oc <command>", "echo USDPATH", "oc <command>", "export KUBECONFIG=<installation_directory>/auth/kubeconfig 1", "oc whoami", "system:admin", "cat <installation_directory>/auth/kubeadmin-password", "oc get routes -n openshift-console | grep 'console-openshift'", "console console-openshift-console.apps.<cluster_name>.<base_domain> console https reencrypt/Redirect None" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.12/html/installing_on_aws/installing-aws-vpc
D.14. Datatype Hierarchy View
D.14. Datatype Hierarchy View To open Teiid Designer's Datatype Hierarchy view, click the main menu's Window > Show View > Other... and then click the Teiid Designer > Datatypes view in the dialog. Figure D.25. Datatype Hierarchy View The following table shows the mapping between Teiid Designer Types and JBoss Data Virtualization Runtime Types. Table D.1. Corresponding Runtime Types Teiid Designer Type Java Runtime Type anyURI java.lang.String base64Binary java.lang.String bigdecimal java.math.BigDecimal biginteger java.math.BigInteger blob java.sql.Blob [a] boolean java.lang.Boolean byte java.lang.Byte char java.lang.Character clob java.sql.Clob [b] date java.sql.Date dateTime java.sql.Timestamp decimal java.math.BigDecimal double java.lang.Double duration java.lang.String ENTITIES java.lang.String ENTITY java.lang.String float java.lang.Float gDay java.math.BigInteger gMonth java.math.BigInteger gMonthDay java.sql.Timestamp gYear java.math.BigInteger gYearMonth java.sql.Timestamp hexBinary java.lang.String ID java.lang.String IDREF java.lang.String IDREFS java.lang.String int java.lang.Integer integer java.math.BigInteger language java.lang.String long java.lang.Long Name java.lang.String NCName java.lang.String negativeInteger java.math.BigInteger NMTOKEN java.lang.String NMTOKENS java.lang.String nonNegativeInteger java.math.BigInteger nonPositiveInteger java.math.BigInteger normalizedString java.lang.String NOTATION java.lang.String object java.lang.Object positiveInteger java.math.BigInteger QName java.lang.String short java.lang.Short string java.lang.String time java.sql.Time timestamp java.sql.Timestamp token java.lang.String unsignedByte java.lang.Short unsignedInt java.lang.Long unsignedLong java.math.BigInteger unsignedShort java.lang.Integer XMLLiteral java.sql.SQLXML [c] [a] The concrete type is expected to be org.teiid.core.types.BlobType. [b] The concrete type is expected to be org.teiid.core.types.ClobType. [c] The concrete type is expected to be org.teiid.core.types.XMLType.
null
https://docs.redhat.com/en/documentation/red_hat_jboss_data_virtualization/6.4/html/user_guide_volume_1_teiid_designer/datatype_hierarchy_view
Chapter 53. Dashbuilder Runtime and Dashbuilder Standalone
Chapter 53. Dashbuilder Runtime and Dashbuilder Standalone Dashbuilder Runtime and Dashbuilder Standalone are add-ons that you can use to view dashboards created in and exported from Business Central. This is useful for reviewing business metrics in environments that do not have Business Central. Dashbuilder Runtime is available to install on Red Hat JBoss EAP. You can deploy Dashbuilder Standalone on Red Hat OpenShift Container Platform. Navigation between the pages of a dashboard in Dashbuilder Runtime and Dashbuilder Standalone is identical to navigation in the Business Central instance where the dashboard was created. If a page belongs to a group, that group is imported to Dashbuilder Runtime or Dashbuilder Standalone as well as the page. If a page is imported to Dashbuilder Runtime or Dashbuilder Standalone but not used in navigation, then the page is added to the Runtime Dashboards menu group. If no navigation is exported then all pages are added to the Runtime Dashboards menu group. 53.1. Installing Dashbuilder Runtime on Red Hat JBoss EAP To install Dashbuilder Runtime, download the Dashbuilder Runtime WAR and create a user with the admin role. Prerequisites You have a Red Hat JBoss EAP installation. You have created and exported a dashboard in Business Central. For more information about exporting Dashbuilder data, see the "Exporting and importing Dashbuilder data" section in the Configuring Business Central settings and properties guide. Procedure Navigate to the Software Downloads page in the Red Hat Customer Portal (login required), and select the product and version from the drop-down options: Product: Process Automation Manager Version: 7.13.5 Download Red Hat Process Automation Manager 7.13.5 Add Ons ( rhpam-7.13.5-add-ons.zip ) and extract the ZIP file. Navigate to the directory that contains the extracted files and extract the rhpam-7.13.5-dashbuilder-runtime.zip file. Copy the contents of the dashbuilder-runtime.zip file that you extracted into the <EAP_HOME>/standalone/deployments folder where <EAP_HOME> is the Red Hat JBoss EAP home directory that contains your Red Hat JBoss EAP installation. In the Red Hat JBoss EAP home directory, enter the following command to create a user with the admin role and specify a password. In the following example, replace <USERNAME> and <PASSWORD> with the user name and password of your choice. USD ./bin/jboss-cli.sh --commands="embed-server --std-out=echo,/subsystem=elytron/filesystem-realm=ApplicationRealm:add-identity(identity=<USERNAME>),/subsystem=elytron/filesystem-realm=ApplicationRealm:set-password(identity=<USERNAME>, clear={password='<PASSWORD>'}),/subsystem=elytron/filesystem-realm=ApplicationRealm:add-identity-attribute(identity=<USERNAME>, name=role, value=['admin'])" In a terminal application, navigate to EAP_HOME /bin . Enter the following command to start Red Hat JBoss EAP: On Linux or UNIX-based systems: USD ./standalone.sh -c standalone-full.xml On Windows: standalone.bat -c standalone-full.xml In a web browser, open the URL http://localhost:8080 . Log in using the credentials of the user that you created for Dashbuilder Runtime. When prompted, upload a dashboard that you exported from Business Central. Dashbuilder Runtime uses that dashboard until it is restarted. 53.1.1. Dashbuilder Runtime system properties You can use system properties to customize Dashbuilder Runtime. Dashboards Path When a dashboard is uploaded, it is stored in the filesystem.
The path where it is stored is controlled by the system property dashbuilder.import.base.dir . The default is /tmp/dashbuilder . The system property is the root path for any dashboard model. For example, if there are multiple files on this path, a file can be imported by accessing Dashbuilder Runtime and passing the import query parameter with the name of the file that should be loaded. For example, if you want to load the sales_dashboard , execute runtime_host?import=sales_dashboard and Dashbuilder Runtime will try to load the file /tmp/dashbuilder/sales_dashboard.zip . Static Dashboard If you want the runtime instance to load a specific dashboard, you can change the system property dashbuilder.runtime.import . Setting the property to a local file path will cause that specific dashboard to be loaded during Runtime startup. Controlling upload size Application servers control POST request size by default. You can control the allowable size of uploaded dashboards by using the system property dashbuilder.runtime.upload.size . The size should be in KB and by default the value is 96kb, meaning that if someone tries to upload a file larger than 96kb then an error will be displayed and the dashboard won't be installed. Default pages in Dashbuilder Runtime Dashboards that are imported in the Dashbuilder Runtime contain a default page. The following list summarizes how the Dashbuilder Runtime default page is determined: When an imported dashboard contains only one page, then it is used as the default page. If a page is named index then it is used as the default page. In other cases, the generic home page of the Dashbuilder Runtime is used. Loading external dashboards A dashboard that is located at an accessible URL can be accessed by Dashbuilder Runtime. You can load such a dashboard by passing its URL in the import query parameter, for example runtime_host?import=http://filesHost/sales_dashboard.zip . Note For security reasons this option is disabled by default. You can enable it by setting the system property dashbuilder.runtime.allowExternal to true. 53.2. Deploying Dashbuilder Standalone on Red Hat OpenShift Container Platform You can use Dashbuilder Standalone to view dashboards in OpenShift that were created in and exported from Business Central. This is useful for reviewing business metrics in environments that do not have Business Central. Use the Dashbuilder Standalone operator to deploy Dashbuilder Standalone on Red Hat OpenShift Container Platform separately from other services. Prerequisites Dashbuilder Standalone is available in the OpenShift registry. You have prepared your OpenShift environment as described in Deploying a Red Hat Process Automation Manager environment on Red Hat OpenShift Container Platform 4 using Operators . You have created and exported a dashboard in Business Central. Procedure On the Operator Installation page, enter a name for your application in the Application name field. In the Environment field, enter a name for your environment, for example rhpam-standalone-dashbuilder . Click Next . Optional: On the Security page, configure LDAP or Red Hat Single Sign-On. On the Components page, select Dashbuilder from the Components list. To add a KIE Server data set, complete the following tasks: Note You can add additional KIE Server data sets by repeating this step. Click Add new KIE Server DataSets . In the DataSet name field, enter kieserver-1 .
In the Kie Server Location field, enter the location of your KIE Server, for example https://my-kie-server:80/services/rest/server . To set your credentials, complete one of the following tasks: If you do not have a token set, in the Username and Password fields, enter your username and password. Leave the Token field blank. If you have a token, in the Token field, enter your token. Leave the Username and Password fields blank. The custom resource example: To add a KIE Server template, complete the following tasks: Note You can add additional KIE Server templates by repeating this step. Click Add new KIE Server Templates . In the Template name field, enter a name for your template, for example kieserver-template . In the KIE Server Location field, enter the location of your KIE Server, for example https://my-other-kie-server:80/services/rest/server . To set your credentials, complete one of the following tasks: If you do not have a token set, in the Username and Password fields, enter your username and password. Leave the Token field blank. If you have a token, in the Token field, enter your token. Leave the Username and Password fields blank. Optional: To set a custom hostname for the external route, enter a domain in the Custom hostname to be used on the Dashbuilder external Route field, formatted as in the following example: Note The custom hostname must be valid and resolvable. To change the custom hostname, you can modify the routeHostname property. Optional: To enable and set the Edge termination route, complete the following steps: Under Change route termination , select Enable Edge termination . Optional: In the Key field, enter the private key. Optional: In the Certificate field, enter the certificate. Optional: In the CaCertificate field, enter the CaCertificate. 53.2.1. Dashbuilder Standalone environment variables When you use the Dashbuilder Container Image within operator, you can configure Dashbuilder by using the environment variables or through Custom Resource. Table 53.1. Custom Resource parameters Parameter Equivalent Environment Variable Description Example value allowExternalFileRegister DASHBUILDER_ALLOW_EXTERNAL_FILE_REGISTER Allows downloading of external (remote) files. Default value is false. False componentEnable DASHBUILDER_COMP_ENABLE Enables external components. True componentPartition DASHBUILDER_COMPONENT_PARTITION Enables partitioning of components by the Runtime Model ID. Default value is true. True configMapProps DASHBUILDER_CONFIG_MAP_PROPS Allows the use of the properties file with Dashbuilder configurations. Unique properties are appended and if a property is set more than once, the one from the properties file is used. True dataSetPartition DASHBUILDER_DATASET_PARTITION Enables partitioning of Dataset IDs by the Runtime Model ID. Default value is true. True enableBusinessCentral - Enables integration with Business Central by configuring Business Central and Dashbuilder automatically. Only available on operator. True enableKieServer - Enables integration with KIE Server by configuring KIE Server and Dashbuilder automatically. Only available on operator. True externalCompDir DASHBUILDER_EXTERNAL_COMP_DIR Sets the base directory where dashboard ZIP files are stored. If PersistentConfigs is enabled and ExternalCompDir is not set to an existing path, the /opt/kie/dashbuilder/components directory is used. - importFileLocation DASHBUILDER_IMPORT_FILE_LOCATION Sets a static dashboard to run automatically. If this property is set, imports are not allowed. 
- importsBaseDir DASHBUILDER_IMPORTS_BASE_DIR Sets the base directory where dashboard ZIP files are stored. If PersistentConfigs is enabled and ImportsBaseDir is not set to an existing path, the /opt/kie/dashbuilder/imports directory is used. If ImportFileLocation is set ImportsBaseDir is ignored. - kieServerDataSets KIESERVER_DATASETS Defines the KIE Server data sets access configuration. - kieServerTemplates KIESERVER_SERVER_TEMPLATES Defines the KIE Server Templates access configuration. - modelFileRemoval DASHBUILDER_MODEL_FILE_REMOVAL Enables automatic removal of model file from the file system. Default value is false. False modelUpdate DASHBUILDER_MODEL_UPDATE Allows Runtime to check model last update in the file system to update the content. Default value is true. True persistentConfigs `` Sets Dashbuilder as not ephemeral. If ImportFileLocation is set PersistentConfigs is ignored. Default value is true. Available only on operator. True runtimeMultipleImport DASHBUILDER_RUNTIME_MULTIPLE_IMPORT Allows Runtime to allow imports (multi-tenancy). Default value is false. False uploadSize DASHBUILDER_UPLOAD_SIZE Sets the size limit for dashboard uploads (in kb). Default value is 10485760 kb. 10485760 env - Represents an environment variable present in a Container. - You can use operator to set environment variables by using the env property. The following example sets the value of the DASHBUILDER_UPLOAD_SIZE property to 1000 .
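As a quick illustration of the system properties described above, the following is a minimal sketch, not taken from the product documentation: it assumes a JBoss EAP-style standalone.sh launcher that forwards -D arguments to the JVM, and the dashboard path and size limit are placeholders for your environment.
# Hypothetical startup example; property names come from the sections above,
# the values are placeholders.
./standalone.sh -c standalone-full.xml \
  -Ddashbuilder.runtime.import=/tmp/dashbuilder/sales_dashboard.zip \
  -Ddashbuilder.runtime.upload.size=512 \
  -Ddashbuilder.runtime.allowExternal=true
With dashbuilder.runtime.import set as shown, the runtime loads that single static dashboard at startup.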
[ "./bin/jboss-cli.sh --commands=\"embed-server --std-out=echo,/subsystem=elytron/filesystem-realm=ApplicationRealm:add-identity(identity=<USERNAME>),/subsystem=elytron/filesystem-realm=ApplicationRealm:set-password(identity=<USERNAME>, clear={password='<PASSWORD>'}),/subsystem=elytron/filesystem-realm=ApplicationRealm:add-identity-attribute(identity=<USERNAME>, name=role, value=['admin'])\"", "./standalone.sh -c standalone-full.xml", "standalone.bat -c standalone-full.xml", "apiVersion: app.kiegroup.org/v2 kind: KieApp metadata: name: standalone-dashbuilder spec: environment: rhpam-standalone-dashbuilder objects: dashbuilder: config: kieServerDataSets: - name: kieserver-1 location: 'https://my-kie-server:80/services/rest/server' user: kieserverAdmin password: kieserverAdminPwd replaceQuery: true", "apiVersion: app.kiegroup.org/v2 kind: KieApp metadata: name: standalone-dashbuilder spec: environment: rhpam-standalone-dashbuilder objects: dashbuilder: config: kieServerDataSets: - name: kieserver-1 location: 'https://my-kie-server:80/services/rest/server' user: kieserverAdmin password: kieserverAdminPwd replaceQuery: true kieServerTemplates: - name: kieserver-template location: 'https://my-another-kie-server:80/services/rest/server' user: user password: pwd replaceQuery: true", "`dashbuilder.example.com`", "apiVersion: app.kiegroup.org/v2 kind: KieApp metadata: name: standalone-dashbuilder spec: environment: rhpam-standalone-dashbuilder objects: dashbuilder: env: - name: DASHBUILDER_UPLOAD_SIZE value: '1000'" ]
https://docs.redhat.com/en/documentation/red_hat_process_automation_manager/7.13/html/managing_red_hat_process_automation_manager_and_kie_server_settings/dashbuilder-runtimes-con_creating-custom-pages
Chapter 2. Installing Red Hat Gluster Storage
Chapter 2. Installing Red Hat Gluster Storage Red Hat Gluster Storage can be installed in a data center using Red Hat Gluster Storage Server On-Premise. This chapter describes the three different methods for installing Red Hat Gluster Storage Server: using an ISO image, using a PXE server, or using the Red Hat Satellite Server. For information on launching Red Hat Gluster Storage Server for Public Cloud, see the Red Hat Gluster Storage Administration Guide . Warning Gluster-NFS is considered deprecated as of Red Hat Gluster Storage 3.5. Red Hat no longer recommends the use of Gluster-NFS, and does not support its use in new deployments and existing deployments that upgrade to Red Hat Gluster Storage 3.5.3. You can use NFS-Ganesha as an alternative. Important Technology preview packages will also be installed with this installation of Red Hat Gluster Storage Server. For more information about the list of technology preview features, see chapter Technology Previews in the Red Hat Gluster Storage 3.5 Release Notes . When you clone a virtual machine that has Red Hat Gluster Storage Server installed, you need to remove the /var/lib/glusterd/glusterd.info file (if present) before you clone. If you do not remove this file, all cloned machines will have the same UUID. The file will be automatically recreated with a UUID on initial start-up of the glusterd daemon on the cloned virtual machines. 2.1. Obtaining Red Hat Gluster Storage This chapter details the steps to obtain the Red Hat Gluster Storage software. 2.1.1. Obtaining Red Hat Gluster Storage Server for On-Premise Visit the Software & Download Center in the Red Hat Customer Service Portal ( https://access.redhat.com/downloads ) to obtain the Red Hat Gluster Storage Server for On-Premise installation ISO image files . Use a valid Red Hat Subscription to download the full installation files, obtain a free evaluation installation, or follow the links in this page to purchase a new Red Hat Subscription. To download the Red Hat Gluster Storage Server installation files using a Red Hat Subscription or a Red Hat Evaluation Subscription: Visit the Red Hat Customer Service Portal at https://access.redhat.com/login and enter your user name and password to log in. Click Downloads to visit the Software & Download Center . In the Red Hat Gluster Storage Server area, click Download Software to download the latest version of the software. 2.1.2. Obtaining Red Hat Gluster Storage Server for Public Cloud Red Hat Gluster Storage Server for Public Cloud is pre-integrated, pre-verified, and ready to run the Amazon Machine Image (AMI). This AMI provides a fully POSIX-compatible, highly available, scale-out NAS and object storage solution for the Amazon Web Services (AWS) public cloud infrastructure. For more information about obtaining access to AMI, see https://access.redhat.com/knowledge/articles/145693 .
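As a brief illustration of the cloning note above, a minimal, hypothetical sequence on the source virtual machine before it is cloned might look like this (the file path is the one given in the note):
# On the source VM, before cloning:
rm -f /var/lib/glusterd/glusterd.info
# glusterd recreates this file with a new UUID on its first start in each clone.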
null
https://docs.redhat.com/en/documentation/red_hat_gluster_storage/3.5/html/installation_guide/chap-installing_red_hat_storage
Chapter 1. Creating a Red Hat High-Availability Cluster with Pacemaker
Chapter 1. Creating a Red Hat High-Availability Cluster with Pacemaker This chapter describes the procedure for creating a Red Hat High Availability two-node cluster using pcs . After you have created a cluster, you can configure the resources and resource groups that you require. Configuring the cluster provided in this chapter requires that your system include the following components: 2 nodes, which will be used to create the cluster. In this example, the nodes used are z1.example.com and z2.example.com . Network switches for the private network, required for communication among the cluster nodes and other cluster hardware such as network power switches and Fibre Channel switches. A power fencing device for each node of the cluster. This example uses two ports of the APC power switch with a host name of zapc.example.com . This chapter is divided into three sections. Section 1.1, "Cluster Software Installation" provides the procedure for installing the cluster software. Section 1.2, "Cluster Creation" provides the procedure for configuring a two-node cluster. Section 1.3, "Fencing Configuration" provides the procedure for configuring fencing devices for each node of the cluster. 1.1. Cluster Software Installation The procedure for installing and configuring a cluster is as follows. On each node in the cluster, install the Red Hat High Availability Add-On software packages along with all available fence agents from the High Availability channel. If you are running the firewalld daemon, execute the following commands to enable the ports that are required by the Red Hat High Availability Add-On. Note You can determine whether the firewalld daemon is installed on your system with the rpm -q firewalld command. If the firewalld daemon is installed, you can determine whether it is running with the firewall-cmd --state command. In order to use pcs to configure the cluster and communicate among the nodes, you must set a password on each node for the user ID hacluster , which is the pcs administration account. It is recommended that the password for user hacluster be the same on each node. Before the cluster can be configured, the pcsd daemon must be started and enabled to boot on startup on each node. This daemon works with the pcs command to manage configuration across the nodes in the cluster. On each node in the cluster, execute the following commands to start the pcsd service and to enable pcsd at system start. Authenticate the pcs user hacluster for each node in the cluster on the node from which you will be running pcs . The following command authenticates user hacluster on z1.example.com for both of the nodes in the example two-node cluster, z1.example.com and z2.example.com .
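Before opening the ports, you can verify the firewalld state with the two commands mentioned in the note above; a minimal sketch:
# Is firewalld installed, and is it running?
rpm -q firewalld
firewall-cmd --state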
[ "yum install pcs pacemaker fence-agents-all", "firewall-cmd --permanent --add-service=high-availability firewall-cmd --add-service=high-availability", "passwd hacluster Changing password for user hacluster. New password: Retype new password: passwd: all authentication tokens updated successfully.", "systemctl start pcsd.service systemctl enable pcsd.service", "pcs cluster auth z1.example.com z2.example.com Username: hacluster Password: z1.example.com: Authorized z2.example.com: Authorized" ]
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/high_availability_add-on_administration/ch-startup-HAAA
Chapter 1. Introduction
Chapter 1. Introduction 1.1. About the MTA extension for Microsoft Visual Studio Code You can migrate and modernize applications by using the Migration Toolkit for Applications (MTA) extension for Microsoft Visual Studio Code. The MTA extension analyzes your projects using customizable rulesets, marks issues in the source code, provides guidance to fix the issues, and offers automatic code replacement, if possible. The MTA extension is also compatible with Visual Studio Codespaces, the Microsoft cloud-hosted development environment. 1.2. About the Migration Toolkit for Applications What is the Migration Toolkit for Applications? Migration Toolkit for Applications (MTA) accelerates large-scale application modernization efforts across hybrid cloud environments on Red Hat OpenShift. This solution provides insight throughout the adoption process, at both the portfolio and application levels: inventory, assess, analyze, and manage applications for faster migration to OpenShift via the user interface. In MTA 7.1 and later, when you add an application to the Application Inventory , MTA automatically creates and executes language and technology discovery tasks. Language discovery identifies the programming languages used in the application. Technology discovery identifies technologies, such as Enterprise Java Beans (EJB), Spring, etc. Then, each task assigns appropriate tags to the application, reducing the time and effort you spend manually tagging the application. MTA uses an extensive default questionnaire as the basis for assessing your applications, or you can create your own custom questionnaire, enabling you to estimate the difficulty, time, and other resources needed to prepare an application for containerization. You can use the results of an assessment as the basis for discussions between stakeholders to determine which applications are good candidates for containerization, which require significant work first, and which are not suitable for containerization. MTA analyzes applications by applying one or more rulesets to each application considered to determine which specific lines of that application must be modified before it can be modernized. MTA examines application artifacts, including project source directories and application archives, and then produces an HTML report highlighting areas needing changes. How does the Migration Toolkit for Applications simplify migration? The Migration Toolkit for Applications looks for common resources and known trouble spots when migrating applications. It provides a high-level view of the technologies used by the application. MTA generates a detailed report evaluating a migration or modernization path. This report can help you to estimate the effort required for large-scale projects and to reduce the work involved.
null
https://docs.redhat.com/en/documentation/migration_toolkit_for_applications/7.1/html/visual_studio_code_extension_guide/introduction
Chapter 42. Using ldapmodify to manage IdM users externally
Chapter 42. Using ldapmodify to manage IdM users externally As an IdM administrators you can use the ipa commands to manage your directory content. Alternatively, you can use the ldapmodify command to achieve similar goals. You can use this command interactively and provide all the data directly in the command line. You also can provide data in the file in the LDAP Data Interchange Format (LDIF) to ldapmodify command. 42.1. Templates for managing IdM user accounts externally The following templates can be used for various user management operations in IdM. The templates show which attributes you must modify using ldapmodify to achieve the following goals: Adding a new stage user Modifying a user's attribute Enabling a user Disabling a user Preserving a user The templates are formatted in the LDAP Data Interchange Format (LDIF). LDIF is a standard plain text data interchange format for representing LDAP directory content and update requests. Using the templates, you can configure the LDAP provider of your provisioning system to manage IdM user accounts. For detailed example procedures, see the following sections: Adding an IdM stage user defined in an LDIF file Adding an IdM stage user directly from the CLI using ldapmodify Preserving an IdM user with ldapmodify Templates for adding a new stage user A template for adding a user with UID and GID assigned automatically . The distinguished name (DN) of the created entry must start with uid=user_login : A template for adding a user with UID and GID assigned statically : You are not required to specify any IdM object classes when adding stage users. IdM adds these classes automatically after the users are activated. Templates for modifying existing users Modifying a user's attribute : Disabling a user : Enabling a user : Updating the nssAccountLock attribute has no effect on stage and preserved users. Even though the update operation completes successfully, the attribute value remains nssAccountLock: TRUE . Preserving a user : Note Before modifying a user, obtain the user's distinguished name (DN) by searching using the user's login. In the following example, the user_allowed_to_modify_user_entries user is a user allowed to modify user and group information, for example activator or IdM administrator. The password in the example is this user's password: 42.2. Templates for managing IdM group accounts externally The following templates can be used for various user group management operations in IdM. The templates show which attributes you must modify using ldapmodify to achieve the following aims: Creating a new group Deleting an existing group Adding a member to a group Removing a member from a group The templates are formatted in the LDAP Data Interchange Format (LDIF). LDIF is a standard plain text data interchange format for representing LDAP directory content and update requests. Using the templates, you can configure the LDAP provider of your provisioning system to manage IdM group accounts. Creating a new group Modifying groups Deleting an existing group : Adding a member to a group : Do not add stage or preserved users to groups. Even though the update operation completes successfully, the users will not be updated as members of the group. Only active users can belong to groups. Removing a member from a group : Note Before modifying a group, obtain the group's distinguished name (DN) by searching using the group's name. 42.3. 
Using ldapmodify command interactively You can modify Lightweight Directory Access Protocol (LDAP) entries in the interactive mode. Procedure In a command line, enter the LDAP Data Interchange Format (LDIF) statement after the ldapmodify command. Example 42.1. Changing the telephone number for a testuser Note that you need to obtain a Kerberos ticket to use the -Y option. Press Ctrl+D to exit the interactive mode. Alternatively, provide an LDIF file after the ldapmodify command: Example 42.2. The ldapmodify command reads modification data from an LDIF file Additional resources For more information about how to use the ldapmodify command, see the ldapmodify(1) man page on your system. For more information about the LDIF structure, see the ldif(5) man page on your system. 42.4. Preserving an IdM user with ldapmodify Follow this procedure to use ldapmodify to preserve an IdM user, that is, to deactivate a user account after the employee has left the company. Prerequisites You can authenticate as an IdM user with a role to preserve users. Procedure Log in as an IdM user with a role to preserve users: Enter the ldapmodify command and specify the Generic Security Services API (GSSAPI) as the Simple Authentication and Security Layer (SASL) mechanism to be used for authentication: Enter the dn of the user you want to preserve: Enter modrdn as the type of change you want to perform: Specify the newrdn for the user: Indicate that you want to preserve the user: Specify the new superior DN : Preserving a user moves the entry to a new location in the directory information tree (DIT). For this reason, you must specify the DN of the new parent entry as the new superior DN. Press Enter again to confirm that this is the end of the entry: Exit the connection using Ctrl + C . Verification Verify that the user has been preserved by listing all preserved users:
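If you prefer to script the preservation instead of typing the statements interactively, the following is a minimal sketch that combines the preservation LDIF template from Section 42.1 with the non-interactive file-based invocation shown above. The server host name and the user login are placeholders; adjust them to your environment.
# Authenticate, write the modrdn statement to a file, apply it, and verify.
kinit admin
cat > preserve_user1.ldif << 'EOF'
dn: uid=user1,cn=users,cn=accounts,dc=idm,dc=example,dc=com
changetype: modrdn
newrdn: uid=user1
deleteoldrdn: 0
newsuperior: cn=deleted users,cn=accounts,cn=provisioning,dc=idm,dc=example,dc=com
EOF
ldapmodify -Y GSSAPI -H ldap://server.idm.example.com -f preserve_user1.ldif
ipa user-find --preserved=true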
[ "dn: uid=user_login ,cn=staged users,cn=accounts,cn=provisioning,dc=idm,dc=example,dc=com changetype: add objectClass: top objectClass: inetorgperson uid: user_login sn: surname givenName: first_name cn: full_name", "dn: uid=user_login,cn=staged users,cn=accounts,cn=provisioning,dc=idm,dc=example,dc=com changetype: add objectClass: top objectClass: person objectClass: inetorgperson objectClass: organizationalperson objectClass: posixaccount uid: user_login uidNumber: UID_number gidNumber: GID_number sn: surname givenName: first_name cn: full_name homeDirectory: /home/user_login", "dn: distinguished_name changetype: modify replace: attribute_to_modify attribute_to_modify: new_value", "dn: distinguished_name changetype: modify replace: nsAccountLock nsAccountLock: TRUE", "dn: distinguished_name changetype: modify replace: nsAccountLock nsAccountLock: FALSE", "dn: distinguished_name changetype: modrdn newrdn: uid=user_login deleteoldrdn: 0 newsuperior: cn=deleted users,cn=accounts,cn=provisioning,dc=idm,dc=example,dc=com", "ldapsearch -LLL -x -D \"uid= user_allowed_to_modify_user_entries ,cn=users,cn=accounts,dc=idm,dc=example,dc=com\" -w \"Secret123\" -H ldap://r8server.idm.example.com -b \"cn=users,cn=accounts,dc=idm,dc=example,dc=com\" uid=test_user dn: uid=test_user,cn=users,cn=accounts,dc=idm,dc=example,dc=com memberOf: cn=ipausers,cn=groups,cn=accounts,dc=idm,dc=example,dc=com", "dn: cn=group_name,cn=groups,cn=accounts,dc=idm,dc=example,dc=com changetype: add objectClass: top objectClass: ipaobject objectClass: ipausergroup objectClass: groupofnames objectClass: nestedgroup objectClass: posixgroup uid: group_name cn: group_name gidNumber: GID_number", "dn: group_distinguished_name changetype: delete", "dn: group_distinguished_name changetype: modify add: member member: uid=user_login,cn=users,cn=accounts,dc=idm,dc=example,dc=com", "dn: distinguished_name changetype: modify delete: member member: uid=user_login,cn=users,cn=accounts,dc=idm,dc=example,dc=com", "ldapsearch -YGSSAPI -H ldap://server.idm.example.com -b \"cn=groups,cn=accounts,dc=idm,dc=example,dc=com\" \"cn=group_name\" dn: cn=group_name,cn=groups,cn=accounts,dc=idm,dc=example,dc=com ipaNTSecurityIdentifier: S-1-5-21-1650388524-2605035987-2578146103-11017 cn: testgroup objectClass: top objectClass: groupofnames objectClass: nestedgroup objectClass: ipausergroup objectClass: ipaobject objectClass: posixgroup objectClass: ipantgroupattrs ipaUniqueID: 569bf864-9d45-11ea-bea3-525400f6f085 gidNumber: 1997010017", "ldapmodify -Y GSSAPI -H ldap://server.example.com dn: uid=testuser,cn=users,cn=accounts,dc=example,dc=com changetype: modify replace: telephoneNumber telephonenumber: 88888888", "ldapmodify -Y GSSAPI -H ldap://server.example.com -f ~/example.ldif", "kinit admin", "ldapmodify -Y GSSAPI SASL/GSSAPI authentication started SASL username: [email protected] SASL SSF: 256 SASL data security layer installed.", "dn: uid=user1,cn=users,cn=accounts,dc=idm,dc=example,dc=com", "changetype: modrdn", "newrdn: uid=user1", "deleteoldrdn: 0", "newsuperior: cn=deleted users,cn=accounts,cn=provisioning,dc=idm,dc=example,dc=com", "[Enter] modifying rdn of entry \"uid=user1,cn=users,cn=accounts,dc=idm,dc=example,dc=com\"", "ipa user-find --preserved=true -------------- 1 user matched -------------- User login: user1 First name: First 1 Last name: Last 1 Home directory: /home/user1 Login shell: /bin/sh Principal name: [email protected] Principal alias: [email protected] Email address: [email protected] UID: 1997010003 GID: 1997010003 Account 
disabled: True Preserved user: True ---------------------------- Number of entries returned 1 ----------------------------" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/configuring_and_managing_identity_management/using-ldapmodify-to-manage-IdM-users-externally_configuring-and-managing-idm
function::task_pid
function::task_pid Name function::task_pid - The process identifier of the task. Synopsis Arguments task task_struct pointer. General Syntax task_pid:long (task:long) Description This function returns the process ID of the given task.
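A minimal usage sketch, assuming the standard task tapset also provides task_current() to obtain the current task_struct pointer; the probe point is only an example:
# Print the PID of whichever task enters vfs_read (example probe point).
stap -e 'probe kernel.function("vfs_read") { printf("pid=%d\n", task_pid(task_current())) }'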
[ "function task_pid:long(task:long)" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/systemtap_tapset_reference/api-task-pid
Chapter 4. Creating an OpenShift route to access a Kafka cluster
Chapter 4. Creating an OpenShift route to access a Kafka cluster Create an OpenShift route to access a Kafka cluster outside of OpenShift. This procedure describes how to expose a Kafka cluster to clients outside the OpenShift environment. After the Kafka cluster is exposed, external clients can produce and consume messages from the Kafka cluster. To create an OpenShift route, a route listener is added to the configuration of a Kafka cluster installed on OpenShift. Warning An OpenShift Route address includes the name of the Kafka cluster, the name of the listener, and the name of the namespace it is created in. For example, my-cluster-kafka-listener1-bootstrap-amq-streams-kafka ( <cluster_name> -kafka- <listener_name> -bootstrap- <namespace> ). Be careful that the whole length of the address does not exceed a maximum limit of 63 characters. Prerequisites You have created a Kafka cluster on OpenShift . You need the OpenJDK keytool to manage certificates. (Optional) You can perform some of the steps using the OpenShift oc CLI tool. Procedure Navigate in the web console to the Operators > Installed Operators page and select Red Hat Integration - AMQ Streams to display the operator details. Select the Kafka page to show the installed Kafka clusters. Click the name of the Kafka cluster you are configuring to view its details. We use a Kafka cluster named my-cluster in this example. Select the YAML page for the Kafka cluster my-cluster . Add route listener configuration to create an OpenShift route named listener1 . The listener configuration must be set to the route type. You add the listener configuration under listeners in the Kafka configuration. External route listener configuration apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka metadata: name: my-cluster namespace: amq-streams-kafka spec: kafka: # ... listeners: # ... - name: listener1 port: 9094 type: route tls: true # ... The client connects on port 443, the default router port, but traffic is then routed to the port you configure, which is 9094 in this example. Save the updated configuration. Select the Resources page for the Kafka cluster my-cluster to locate the connection information you will need for your client. From the Resources page, you'll find details for the route listener and the public cluster certificate you need to connect to the Kafka cluster. Click the name of the my-cluster-kafka-listener1-bootstrap route created for the Kafka cluster to show the route details. Make a note of the hostname. The hostname is specified with port 443 in a Kafka client as the bootstrap address for connecting to the Kafka cluster. You can also locate the bootstrap address by navigating to Networking > Routes and selecting the amq-streams-kafka project to display the routes created in the namespace. Or you can use the oc tool to extract the bootstrap details. Extracting bootstrap information oc get routes my-cluster-kafka-listener1-bootstrap -o=jsonpath='{.status.ingress[0].host}{"\n"}' Navigate back to the Resources page and click the name of the my-cluster-cluster-ca-cert to show the secret details for accessing the Kafka cluster. The ca.crt certificate file contains the public certificate of the Kafka cluster. You will need the certificate to access the Kafka broker. Make a local copy of the ca.crt public certificate file. You can copy the details of the certificate or use the OpenShift oc tool to extract them. 
Extracting the public certificate oc extract secret/my-cluster-cluster-ca-cert --keys=ca.crt --to=- > ca.crt Create a local truststore for the public cluster certificate using keytool . Creating a local truststore keytool -keystore client.truststore.jks -alias CARoot -import -file ca.crt When prompted, create a password for accessing the truststore. The truststore is specified in a Kafka client for authenticating access to the Kafka cluster. You are now ready to start sending and receiving messages.
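A minimal sketch of a client making use of the route and truststore, assuming the Kafka command line tools are available on the client machine; the bootstrap host, truststore password, and topic name are placeholders:
# <route-host> is the route hostname noted above; port 443 is the default router port.
bin/kafka-console-producer.sh \
  --bootstrap-server <route-host>:443 \
  --producer-property security.protocol=SSL \
  --producer-property ssl.truststore.location=client.truststore.jks \
  --producer-property ssl.truststore.password=<password> \
  --topic my-topic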
[ "apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka metadata: name: my-cluster namespace: amq-streams-kafka spec: kafka: # listeners: # - name: listener1 port: 9094 type: route tls: true", "get routes my-cluster-kafka-listener1-bootstrap -o=jsonpath='{.status.ingress[0].host}{\"\\n\"}'", "extract secret/my-cluster-cluster-ca-cert --keys=ca.crt --to=- > ca.crt", "keytool -keystore client.truststore.jks -alias CARoot -import -file ca.crt" ]
https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.1/html/getting_started_with_amq_streams_on_openshift/proc-creating-route-str
Appendix A. The Virtual Host Metrics Daemon (vhostmd)
Appendix A. The Virtual Host Metrics Daemon (vhostmd) vhostmd (the Virtual Host Metrics Daemon) allows virtual machines to see limited information about the host they are running on. This daemon is only supplied with Red Hat Enterprise Linux for SAP. In the host, a daemon ( vhostmd ) runs which writes metrics periodically into a disk image. This disk image is exported read-only to guest virtual machines. Guest virtual machines can read the disk image to see metrics. Simple synchronization stops guest virtual machines from seeing out of date or corrupt metrics. The system administrator chooses which metrics are available for use on a per guest virtual machine basis. In addition, the system administrator may block one or more guest virtual machines from having any access to metric configurations. Customers who want to use vhostmd and vm-dump-metrics therefore need subscriptions for "RHEL for SAP Business Applications" to be able to subscribe their RHEL systems running SAP to the "RHEL for SAP" channel on the Customer Portal or Red Hat Subscription Management to install the packages. The following kbase article in the customer portal describes the setup of vhostmd on RHEL: https://access.redhat.com/knowledge/solutions/41566
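A minimal sketch of putting this into practice, assuming the vhostmd and vm-dump-metrics packages are available from the RHEL for SAP channel and using RHEL 6-era service management; the full setup steps are described in the kbase article referenced above:
# On the host:
yum install vhostmd
service vhostmd start
chkconfig vhostmd on
# Inside an entitled guest, after the metrics disk has been made available to it:
yum install vm-dump-metrics
vm-dump-metrics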
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/virtualization_administration_guide/vhostmd
Chapter 2. Managing repositories
Chapter 2. Managing repositories 2.1. Available repositories Certified Cloud and Service Provider (CCSP) partners control what repositories and packages are delivered through their service. For the most current information regarding what repositories are available for the various operating system versions but are not yet added in your RHUI, run the following command on the RHUA: Additional resources Red Hat Ecosystem Catalog 2.2. Adding a new Red Hat content repository Your CCSP account enables you to access selected Red Hat repositories and make them available in your Red Hat Update Infrastructure environment. Procedure Navigate to the Red Hat Update Infrastructure Management Tool home screen: Press r to select manage repositories . From the Repository Management screen, press a to select add a new Red Hat content repository . Wait for the Red Hat Update Infrastructure Management Tool to determine the entitled repositories. This might take several minutes: The Red Hat Update Infrastructure Management Tool prompts for a selection method: To add several repositories bundled together as a product-usually all the minor versions of it in one step-press 2 to select the By Product method. Alternatively, you can add particular repositories by using the By Repository method. Select which repositories to add by typing the number of the repository at the prompt. You can also choose the range of repositories, for instance, by entering 1 - 5 . Continue until all repositories you want to add are checked. Press c when you are finished selecting the repositories. The Red Hat Update Infrastructure Management Tool displays the repositories for deployment and prompts for confirmation: Press y to proceed. A message indicates each successful deployment: Verification From the Repository Management screen, press l to check that the correct repositories have been installed. 2.3. Listing repositories currently managed by RHUI 4 A repository contains downloadable software for a Linux distribution. You use yum to search for, install, or only download RPMs from the repository. Procedure Navigate to the Red Hat Update Infrastructure Management Tool home screen: Press r to select manage repositories . From the Repository Management screen, press l to select list repositories currently managed by the RHUI : 2.4. Displaying detailed information on a repository You can use the Repository Management screen to display information about a particular repository. Procedure Navigate to the Red Hat Update Infrastructure Management Tool home screen: Press r to select manage repositories . From the Repository Management screen, press i : Select the repository by entering the value beside the repository name. Enter one repository selection at a time before confirming your product selection. Press c to confirm: Verification A similar output displays for your selections. 2.5. Generating a repository status file You can generate a machine-readable JSON file that displays the status of all RHUI repositories as well as provides some additional information. This is useful, for example, if you want to passively monitor the status of the repositories. 2.5.1. Generating a status file for RHUI repositories You can use the rhui-manager command to obtain the status of each repository in a machine-readable format. Procedure On the RHUA node, run the following command. A JSON file is generated containing a list of dictionaries for all custom and Red Hat repositories. 
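A minimal sketch of consuming the generated file, for example to flag repositories whose last synchronization failed; it assumes jq is installed and that the file is a top-level JSON list of dictionaries, and it relies on the last_sync_result key described in the next section:
# Generate the status file, then list the IDs of repositories that failed to sync.
rhui-manager --non-interactive status --repo_json /tmp/repo_status.json
jq -r '.[] | select(.last_sync_result == "failed") | .id' /tmp/repo_status.json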
For a list of available dictionaries, see Section 2.5.2, "List of dictionary keys in the repository status JSON file" . 2.5.2. List of dictionary keys in the repository status JSON file A machine-readable JSON file is created when you run the command to get the status of each RHUI repository. The JSON file contains a list of dictionaries with one dictionary for each repository. List of dictionary keys for custom repositories Table 2.1. List of dictionary keys for custom repositories Key Description base_path The path of the repository. description The name of the repository. group The group the repository belongs to. It is always set to the string, custom . id The repository ID. name The name of the repository. It is the same as the repository ID. List of dictionary keys for Red Hat repositories Table 2.2. List of dictionary keys for Red Hat repositories Key Description base_path The path of the repository. description The name of the repository. group The group the repository belongs to. It is always set to the string, redhat . id The repository ID. last_sync_date The date and time the repository was last synchronized. The value is null if the repository was never synchronized. last_sync_exception The exception raised if the repository failed to synchronize. The value is null if the repository was synchronized correctly. last_sync_result The result of the synchronization task. The values are: completed : If the repository synchronized correctly. null : If the repository was never synchronized. failed : If the synchronization failed. running : If a synchronization task is currently running. last_sync_traceback The traceback that was logged if the repository failed to synchronize. The value is null if the repository was synchronized correctly or was never synchronized. metadata_available A boolean value denoting whether metadata is available for the repository. name The name of the repository. It is the same as the repository ID. next_sync_date The date and time of the scheduled synchronization of the repository. If a synchronization task is currently running, the value is running . repo_published A boolean value denoting whether this repository has been published in RHUI. Note that, by default, RHUI is configured to automatically publish repositories. 2.6. Setting Up On-Demand Syncing of Repositories RHUI allows you to minimize the amount of content downloaded to storage in advance by setting certain repositories to on_demand sync mode. This way, RHUI will only download and store content when it is requested by client machines, which can result in reduced storage usage and lower costs. However, the downside of this approach is that RHUI's performance will depend on the connection speed to the Red Hat CDN network. Repository Content Types There are three types of repository content: Binary RPM repositories Source RPM repositories Debug RPM repositories Synchronization Strategies You can set each of these repository types to one of two synchronization policies: immediate on_demand By default, all policies are set to immediate . Setting the Sync Policy By default, the /etc/rhui/rhui-tools.conf file on the RHUA node contains the following lines in the [rhui] section: The default_sync_policy option applies to all three types of content repositories. Although you can change the policy by editing this file, keep in mind that your changes will be lost when you rerun the installer for any reason. Therefore, configure the sync policies in the custom configuration file instead. 
The custom configuration file is located at /root/.rhui/rhui-tools-custom.conf but does not exist by default. To use this file, create it and put the [rhui] section there. Then you can add specific overrides to this section to customize the behavior for particular content types. The options available are: rpm_sync_policy source_sync_policy debug_sync_policy Examples The most common usage of the on_demand policy is to set Binary RPMs to sync immediately while setting Source and Debug repositories to on_demand, as the general population of clients usually does not require these content types. You can configure this in several ways: or or All three configurations are valid; it is simply a matter of preference. Applying the Policy After updating the configuration file, the repository synchronization will apply the new policy. If you switch from on_demand to immediate, the sync will begin downloading all content for the specified type. If you switch from immediate to on_demand, the sync will only download repository metadata. RHUI will then download content as requested by client machines. Tips and Tricks Setting all repositories to on_demand right after installing RHUI can lead to faster deployment and quicker delivery for end-users, as only metadata needs to be initially synced. Utilizing a "martyr client" strategy can be beneficial if you have a new installation and do not need to support older versions of RHEL clients. By using a client that mirrors end-user configurations and running dnf update , you can pre-download content to RHUI's storage. 2.7. Adding a new Red Hat content repository using an input file In Red Hat Update Infrastructure 4.2 and later, you can add custom repositories using a configured YAML input file. You can find an example template of the YAML file on the RHUA node in the /usr/share/rhui-tools/examples/repo_add_by_file.yaml directory. This functionality is only available in the command-line interface (CLI). Prerequisites Ensure that you have root access to the RHUA node. Procedure On the RHUA node, create a YAML input file in the following format: Add the repositories listed in the input file using the rhui-manager utility: Verification In the CLI, use the following command to list all the installed repositories and check whether the correct repositories have been installed: In the RHUI Management Tool, on the Repository Management screen, press l to list all the installed repositories and check whether the correct repositories have been installed. 2.8. Creating a new custom repository (RPM content only) You can create custom repositories that can be used to distribute updated client configuration packages or other non-Red Hat software to the RHUI clients. A protected repository for 64-bit RHUI servers (for example, client-rhui-x86_64 ) will be the preferred vehicle for distributing new non-Red Hat packages, such as an updated client configuration package, to the RHUI clients. Like Red Hat content repositories, all of which are protected, protected custom repositories that differ only in processor architecture ( i386 versus AMD64 ) are consolidated into a single entitlement within an entitlement certificate, using the USDbasearch yum variable. In the event of certificate problems, an unprotected repository for RHUI servers can be used as a fallback method for distributing updated RPMs to the RHUI clients. Procedure Navigate to the Red Hat Update Infrastructure Management Tool home screen: Press r to select manage repositories . 
From the Repository Management screen, press c to select create a new custom repository (RPM content only) . Enter a unique ID for the repository. Only alphanumeric characters, _ (underscore), and - (hyphen) are permitted. You cannot use spaces in the unique ID. For example, repo1 , repo_1 , and repo-1 are valid entries. Enter a display name for the repository. This name can contain spaces and other characters that could not be used in the ID. The name defaults to the ID. Specify the path that will host the repository. The path must be unique across all repositories hosted by RHUI. For example, if you specify the path at this step as internal/rhel/9/repo_1 , then the repository will be located at: https://<yourLB>/pulp/content/protected/internal/rhel/9/repo_1 . Choose whether to protect the new repository. If you answer no to this question, any client can access the repository. If you answer yes, only clients with an appropriate entitlement certificate can access the repository. Warning As the name implies, the content in an unprotected repository is available to any system that requests it, without any need for a client entitlement certificate. Be careful when using an unprotected repository to distribute any content, particularly content such as updated client configuration RPMs, which will then provide access to protected repositories. Answer yes or no to the following questions as they appear: The details of the new repository displays. Press y at the prompt to confirm the information and create the repository. Verification From the Repository Management screen, press l to check that the correct repositories have been installed. 2.9. Deleting a repository from RHUI 4 When the Red Hat Update Infrastructure Management Tool deletes a Red Hat repository, it deletes the repository from the RHUA and all applicable CDS nodes. Procedure Navigate to the Red Hat Update Infrastructure Management Tool home screen: Press r to select manage repositories . From the Repository Management screen, press d at the prompt to delete a Red Hat repository. A list of all repositories currently being managed by RHUI displays. Select which repositories to delete by typing the number of the repository at the prompt. Typing the number of a repository places a checkmark to the name of that repository. You can also choose the range of repositories, for instance, by entering 1 - 5 . Continue until all repositories you want to delete are checked. Press c at the prompt to confirm. Note After you delete the repositories, the client configuration RPMs that refer to the deleted repositories will not be available to be used by yum . 2.10. Uploading content to a custom repository (RPM content only) You can upload multiple packages and upload to more than one repository at a time. Packages are uploaded to the RHUA immediately but are not available on the CDS node until the time the CDS node synchronizes. Procedure Navigate to the Red Hat Update Infrastructure Management Tool home screen: Press r to select manage repositories . From the Repository Management screen, press u : Enter the value (1-1) to toggle the selection. Press c to confirm your selection. Enter the location of the packages to upload. If the location is an RPM, the file will be uploaded. If the location is a directory, all RPMs in that directory will be uploaded: Press y to proceed or n to cancel: Verification See Section 2.14, "Listing the packages in a repository (RPM content only)" 2.11. 
Uploading content from a remote web site (RPM content only) You can upload packages that are stored on a remote server without having to manually download them first. The packages must be accessible by HTTP, HTTPS, or FTP. Procedure Navigate to the Red Hat Update Infrastructure Management Tool home screen: Press r to select manage repositories . From the Repository Management screen, press ur : Enter the value (1-1) to toggle the selection. Press c to confirm your selection: Enter the remote URL of the packages to upload. If the location is an RPM, the file will be uploaded. If the location is a web page, all RPMs linked off that page will be uploaded: Press y to proceed or n to cancel: Verification See Section 2.14, "Listing the packages in a repository (RPM content only)" 2.12. Importing package group metadata to a custom repository To allow RHUI users to view and install package groups or language packs from a custom repository, you can import a comps.xml or a comps.xml.gz file to the custom repository. Note Red Hat repositories contain these files provided by Red Hat. You can not override them. You can only upload these files to your custom repositories. This functionality is only available in the command-line interface. Prerequisites Ensure that you have a valid comps.xml or comps.xml.gz file relevant to the custom repository. Ensure you have root access to the RHUA node. Procedure On the RHUA node, import data from a comps file to your custom repository using the rhui-manager utility: Verification On a client system that uses the custom repository: Refresh the repository data: List the repository data and verify that the comps file has been updated: 2.13. Removing content from a custom repository (Custom RPM content only) You can remove packages from custom repositories using RHUI's Text User Interface (TUI). For the command-line interface (CLI) command, see Section 10.1, "Using RHUI 4 CLI options" . Procedure Navigate to the Red Hat Update Infrastructure Management Tool home screen: Enter r to select manage repositories . On the Repository Management screen, enter r to select packages to remove from a repository (Custom RPM content only) : Enter the value to select the repository: Enter the value to select the packages to delete. Enter c to confirm selection. Enter y to proceed or n to cancel: 2.14. Listing the packages in a repository (RPM content only) When listing repositories within the Red Hat Update Infrastructure Management Tool, only repositories that contain fewer than 100 packages display their contents. Results with more than 100 packages only display a package count. Procedure Navigate to the Red Hat Update Infrastructure Management Tool home screen: Press r to select manage repositories . From the Repository Management screen, press p . Select the number of the repository you want to view. The Red Hat Update Infrastructure Management Tool asks if you want to filter the results. Leave the line blank to see the results without a filter. Verification One of three types of messages displays: 2.15. Limiting the number of repository versions In Pulp 3, which is used in Red Hat Update Infrastructure 4, repositories are versioned. When a repository is updated in Red Hat CDN and synchronized in Red Hat Update Infrastructure, Pulp creates a new version. By default, repositories added using Red Hat Update Infrastructure version 4.6 and earlier were configured to retain all repository versions. 
This resulted in data accumulating in the database indefinitely, taking up disk space and, in the worst case, making it impossible to delete a repository. With version 4.7 and newer, repositories are added with a version limit of 5. This means only the latest five versions are kept at all times, and any older version is automatically deleted. However, you may want to set the version limit for existing repositories added earlier and have any older versions deleted. You can do this for all your repositories at once or process one repository at a time. The command to do this is as follows: For example, to limit the number of versions for all repositories to 5, run: Depending on the number of repositories and existing repository versions, it can take more than an hour for all the necessary tasks to be scheduled, and up to a few days for the versions older than the limit to be deleted. You can watch the progress in the rhui-manager text user interface, on the synchronization screen, under running tasks. 2.16. Removing orphaned artifacts RPM packages, repodata files, and other related files are kept on the disk even if they are no longer part of a repository; for example, after a repository is deleted and the files do not belong to another repository, or when an update is made available and a new set of repodata files is synchronized. To remove this obsolete content, run the following command: Depending on the number of files, it can take up to several days for this task to complete. You can watch the progress in the rhui-manager text user interface, on the synchronization screen, under running tasks.
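Returning briefly to the synchronization policies in Section 2.6, a minimal sketch of creating the custom override file described there; the path and option names are taken from that section, and the chosen values match one of the documented examples:
# Keep binary RPMs synced immediately, fetch source and debug content on demand.
mkdir -p /root/.rhui
cat > /root/.rhui/rhui-tools-custom.conf << 'EOF'
[rhui]
default_sync_policy: immediate
source_sync_policy: on_demand
debug_sync_policy: on_demand
EOF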
[ "rhui-manager --noninteractive repo unused --by_repo_id", "rhui-manager", "rhui (repo) => a Loading latest entitled products from Red Hat ... listings loaded Determining undeployed products ... product list calculated", "Import Repositories: 1 - All in Certificate 2 - By Product 3 - By Repository Enter value (1-3) or 'b' to abort:", "Enter value (1-620) to toggle selection, 'c' to confirm selections, or '?' for more commands:", "The following products will be deployed: Red Hat Enterprise Linux 8 for ARM 64 - BaseOS (Debug RPMs) from RHUI Red Hat Enterprise Linux 8 for ARM 64 - BaseOS (RPMs) from RHUI Proceed? (y/n)", "Importing Red Hat Enterprise Linux 8 for ARM 64 - BaseOS (Debug RPMs) from RHUI Importing product repository Red Hat Enterprise Linux 8 for ARM 64 - BaseOS (Debug RPMs) from RHUI (8.4) Importing product repository Red Hat Enterprise Linux 8 for ARM 64 - BaseOS (Debug RPMs) from RHUI (8.3) Importing product repository Red Hat Enterprise Linux 8 for ARM 64 - BaseOS (Debug RPMs) from RHUI (8.2) Importing product repository Red Hat Enterprise Linux 8 for ARM 64 - BaseOS (Debug RPMs) from RHUI (8.1) Importing product repository Red Hat Enterprise Linux 8 for ARM 64 - BaseOS (Debug RPMs) from RHUI (8.0) Importing product repository Red Hat Enterprise Linux 8 for ARM 64 - BaseOS (Debug RPMs) from RHUI (8) Importing Red Hat Enterprise Linux 8 for ARM 64 - BaseOS (RPMs) from RHUI Importing product repository Red Hat Enterprise Linux 8 for ARM 64 - BaseOS (RPMs) from RHUI (8.4) Importing product repository Red Hat Enterprise Linux 8 for ARM 64 - BaseOS (RPMs) from RHUI (8.3) Importing product repository Red Hat Enterprise Linux 8 for ARM 64 - BaseOS (RPMs) from RHUI (8.2) Importing product repository Red Hat Enterprise Linux 8 for ARM 64 - BaseOS (RPMs) from RHUI (8.1) Importing product repository Red Hat Enterprise Linux 8 for ARM 64 - BaseOS (RPMs) from RHUI (8.0) Importing product repository Red Hat Enterprise Linux 8 for ARM 64 - BaseOS (RPMs) from RHUI (8) Content will not be downloaded to the newly imported repositories until the next sync is run.", "rhui-manager", "Red Hat Enterprise Linux 8 for ARM 64 - AppStream (RPMs) from RHUI (8) Red Hat Enterprise Linux 8 for ARM 64 - AppStream (RPMs) from RHUI (8.0) Red Hat Enterprise Linux 8 for ARM 64 - AppStream (RPMs) from RHUI (8.1) Red Hat Enterprise Linux 8 for ARM 64 - AppStream (RPMs) from RHUI (8.2) Red Hat Enterprise Linux 8 for ARM 64 - AppStream (RPMs) from RHUI (8.3) Red Hat Enterprise Linux 8 for ARM 64 - AppStream (RPMs) from RHUI (8.4) Red Hat Enterprise Linux 8 for ARM 64 - AppStream (Source RPMs) from RHUI (8) Red Hat Enterprise Linux 8 for ARM 64 - AppStream (Source RPMs) from RHUI (8.0) Red Hat Enterprise Linux 8 for ARM 64 - AppStream (Source RPMs) from RHUI (8.1) Red Hat Enterprise Linux 8 for ARM 64 - AppStream (Source RPMs) from RHUI (8.2) Red Hat Enterprise Linux 8 for ARM 64 - AppStream (Source RPMs) from RHUI (8.3) Red Hat Enterprise Linux 8 for ARM 64 - AppStream (Source RPMs) from RHUI (8.4) Red Hat Enterprise Linux 8 for ARM 64 - BaseOS (Debug RPMs) from RHUI (8) Red Hat Enterprise Linux 8 for ARM 64 - BaseOS (Debug RPMs) from RHUI (8.0) Red Hat Enterprise Linux 8 for ARM 64 - BaseOS (Debug RPMs) from RHUI (8.1) Red Hat Enterprise Linux 8 for ARM 64 - BaseOS (Debug RPMs) from RHUI (8.2) Red Hat Enterprise Linux 8 for ARM 64 - BaseOS (Debug RPMs) from RHUI (8.3) Red Hat Enterprise Linux 8 for ARM 64 - BaseOS (Debug RPMs) from RHUI (8.4)", "rhui-manager", "Enter value (1-1631) to toggle selection, 'c' to 
confirm selections, or '?' for more commands:", "Name: Red Hat Enterprise Linux 8 for ARM 64 - AppStream (Debug RPMs) from RHUI (8.4) ID: rhel-8-for-aarch64-appstream-debug-rhui-rpms-8.4 Type: Red Hat Version: 0 Relative Path: content/dist/rhel8/rhui/8.4/aarch64/appstream/debug GPG Check: Yes Custom GPG Keys: (None) Red Hat GPG Key: Yes Content Unit Count: Last Sync: 2021-11-15 15:56:06 Next Sync: 2021-11-15 22:00:00 Name: Red Hat Enterprise Linux 8 for ARM 64 - AppStream (RPMs) from RHUI (8.4) ID: rhel-8-for-aarch64-appstream-rhui-rpms-8.4 Type: Red Hat Version: 0 Relative Path: content/dist/rhel8/rhui/8.4/aarch64/appstream/os GPG Check: Yes Custom GPG Keys: (None) Red Hat GPG Key: Yes Content Unit Count: Last Sync: 2021-11-15 19:50:20 Next Sync: 2021-11-16 01:55:00 Name: Red Hat Enterprise Linux 8 for ARM 64 - AppStream (Source RPMs) from RHUI (8.4) ID: rhel-8-for-aarch64-appstream-source-rhui-rpms-8.4 Type: Red Hat Version: 0 Relative Path: content/dist/rhel8/rhui/8.4/aarch64/appstream/source/SRPMS GPG Check: Yes Custom GPG Keys: (None) Red Hat GPG Key: Yes Content Unit Count: Last Sync: 2021-11-15 15:56:51 Next Sync: 2021-11-15 22:00:00", "rhui-manager --non-interactive status --repo_json <output file>", "Sync policy can be immediate or on_demand default_sync_policy: immediate", "[rhui] default_sync_policy: on_demand rpm_sync_policy: immediate", "[rhui] default_sync_policy: immediate source_sync_policy: on_demand debug_sync_policy: on_demand", "[rhui] default_sync_policy: immediate rpm_sync_policy: immediate source_sync_policy: on_demand debug_sync_policy: on_demand", "cat /root/example.yaml name: Example_YAML_File repo_ids: - rhel-8-for-x86_64-baseos-eus-rhui-rpms-8.1 - rhel-8-for-x86_64-baseos-eus-rhui-rpms-8.2 - rhel-8-for-x86_64-baseos-eus-rhui-rpms-8.4 - rhel-8-for-x86_64-baseos-eus-rhui-rpms-8.6", "rhui-manager repo add_by_file --file /root/example.yaml --sync_now The name of the repos being added: Example_YAML_File Loading latest entitled products from Red Hat ... listings loaded Successfully added Red Hat Enterprise Linux 8 for x86_64 - BaseOS - Extended Update Support from RHUI (RPMs) (8.1) (Yum) Successfully added Red Hat Enterprise Linux 8 for x86_64 - BaseOS - Extended Update Support from RHUI (RPMs) (8.2) (Yum) Successfully added Red Hat Enterprise Linux 8 for x86_64 - BaseOS - Extended Update Support from RHUI (RPMs) (8.4) (Yum) Successfully added Red Hat Enterprise Linux 8 for x86_64 - BaseOS - Extended Update Support from RHUI (RPMs) (8.6) (Yum) ... successfully scheduled for the next available timeslot. ... successfully scheduled for the next available timeslot. ... successfully scheduled for the next available timeslot. ... successfully scheduled for the next available timeslot.", "rhui-manager repo list", "rhui-manager", "Unique ID for the custom repository (alphanumerics, _, and - only):", "Display name for the custom repository [repo_1]:", "Unique path at which the repository will be served [repo_1]:", "Should the repository require clients to perform a GPG check and verify packages are signed by a GPG key? (y/n) Will the repository be used to host any Red Hat GPG signed content? (y/n) Will the repository be used to host any custom GPG signed content? (y/n) Enter the absolute path to the public key of the GPG key pair: Would you like to enter another public key? (y/n) Enter the absolute path to the public key of the GPG key pair: Would you like to enter another public key? 
(y/n)", "rhui-manager", "rhui-manager", "Select the repositories to upload the package into: - 1: test", "/root/bear-4.1-1.noarch.rpm The following RPMs will be uploaded: bear-4.1-1.noarch.rpm", "Copying RPMs to a temporary directory: /tmp/rhui.rpmupload.jsqdub22.tmp .. 1 RPMs copied. Creating repository metadata for 1 packages .. repository metadata created for 1 packages. The packages upload task for repo: client-config-rhel-8-x86_64 has been queued: /pulp/api/v3/tasks/01937826-8654-77c1-84f7-e9e07c7a7aeb/ You can inspect its progress via (S)ync screen/(RR) menu option in rhui-manager TUI.", "rhui-manager", "Select the repositories to upload the package into: - 1: test", "### WARNING ### WARNING ### WARNING ### WARNING ### WARNING ### WARNING ### # Content retrieved from non-Red Hat arbitrary places can contain # unsupported or malicious software. Proceed at your own risk. # # ###########################################################################", "https://repos.fedorapeople.org/pulp/pulp/demo_repos/zoo/bear-4.1-1.noarch.rpm Retrieving https://repos.fedorapeople.org/pulp/pulp/demo_repos/zoo/bear-4.1-1.noarch.rpm The following RPMs will be uploaded: bear-4.1-1.noarch.rpm", "Copying RPMs to a temporary directory: /tmp/rhui.rpmupload.dwux8rq7.tmp .. 1 RPMs copied. Creating repository metadata for 1 packages .. repository metadata created for 1 packages. The packages upload task for repo: test has been queued: /pulp/api/v3/tasks/0193770c-6523-7363-ae5e-8c6429728b4f/ You can inspect its progress via (S)ync screen/(RR) menu option in rhui-manager TUI.", "rhui-manager repo add_comps --repo_id Example_Custom_Repo --comps /root/Example-Comps.xml", "yum clean metadata", "yum grouplist", "rhui-manager", "-= Repository Management =- l list repositories currently managed by the RHUI i display detailed information on a repository a add a new Red Hat content repository ac add a new Red Hat container c create a new custom repository (RPM content only) d delete a repository from the RHUI u upload content to a custom repository (RPM content only) ur upload content from a remote web site (RPM content only) p list packages in a repository (RPM content only) r select packages to remove from a repository (Custom RPM content only)", "Choose a repository to delete packages from: 1 - Test-RPM-1 2 - Test-RPM-2", "Select the packages to remove: - 1: example-package-1.noarch.rpm - 2: example-package-2.noarch.rpm", "The following packages will be removed: example-package-1.noarch.rpm", "Removed example-package-1.noarch.rpm", "rhui-manager", "Enter value (1-1631) or 'b' to abort: 1 Enter the first few characters (case insensitive) of an RPM to filter the results (blank line for no filter): Only filtered results that contain less than 100 packages will have their contents displayed. Results with more than 100 packages will display a package count only. Packages: bear-4.1-1.noarch.rpm", "Packages: bear-4.1-1.noarch.rpm", "Package Count: 8001", "No packages in the repository.", "rhui-manager repo set_retain_versions [--repo_id <ID> or --all] --versions <NUMBER>", "rhui-manager repo set_retain_versions --all --versions 5", "rhui-manager repo orphan_cleanup" ]
https://docs.redhat.com/en/documentation/red_hat_update_infrastructure/4/html/configuring_and_managing_red_hat_update_infrastructure/assembly_cmg-managing-repositories_configuring-and-managing-red-hat-update-infrastructure
Chapter 4. Troubleshooting the Block Storage backup service
Chapter 4. Troubleshooting the Block Storage backup service There are two common scenarios that cause many of the issues that occur with the backup service: When the cinder-backup service starts, it connects to its configured backend and uses this as a target for backups. Problems with this connection can cause services to fail. When backups are requested, the backup service connects to the volume service and attaches the requested volume. Problems with this connection are evident only during backup time. In either case, the logs contain messages that describe the error. For more information about log files and services, see Location of log files for OpenStack services in the Logging, Monitoring and Troubleshooting Guide . For more information about log locations and troubleshooting suggestions, see Block Storage (cinder) Log Files in the Logging, Monitoring and Troubleshooting Guide . 4.1. Verifying services You can diagnose many issues by verifying that services are available and by checking log files for error messages. After you verify the status of the services, check the cinder-backup.log file. The Block Storage Backup service log is located in /var/log/containers/cinder/cinder-backup.log . Procedure Run the cinder show command on the volume to see if it is stored by the host: Run the cinder service-list command to view running services: Verify that the expected services are available. 4.2. Querying the status of a failed backup Backups are asynchronous. The Block Storage backup service performs a small number of static checks upon receiving an API request, such as checking for an invalid volume reference ( missing ) or a volume that is in-use or attached to an instance. The in-use case requires you to use the --force option. Note Using the --force option means that I/O is not quiesced and the resulting volume image may be corrupt. If the API accepts the request, the backup occurs in the background. Usually, the CLI returns immediately even if the backup fails or is approaching failure. You can query the status of a backup by using the cinder backup API. If an error occurs, review the logs to discover the cause. 4.3. Using Pacemaker to manage resources By default, Pacemaker deploys the Block Storage backup service. Pacemaker configures virtual IP addresses, containers, services, and other features as resources in a cluster to ensure that the defined set of Red Hat OpenStack Platform cluster resources are running and available. When a service or an entire node in a cluster fails, Pacemaker can restart the resource, take the node out of the cluster, or reboot the node. Requests to most services are through HAProxy. For information about how to use Pacemaker for troubleshooting, see Managing high availability services with Pacemaker in the High Availability Deployment and Usage guide.
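As a companion to section 4.2, the backup status can also be queried from the command line; the following sketch uses the standard cinder client backup commands, with <backup_id> standing in for a real backup ID rather than a value from this guide:

cinder backup-list
cinder backup-show <backup_id>

The status field of the backup (for example, available or error) indicates whether the background operation succeeded. If it shows error, review /var/log/containers/cinder/cinder-backup.log to discover the cause.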
[ "cinder show", "cinder service-list +------------------+--------------------+------+---------+-------+----------------------------+-----------------+ | Binary | Host | Zone | Status | State | Updated_at | Disabled Reason | +------------------+--------------------+------+---------+-------+----------------------------+-----------------+ | cinder-backup | hostgroup | nova | enabled | up | 2017-05-15T02:42:25.000000 | - | | cinder-scheduler | hostgroup | nova | enabled | up | 2017-05-15T02:42:25.000000 | - | | cinder-volume | hostgroup@sas-pool | nova | enabled | down | 2017-05-14T03:04:01.000000 | - | | cinder-volume | hostgroup@ssd-pool | nova | enabled | down | 2017-05-14T03:04:01.000000 | - | +------------------+--------------------+------+---------+-------+----------------------------+-----------------+" ]
https://docs.redhat.com/en/documentation/red_hat_openstack_platform/17.0/html/block_storage_backup_guide/assembly_backup-troubleshooting
probe::tty.poll
probe::tty.poll Name probe::tty.poll - Called when a tty device is being polled Synopsis tty.poll Values file_name the tty file name wait_key the wait queue key
null
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/systemtap_tapset_reference/api-tty-poll
Making open source more inclusive
Making open source more inclusive Red Hat is committed to replacing problematic language in our code, documentation, and web properties. We are beginning with these four terms: master, slave, blacklist, and whitelist. Because of the enormity of this endeavor, these changes will be implemented gradually over several upcoming releases. For more details, see our CTO Chris Wright's message .
null
https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.13/html/troubleshooting_openshift_data_foundation/making-open-source-more-inclusive
10.5.56. AddHandler
10.5.56. AddHandler AddHandler maps file extensions to specific handlers. For example, the cgi-script handler can be matched with the extension .cgi to automatically treat a file ending with .cgi as a CGI script. The following is a sample AddHandler directive for the .cgi extension. This directive enables CGIs outside of the cgi-bin to function in any directory on the server which has the ExecCGI option within the directories container. Refer to Section 10.5.22, " Directory " for more information about setting the ExecCGI option for a directory. In addition to CGI scripts, the AddHandler directive is used to process server-parsed HTML and image-map files.
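As a minimal sketch of the configuration described above, a directories container with the ExecCGI option can be combined with the AddHandler directive; the directory path used here is an illustrative assumption, not a value from this guide:

<Directory "/var/www/html/cgi-enabled">
    Options +ExecCGI
</Directory>
AddHandler cgi-script .cgi

With this in place, a file such as /var/www/html/cgi-enabled/test.cgi is executed as a CGI script instead of being served as a plain file.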
[ "AddHandler cgi-script .cgi" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/reference_guide/s2-apache-addhandler
Chapter 5. Remote JNDI lookup
Chapter 5. Remote JNDI lookup 5.1. Registering Objects to Java Naming and Directory Interface The Java Naming and Directory Interface is a Java API for a directory service that allows Java software clients to discover and look up objects using a name. If an object registered to Java Naming and Directory Interface needs to be looked up by remote Java Naming and Directory Interface clients, for example clients that run in a separate JVM, then it must be registered under the java:jboss/exported context. For example, if a Jakarta Messaging queue in the messaging-activemq subsystem must be exposed for remote Java Naming and Directory Interface clients, then it must be registered to Java Naming and Directory Interface using java:jboss/exported/jms/queue/myTestQueue . The remote Java Naming and Directory Interface client can then look it up by the name jms/queue/myTestQueue . Example: Configuration of the Queue in standalone-full(-ha).xml <subsystem xmlns="urn:jboss:domain:messaging-activemq:4.0"> <server name="default"> ... <jms-queue name="myTestQueue" entries="java:jboss/exported/jms/queue/myTestQueue"/> ... </server> </subsystem> 5.2. Configuring Remote JNDI A remote JNDI client can connect and look up objects by name from JNDI. To use a remote JNDI client to look up objects, it must have the jboss-client.jar in its class path. The jboss-client.jar is available at EAP_HOME /bin/client/jboss-client.jar . The following example shows how to look up the myTestQueue queue from JNDI in a remote JNDI client: Example: Configuration for an MDB Resource Adapter Properties properties = new Properties(); properties.put(Context.INITIAL_CONTEXT_FACTORY, "org.wildfly.naming.client.WildFlyInitialContextFactory"); properties.put(Context.PROVIDER_URL, "remote+http:// HOST_NAME :8080"); context = new InitialContext(properties); Queue myTestQueue = (Queue) context.lookup("jms/queue/myTestQueue"); 5.3. JNDI Invocation Over HTTP JNDI invocation over HTTP includes two distinct parts: the client-side and the server-side implementations. 5.3.1. Client-side Implementation The client-side implementation is similar to the remote naming implementation, but based on HTTP using the Undertow HTTP client. Connection management is implicit rather than direct, using a caching approach similar to the one used in the existing remote naming implementation. Connection pools are cached based on connection parameters. If they are not used in the specified timeout period, they are discarded. In order to configure a remote JNDI client application to use HTTP transport, you must add the following dependency on the HTTP transport implementation: To perform the HTTP invocation, you must use the http URL scheme and include the context name of the HTTP invoker, wildfly-services . For example, if you are using remote+http://localhost:8080 as the target URL, in order to use HTTP transport, you must update this to http://localhost:8080/wildfly-services . 5.3.2. Server-side Implementation The server-side implementation is similar to the existing remote naming implementation but with an HTTP transport. In order to configure the server, you must enable the http-invoker on each of the virtual hosts that you wish to use in the undertow subsystem. This is enabled by default in the standard configurations. 
If it is disabled, you can re-enable it using the following management CLI command: The http-invoker attribute takes two parameters: a path that defaults to /wildfly-services and an http-authentication-factory that must be a reference to an Elytron http-authentication-factory . Note Any deployment that aims to use the http-authentication-factory must use Elytron security with the same security domain corresponding to the specified HTTP authentication factory.
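As a sketch that ties sections 5.2 and 5.3.1 together, the lookup shown earlier can be pointed at the HTTP invoker by switching the provider URL to the http scheme with the wildfly-services context. The host name and queue name below are the same placeholder values used elsewhere in this chapter, the class name is arbitrary, and the snippet assumes the wildfly-http-naming-client dependency and jboss-client.jar are on the class path:

import java.util.Properties;
import javax.jms.Queue;
import javax.naming.Context;
import javax.naming.InitialContext;

public class RemoteHttpJndiClient {
    public static void main(String[] args) throws Exception {
        // Same client code as in section 5.2, but using the HTTP transport
        // described in section 5.3.1 (http scheme plus the wildfly-services context).
        Properties properties = new Properties();
        properties.put(Context.INITIAL_CONTEXT_FACTORY, "org.wildfly.naming.client.WildFlyInitialContextFactory");
        properties.put(Context.PROVIDER_URL, "http://HOST_NAME:8080/wildfly-services");
        Context context = new InitialContext(properties);
        Queue myTestQueue = (Queue) context.lookup("jms/queue/myTestQueue");
        System.out.println("Looked up queue: " + myTestQueue);
    }
}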
[ "<subsystem xmlns=\"urn:jboss:domain:messaging-activemq:4.0\"> <server name=\"default\"> <jms-queue name=\"myTestQueue\" entries=\"java:jboss/exported/jms/queue/myTestQueue\"/> </server> </subsystem>", "Properties properties = new Properties(); properties.put(Context.INITIAL_CONTEXT_FACTORY, \"org.wildfly.naming.client.WildFlyInitialContextFactory\"); properties.put(Context.PROVIDER_URL, \"remote+http:// HOST_NAME :8080\"); context = new InitialContext(properties); Queue myTestQueue = (Queue) context.lookup(\"jms/queue/myTestQueue\");", "<dependency> <groupId>org.wildfly.wildfly-http-client</groupId> <artifactId>wildfly-http-naming-client</artifactId> </dependency>", "/subsystem=undertow/server=default-server/host=default-host/setting=http-invoker:add(http-authentication-factory=myfactory, path=\"/wildfly-services\")" ]
https://docs.redhat.com/en/documentation/red_hat_jboss_enterprise_application_platform/7.4/html/development_guide/remote_jndi_lookup
Chapter 14. Troubleshooting and maintaining the Load-balancing service
Chapter 14. Troubleshooting and maintaining the Load-balancing service Basic troubleshooting and maintenance for the Load-balancing service (octavia) starts with being familiar with the OpenStack client commands for showing status and migrating instances, and knowing how to access logs. If you need to troubleshoot more in depth, you can SSH into one or more Load-balancing service instances (amphorae). Section 14.1, "Verifying the load balancer" Section 14.2, "Load-balancing service instance administrative logs" Section 14.3, "Migrating a specific Load-balancing service instance" Section 14.4, "Using SSH to connect to load-balancing instances" Section 14.5, "Showing listener statistics" Section 14.6, "Interpreting listener request errors" 14.1. Verifying the load balancer You can troubleshoot the Load-balancing service (octavia) and its various components by viewing the output of the load balancer show and list commands. Procedure Source your credentials file. Example Verify the load balancer ( lb1 ) settings. Note Values inside parentheses are sample values that are used in the example commands in this procedure. Substitute these sample values with ones that are appropriate for your site. Example Sample output Using the loadbalancer ID ( 265d0b71-c073-40f4-9718-8a182c6d53ca ) from the step, obtain the ID of the amphora associated with the load balancer ( lb1 ). Example Sample output Using the amphora ID ( 1afabefd-ba09-49e1-8c39-41770aa25070 ) from the step, view amphora information. Example Sample output View the listener ( listener1 ) details. Example Sample output View the pool ( pool1 ) and load-balancer members. Example Sample output Verify HTTPS traffic flows across a load balancer whose listener is configured for HTTPS or TERMINATED_HTTPS protocols by connecting to the VIP address ( 192.0.2.177 ) of the load balancer. Tip Obtain the load-balancer VIP address by using the command, openstack loadbalancer show <load_balancer_name> . Note Security groups implemented for the load balancer VIP only allow data traffic for the required protocols and ports. For this reason you cannot ping load balancer VIPs, because ICMP traffic is blocked. Example Sample output Additional resources loadbalancer in the Command Line Interface Reference 14.2. Load-balancing service instance administrative logs The administrative log offloading feature of the Load-balancing service instance (amphora) covers all of the system logging inside the amphora except for the tenant flow logs. You can send tenant flow logs to the same syslog receiver where the administrative logs are sent. You can send tenant flow logs to the same syslog receiver that processes the administrative logs, but you must configure the tenant flow logs separately. The amphora sends all administrative log messages by using the native log format for the application sending the message. The amphorae log to the Red Hat OpenStack Platform (RHOSP) Controller node in the same location as the other RHOSP logs ( /var/log/containers/octavia/ ). Additional resources Chapter 5, Managing Load-balancing service instance logs 14.3. Migrating a specific Load-balancing service instance In some cases you must migrate a Load-balancing service instance (amphora). For example, if the host is being shut down for maintenance Procedure Source your credentials file. Example Locate the ID of the amphora that you want to migrate. You need to provide the ID in a later step. 
To prevent the Compute scheduler service from scheduling any new amphorae to the Compute node being evacuated, disable the Compute node ( compute-host-1 ). Note Values inside parentheses are sample values that are used in the example commands in this procedure. Substitute these sample values with ones that are appropriate for your site. Example Fail over the amphora by using the amphora ID ( ea17210a-1076-48ff-8a1f-ced49ccb5e53 ) that you obtained. Example Additional resources compute service set in the Command Line Interface Reference loadbalancer in the Command Line Interface Reference 14.4. Using SSH to connect to load-balancing instances Use SSH to log in to Load-balancing service instances (amphorae) when troubleshooting service problems. It can be helpful to use Secure Shell (SSH) to log into running Load-balancing service instances (amphorae) when troubleshooting service problems. Prerequisites You must have the Load-balancing service (octavia) SSH private key. Procedure On the director node, start ssh-agent and add your user identity key to the agent: Source your credentials file. Example Determine the IP address on the load-balancing management network ( lb_network_ip ) for the amphora that you want to connect to: Use SSH to connect to the amphora: When you are finished, close your connection to the amphora and stop the SSH agent: Additional resources loadbalancer in the Command Line Interface Reference 14.5. Showing listener statistics Using the OpenStack Client, you can obtain statistics about the listener for a particular Red Hat OpenStack Platform (RHOSP) loadbalancer: current active connections ( active_connections ). total bytes received ( bytes_in ). total bytes sent ( bytes_out ). total requests that were unable to be fulfilled ( request_errors ). total connections handled ( total_connections ). Procedure Source your credentials file. Example View the stats for the listener ( listener1 ). Note Values inside parentheses are sample values that are used in the example commands in this procedure. Substitute these sample values with ones that are appropriate for your site. Example Tip If you do not know the name of the listener, enter the command loadbalancer listener list . Sample output Additional resources loadbalancer listener stats show in the Command Line Interface Reference Section 14.6, "Interpreting listener request errors" 14.6. Interpreting listener request errors You can obtain statistics about the listener for a particular Red Hat OpenStack Platform (RHOSP) loadbalancer. For more information, see Section 14.5, "Showing listener statistics" . One of the statistics tracked by the RHOSP loadbalancer, request_errors , is only counting errors that occurred in the request from the end user connecting to the load balancer. The request_errors variable is not measuring errors reported by the member server. For example, if a tenant connects through the RHOSP Load-balancing service (octavia) to a web server that returns an HTTP status code of 400 (Bad Request) , this error is not collected by the Load-balancing service. Loadbalancers do not inspect the content of data traffic. In this example, the loadbalancer interprets this flow as successful because it transported information between the user and the web server correctly. The following conditions can cause the request_errors variable to increment: early termination from the client, before the request has been sent. read error from the client. client timeout. client closed the connection. 
various bad requests from the client. Additional resources loadbalancer listener stats show in the Command Line Interface Reference Section 14.5, "Showing listener statistics"
[ "source ~/overcloudrc", "openstack loadbalancer show lb1", "+---------------------+--------------------------------------+ | Field | Value | +---------------------+--------------------------------------+ | admin_state_up | True | | created_at | 2022-02-17T15:59:18 | | description | | | flavor_id | None | | id | 265d0b71-c073-40f4-9718-8a182c6d53ca | | listeners | 5aaa67da-350d-4125-9022-238e0f7b7f6f | | name | lb1 | | operating_status | ONLINE | | pools | 48f6664c-b192-4763-846a-da568354da4a | | project_id | 52376c9c5c2e434283266ae7cacd3a9c | | provider | amphora | | provisioning_status | ACTIVE | | updated_at | 2022-02-17T16:01:21 | | vip_address | 192.0.2.177 | | vip_network_id | afeaf55e-7128-4dff-80e2-98f8d1f2f44c | | vip_port_id | 94a12275-1505-4cdc-80c9-4432767a980f | | vip_qos_policy_id | None | | vip_subnet_id | 06ffa90e-2b86-4fe3-9731-c7839b0be6de | +---------------------+--------------------------------------+", "openstack loadbalancer amphora list | grep 265d0b71-c073-40f4-9718-8a182c6d53ca", "| 1afabefd-ba09-49e1-8c39-41770aa25070 | 265d0b71-c073-40f4-9718-8a182c6d53ca | ALLOCATED | STANDALONE | 198.51.100.7 | 192.0.2.177 |", "openstack loadbalancer amphora show 1afabefd-ba09-49e1-8c39-41770aa25070", "+-----------------+--------------------------------------+ | Field | Value | +-----------------+--------------------------------------+ | id | 1afabefd-ba09-49e1-8c39-41770aa25070 | | loadbalancer_id | 265d0b71-c073-40f4-9718-8a182c6d53ca | | compute_id | ba9fc1c4-8aee-47ad-b47f-98f12ea7b200 | | lb_network_ip | 198.51.100.7 | | vrrp_ip | 192.0.2.36 | | ha_ip | 192.0.2.177 | | vrrp_port_id | 07dcd894-487a-48dc-b0ec-7324fe5d2082 | | ha_port_id | 94a12275-1505-4cdc-80c9-4432767a980f | | cert_expiration | 2022-03-19T15:59:23 | | cert_busy | False | | role | STANDALONE | | status | ALLOCATED | | vrrp_interface | None | | vrrp_id | 1 | | vrrp_priority | None | | cached_zone | nova | | created_at | 2022-02-17T15:59:22 | | updated_at | 2022-02-17T16:00:50 | | image_id | 53001253-5005-4891-bb61-8784ae85e962 | | compute_flavor | 65 | +-----------------+--------------------------------------+", "openstack loadbalancer listener show listener1", "+-----------------------------+--------------------------------------+ | Field | Value | +-----------------------------+--------------------------------------+ | admin_state_up | True | | connection_limit | -1 | | created_at | 2022-02-17T16:00:59 | | default_pool_id | 48f6664c-b192-4763-846a-da568354da4a | | default_tls_container_ref | None | | description | | | id | 5aaa67da-350d-4125-9022-238e0f7b7f6f | | insert_headers | None | | l7policies | | | loadbalancers | 265d0b71-c073-40f4-9718-8a182c6d53ca | | name | listener1 | | operating_status | ONLINE | | project_id | 52376c9c5c2e434283266ae7cacd3a9c | | protocol | HTTP | | protocol_port | 80 | | provisioning_status | ACTIVE | | sni_container_refs | [] | | timeout_client_data | 50000 | | timeout_member_connect | 5000 | | timeout_member_data | 50000 | | timeout_tcp_inspect | 0 | | updated_at | 2022-02-17T16:01:21 | | client_ca_tls_container_ref | None | | client_authentication | NONE | | client_crl_container_ref | None | | allowed_cidrs | None | +-----------------------------+--------------------------------------+", "openstack loadbalancer pool show pool1", "+----------------------+--------------------------------------+ | Field | Value | +----------------------+--------------------------------------+ | admin_state_up | True | | created_at | 2022-02-17T16:01:08 | | description | | | healthmonitor_id 
| 4b24180f-74c7-47d2-b0a2-4783ada9a4f0 | | id | 48f6664c-b192-4763-846a-da568354da4a | | lb_algorithm | ROUND_ROBIN | | listeners | 5aaa67da-350d-4125-9022-238e0f7b7f6f | | loadbalancers | 265d0b71-c073-40f4-9718-8a182c6d53ca | | members | b92694bd-3407-461a-92f2-90fb2c4aedd1 | | | 4ccdd1cf-736d-4b31-b67c-81d5f49e528d | | name | pool1 | | operating_status | ONLINE | | project_id | 52376c9c5c2e434283266ae7cacd3a9c | | protocol | HTTP | | provisioning_status | ACTIVE | | session_persistence | None | | updated_at | 2022-02-17T16:01:21 | | tls_container_ref | None | | ca_tls_container_ref | None | | crl_container_ref | None | | tls_enabled | False | +----------------------+--------------------------------------+", "curl -v https://192.0.2.177 --insecure", "* About to connect() to 192.0.2.177 port 443 (#0) * Trying 192.0.2.177 * Connected to 192.0.2.177 (192.0.2.177) port 443 (#0) * Initializing NSS with certpath: sql:/etc/pki/nssdb * skipping SSL peer certificate verification * SSL connection using TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384 * Server certificate: * subject: CN=www.example.com,O=Dis,L=Springfield,ST=Denial,C=US * start date: Jan 15 09:21:45 2021 GMT * expire date: Jan 15 09:21:45 2021 GMT * common name: www.example.com * issuer: CN=www.example.com,O=Dis,L=Springfield,ST=Denial,C=US > GET / HTTP/1.1 > User-Agent: curl/7.29.0 > Host: 192.0.2.177 > Accept: */* > < HTTP/1.1 200 OK < Content-Length: 30 < * Connection #0 to host 192.0.2.177 left intact", "source ~/overcloudrc", "openstack loadbalancer amphora list", "openstack compute service set compute-host-1 nova-compute --disable", "openstack loadbalancer amphora failover ea17210a-1076-48ff-8a1f-ced49ccb5e53", "eval USD(ssh-agent -s) sudo -E ssh-add /etc/octavia/ssh/octavia_id_rsa", "source ~/overcloudrc", "openstack loadbalancer amphora list", "ssh -A -t tripleo-admin@<controller_node_IP_address> ssh cloud-user@<lb_network_ip>", "exit", "source ~/overcloudrc", "openstack loadbalancer listener stats show listener1", "+--------------------+-------+ | Field | Value | +--------------------+-------+ | active_connections | 0 | | bytes_in | 0 | | bytes_out | 0 | | request_errors | 0 | | total_connections | 0 | +--------------------+-------+" ]
https://docs.redhat.com/en/documentation/red_hat_openstack_platform/17.0/html/using_octavia_for_load_balancing-as-a-service/troubleshoot-maintain-lb-service_rhosp-lbaas
Chapter 1. RBAC APIs
Chapter 1. RBAC APIs 1.1. ClusterRoleBinding [rbac.authorization.k8s.io/v1] Description ClusterRoleBinding references a ClusterRole, but does not contain it. It can reference a ClusterRole in the global namespace, and adds who information via Subject. Type object 1.2. ClusterRole [rbac.authorization.k8s.io/v1] Description ClusterRole is a cluster level, logical grouping of PolicyRules that can be referenced as a unit by a RoleBinding or ClusterRoleBinding. Type object 1.3. RoleBinding [rbac.authorization.k8s.io/v1] Description RoleBinding references a role, but does not contain it. It can reference a Role in the same namespace or a ClusterRole in the global namespace. It adds who information via Subjects and namespace information by which namespace it exists in. RoleBindings in a given namespace only have effect in that namespace. Type object 1.4. Role [rbac.authorization.k8s.io/v1] Description Role is a namespaced, logical grouping of PolicyRules that can be referenced as a unit by a RoleBinding. Type object
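For orientation only, a minimal RoleBinding manifest of the kind this API describes might look as follows; the names, namespace, and subject are hypothetical and are not taken from this reference:

apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: example-rolebinding
  namespace: example-namespace
subjects:
- kind: User
  name: example-user
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: example-role
  apiGroup: rbac.authorization.k8s.io

Applying this manifest grants example-user the permissions of example-role within example-namespace only, which matches the namespace-scoped behavior described above.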
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.17/html/rbac_apis/rbac-apis
11.3.3.7. Special Options
11.3.3.7. Special Options These options are occasionally useful for overriding defaults often found in the .fetchmailrc file. -a - Fetchmail downloads all messages from the remote email server, whether new or previously viewed. By default, Fetchmail only downloads new messages. -k - Fetchmail leaves the messages on the remote email server after downloading them. This option overrides the default behavior of deleting messages after downloading them. -l <max-number-bytes> - Fetchmail does not download any messages over a particular size and leaves them on the remote email server. --quit - Quits the Fetchmail daemon process. More commands and .fetchmailrc options can be found in the fetchmail man page.
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/reference_guide/s3-email-mda-fetchmail-commands-special
Chapter 1. Security options
Chapter 1. Security options You can configure security settings for Cryostat, so that you can better protect your Cryostat instance. Cryostat can encrypt and store credentials for a target JVM application in a database that is stored on a persistent volume claim (PVC) on Red Hat OpenShift. Cryostat supports SSL/TLS on the HTTP request that adds credentials to the database and on the JMX connection that uses those credentials to connect to the target application. Cryostat also encrypts the credentials within the database by using a passphrase that is either provided by the user or that is generated by the Red Hat build of Cryostat Operator. You can use the Cryostat Operator to configure Cryostat to trust SSL/TLS certificates from specific applications by adding these certificates to a secret and by configuring the Cryostat custom resource (CR) to include this secret. For more information, see Using the Red Hat build of Cryostat Operator to configure Cryostat: Configuring TLS certificates . You can view the list of imported SSL/TLS certificates for a target JVM by clicking the Security menu in the Cryostat web console. Figure 1.1. Viewing the list of imported SSL certificates for a target JVM 1.1. Storing and managing credentials If you enable Java Management Extensions (JMX) authentication or HTTP authentication for your target JVM application, Cryostat prompts you to enter your credentials before Cryostat can access any of the application's JFR recordings. When you click the Recordings or Events menu item on the Cryostat web console, an Authentication Required window opens on the console. You must enter the username and password of the target JVM application. You can then view the recordings or perform any additional recording operations on the application. Figure 1.2. Example of a Cryostat Authentication Required window Cryostat stores credentials that it uses to connect to Cryostat agents or target JVMs. Important If you need to restart your target JVM application, ensure that you complete one of the following tasks to avoid losing JFR recording data for the application: Click the Recordings menu item on the Cryostat web console and archive your JFR recording. Create an automated rule that schedules Cryostat to copy a snapshot recording to the storage location for the Cryostat archives. When you want to monitor multiple target JVMs by creating an automated rule, you can configure Cryostat to store and then reuse your credentials for each target JVM connection. By using this configuration, you do not need to re-enter your credentials whenever you want to revisit the JFR recording for your application on the Cryostat web console. Prerequisites Enabled JMX or HTTP authentication for your target JVM application. Procedure Click the Security menu item. From the Store Credentials window, click the Add button. The Store Credentials window opens. Figure 1.3. Example of a Store Credentials window In the Match Expression field, specify the match expression details. Note Select the question mark icon to view suggested syntax in a Match Expression Hint snippet. Click Save . A table entry is displayed in the Store Credentials window that shows the Match Expression for your target JVM. Figure 1.4. Example of a table entry on the Store Credentials pane Important For security purposes, a table entry does not display your username or password. Optional: If you want to delete your stored credentials for a target JVM, you can select the checkbox to the table entry for this target JVM and then click Delete .
null
https://docs.redhat.com/en/documentation/red_hat_build_of_cryostat/3/html/using_cryostat_to_manage_a_jfr_recording/assembly_security-options_cryostat
12.4. CTDB Configuration
12.4. CTDB Configuration The CTDB configuration file is located at /etc/sysconfig/ctdb . The mandatory fields that must be configured for CTDB operation are as follows: CTDB_NODES CTDB_PUBLIC_ADDRESSES CTDB_RECOVERY_LOCK CTDB_MANAGES_SAMBA (must be enabled) CTDB_MANAGES_WINBIND (must be enabled if running on a member server) The following example shows a configuration file with the mandatory fields for CTDB operation set with example parameters: The meaning of these parameters is as follows. CTDB_NODES Specifies the location of the file which contains the cluster node list. The /etc/ctdb/nodes file that CTDB_NODES references simply lists the IP addresses of the cluster nodes, as in the following example: In this example, there is only one interface/IP on each node that is used for both cluster/CTDB communication and serving clients. However, it is highly recommended that each cluster node have two network interfaces so that one set of interfaces can be dedicated to cluster/CTDB communication and another set of interfaces can be dedicated to public client access. Use the appropriate IP addresses of the cluster network here and make sure the hostnames/IP addresses used in the cluster.conf file are the same. Similarly, use the appropriate interfaces of the public network for client access in the public_addresses file. It is critical that the /etc/ctdb/nodes file is identical on all nodes because the ordering is important and CTDB will fail if it finds different information on different nodes. CTDB_PUBLIC_ADDRESSES Specifies the location of the file that lists the IP addresses that can be used to access the Samba shares exported by this cluster. These are the IP addresses that you should configure in DNS for the name of the clustered Samba server and are the addresses that CIFS clients will connect to. Configure the name of the clustered Samba server as one DNS type A record with multiple IP addresses and let round-robin DNS distribute the clients across the nodes of the cluster. For this example, we have configured a round-robin DNS entry csmb-server with all the addresses listed in the /etc/ctdb/public_addresses file. DNS will distribute the clients that use this entry across the cluster in a round-robin fashion. The contents of the /etc/ctdb/public_addresses file on each node are as follows: This example uses three addresses that are currently unused on the network. In your own configuration, choose addresses that can be accessed by the intended clients. Alternately, this example shows the contents of the /etc/ctdb/public_addresses files in a cluster in which there are three nodes but a total of four public addresses. In this example, IP address 198.162.2.1 can be hosted by either node 0 or node 1 and will be available to clients as long as at least one of these nodes is available. Only if both nodes 0 and 1 fail does this public address become unavailable to clients. All other public addresses can only be served by one single node respectively and will therefore only be available if the respective node is also available. The /etc/ctdb/public_addresses file on node 0 includes the following contents: The /etc/ctdb/public_addresses file on node 1 includes the following contents: The /etc/ctdb/public_addresses file on node 2 includes the following contents: CTDB_RECOVERY_LOCK Specifies a lock file that CTDB uses internally for recovery. This file must reside on shared storage such that all the cluster nodes have access to it. 
The example in this section uses the GFS2 file system that will be mounted at /mnt/ctdb on all nodes. This is different from the GFS2 file system that will host the Samba share that will be exported. This recovery lock file is used to prevent split-brain scenarios. With newer versions of CTDB (1.0.112 and later), specifying this file is optional as long as it is substituted with another split-brain prevention mechanism. CTDB_MANAGES_SAMBA When enabled by setting it to yes , specifies that CTDB is allowed to start and stop the Samba service as it deems necessary to provide service migration/failover. When CTDB_MANAGES_SAMBA is enabled, you should disable automatic init startup of the smb and nmb daemons by executing the following commands: CTDB_MANAGES_WINBIND When enabled by setting it to yes , specifies that CTDB is allowed to start and stop the winbind daemon as required. This should be enabled when you are using CTDB in a Windows domain or in active directory security mode. When CTDB_MANAGES_WINBIND is enabled, you should disable automatic init startup of the winbind daemon by executing the following command:
[ "CTDB_NODES=/etc/ctdb/nodes CTDB_PUBLIC_ADDRESSES=/etc/ctdb/public_addresses CTDB_RECOVERY_LOCK=\"/mnt/ctdb/.ctdb.lock\" CTDB_MANAGES_SAMBA=yes CTDB_MANAGES_WINBIND=yes", "192.168.1.151 192.168.1.152 192.168.1.153", "192.168.1.201/0 eth0 192.168.1.202/0 eth0 192.168.1.203/0 eth0", "198.162.1.1/24 eth0 198.162.2.1/24 eth1", "198.162.2.1/24 eth1 198.162.3.1/24 eth2", "198.162.3.2/24 eth2", "chkconfig snb off chkconfig nmb off", "chkconfig windinbd off" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/cluster_administration/s1-CTDB-Configuration-CA
Installing and viewing dynamic plugins
Installing and viewing dynamic plugins Red Hat Developer Hub 1.3 Red Hat Customer Content Services
[ "kind: ConfigMap apiVersion: v1 metadata: name: dynamic-plugins-rhdh data: dynamic-plugins.yaml: | includes: - dynamic-plugins.default.yaml plugins: - package: './dynamic-plugins/dist/backstage-plugin-catalog-backend-module-github-dynamic' disabled: false pluginConfig: catalog: providers: github: organization: \"USD{GITHUB_ORG}\" schedule: frequency: { minutes: 1 } timeout: { minutes: 1 } initialDelay: { seconds: 100 }", "apiVersion: rhdh.redhat.com/v1alpha1 kind: Backstage metadata: name: my-rhdh spec: application: dynamicPluginsConfigMapName: dynamic-plugins-rhdh", "[ { \"name\": \"backstage-plugin-catalog-backend-module-github-dynamic\", \"version\": \"0.5.2\", \"platform\": \"node\", \"role\": \"backend-plugin-module\" }, { \"name\": \"backstage-plugin-techdocs\", \"version\": \"1.10.0\", \"role\": \"frontend-plugin\", \"platform\": \"web\" }, { \"name\": \"backstage-plugin-techdocs-backend-dynamic\", \"version\": \"1.9.5\", \"platform\": \"node\", \"role\": \"backend-plugin\" }, ]", "npm view <package name>@<version> dist.integrity", "global: dynamic: plugins: - package: <alocal package-spec used by npm pack> - package: <external package-spec used by npm pack> integrity: sha512-<some hash> pluginConfig:", "global: dynamic: includes: - dynamic-plugins.default.yaml plugins: - package: <some imported plugins listed in dynamic-plugins.default.yaml> disabled: true", "global: dynamic: includes: - dynamic-plugins.default.yaml plugins: - package: <some imported plugins listed in dynamic-plugins.custom.yaml> disabled: false", "global: dynamic: includes: - dynamic-plugins.default.yaml plugins: - package: <some imported plugins listed in dynamic-plugins.custom.yaml> disabled: false", "global: dynamic: plugins: - package: '@janus-idp/[email protected]' # Integrity can be found at https://registry.npmjs.org/@janus-idp/plugin-notifications-backend-dynamic integrity: 'sha512-Qd8pniy1yRx+x7LnwjzQ6k9zP+C1yex24MaCcx7dGDPT/XbTokwoSZr4baSSn8jUA6P45NUUevu1d629mG4JGQ==' - package: '@janus-idp/[email protected]' # https://registry.npmjs.org/@janus-idp/plugin-notifications integrity: 'sha512-GCdEuHRQek3ay428C8C4wWgxjNpNwCXgIdFbUUFGCLLkBFSaOEw+XaBvWaBGtQ5BLgE3jQEUxa+422uzSYC5oQ==' pluginConfig: dynamicPlugins: frontend: janus-idp.backstage-plugin-notifications: appIcons: - name: notificationsIcon module: NotificationsPlugin importName: NotificationsActiveIcon dynamicRoutes: - path: /notifications importName: NotificationsPage module: NotificationsPlugin menuItem: icon: notificationsIcon text: Notifications config: pollingIntervalMs: 5000 - package: '@janus-idp/[email protected]' # https://registry.npmjs.org/@janus-idp/backstage-scaffolder-backend-module-kubernetes-dynamic integrity: 'sha512-19ie+FM3QHxWYPyYzE0uNdI5K8M4vGZ0SPeeTw85XPROY1DrIY7rMm2G0XT85L0ZmntHVwc9qW+SbHolPg/qRA==' proxy: endpoints: /explore-backend-completed: target: 'http://localhost:7017' - package: '@dfatwork-pkgs/[email protected]' # https://registry.npmjs.org/@dfatwork-pkgs/search-backend-module-explore-wrapped-dynamic integrity: 'sha512-mv6LS8UOve+eumoMCVypGcd7b/L36lH2z11tGKVrt+m65VzQI4FgAJr9kNCrjUZPMyh36KVGIjYqsu9+kgzH5A==' - package: '@dfatwork-pkgs/[email protected]' # https://registry.npmjs.org/@dfatwork-pkgs/plugin-catalog-backend-module-test-dynamic integrity: 'sha512-YsrZMThxJk7cYJU9FtAcsTCx9lCChpytK254TfGb3iMAYQyVcZnr5AA/AU+hezFnXLsr6gj8PP7z/mCZieuuDA=='", "apiVersion: v1 kind: Secret metadata: name: dynamic-plugins-npmrc type: Opaque stringData: .npmrc: | registry=<registry-url> //<registry-url>:_authToken=<auth-token>" ]
https://docs.redhat.com/en/documentation/red_hat_developer_hub/1.3/html-single/installing_and_viewing_dynamic_plugins/index
5.291. samba
5.291. samba 5.291.1. RHBA-2012:0850 - samba bug fix update Updated samba packages that fix several bugs are now available for Red Hat Enterprise Linux 6. Samba is an open-source implementation of the Server Message Block (SMB) and Common Internet File System (CIFS) protocol, which allows PC-compatible machines to share files, printers, and other information. Bug Fixes BZ# 753143 When using Samba with the "password server" configuration setting and when the given name for that parameter was a hostname that resolved to multiple IP addresses, Samba did not correctly handle the returned addresses. Consequently, Samba failed to use one of the password servers and terminated unexpectedly. This update fixes Samba to correctly process multiple IP addresses when using a hostname with the "password server" parameter. Samba now works correctly with multiple IP addresses in the scenario described. BZ# 753747 When Samba was configured to operate in an Active Directory (AD) environment, it sometimes created invalid DNS SRV queries. This happened when an empty sitename was used to compose the SRV record search string. Consequently, Samba-generated log files contained many DNS-related error messages. Samba has been fixed to always generate a correct DNS SRV query and the DNS-related error messages no longer occur. BZ# 755347 The smbclient tool sometimes failed to return the expected exit status code; it returned 0 instead of 1. Consequently, using smbclient in a script caused some scripts to fail. With this update, an upstream patch has been applied and smbclient now returns the correct exit status. BZ# 767656 Previously, the Winbind IDMAP interface cache did not expire as specified in the smb.conf file. Consequently, the positive and negative entries in the cache would not expire until the opposite type of query was made. This update contains a backported fix for the problem. As a result, the idmap cache time and idmap negative cache time directives now work as expected. BZ# 767659 When calling "getent passwd" for a user who had no UID, if winbind was joined to the domain with idmap_ad specified as the backend, enumerating users was enabled, and most of the users had UIDs, the enumeration stopped and the following error was displayed: This update implements an upstream patch to correct the problem. As a result, if a user cannot be mapped, winbind no longer stops but continues enumerating users in the scenario described. BZ# 771812 Samba sometimes generated many debug messages such as "Could not find child XXXX -- ignoring" that were written to syslog. Consequently, although these messages are not critical, syslog could be flooded by the large number of these messages. Samba has been fixed to no longer issue this message to syslog automatically and syslog is no longer flooded by these samba debug messages. BZ# 788089 The pam_winbind utility used an undocumented PAM_RADIO_TYPE message which has no documented semantics. This caused the login manager gdm to terminate unexpectedly when pam_winbind was used on the system. Consequently, users could not log in when using pam_winbind. Samba has been fixed to not use the PAM_RADIO_TYPE message. Users can now use pam_winbind for authentication in GDM. BZ# 808449 Newer versions of Windows could not properly set Access Control Lists (ACLs) on a Samba share. The users were receiving an "access denied" warning. Consequently, administrators or users could not fully control ACLs on a Samba share. This update fixes the problem in Samba and ACLs can now be used as expected.
BZ# 816123 An update of the system Kerberos library to a recent version made Samba binaries and libraries suddenly unusable because Samba was using a private library symbol. Consequently, Samba was no longer usable after a Kerberos update. This update corrects Samba to no longer use that private symbol. Samba now continues to operate when the Kerberos library has been updated. All users of samba are advised to upgrade to these updated packages, which fix these bugs.
[ "NT_STATUS_NONE_MAPPED" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.3_technical_notes/samba
8.121. net-snmp
8.121. net-snmp 8.121.1. RHBA-2013:1693 - net-snmp bug fix and enhancement update Updated net-snmp packages that fix several bugs and add two enhancements are now available for Red Hat Enterprise Linux 6. The net-snmp packages provide a generic client library, a suite of command-line tools, an extensible SNMP agent, Perl modules, and Python modules to use and deploy the Simple Network Management Protocol (SNMP). Bug Fixes BZ# 893119 Previously, snmpd, the SNMP daemon, did not check for errors when populating data for the UCD-SNMP-MIB::extTable table and could leak memory when the system ran out of memory. This bug has been fixed and snmpd now checks for out-of-memory conditions and frees the memory for the UCD-SNMP-MIB::extTable table when it encounters an error. BZ# 907571 Previously, the snmp_config(5) manual page was not clear about which files were looked for and the reader could get the incorrect impression that any file with a suffix "conf" or "local.conf" could be used as an snmp configuration file. In this update, the snmp_config(5) manual page has been modified to precisely specify which files are used as snmp configuration files. BZ# 919259 In a previous update, the snmpd daemon was fixed to show the executable name and all the command-line arguments in the UCD-SNMP-MIB::extCommand OID string. The fix did not check for executables without command-line arguments. Consequently, the snmpd daemon terminated unexpectedly with a segmentation fault when retrieving the value of the UCD-SNMP-MIB::extCommand OID of an executable with no arguments. With this update, snmpd now checks if there are no arguments and shows the correct value of the UCD-SNMP-MIB::extCommand OID. As a result, crashes no longer occur in the described scenario. BZ# 919952 In previous net-snmp package updates, the HOST-RESOURCES-MIB::hrSWRunTable table was rewritten, and, due to a regression, it did not report the "hrSWRunPath" string of kernel threads. This update fixes the HOST-RESOURCES-MIB::hrSWRunPath string of kernel threads, which is now reported by the snmpd daemon. BZ# 922691 When the "includeAllDisks" configuration option was specified in the /etc/snmp/snmpd.conf file, the snmpd daemon scanned the running system only at startup and did not update the UCD-SNMP-MIB::dskTable table if a new device was mounted later. As a consequence, on dynamic systems where devices are frequently mounted and unmounted, UCD-SNMP-MIB::dskTable could not be used to monitor storage usage, because it monitored only devices which were available at system start. To fix this bug, the implementation of UCD-SNMP-MIB::dskTable was enhanced to dynamically add new devices as they are mounted. This happens only when the "includeAllDisks" configuration option is used in /etc/snmp/snmpd.conf. As a result, in dynamic systems where devices are frequently mounted and unmounted, UCD-SNMP-MIB::dskTable always shows the current list of mounted devices. BZ# 927474 Previously, snmpd, the SNMP daemon, did not set a proper message size when communicating with the Linux kernel using a netlink socket. As a consequence, the message "netlink: 12 bytes leftover after parsing attributes." was saved to the kernel log. With this update, snmpd sets a correct message size and the kernel no longer logs the aforementioned message. BZ# 947973 In previous Net-SNMP releases, snmpd reported an invalid speed of network interfaces in IF-MIB::ifTable and IF-MIB::ifXTable tables if the interface had a speed other than 10, 100, 1000 or 2500 MB/s.
Thus, the returned net-snmp ifHighSpeed value was "0" compared to the correct speed as reported in ethtool, if the Virtual Connect speed was set to, for example, 0.9 Gb/s. With this update, the ifHighSpeed value returns the correct speed as reported in the ethtool utility, and snmpd correctly reports non-standard network interface speeds. BZ# 953926 Net-SNMP did not verify if incoming SNMP messages were encoded properly. In some instances, it read past the receiving buffer size when parsing a message with an invalid size of an integer field in the message. This caused snmptrapd, the SNMP trap processing daemon, to terminate unexpectedly with a segmentation fault on the incoming malformed message. This update enhances the checks of incoming messages and snmptrapd no longer crashes when parsing incoming messages with invalid integer sizes. BZ# 955771 Previously, the Net-SNMP python module did not propagate various errors to applications which use this module. As a consequence, the applications were not aware of errors that occurred during the SNMP communication. To fix this bug, the Net-SNMP python module has been updated to return the proper error codes. As a result, the applications now receive information about SNMP errors. BZ# 960568 In previous releases, the snmp-bridge-mib subagent included the bridge itself as a port of the bridge in the BRIDGE-MIB::dot1dBasePortTable table. This bug has been fixed and the snmp-bridge-mib subagent now reports only real interfaces as ports in the BRIDGE-MIB::dot1dBasePortTable table. BZ# 968898 Previously, the snmpd daemon did not properly terminate strings when processing the "agentaddress" configuration option. As a consequence, when the configuration was re-read multiple times using the SIGHUP signal, a buffer overflow occurred. This bug has been fixed and snmpd now properly terminates strings during "agentaddress" processing and no longer crashes using the SIGHUP signal. BZ# 983116 A previous Net-SNMP update contained a fix to improve the checking of invalid incoming SNMP messages. This fix introduced a regression and some valid SNMP messages with multiple variables inside were marked as invalid. As a consequence, Net-SNMP tools and servers rejected valid SNMP messages and waited for a "proper" response until timeout. With this update, valid SNMP messages are no longer rejected. As a result, the servers and utilities accept the first incoming message and do not wait for a timeout. BZ# 989498 , BZ# 1006706 In previous Net-SNMP updates, the implementation of the HOST-RESOURCES-MIB::hrStorageTable table was rewritten and devices with Virtuozzo File System (VZFS) and B-tree File System (BTRFS) were not reported. After this update, snmpd properly recognizes devices using VZFS and BTRFS file systems and reports them in HOST-RESOURCES-MIB::hrStorageTable. BZ# 991213 Previously, the snmpd daemon incorrectly parsed Sendmail configuration files with enabled queue groups. Consequently, snmpd entered a loop on startup. This update fixes the parsing of configuration files with queue groups and snmpd no longer enters a loop on startup. BZ# 1001830 Previously, the Net-SNMP utilities and daemons blindly expected that an MD5 hash algorithm and a DES encryption were available in the system's OpenSSL libraries and did not check for errors when using these cryptographic functions. As a consequence, the Net-SNMP utilities and daemons terminated unexpectedly when attempting to use an MD5 or DES algorithm which are not available when the system is running in FIPS mode.
The Net-SNMP utilities and daemons now check for cryptographic function error codes and display the following error message: As a result, the aforementioned utilities and daemons no longer crash in FIPS mode. Enhancements BZ# 917816 After this update, all net-snmp configuration files can use the "includeFile" and "includeDir" options to include other configuration files or whole directories of configuration files. Detailed syntax and usage is described in the snmp_config(5) manual page. BZ# 919239 Previously, the Net-SNMP application was shipping its configuration files, which could contain sensitive information like passwords, readable to any user on the system. After this update, the configuration files are readable only by the root user. Users of net-snmp are advised to upgrade to these updated packages, which fix these bugs and add these enhancements.
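To illustrate the error propagation mentioned in BZ# 955771, the following is a small sketch that assumes the net-snmp Python bindings (the netsnmp module) and a reachable SNMP agent; the host, community string, and OID are placeholder values, not values from this erratum:

import netsnmp

# Query a single OID and inspect the error information that the updated
# python module propagates back to the application.
session = netsnmp.Session(Version=2, DestHost="localhost", Community="public")
varlist = netsnmp.VarList(netsnmp.Varbind("sysUpTime", "0"))
result = session.get(varlist)

if session.ErrorNum != 0:
    # ErrorStr, ErrorNum, and ErrorInd carry the SNMP error details.
    print("SNMP error: %s (num=%d, index=%d)" % (session.ErrorStr, session.ErrorNum, session.ErrorInd))
else:
    print("sysUpTime.0 = %s" % result[0])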
[ "Error: could not generate the authentication key from the supplied pass phrase" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.5_technical_notes/net-snmp
Chapter 132. Hazelcast Atomic Number Component
Chapter 132. Hazelcast Atomic Number Component Available as of Camel version 2.7 The Hazelcast atomic number component is one of Camel Hazelcast Components which allows you to access Hazelcast atomic number. An atomic number is an object that simply provides a grid wide number (long). There is no consumer for this endpoint! 132.1. Options The Hazelcast Atomic Number component supports 3 options, which are listed below. Name Description Default Type hazelcastInstance (advanced) The hazelcast instance reference which can be used for hazelcast endpoint. If you don't specify the instance reference, camel use the default hazelcast instance from the camel-hazelcast instance. HazelcastInstance hazelcastMode (advanced) The hazelcast mode reference which kind of instance should be used. If you don't specify the mode, then the node mode will be the default. node String resolveProperty Placeholders (advanced) Whether the component should resolve property placeholders on itself when starting. Only properties which are of String type can use property placeholders. true boolean The Hazelcast Atomic Number endpoint is configured using URI syntax: with the following path and query parameters: 132.1.1. Path Parameters (1 parameters): Name Description Default Type cacheName Required The name of the cache String 132.1.2. Query Parameters (10 parameters): Name Description Default Type reliable (common) Define if the endpoint will use a reliable Topic struct or not. false boolean defaultOperation (producer) To specify a default operation to use, if no operation header has been provided. HazelcastOperation hazelcastInstance (producer) The hazelcast instance reference which can be used for hazelcast endpoint. HazelcastInstance hazelcastInstanceName (producer) The hazelcast instance reference name which can be used for hazelcast endpoint. If you don't specify the instance reference, camel use the default hazelcast instance from the camel-hazelcast instance. String synchronous (advanced) Sets whether synchronous processing should be strictly used, or Camel is allowed to use asynchronous processing (if supported). false boolean concurrentConsumers (seda) To use concurrent consumers polling from the SEDA queue. 1 int onErrorDelay (seda) Milliseconds before consumer continues polling after an error has occurred. 1000 int pollTimeout (seda) The timeout used when consuming from the SEDA queue. When a timeout occurs, the consumer can check whether it is allowed to continue running. Setting a lower value allows the consumer to react more quickly upon shutdown. 1000 int transacted (seda) If set to true then the consumer runs in transaction mode, where the messages in the seda queue will only be removed if the transaction commits, which happens when the processing is complete. false boolean transferExchange (seda) If set to true the whole Exchange will be transfered. If header or body contains not serializable objects, they will be skipped. false boolean 132.2. Spring Boot Auto-Configuration The component supports 6 options, which are listed below. Name Description Default Type camel.component.hazelcast-atomicvalue.customizer.hazelcast-instance.enabled Enable or disable the cache-manager customizer. true Boolean camel.component.hazelcast-atomicvalue.customizer.hazelcast-instance.override Configure if the cache manager eventually set on the component should be overridden by the customizer. 
false Boolean camel.component.hazelcast-atomicvalue.enabled Enable hazelcast-atomicvalue component true Boolean camel.component.hazelcast-atomicvalue.hazelcast-instance The hazelcast instance reference which can be used for hazelcast endpoint. If you don't specify the instance reference, camel use the default hazelcast instance from the camel-hazelcast instance. The option is a com.hazelcast.core.HazelcastInstance type. String camel.component.hazelcast-atomicvalue.hazelcast-mode The hazelcast mode reference which kind of instance should be used. If you don't specify the mode, then the node mode will be the default. node String camel.component.hazelcast-atomicvalue.resolve-property-placeholders Whether the component should resolve property placeholders on itself when starting. Only properties which are of String type can use property placeholders. true Boolean 132.3. atomic number producer - to("hazelcast-atomicvalue:foo") The operations for this producer are: * setvalue (set the number with a given value) * get * increase (+1) * decrease (-1) * destroy Header Variables for the request message: Name Type Description CamelHazelcastOperationType String valid values are: setvalue, get, increase, decrease, destroy 132.3.1. Sample for set : Java DSL: from("direct:set") .setHeader(HazelcastConstants.OPERATION, constant(HazelcastOperation.SET_VALUE)) .toF("hazelcast-%sfoo", HazelcastConstants.ATOMICNUMBER_PREFIX); Spring DSL: <route> <from uri="direct:set" /> <!-- If using version 2.8 and above set headerName to "CamelHazelcastOperationType" --> <setHeader headerName="hazelcast.operation.type"> <constant>setvalue</constant> </setHeader> <to uri="hazelcast-atomicvalue:foo" /> </route> Provide the value to set inside the message body (here the value is 10): template.sendBody("direct:set", 10); 132.3.2. Sample for get : Java DSL: from("direct:get") .setHeader(HazelcastConstants.OPERATION, constant(HazelcastOperation.GET)) .toF("hazelcast-%sfoo", HazelcastConstants.ATOMICNUMBER_PREFIX); Spring DSL: <route> <from uri="direct:get" /> <!-- If using version 2.8 and above set headerName to "CamelHazelcastOperationType" --> <setHeader headerName="hazelcast.operation.type"> <constant>get</constant> </setHeader> <to uri="hazelcast-atomicvalue:foo" /> </route> You can get the number with long body = template.requestBody("direct:get", null, Long.class); . 132.3.3. Sample for increment : Java DSL: from("direct:increment") .setHeader(HazelcastConstants.OPERATION, constant(HazelcastOperation.INCREMENT)) .toF("hazelcast-%sfoo", HazelcastConstants.ATOMICNUMBER_PREFIX); Spring DSL: <route> <from uri="direct:increment" /> <!-- If using version 2.8 and above set headerName to "CamelHazelcastOperationType" --> <setHeader headerName="hazelcast.operation.type"> <constant>increment</constant> </setHeader> <to uri="hazelcast-atomicvalue:foo" /> </route> The actual value (after increment) will be provided inside the message body. 132.3.4. Sample for decrement : Java DSL: from("direct:decrement") .setHeader(HazelcastConstants.OPERATION, constant(HazelcastOperation.DECREMENT)) .toF("hazelcast-%sfoo", HazelcastConstants.ATOMICNUMBER_PREFIX); Spring DSL: <route> <from uri="direct:decrement" /> <!-- If using version 2.8 and above set headerName to "CamelHazelcastOperationType" --> <setHeader headerName="hazelcast.operation.type"> <constant>decrement</constant> </setHeader> <to uri="hazelcast-atomicvalue:foo" /> </route> The actual value (after decrement) will be provided inside the message body. 132.3.5. 
Sample for destroy Java DSL: from("direct:destroy") .setHeader(HazelcastConstants.OPERATION, constant(HazelcastOperation.DESTROY)) .toF("hazelcast-%sfoo", HazelcastConstants.ATOMICNUMBER_PREFIX); Spring DSL: <route> <from uri="direct:destroy" /> <!-- If using version 2.8 and above set headerName to "CamelHazelcastOperationType" --> <setHeader headerName="hazelcast.operation.type"> <constant>destroy</constant> </setHeader> <to uri="hazelcast-atomicvalue:foo" /> </route>
[ "hazelcast-atomicvalue:cacheName", "from(\"direct:set\") .setHeader(HazelcastConstants.OPERATION, constant(HazelcastOperation.SET_VALUE)) .toF(\"hazelcast-%sfoo\", HazelcastConstants.ATOMICNUMBER_PREFIX);", "<route> <from uri=\"direct:set\" /> <!-- If using version 2.8 and above set headerName to \"CamelHazelcastOperationType\" --> <setHeader headerName=\"hazelcast.operation.type\"> <constant>setvalue</constant> </setHeader> <to uri=\"hazelcast-atomicvalue:foo\" /> </route>", "from(\"direct:get\") .setHeader(HazelcastConstants.OPERATION, constant(HazelcastOperation.GET)) .toF(\"hazelcast-%sfoo\", HazelcastConstants.ATOMICNUMBER_PREFIX);", "<route> <from uri=\"direct:get\" /> <!-- If using version 2.8 and above set headerName to \"CamelHazelcastOperationType\" --> <setHeader headerName=\"hazelcast.operation.type\"> <constant>get</constant> </setHeader> <to uri=\"hazelcast-atomicvalue:foo\" /> </route>", "from(\"direct:increment\") .setHeader(HazelcastConstants.OPERATION, constant(HazelcastOperation.INCREMENT)) .toF(\"hazelcast-%sfoo\", HazelcastConstants.ATOMICNUMBER_PREFIX);", "<route> <from uri=\"direct:increment\" /> <!-- If using version 2.8 and above set headerName to \"CamelHazelcastOperationType\" --> <setHeader headerName=\"hazelcast.operation.type\"> <constant>increment</constant> </setHeader> <to uri=\"hazelcast-atomicvalue:foo\" /> </route>", "from(\"direct:decrement\") .setHeader(HazelcastConstants.OPERATION, constant(HazelcastOperation.DECREMENT)) .toF(\"hazelcast-%sfoo\", HazelcastConstants.ATOMICNUMBER_PREFIX);", "<route> <from uri=\"direct:decrement\" /> <!-- If using version 2.8 and above set headerName to \"CamelHazelcastOperationType\" --> <setHeader headerName=\"hazelcast.operation.type\"> <constant>decrement</constant> </setHeader> <to uri=\"hazelcast-atomicvalue:foo\" /> </route>", "from(\"direct:destroy\") .setHeader(HazelcastConstants.OPERATION, constant(HazelcastOperation.DESTROY)) .toF(\"hazelcast-%sfoo\", HazelcastConstants.ATOMICNUMBER_PREFIX);", "<route> <from uri=\"direct:destroy\" /> <!-- If using version 2.8 and above set headerName to \"CamelHazelcastOperationType\" --> <setHeader headerName=\"hazelcast.operation.type\"> <constant>destroy</constant> </setHeader> <to uri=\"hazelcast-atomicvalue:foo\" /> </route>" ]
https://docs.redhat.com/en/documentation/red_hat_fuse/7.13/html/apache_camel_component_reference/hazelcast-atomicvalue-component
User and group APIs
User and group APIs OpenShift Container Platform 4.12 Reference guide for user and group APIs Red Hat OpenShift Documentation Team
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.12/html/user_and_group_apis/index
Chapter 8. Saving and restoring virtual machines
Chapter 8. Saving and restoring virtual machines To free up system resources, you can shut down a virtual machine (VM) running on that system. However, when you require the VM again, you must boot up the guest operating system (OS) and restart the applications, which may take a considerable amount of time. To reduce this downtime and enable the VM workload to start running sooner, you can use the save and restore feature to avoid the OS shutdown and boot sequence entirely. This section provides information about saving VMs, as well as about restoring them to the same state without a full VM boot-up. 8.1. How saving and restoring virtual machines works Saving a virtual machine (VM) saves its memory and device state to the host's disk, and immediately stops the VM process. You can save a VM that is either in a running or paused state, and upon restoring, the VM will return to that state. This process frees up RAM and CPU resources on the host system in exchange for disk space, which may improve the host system performance. When the VM is restored, because the guest OS does not need to be booted, the long boot-up period is avoided as well. To save a VM, you can use the command line (CLI). For instructions, see Saving virtual machines by using the command line . To restore a VM you can use the CLI or the web console GUI . 8.2. Saving a virtual machine by using the command line You can save a virtual machine (VM) and its current state to the host's disk. This is useful, for example, when you need to use the host's resources for some other purpose. The saved VM can then be quickly restored to its running state. To save a VM by using the command line, follow the procedure below. Prerequisites Ensure you have sufficient disk space to save the VM and its configuration. Note that the space occupied by the VM depends on the amount of RAM allocated to that VM. Ensure the VM is persistent. Optional: Back up important data from the VM if required. Procedure Use the virsh managedsave utility. For example, the following command stops the demo-guest1 VM and saves its configuration. The saved VM file is located by default in the /var/lib/libvirt/qemu/save directory as demo-guest1.save . The next time the VM is started, it will automatically restore the saved state from the above file. Verification List the VMs that have managed save enabled. In the following example, the VMs listed as saved have their managed save enabled. To list the VMs that have a managed save image: Note that to list the saved VMs that are in a shut off state, you must use the --all or --inactive options with the command. Troubleshooting If the saved VM file becomes corrupted or unreadable, restoring the VM will initiate a standard VM boot instead. Additional resources The virsh managedsave --help command Restoring a saved VM by using the command line Restoring a saved VM by using the web console 8.3. Starting a virtual machine by using the command line You can use the command line (CLI) to start a shut-down virtual machine (VM) or restore a saved VM. By using the CLI, you can start both local and remote VMs. Prerequisites An inactive VM that is already defined. The name of the VM. For remote VMs: The IP address of the host where the VM is located. Root access privileges to the host. Procedure For a local VM, use the virsh start utility. For example, the following command starts the demo-guest1 VM. For a VM located on a remote host, use the virsh start utility along with the QEMU+SSH connection to the host.
For example, the following command starts the demo-guest1 VM on the 192.0.2.1 host. Additional resources The virsh start --help command Setting up easy access to remote virtualization hosts Starting virtual machines automatically when the host starts 8.4. Starting virtual machines by using the web console If a virtual machine (VM) is in the shut off state, you can start it by using the RHEL 8 web console. You can also configure the VM to be started automatically when the host starts. Prerequisites You have installed the RHEL 8 web console. You have enabled the cockpit service. Your user account is allowed to log in to the web console. For instructions, see Installing and enabling the web console . The web console VM plug-in is installed on your system . An inactive VM that is already defined. The name of the VM. Procedure In the Virtual Machines interface, click the VM you want to start. A new page opens with detailed information about the selected VM and controls for shutting down and deleting the VM. Click Run . The VM starts, and you can connect to its console or graphical output . Optional: To configure the VM to start automatically when the host starts, toggle the Autostart checkbox in the Overview section. If you use network interfaces that are not managed by libvirt, you must also make additional changes to the systemd configuration. Otherwise, the affected VMs might fail to start, see starting virtual machines automatically when the host starts . Additional resources Shutting down virtual machines in the web console Restarting virtual machines by using the web console
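To tie the command-line steps above together, the following is a minimal shell sketch that saves the state of every running VM on a host and later restores one of them. It assumes the virsh client is available on the host and that the VM names contain no whitespace; it is an illustration, not a procedure from this chapter.
# Save the memory and device state of all running VMs to the host's disk
for vm in $(virsh list --state-running --name); do
    virsh managedsave "$vm"
done
# Restoring a saved VM is an ordinary start; the saved image is picked up automatically
virsh start demo-guest1
Because managedsave stops the VM process, the freed RAM and CPU resources remain available to the host until the VM is started again.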
[ "virsh managedsave demo-guest1 Domain 'demo-guest1' saved by libvirt", "virsh list --managed-save --all Id Name State ---------------------------------------------------- - demo-guest1 saved - demo-guest2 shut off", "virsh list --with-managed-save --all Id Name State ---------------------------------------------------- - demo-guest1 shut off", "virsh start demo-guest1 Domain 'demo-guest1' started", "virsh -c qemu+ssh://[email protected]/system start demo-guest1 [email protected]'s password: Domain 'demo-guest1' started" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/configuring_and_managing_virtualization/saving-and-restoring-virtual-machines_configuring-and-managing-virtualization
Chapter 2. AdministrationUsageService
Chapter 2. AdministrationUsageService 2.1. GetCurrentSecuredUnitsUsage GET /v1/administration/usage/secured-units/current GetCurrentSecuredUnitsUsage returns the current secured units usage metrics values. 2.1.1. Description The secured units metrics are collected from all connected clusters every 5 minutes, so the returned result includes data for the connected clusters accurate to about these 5 minutes, and potentially some outdated data for the disconnected clusters. 2.1.2. Parameters 2.1.3. Return Type V1SecuredUnitsUsageResponse 2.1.4. Content Type application/json 2.1.5. Responses Table 2.1. HTTP Response Codes Code Message Datatype 200 A successful response. V1SecuredUnitsUsageResponse 0 An unexpected error response. GooglerpcStatus 2.1.6. Samples 2.1.7. Common object reference 2.1.7.1. GooglerpcStatus Field Name Required Nullable Type Description Format code Integer int32 message String details List of ProtobufAny 2.1.7.2. ProtobufAny Any contains an arbitrary serialized protocol buffer message along with a URL that describes the type of the serialized message. Protobuf library provides support to pack/unpack Any values in the form of utility functions or additional generated methods of the Any type. Example 1: Pack and unpack a message in C++. Example 2: Pack and unpack a message in Java. The pack methods provided by protobuf library will by default use 'type.googleapis.com/full.type.name' as the type URL and the unpack methods only use the fully qualified type name after the last '/' in the type URL, for example "foo.bar.com/x/y.z" will yield type name "y.z". 2.1.7.2.1. JSON representation The JSON representation of an Any value uses the regular representation of the deserialized, embedded message, with an additional field @type which contains the type URL. Example: If the embedded message type is well-known and has a custom JSON representation, that representation will be embedded adding a field value which holds the custom JSON in addition to the @type field. Example (for message [google.protobuf.Duration][]): Field Name Required Nullable Type Description Format @type String A URL/resource name that uniquely identifies the type of the serialized protocol buffer message. This string must contain at least one \"/\" character. The last segment of the URL's path must represent the fully qualified name of the type (as in path/google.protobuf.Duration ). The name should be in a canonical form (e.g., leading \".\" is not accepted). In practice, teams usually precompile into the binary all types that they expect it to use in the context of Any. However, for URLs which use the scheme http , https , or no scheme, one can optionally set up a type server that maps type URLs to message definitions as follows: * If no scheme is provided, https is assumed. * An HTTP GET on the URL must yield a [google.protobuf.Type][] value in binary format, or produce an error. * Applications are allowed to cache lookup results based on the URL, or have them precompiled into a binary to avoid any lookup. Therefore, binary compatibility needs to be preserved on changes to types. (Use versioned type names to manage breaking changes.) Note: this functionality is not currently available in the official protobuf release, and it is not used for type URLs beginning with type.googleapis.com. As of May 2023, there are no widely used type server implementations and no plans to implement one. Schemes other than http , https (or the empty scheme) might be used with implementation specific semantics. 2.1.7.3. 
V1SecuredUnitsUsageResponse SecuredUnitsUsageResponse holds the values of the currently observable administration usage metrics. Field Name Required Nullable Type Description Format numNodes String int64 numCpuUnits String int64 2.2. GetMaxSecuredUnitsUsage GET /v1/administration/usage/secured-units/max GetMaxSecuredUnitsUsage returns the maximum, i.e. peak, secured units usage observed during a given time range, together with the time when this maximum was aggregated and stored. 2.2.1. Description The usage metrics are continuously collected from all the connected clusters. The maximum values are kept for some period of time in memory, and then, periodically, are stored to the database. The last data from disconnected clusters are taken into account. 2.2.2. Parameters 2.2.2.1. Query Parameters Name Description Required Default Pattern from - null to - null 2.2.3. Return Type V1MaxSecuredUnitsUsageResponse 2.2.4. Content Type application/json 2.2.5. Responses Table 2.2. HTTP Response Codes Code Message Datatype 200 A successful response. V1MaxSecuredUnitsUsageResponse 0 An unexpected error response. GooglerpcStatus 2.2.6. Samples 2.2.7. Common object reference 2.2.7.1. GooglerpcStatus Field Name Required Nullable Type Description Format code Integer int32 message String details List of ProtobufAny 2.2.7.2. ProtobufAny Any contains an arbitrary serialized protocol buffer message along with a URL that describes the type of the serialized message. Protobuf library provides support to pack/unpack Any values in the form of utility functions or additional generated methods of the Any type. Example 1: Pack and unpack a message in C++. Example 2: Pack and unpack a message in Java. The pack methods provided by protobuf library will by default use 'type.googleapis.com/full.type.name' as the type URL and the unpack methods only use the fully qualified type name after the last '/' in the type URL, for example "foo.bar.com/x/y.z" will yield type name "y.z". 2.2.7.2.1. JSON representation The JSON representation of an Any value uses the regular representation of the deserialized, embedded message, with an additional field @type which contains the type URL. Example: If the embedded message type is well-known and has a custom JSON representation, that representation will be embedded adding a field value which holds the custom JSON in addition to the @type field. Example (for message [google.protobuf.Duration][]): Field Name Required Nullable Type Description Format @type String A URL/resource name that uniquely identifies the type of the serialized protocol buffer message. This string must contain at least one \"/\" character. The last segment of the URL's path must represent the fully qualified name of the type (as in path/google.protobuf.Duration ). The name should be in a canonical form (e.g., leading \".\" is not accepted). In practice, teams usually precompile into the binary all types that they expect it to use in the context of Any. However, for URLs which use the scheme http , https , or no scheme, one can optionally set up a type server that maps type URLs to message definitions as follows: * If no scheme is provided, https is assumed. * An HTTP GET on the URL must yield a [google.protobuf.Type][] value in binary format, or produce an error. * Applications are allowed to cache lookup results based on the URL, or have them precompiled into a binary to avoid any lookup. Therefore, binary compatibility needs to be preserved on changes to types. (Use versioned type names to manage breaking changes.) 
Note: this functionality is not currently available in the official protobuf release, and it is not used for type URLs beginning with type.googleapis.com. As of May 2023, there are no widely used type server implementations and no plans to implement one. Schemes other than http , https (or the empty scheme) might be used with implementation specific semantics. 2.2.7.3. V1MaxSecuredUnitsUsageResponse MaxSecuredUnitsUsageResponse holds the maximum values of the secured nodes and CPU Units (as reported by Kubernetes) with the time at which these values were aggregated, with the aggregation period accuracy (1h). Field Name Required Nullable Type Description Format maxNodesAt Date date-time maxNodes String int64 maxCpuUnitsAt Date date-time maxCpuUnits String int64
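As an illustration of how these endpoints might be queried from the command line, the following curl sketch requests both the current and the maximum secured-units usage. The Central hostname, the ROX_API_TOKEN environment variable, and the RFC 3339 timestamps used for the from and to parameters are assumptions made for the example and are not defined by this reference.
# Current secured units usage
curl -sk -H "Authorization: Bearer $ROX_API_TOKEN" \
  "https://central.example.com/v1/administration/usage/secured-units/current"
# Maximum secured units usage observed between two points in time
curl -sk -H "Authorization: Bearer $ROX_API_TOKEN" \
  "https://central.example.com/v1/administration/usage/secured-units/max?from=2024-01-01T00:00:00Z&to=2024-02-01T00:00:00Z"
A successful response to each call is a JSON body corresponding to V1SecuredUnitsUsageResponse and V1MaxSecuredUnitsUsageResponse, respectively.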
[ "Foo foo = ...; Any any; any.PackFrom(foo); if (any.UnpackTo(&foo)) { }", "Foo foo = ...; Any any = Any.pack(foo); if (any.is(Foo.class)) { foo = any.unpack(Foo.class); } // or if (any.isSameTypeAs(Foo.getDefaultInstance())) { foo = any.unpack(Foo.getDefaultInstance()); }", "Example 3: Pack and unpack a message in Python.", "foo = Foo(...) any = Any() any.Pack(foo) if any.Is(Foo.DESCRIPTOR): any.Unpack(foo)", "Example 4: Pack and unpack a message in Go", "foo := &pb.Foo{...} any, err := anypb.New(foo) if err != nil { } foo := &pb.Foo{} if err := any.UnmarshalTo(foo); err != nil { }", "package google.profile; message Person { string first_name = 1; string last_name = 2; }", "{ \"@type\": \"type.googleapis.com/google.profile.Person\", \"firstName\": <string>, \"lastName\": <string> }", "{ \"@type\": \"type.googleapis.com/google.protobuf.Duration\", \"value\": \"1.212s\" }", "Foo foo = ...; Any any; any.PackFrom(foo); if (any.UnpackTo(&foo)) { }", "Foo foo = ...; Any any = Any.pack(foo); if (any.is(Foo.class)) { foo = any.unpack(Foo.class); } // or if (any.isSameTypeAs(Foo.getDefaultInstance())) { foo = any.unpack(Foo.getDefaultInstance()); }", "Example 3: Pack and unpack a message in Python.", "foo = Foo(...) any = Any() any.Pack(foo) if any.Is(Foo.DESCRIPTOR): any.Unpack(foo)", "Example 4: Pack and unpack a message in Go", "foo := &pb.Foo{...} any, err := anypb.New(foo) if err != nil { } foo := &pb.Foo{} if err := any.UnmarshalTo(foo); err != nil { }", "package google.profile; message Person { string first_name = 1; string last_name = 2; }", "{ \"@type\": \"type.googleapis.com/google.profile.Person\", \"firstName\": <string>, \"lastName\": <string> }", "{ \"@type\": \"type.googleapis.com/google.protobuf.Duration\", \"value\": \"1.212s\" }" ]
https://docs.redhat.com/en/documentation/red_hat_advanced_cluster_security_for_kubernetes/4.6/html/api_reference/administrationusageservice
Chapter 1. Understanding image builds
Chapter 1. Understanding image builds 1.1. Builds A build is the process of transforming input parameters into a resulting object. Most often, the process is used to transform input parameters or source code into a runnable image. A BuildConfig object is the definition of the entire build process. OpenShift Container Platform uses Kubernetes by creating containers from build images and pushing them to a container image registry. Build objects share common characteristics including inputs for a build, the requirement to complete a build process, logging the build process, publishing resources from successful builds, and publishing the final status of the build. Builds take advantage of resource restrictions, specifying limitations on resources such as CPU usage, memory usage, and build or pod execution time. The OpenShift Container Platform build system provides extensible support for build strategies that are based on selectable types specified in the build API. There are three primary build strategies available: Docker build Source-to-image (S2I) build Custom build By default, docker builds and S2I builds are supported. The resulting object of a build depends on the builder used to create it. For docker and S2I builds, the resulting objects are runnable images. For custom builds, the resulting objects are whatever the builder image author has specified. Additionally, the pipeline build strategy can be used to implement sophisticated workflows: Continuous integration Continuous deployment 1.1.1. Docker build OpenShift Container Platform uses Buildah to build a container image from a Dockerfile. For more information on building container images with Dockerfiles, see the Dockerfile reference documentation . Tip If you set Docker build arguments by using the buildArgs array, see Understand how ARG and FROM interact in the Dockerfile reference documentation. 1.1.2. Source-to-image build Source-to-image (S2I) is a tool for building reproducible container images. It produces ready-to-run images by injecting application source into a container image and assembling a new image. The new image incorporates the base image, the builder, and built source and is ready to use with the buildah run command. S2I supports incremental builds, which re-use previously downloaded dependencies, previously built artifacts, and so on. 1.1.3. Custom build The custom build strategy allows developers to define a specific builder image responsible for the entire build process. Using your own builder image allows you to customize your build process. A custom builder image is a plain container image embedded with build process logic, for example for building RPMs or base images. Custom builds run with a high level of privilege and are not available to users by default. Only users who can be trusted with cluster administration permissions should be granted access to run custom builds. 1.1.4. Pipeline build Important The Pipeline build strategy is deprecated in OpenShift Container Platform 4. Equivalent and improved functionality is present in the OpenShift Container Platform Pipelines based on Tekton. Jenkins images on OpenShift Container Platform are fully supported and users should follow Jenkins user documentation for defining their jenkinsfile in a job or store it in a Source Control Management system. The Pipeline build strategy allows developers to define a Jenkins pipeline for use by the Jenkins pipeline plugin. 
The build can be started, monitored, and managed by OpenShift Container Platform in the same way as any other build type. Pipeline workflows are defined in a jenkinsfile , either embedded directly in the build configuration, or supplied in a Git repository and referenced by the build configuration.
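For readers who want to try the source-to-image strategy described above, the following command-line sketch creates an S2I build and follows its progress. The builder image and the Git repository URL are placeholders chosen for the example, not values taken from this document.
# Create an S2I BuildConfig from a builder image and an application source repository
oc new-build registry.access.redhat.com/ubi9/python-311~https://github.com/example/my-app.git --name=my-app
# Start a build from the new BuildConfig and stream the build logs
oc start-build my-app --follow
The resulting object of this S2I-style build is a runnable image pushed to a container image registry, as described above.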
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.16/html/builds_using_buildconfig/understanding-image-builds
Chapter 11. SecretList [image.openshift.io/v1]
Chapter 11. SecretList [image.openshift.io/v1] Description SecretList is a list of Secret. Type object Required items 11.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (Secret) Items is a list of secret objects. More info: https://kubernetes.io/docs/concepts/configuration/secret kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds 11.2. API endpoints The following API endpoints are available: /apis/image.openshift.io/v1/namespaces/{namespace}/imagestreams/{name}/secrets GET : read secrets of the specified ImageStream 11.2.1. /apis/image.openshift.io/v1/namespaces/{namespace}/imagestreams/{name}/secrets Table 11.1. Global path parameters Parameter Type Description name string name of the SecretList HTTP method GET Description read secrets of the specified ImageStream Table 11.2. HTTP responses HTTP code Reponse body 200 - OK SecretList schema 401 - Unauthorized Empty
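As a hypothetical illustration of the endpoint listed above, the following command reads the secrets of an image stream through the raw API path; the namespace ( my-project ) and image stream name ( my-stream ) are placeholders.
# Read secrets of the specified ImageStream via the image.openshift.io/v1 API
oc get --raw "/apis/image.openshift.io/v1/namespaces/my-project/imagestreams/my-stream/secrets"
A successful request returns a SecretList object; an unauthorized request returns an empty 401 response, as shown in Table 11.2.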
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.17/html/image_apis/secretlist-image-openshift-io-v1
Chapter 15. Mail Servers
Chapter 15. Mail Servers Red Hat Enterprise Linux offers many advanced applications to serve and access email. This chapter describes modern email protocols in use today, and some of the programs designed to send and receive email. 15.1. Email Protocols Today, email is delivered using a client/server architecture. An email message is created using a mail client program. This program then sends the message to a server. The server then forwards the message to the recipient's email server, where the message is then supplied to the recipient's email client. To enable this process, a variety of standard network protocols allow different machines, often running different operating systems and using different email programs, to send and receive email. The following protocols discussed are the most commonly used in the transfer of email. 15.1.1. Mail Transport Protocols Mail delivery from a client application to the server, and from an originating server to the destination server, is handled by the Simple Mail Transfer Protocol ( SMTP ). 15.1.1.1. SMTP The primary purpose of SMTP is to transfer email between mail servers. However, it is critical for email clients as well. To send email, the client sends the message to an outgoing mail server, which in turn contacts the destination mail server for delivery. But more intermediate SMTP servers may be included in this chain. This concept is called a mail relaying. For this reason, it is necessary to specify an SMTP server when configuring an email client. Under Red Hat Enterprise Linux, a user can configure an SMTP server on the local machine to handle mail delivery. However, it is also possible to configure remote SMTP servers for outgoing mail. One important point to make about the SMTP protocol is that it does not require authentication. This allows anyone on the Internet to send email to anyone else or even to large groups of people. It is this characteristic of SMTP that makes junk email or spam possible. Imposing relay restrictions limits random users on the Internet from sending email through your SMTP server, to other servers on the internet. Servers that do not impose such restrictions are called open relay servers. Red Hat Enterprise Linux 7 provides the Postfix and Sendmail SMTP programs. 15.1.2. Mail Access Protocols There are two primary protocols used by email client applications to retrieve email from mail servers: the Post Office Protocol ( POP ) and the Internet Message Access Protocol ( IMAP ). 15.1.2.1. POP The default POP server under Red Hat Enterprise Linux is Dovecot and is provided by the dovecot package. Note To install Dovecot run the following command: For more information on installing packages with Yum, see Section 9.2.4, "Installing Packages" . When using a POP server, email messages are downloaded by email client applications. By default, most POP email clients are automatically configured to delete the message on the email server after it has been successfully transferred, however this setting usually can be changed. POP is fully compatible with important Internet messaging standards, such as Multipurpose Internet Mail Extensions ( MIME ), which allow for email attachments. POP works best for users who have one system on which to read email. It also works well for users who do not have a persistent connection to the Internet or the network containing the mail server. Unfortunately for those with slow network connections, POP requires client programs upon authentication to download the entire content of each message. 
This can take a long time if any messages have large attachments. The most current version of the standard POP protocol is POP3 . There are, however, a variety of lesser-used POP protocol variants: APOP - POP3 with MD5 authentication. An encoded hash of the user's password is sent from the email client to the server rather than sending an unencrypted password. KPOP - POP3 with Kerberos authentication. RPOP - POP3 with RPOP authentication. This uses a per-user ID, similar to a password, to authenticate POP requests. However, this ID is not encrypted, so RPOP is no more secure than standard POP . To improve security, you can use Secure Socket Layer ( SSL ) encryption for client authentication and data transfer sessions. To enable SSL encryption, use: The pop3s service The stunnel application The starttls command For more information on securing email communication, see Section 15.5.1, "Securing Communication" . 15.1.2.2. IMAP The default IMAP server under Red Hat Enterprise Linux is Dovecot and is provided by the dovecot package. See Section 15.1.2.1, "POP" for information on how to install Dovecot . When using an IMAP mail server, email messages remain on the server where users can read or delete them. IMAP also allows client applications to create, rename, or delete mail directories on the server to organize and store email. IMAP is particularly useful for users who access their email using multiple machines. The protocol is also convenient for users connecting to the mail server via a slow connection, because only the email header information is downloaded for messages until opened, saving bandwidth. The user also has the ability to delete messages without viewing or downloading them. For convenience, IMAP client applications are capable of caching copies of messages locally, so the user can browse previously read messages when not directly connected to the IMAP server. IMAP , like POP , is fully compatible with important Internet messaging standards, such as MIME, which allow for email attachments. For added security, it is possible to use SSL encryption for client authentication and data transfer sessions. This can be enabled by using the imaps service, or by using the stunnel program. The pop3s service The stunnel application The starttls command For more information on securing email communication, see Section 15.5.1, "Securing Communication" . Other free, as well as commercial, IMAP clients and servers are available, many of which extend the IMAP protocol and provide additional functionality. 15.1.2.3. Dovecot The imap-login and pop3-login processes which implement the IMAP and POP3 protocols are spawned by the master dovecot daemon included in the dovecot package. The use of IMAP and POP is configured through the /etc/dovecot/dovecot.conf configuration file; by default dovecot runs IMAP and POP3 together with their secure versions using SSL . To configure dovecot to use POP , complete the following steps: Edit the /etc/dovecot/dovecot.conf configuration file to make sure the protocols variable is uncommented (remove the hash sign ( # ) at the beginning of the line) and contains the pop3 argument. For example: When the protocols variable is left commented out, dovecot will use the default values as described above. Make the change operational for the current session by running the following command as root : Make the change operational after the reboot by running the command: Note Please note that dovecot only reports that it started the IMAP server, but also starts the POP3 server. 
Unlike SMTP , both IMAP and POP3 require connecting clients to authenticate using a user name and password. By default, passwords for both protocols are passed over the network unencrypted. To configure SSL on dovecot : Edit the /etc/dovecot/conf.d/10-ssl.conf configuration to make sure the ssl_protocols variable is uncommented and contains the !SSLv2 !SSLv3 arguments: These values ensure that dovecot avoids SSL versions 2 and also 3, which are both known to be insecure. This is due to the vulnerability described in POODLE: SSLv3 vulnerability (CVE-2014-3566) . See Resolution for POODLE SSL 3.0 vulnerability (CVE-2014-3566) in Postfix and Dovecot for details. Make sure that /etc/dovecot/conf.d/10-ssl.conf contains the following option: Edit the /etc/pki/dovecot/dovecot-openssl.cnf configuration file as you prefer. However, in a typical installation, this file does not require modification. Rename, move or delete the files /etc/pki/dovecot/certs/dovecot.pem and /etc/pki/dovecot/private/dovecot.pem . Execute the /usr/libexec/dovecot/mkcert.sh script which creates the dovecot self signed certificates. These certificates are copied in the /etc/pki/dovecot/certs and /etc/pki/dovecot/private directories. To implement the changes, restart dovecot by issuing the following command as root : More details on dovecot can be found online at http://www.dovecot.org . 15.2. Email Program Classifications In general, all email applications fall into at least one of three classifications. Each classification plays a specific role in the process of moving and managing email messages. While most users are only aware of the specific email program they use to receive and send messages, each one is important for ensuring that email arrives at the correct destination. 15.2.1. Mail Transport Agent A Mail Transport Agent ( MTA ) transports email messages between hosts using SMTP . A message may involve several MTAs as it moves to its intended destination. While the delivery of messages between machines may seem rather straightforward, the entire process of deciding if a particular MTA can or should accept a message for delivery is quite complicated. In addition, due to problems from spam, use of a particular MTA is usually restricted by the MTA's configuration or the access configuration for the network on which the MTA resides. Some email client programs, can act as an MTA when sending an email. However, such email client programs do not have the role of a true MTA, because they can only send outbound messages to an MTA they are authorized to use, but they cannot directly deliver the message to the intended recipient's email server. This functionality is useful if host running the application does not have its own MTA. Since Red Hat Enterprise Linux offers two MTAs, Postfix and Sendmail , email client programs are often not required to act as an MTA. Red Hat Enterprise Linux also includes a special purpose MTA called Fetchmail . For more information on Postfix, Sendmail, and Fetchmail, see Section 15.3, "Mail Transport Agents" . 15.2.2. Mail Delivery Agent A Mail Delivery Agent ( MDA ) is invoked by the MTA to file incoming email in the proper user's mailbox. In many cases, the MDA is actually a Local Delivery Agent ( LDA ), such as mail or Procmail. Any program that actually handles a message for delivery to the point where it can be read by an email client application can be considered an MDA. 
For this reason, some MTAs (such as Sendmail and Postfix) can fill the role of an MDA when they append new email messages to a local user's mail spool file. In general, MDAs do not transport messages between systems nor do they provide a user interface; MDAs distribute and sort messages on the local machine for an email client application to access. 15.2.3. Mail User Agent A Mail User Agent ( MUA ) is synonymous with an email client application. MUA is a program that, at a minimum, allows a user to read and compose email messages. MUAs can handle these tasks: Retrieving messages via the POP or IMAP protocols Setting up mailboxes to store messages. Sending outbound messages to an MTA. MUAs may be graphical, such as Thunderbird , Evolution , or have simple text-based interfaces, such as mail or Mutt . 15.3. Mail Transport Agents Red Hat Enterprise Linux 7 offers two primary MTAs: Postfix and Sendmail. Postfix is configured as the default MTA and Sendmail is considered deprecated. If required to switch the default MTA to Sendmail, you can either uninstall Postfix or use the following command as root to switch to Sendmail: You can also use the following command to enable the desired service: Similarly, to disable the service, type the following at a shell prompt: For more information on how to manage system services in Red Hat Enterprise Linux 7, see Chapter 10, Managing Services with systemd . 15.3.1. Postfix Originally developed at IBM by security expert and programmer Wietse Venema, Postfix is a Sendmail-compatible MTA that is designed to be secure, fast, and easy to configure. To improve security, Postfix uses a modular design, where small processes with limited privileges are launched by a master daemon. The smaller, less privileged processes perform very specific tasks related to the various stages of mail delivery and run in a changed root environment to limit the effects of attacks. Configuring Postfix to accept network connections from hosts other than the local computer takes only a few minor changes in its configuration file. Yet for those with more complex needs, Postfix provides a variety of configuration options, as well as third party add-ons that make it a very versatile and full-featured MTA. The configuration files for Postfix are human readable and support upward of 250 directives. Unlike Sendmail, no macro processing is required for changes to take effect and the majority of the most commonly used options are described in the heavily commented files. 15.3.1.1. The Default Postfix Installation The Postfix executable is postfix . This daemon launches all related processes needed to handle mail delivery. Postfix stores its configuration files in the /etc/postfix/ directory. The following is a list of the more commonly used files: access - Used for access control, this file specifies which hosts are allowed to connect to Postfix. main.cf - The global Postfix configuration file. The majority of configuration options are specified in this file. master.cf - Specifies how Postfix interacts with various processes to accomplish mail delivery. transport - Maps email addresses to relay hosts. The aliases file can be found in the /etc directory. This file is shared between Postfix and Sendmail. It is a configurable list required by the mail protocol that describes user ID aliases. Important The default /etc/postfix/main.cf file does not allow Postfix to accept network connections from a host other than the local computer. 
For instructions on configuring Postfix as a server for other clients, see Section 15.3.1.3, "Basic Postfix Configuration" . Restart the postfix service after changing any options in the configuration files under the /etc/postfix/ directory in order for those changes to take effect. To do so, run the following command as root : 15.3.1.2. Upgrading From a Previous Release The following settings in Red Hat Enterprise Linux 7 are different to previous releases: disable_vrfy_command = no - This is disabled by default, which is different to the default for Sendmail. If changed to yes it can prevent certain email address harvesting methods. allow_percent_hack = yes - This is enabled by default. It allows removing % characters in email addresses. The percent hack is an old workaround that allowed sender-controlled routing of email messages. DNS and mail routing are now much more reliable, but Postfix continues to support the hack. To turn off percent rewriting, set allow_percent_hack to no . smtpd_helo_required = no - This is disabled by default, as it is in Sendmail, because it can prevent some applications from sending mail. It can be changed to yes to require clients to send the HELO or EHLO commands before attempting to send the MAIL, FROM, or ETRN commands. 15.3.1.3. Basic Postfix Configuration By default, Postfix does not accept network connections from any host other than the local host. Perform the following steps as root to enable mail delivery for other hosts on the network: Edit the /etc/postfix/main.cf file with a text editor, such as vi . Uncomment the mydomain line by removing the hash sign ( # ), and replace domain.tld with the domain the mail server is servicing, such as example.com . Uncomment the myorigin = $mydomain line. Uncomment the myhostname line, and replace host.domain.tld with the host name for the machine. Uncomment the mydestination = $myhostname, localhost.$mydomain line. Uncomment the mynetworks line, and replace 168.100.189.0/28 with a valid network setting for hosts that can connect to the server. Uncomment the inet_interfaces = all line. Comment the inet_interfaces = localhost line. Restart the postfix service. Once these steps are complete, the host accepts outside emails for delivery. Postfix has a large assortment of configuration options. One of the best ways to learn how to configure Postfix is to read the comments within the /etc/postfix/main.cf configuration file. Additional resources including information about Postfix configuration, SpamAssassin integration, or detailed descriptions of the /etc/postfix/main.cf parameters are available online at http://www.postfix.org/ . Important Due to the vulnerability described in POODLE: SSLv3 vulnerability (CVE-2014-3566) , Red Hat recommends disabling SSL and using only TLSv1.1 or TLSv1.2 . See Resolution for POODLE SSL 3.0 vulnerability (CVE-2014-3566) in Postfix and Dovecot for details. 15.3.1.4. Using Postfix with LDAP Postfix can use an LDAP directory as a source for various lookup tables (for example, aliases , virtual , canonical , and so on). This allows LDAP to store hierarchical user information and Postfix to only be given the result of LDAP queries when needed. By not storing this information locally, administrators can easily maintain it. 15.3.1.4.1. The /etc/aliases lookup example The following is a basic example for using LDAP to look up the /etc/aliases file.
Make sure your /etc/postfix/main.cf file contains the following: Create a /etc/postfix/ldap-aliases.cf file if you do not have one already and make sure it contains the following: where ldap.example.com , example , and com are parameters that need to be replaced with specification of an existing available LDAP server. Note The /etc/postfix/ldap-aliases.cf file can specify various parameters, including parameters that enable LDAP SSL and STARTTLS . For more information, see the ldap_table(5) man page. For more information on LDAP , see OpenLDAP in the System-Level Authentication Guide . 15.3.2. Sendmail Sendmail's core purpose, like other MTAs, is to safely transfer email between hosts, usually using the SMTP protocol. Note that Sendmail is considered deprecated and administrators are encouraged to use Postfix when possible. See Section 15.3.1, "Postfix" for more information. 15.3.2.1. Purpose and Limitations It is important to be aware of what Sendmail is and what it can do, as opposed to what it is not. In these days of monolithic applications that fulfill multiple roles, Sendmail may seem like the only application needed to run an email server within an organization. Technically, this is true, as Sendmail can spool mail to each users' directory and deliver outbound mail for users. However, most users actually require much more than simple email delivery. Users usually want to interact with their email using an MUA, that uses POP or IMAP , to download their messages to their local machine. Or, they may prefer a Web interface to gain access to their mailbox. These other applications can work in conjunction with Sendmail, but they actually exist for different reasons and can operate separately from one another. It is beyond the scope of this section to go into all that Sendmail should or could be configured to do. With literally hundreds of different options and rule sets, entire volumes have been dedicated to helping explain everything that can be done and how to fix things that go wrong. See the Section 15.7, "Additional Resources" for a list of Sendmail resources. This section reviews the files installed with Sendmail by default and reviews basic configuration changes, including how to stop unwanted email (spam) and how to extend Sendmail with the Lightweight Directory Access Protocol (LDAP) . 15.3.2.2. The Default Sendmail Installation In order to use Sendmail, first ensure the sendmail package is installed on your system by running, as root : In order to configure Sendmail, ensure the sendmail-cf package is installed on your system by running, as root : For more information on installing packages with Yum, see Section 9.2.4, "Installing Packages" . Before using Sendmail, the default MTA has to be switched from Postfix. For more information how to switch the default MTA refer to Section 15.3, "Mail Transport Agents" . The Sendmail executable is sendmail . Sendmail configuration file is located at /etc/mail/sendmail.cf . Avoid editing the sendmail.cf file directly. To make configuration changes to Sendmail, edit the /etc/mail/sendmail.mc file, back up the original /etc/mail/sendmail.cf file, and restart the sendmail service. As a part of the restart, the sendmail.cf file and all binary representations of the databases are rebuild: More information on configuring Sendmail can be found in Section 15.3.2.3, "Common Sendmail Configuration Changes" . 
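The restart referred to above is typically performed with systemd. The following sketch shows one way to do it on Red Hat Enterprise Linux 7; treat it as an illustration rather than a prescribed step from this section.
# Restart the sendmail service; as noted above, the restart also rebuilds
# sendmail.cf and the binary database files from their sources
systemctl restart sendmail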
Various Sendmail configuration files are installed in the /etc/mail/ directory including: access - Specifies which systems can use Sendmail for outbound email. domaintable - Specifies domain name mapping. local-host-names - Specifies aliases for the host. mailertable - Specifies instructions that override routing for particular domains. virtusertable - Specifies a domain-specific form of aliasing, allowing multiple virtual domains to be hosted on one machine. Several configuration files in the /etc/mail/ directory, such as access , domaintable , mailertable and virtusertable , store their information in database files before Sendmail can use any configuration changes. To include any changes made to these configurations in their database files, run the following command: 15.3.2.3. Common Sendmail Configuration Changes When altering the Sendmail configuration file, it is best not to edit an existing file, but to generate an entirely new /etc/mail/sendmail.cf file. Warning Before replacing or making any changes to the sendmail.cf file, create a backup copy. To add the desired functionality to Sendmail, edit the /etc/mail/sendmail.mc file as root . Once you are finished, restart the sendmail service and, if the m4 package is installed, the m4 macro processor will automatically generate a new sendmail.cf configuration file: Important The default sendmail.cf file does not allow Sendmail to accept network connections from any host other than the local computer. To configure Sendmail as a server for other clients, edit the /etc/mail/sendmail.mc file, and either change the address specified in the Addr= option of the DAEMON_OPTIONS directive from 127.0.0.1 to the IP address of an active network device or comment out the DAEMON_OPTIONS directive all together by placing dnl at the beginning of the line. When finished, regenerate /etc/mail/sendmail.cf by restarting the service: The default configuration in Red Hat Enterprise Linux works for most SMTP -only sites. Consult the /usr/share/sendmail-cf/README file before editing any files in the directories under the /usr/share/sendmail-cf/ directory, as they can affect the future configuration of the /etc/mail/sendmail.cf file. 15.3.2.4. Masquerading One common Sendmail configuration is to have a single machine act as a mail gateway for all machines on the network. For example, a company may want to have a machine called mail.example.com that handles all of their email and assigns a consistent return address to all outgoing mail. In this situation, the Sendmail server must masquerade the machine names on the company network so that their return address is [email protected] instead of [email protected] . To do this, add the following lines to /etc/mail/sendmail.mc : After generating a new sendmail.cf file from the changed configuration in sendmail.mc, restart the sendmail service by a following command: Note that administrators of mail servers, DNS and DHCP servers, as well as any provisioning applications, should agree on the host name format used in an organization. See the Red Hat Enterprise Linux 7 Networking Guide for more information on recommended naming practices. 15.3.2.5. Stopping Spam Email spam can be defined as unnecessary and unwanted email received by a user who never requested the communication. It is a disruptive, costly, and widespread abuse of Internet communication standards. Sendmail makes it relatively easy to block new spamming techniques being employed to send junk email. 
It even blocks many of the more usual spamming methods by default. Main anti-spam features available in sendmail are header checks , relaying denial (default from version 8.9), access database and sender information checks . For example, forwarding of SMTP messages, also called relaying, has been disabled by default since Sendmail version 8.9. Before this change occurred, Sendmail directed the mail host ( x.edu ) to accept messages from one party ( y.com ) and sent them to a different party ( z.net ). Now, however, Sendmail must be configured to permit any domain to relay mail through the server. To configure relay domains, edit the /etc/mail/relay-domains file and restart Sendmail However, servers on the Internet can also send spam messages. In these instances, Sendmail's access control features available through the /etc/mail/access file can be used to prevent connections from unwanted hosts. The following example illustrates how this file can be used to both block and specifically allow access to the Sendmail server: This example shows that any email sent from badspammer.com is blocked with a 550 RFC-821 compliant error code, with a message sent back. Emails sent from the tux.badspammer.com sub-domain are accepted. The last line shows that any email sent from the 10.0. . network can be relayed through the mail server. Because the /etc/mail/access.db file is a database, use the following command to update any changes: The above examples only represent a small part of what Sendmail can do in terms of allowing or blocking access. See the /usr/share/sendmail-cf/README file for more information and examples. Since Sendmail calls the Procmail MDA when delivering mail, it is also possible to use a spam filtering program, such as SpamAssassin, to identify and file spam for users. See Section 15.4.2.6, "Spam Filters" for more information about using SpamAssassin. 15.3.2.6. Using Sendmail with LDAP Using LDAP is a very quick and powerful way to find specific information about a particular user from a much larger group. For example, an LDAP server can be used to look up a particular email address from a common corporate directory by the user's last name. In this kind of implementation, LDAP is largely separate from Sendmail, with LDAP storing the hierarchical user information and Sendmail only being given the result of LDAP queries in pre-addressed email messages. However, Sendmail supports a much greater integration with LDAP , where it uses LDAP to replace separately maintained files, such as /etc/aliases and /etc/mail/virtusertables , on different mail servers that work together to support a medium- to enterprise-level organization. In short, LDAP abstracts the mail routing level from Sendmail and its separate configuration files to a powerful LDAP cluster that can be leveraged by many different applications. The current version of Sendmail contains support for LDAP . To extend the Sendmail server using LDAP , first get an LDAP server, such as OpenLDAP , running and properly configured. Then edit the /etc/mail/sendmail.mc to include the following: Note This is only for a very basic configuration of Sendmail with LDAP . The configuration can differ greatly from this depending on the implementation of LDAP , especially when configuring several Sendmail machines to use a common LDAP server. Consult /usr/share/sendmail-cf/README for detailed LDAP routing configuration instructions and examples. 
Next, recreate the /etc/mail/sendmail.cf file by running the m4 macro processor and again restarting Sendmail. See Section 15.3.2.3, "Common Sendmail Configuration Changes" for instructions. For more information on LDAP , see OpenLDAP in the System-Level Authentication Guide . 15.3.3. Fetchmail Fetchmail is an MTA which retrieves email from remote servers and delivers it to the local MTA. Many users appreciate the ability to separate the process of downloading their messages located on a remote server from the process of reading and organizing their email in an MUA. Designed with the needs of dial-up users in mind, Fetchmail connects and quickly downloads all of the email messages to the mail spool file using any number of protocols, including POP3 and IMAP . It can even forward email messages to an SMTP server, if necessary. Note In order to use Fetchmail , first ensure the fetchmail package is installed on your system by running, as root : For more information on installing packages with Yum, see Section 9.2.4, "Installing Packages" . Fetchmail is configured for each user through the use of a .fetchmailrc file in the user's home directory. If it does not already exist, create the .fetchmailrc file in your home directory. Using preferences in the .fetchmailrc file, Fetchmail checks for email on a remote server and downloads it. It then delivers it to port 25 on the local machine, using the local MTA to place the email in the correct user's spool file. If Procmail is available, it is launched to filter the email and place it in a mailbox so that it can be read by an MUA. 15.3.3.1. Fetchmail Configuration Options Although it is possible to pass all necessary options on the command line to check for email on a remote server when executing Fetchmail, using a .fetchmailrc file is much easier. Place any desired configuration options in the .fetchmailrc file for those options to be used each time the fetchmail command is issued. It is possible to override these at the time Fetchmail is run by specifying that option on the command line. A user's .fetchmailrc file contains three classes of configuration options: global options - Gives Fetchmail instructions that control the operation of the program or provide settings for every connection that checks for email. server options - Specifies necessary information about the server being polled, such as the host name, as well as preferences for specific email servers, such as the port to check or number of seconds to wait before timing out. These options affect every user using that server. user options - Contains information, such as user name and password, necessary to authenticate and check for email using a specified email server. Global options appear at the top of the .fetchmailrc file, followed by one or more server options, each of which designates a different email server that Fetchmail should check. User options follow server options for each user account checking that email server. Like server options, multiple user options may be specified for use with a particular server as well as to check multiple email accounts on the same server. Server options are called into service in the .fetchmailrc file by the use of a special option verb, poll or skip , that precedes any of the server information. The poll action tells Fetchmail to use this server option when it is run, which checks for email using the specified user options.
Any server options after a skip action, however, are not checked unless this server's host name is specified when Fetchmail is invoked. The skip option is useful when testing configurations in the .fetchmailrc file because it only checks skipped servers when specifically invoked, and does not affect any currently working configurations. The following is an example of a .fetchmailrc file: In this example, the global options specify that the user is sent email as a last resort ( postmaster option) and all email errors are sent to the postmaster instead of the sender ( bouncemail option). The set action tells Fetchmail that this line contains a global option. Then, two email servers are specified, one set to check using POP3 , the other for trying various protocols to find one that works. Two users are checked using the second server option, but all email found for any user is sent to user1 's mail spool. This allows multiple mailboxes to be checked on multiple servers, while appearing in a single MUA inbox. Each user's specific information begins with the user action. Note Users are not required to place their password in the .fetchmailrc file. Omitting the with password ' password ' section causes Fetchmail to ask for a password when it is launched. Fetchmail has numerous global, server, and local options. Many of these options are rarely used or only apply to very specific situations. The fetchmail man page explains each option in detail, but the most common ones are listed in the following three sections. 15.3.3.2. Global Options Each global option should be placed on a single line after a set action. daemon seconds - Specifies daemon-mode, where Fetchmail stays in the background. Replace seconds with the number of seconds Fetchmail is to wait before polling the server. postmaster - Specifies a local user to send mail to in case of delivery problems. syslog - Specifies the log file for errors and status messages. By default, this is /var/log/maillog . 15.3.3.3. Server Options Server options must be placed on their own line in .fetchmailrc after a poll or skip action. auth auth-type - Replace auth-type with the type of authentication to be used. By default, password authentication is used, but some protocols support other types of authentication, including kerberos_v5 , kerberos_v4 , and ssh . If the any authentication type is used, Fetchmail first tries methods that do not require a password, then methods that mask the password, and finally attempts to send the password unencrypted to authenticate to the server. interval number - Polls the specified server every number of times that it checks for email on all configured servers. This option is generally used for email servers where the user rarely receives messages. port port-number - Replace port-number with the port number. This value overrides the default port number for the specified protocol. proto protocol - Replace protocol with the protocol, such as pop3 or imap , to use when checking for messages on the server. timeout seconds - Replace seconds with the number of seconds of server inactivity after which Fetchmail gives up on a connection attempt. If this value is not set, a default of 300 seconds is used. 15.3.3.4. User Options User options may be placed on their own lines beneath a server option or on the same line as the server option. In either case, the defined options must follow the user option (defined below). 
fetchall - Orders Fetchmail to download all messages in the queue, including messages that have already been viewed. By default, Fetchmail only pulls down new messages. fetchlimit number - Replace number with the number of messages to be retrieved before stopping. flush - Deletes all previously viewed messages in the queue before retrieving new messages. limit max-number-bytes - Replace max-number-bytes with the maximum size in bytes that messages are allowed to be when retrieved by Fetchmail. This option is useful with slow network links, when a large message takes too long to download. password ' password ' - Replace password with the user's password. preconnect " command " - Replace command with a command to be executed before retrieving messages for the user. postconnect " command " - Replace command with a command to be executed after retrieving messages for the user. ssl - Activates SSL encryption. At the time of writing, the default action is to use the best available from SSL2 , SSL3 , SSL23 , TLS1 , TLS1.1 and TLS1.2 . Note that SSL2 is considered obsolete and due to the POODLE: SSLv3 vulnerability (CVE-2014-3566) , SSLv3 should not be used. However there is no way to force the use of TLS1 or newer, therefore ensure the mail server being connected to is configured not to use SSLv2 and SSLv3 . Use stunnel where the server cannot be configured not to use SSLv2 and SSLv3 . sslproto - Defines allowed SSL or TLS protocols. Possible values are SSL2 , SSL3 , SSL23 , and TLS1 . The default value, if sslproto is omitted, unset, or set to an invalid value, is SSL23 . The default action is to use the best from SSLv2 , SSLv3 , TLSv1 , TLS1.1 and TLS1.2 . Note that setting any other value for SSL or TLS will disable all the other protocols. Due to the POODLE: SSLv3 vulnerability (CVE-2014-3566) , it is recommend to omit this option, or set it to SSLv23 , and configure the corresponding mail server not to use SSLv2 and SSLv3 . Use stunnel where the server cannot be configured not to use SSLv2 and SSLv3 . user " username " - Replace username with the user name used by Fetchmail to retrieve messages. This option must precede all other user options. 15.3.3.5. Fetchmail Command Options Most Fetchmail options used on the command line when executing the fetchmail command mirror the .fetchmailrc configuration options. In this way, Fetchmail may be used with or without a configuration file. These options are not used on the command line by most users because it is easier to leave them in the .fetchmailrc file. There may be times when it is desirable to run the fetchmail command with other options for a particular purpose. It is possible to issue command options to temporarily override a .fetchmailrc setting that is causing an error, as any options specified at the command line override configuration file options. 15.3.3.6. Informational or Debugging Options Certain options used after the fetchmail command can supply important information. --configdump - Displays every possible option based on information from .fetchmailrc and Fetchmail defaults. No email is retrieved for any users when using this option. -s - Executes Fetchmail in silent mode, preventing any messages, other than errors, from appearing after the fetchmail command. -v - Executes Fetchmail in verbose mode, displaying every communication between Fetchmail and remote email servers. 
-V - Displays detailed version information, lists its global options, and shows settings to be used with each user, including the email protocol and authentication method. No email is retrieved for any users when using this option. 15.3.3.7. Special Options These options are occasionally useful for overriding defaults often found in the .fetchmailrc file. -a - Fetchmail downloads all messages from the remote email server, whether new or previously viewed. By default, Fetchmail only downloads new messages. -k - Fetchmail leaves the messages on the remote email server after downloading them. This option overrides the default behavior of deleting messages after downloading them. -l max-number-bytes - Fetchmail does not download any messages over a particular size and leaves them on the remote email server. --quit - Quits the Fetchmail daemon process. More commands and .fetchmailrc options can be found in the fetchmail man page. 15.3.4. Mail Transport Agent (MTA) Configuration A Mail Transport Agent (MTA) is essential for sending email. A Mail User Agent (MUA) such as Evolution or Mutt , is used to read and compose email. When a user sends an email from an MUA, the message is handed off to the MTA, which sends the message through a series of MTAs until it reaches its destination. Even if a user does not plan to send email from the system, some automated tasks or system programs might use the mail command to send email containing log messages to the root user of the local system. Red Hat Enterprise Linux 7 provides two MTAs: Postfix and Sendmail. If both are installed, Postfix is the default MTA. 15.4. Mail Delivery Agents Red Hat Enterprise Linux includes two primary MDAs, Procmail and the mail utility. Both applications are considered LDAs and both move email from the MTA's spool file into the user's mailbox. However, only Procmail provides a robust filtering system. This section details only Procmail. For information on the mail command, consult its man page ( man mail ). Procmail delivers and filters email as it is placed in the mail spool file of the localhost. It is powerful, gentle on system resources, and widely used. Procmail can play a critical role in delivering email to be read by email client applications. Procmail can be invoked in several different ways. Whenever an MTA places an email into the mail spool file, Procmail is launched. Procmail then filters and files the email for the MUA and quits. Alternatively, the MUA can be configured to execute Procmail any time a message is received so that messages are moved into their correct mailboxes. By default, the presence of /etc/procmailrc or of a ~/.procmailrc file (also called an rc file) in the user's home directory invokes Procmail whenever an MTA receives a new message. By default, no system-wide rc files exist in the /etc directory and no .procmailrc files exist in any user's home directory. Therefore, to use Procmail, each user must construct a .procmailrc file with specific environment variables and rules. Whether Procmail acts upon an email message depends upon whether the message matches a specified set of conditions or recipes in the rc file. If a message matches a recipe, then the email is placed in a specified file, is deleted, or is otherwise processed. When Procmail starts, it reads the email message and separates the body from the header information. Next, Procmail looks for a /etc/procmailrc file and rc files in the /etc/procmailrcs/ directory for default, system-wide, Procmail environmental variables and recipes.
Procmail then searches for a .procmailrc file in the user's home directory. Many users also create additional rc files for Procmail that are referred to within the .procmailrc file in their home directory. 15.4.1. Procmail Configuration The Procmail configuration file contains important environmental variables. These variables specify things such as which messages to sort and what to do with the messages that do not match any recipes. These environmental variables usually appear at the beginning of the ~/.procmailrc file in the following format: In this example, env-variable is the name of the variable and value defines the variable. There are many environment variables not used by most Procmail users and many of the more important environment variables are already defined by a default value. Most of the time, the following variables are used: DEFAULT - Sets the default mailbox where messages that do not match any recipes are placed. The default DEFAULT value is the same as USDORGMAIL . INCLUDERC - Specifies additional rc files containing more recipes for messages to be checked against. This breaks up the Procmail recipe lists into individual files that fulfill different roles, such as blocking spam and managing email lists, that can then be turned off or on by using comment characters in the user's ~/.procmailrc file. For example, lines in a user's ~/.procmailrc file may look like this: To turn off Procmail filtering of email lists but leaving spam control in place, comment out the first INCLUDERC line with a hash sign ( # ). Note that it uses paths relative to the current directory. LOCKSLEEP - Sets the amount of time, in seconds, between attempts by Procmail to use a particular lockfile. The default is 8 seconds. LOCKTIMEOUT - Sets the amount of time, in seconds, that must pass after a lockfile was last modified before Procmail assumes that the lockfile is old and can be deleted. The default is 1024 seconds. LOGFILE - The file to which any Procmail information or error messages are written. MAILDIR - Sets the current working directory for Procmail. If set, all other Procmail paths are relative to this directory. ORGMAIL - Specifies the original mailbox, or another place to put the messages if they cannot be placed in the default or recipe-required location. By default, a value of /var/spool/mail/USDLOGNAME is used. SUSPEND - Sets the amount of time, in seconds, that Procmail pauses if a necessary resource, such as swap space, is not available. SWITCHRC - Allows a user to specify an external file containing additional Procmail recipes, much like the INCLUDERC option, except that recipe checking is actually stopped on the referring configuration file and only the recipes on the SWITCHRC -specified file are used. VERBOSE - Causes Procmail to log more information. This option is useful for debugging. Other important environmental variables are pulled from the shell, such as LOGNAME , the login name; HOME , the location of the home directory; and SHELL , the default shell. A comprehensive explanation of all environments variables, and their default values, is available in the procmailrc man page. 15.4.2. Procmail Recipes New users often find the construction of recipes the most difficult part of learning to use Procmail. This difficulty is often attributed to recipes matching messages by using regular expressions which are used to specify qualifications for string matching. However, regular expressions are not very difficult to construct and even less difficult to understand when read. 
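To tie the preceding section together, the environment variable assignments described above are typically declared at the top of the ~/.procmailrc file, before any recipes; the directory and file names below are illustrative assumptions rather than defaults:

MAILDIR=$HOME/Msgs
DEFAULT=$MAILDIR/inbox
LOGFILE=$MAILDIR/procmail.log
INCLUDERC=$MAILDIR/spam.rc

Recipes then follow these declarations.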
Additionally, the consistency of the way Procmail recipes are written, regardless of regular expressions, makes it easy to learn by example. To see example Procmail recipes, see Section 15.4.2.5, "Recipe Examples" . Procmail recipes take the following form: The first two characters in a Procmail recipe are a colon and a zero. Various flags can be placed after the zero to control how Procmail processes the recipe. A colon after the flags section specifies that a lockfile is created for this message. If a lockfile is created, the name can be specified by replacing lockfile-name . A recipe can contain several conditions to match against the message. If it has no conditions, every message matches the recipe. Regular expressions are placed in some conditions to facilitate message matching. If multiple conditions are used, they must all match for the action to be performed. Conditions are checked based on the flags set in the recipe's first line. Optional special characters placed after the asterisk character ( * ) can further control the condition. The action-to-perform argument specifies the action taken when the message matches one of the conditions. There can only be one action per recipe. In many cases, the name of a mailbox is used here to direct matching messages into that file, effectively sorting the email. Special action characters may also be used before the action is specified. See Section 15.4.2.4, "Special Conditions and Actions" for more information. 15.4.2.1. Delivering vs. Non-Delivering Recipes The action used if the recipe matches a particular message determines whether it is considered a delivering or non-delivering recipe. A delivering recipe contains an action that writes the message to a file, sends the message to another program, or forwards the message to another email address. A non-delivering recipe covers any other actions, such as a nesting block . A nesting block is a set of actions, contained in braces { } , that are performed on messages which match the recipe's conditions. Nesting blocks can be nested inside one another, providing greater control for identifying and performing actions on messages. When messages match a delivering recipe, Procmail performs the specified action and stops comparing the message against any other recipes. Messages that match non-delivering recipes continue to be compared against other recipes. 15.4.2.2. Flags Flags are essential to determine how or if a recipe's conditions are compared to a message. The egrep utility is used internally for matching of the conditions. The following flags are commonly used: A - Specifies that this recipe is only used if the recipe without an A or a flag also matched this message. a - Specifies that this recipe is only used if the recipe with an A or a flag also matched this message and was successfully completed. B - Parses the body of the message and looks for matching conditions. b - Uses the body in any resulting action, such as writing the message to a file or forwarding it. This is the default behavior. c - Generates a carbon copy of the email. This is useful with delivering recipes, since the required action can be performed on the message and a copy of the message can continue being processed in the rc files. D - Makes the egrep comparison case-sensitive. By default, the comparison process is not case-sensitive. E - While similar to the A flag, the conditions in the recipe are only compared to the message if the immediately preceding recipe without an E flag did not match. 
This is comparable to an else action. e - The recipe is compared to the message only if the action specified in the immediately preceding recipe fails. f - Uses the pipe as a filter. H - Parses the header of the message and looks for matching conditions. This is the default behavior. h - Uses the header in a resulting action. This is the default behavior. w - Tells Procmail to wait for the specified filter or program to finish, and reports whether or not it was successful before considering the message filtered. W - Is identical to w except that "Program failure" messages are suppressed. For a detailed list of additional flags, see the procmailrc man page. 15.4.2.3. Specifying a Local Lockfile Lockfiles are very useful with Procmail to ensure that more than one process does not try to alter a message simultaneously. Specify a local lockfile by placing a colon ( : ) after any flags on a recipe's first line. This creates a local lockfile based on the destination file name plus whatever has been set in the LOCKEXT global environment variable. Alternatively, specify the name of the local lockfile to be used with this recipe after the colon. 15.4.2.4. Special Conditions and Actions Special characters used before Procmail recipe conditions and actions change the way they are interpreted. The following characters may be used after the asterisk character ( * ) at the beginning of a recipe's condition line: ! - In the condition line, this character inverts the condition, causing a match to occur only if the condition does not match the message. < - Checks if the message is under a specified number of bytes. > - Checks if the message is over a specified number of bytes. The following characters are used to perform special actions: ! - In the action line, this character tells Procmail to forward the message to the specified email addresses. USD - Refers to a variable set earlier in the rc file. This is often used to set a common mailbox that is referred to by various recipes. | - Starts a specified program to process the message. { and } - Constructs a nesting block, used to contain additional recipes to apply to matching messages. If no special character is used at the beginning of the action line, Procmail assumes that the action line is specifying the mailbox in which to write the message. 15.4.2.5. Recipe Examples Procmail is an extremely flexible program, but as a result of this flexibility, composing Procmail recipes from scratch can be difficult for new users. The best way to develop the skills to build Procmail recipe conditions stems from a strong understanding of regular expressions combined with looking at many examples built by others. A thorough explanation of regular expressions is beyond the scope of this section. The structure of Procmail recipes and useful sample Procmail recipes can be found at various places on the Internet. The proper use and adaptation of regular expressions can be derived by viewing these recipe examples. In addition, introductory information about basic regular expression rules can be found in the grep(1) man page. The following simple examples demonstrate the basic structure of Procmail recipes and can provide the foundation for more intricate constructions. A basic recipe may not even contain conditions, as is illustrated in the following example: The first line specifies that a local lockfile is to be created but does not specify a name, so Procmail uses the destination file name and appends the value specified in the LOCKEXT environment variable. 
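For reference, the basic recipe being described is the two-line listing that accompanies this chapter's examples:

:0:
new-mail.spool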
No condition is specified, so every message matches this recipe and is placed in the single spool file called new-mail.spool , located within the directory specified by the MAILDIR environment variable. An MUA can then view messages in this file. A basic recipe, such as this, can be placed at the end of all rc files to direct messages to a default location. The following example matched messages from a specific email address and throws them away. With this example, any messages sent by [email protected] are sent to the /dev/null device, deleting them. Warning Be certain that rules are working as intended before sending messages to /dev/null for permanent deletion. If a recipe inadvertently catches unintended messages, and those messages disappear, it becomes difficult to troubleshoot the rule. A better solution is to point the recipe's action to a special mailbox, which can be checked from time to time to look for false positives. Once satisfied that no messages are accidentally being matched, delete the mailbox and direct the action to send the messages to /dev/null . The following recipe grabs email sent from a particular mailing list and places it in a specified folder. Any messages sent from the [email protected] mailing list are placed in the tuxlug mailbox automatically for the MUA. Note that the condition in this example matches the message if it has the mailing list's email address on the From , Cc , or To lines. Consult the many Procmail online resources available in Section 15.7, "Additional Resources" for more detailed and powerful recipes. 15.4.2.6. Spam Filters Because it is called by Sendmail, Postfix, and Fetchmail upon receiving new emails, Procmail can be used as a powerful tool for combating spam. This is particularly true when Procmail is used in conjunction with SpamAssassin. When used together, these two applications can quickly identify spam emails, and sort or destroy them. SpamAssassin uses header analysis, text analysis, blacklists, a spam-tracking database, and self-learning Bayesian spam analysis to quickly and accurately identify and tag spam. Note In order to use SpamAssassin , first ensure the spamassassin package is installed on your system by running, as root : For more information on installing packages with Yum, see Section 9.2.4, "Installing Packages" . The easiest way for a local user to use SpamAssassin is to place the following line near the top of the ~/.procmailrc file: The /etc/mail/spamassassin/spamassassin-default.rc contains a simple Procmail rule that activates SpamAssassin for all incoming email. If an email is determined to be spam, it is tagged in the header as such and the title is prepended with the following pattern: The message body of the email is also prepended with a running tally of what elements caused it to be diagnosed as spam. To file email tagged as spam, a rule similar to the following can be used: This rule files all email tagged in the header as spam into a mailbox called spam . Since SpamAssassin is a Perl script, it may be necessary on busy servers to use the binary SpamAssassin daemon ( spamd ) and the client application ( spamc ). Configuring SpamAssassin this way, however, requires root access to the host. To start the spamd daemon, type the following command: To start the SpamAssassin daemon when the system is booted, run: See Chapter 10, Managing Services with systemd for more information about starting and stopping services. 
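For convenience, the two service commands referred to above are the standard systemd calls, run as root and assuming the spamassassin package has installed its service unit:

~]# systemctl start spamassassin
~]# systemctl enable spamassassin.service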
To configure Procmail to use the SpamAssassin client application instead of the Perl script, place the following line near the top of the ~/.procmailrc file. For a system-wide configuration, place it in /etc/procmailrc : 15.5. Mail User Agents Red Hat Enterprise Linux offers a variety of email programs, both graphical email client programs, such as Evolution , and text-based email programs such as mutt . The remainder of this section focuses on securing communication between a client and a server. 15.5.1. Securing Communication MUAs included with Red Hat Enterprise Linux, such as Thunderbird , Evolution and Mutt offer SSL-encrypted email sessions. Like any other service that flows over a network unencrypted, important email information, such as user names, passwords, and entire messages, may be intercepted and viewed by users on the network. Additionally, since the standard POP and IMAP protocols pass authentication information unencrypted, it is possible for an attacker to gain access to user accounts by collecting user names and passwords as they are passed over the network. 15.5.1.1. Secure Email Clients Most Linux MUAs designed to check email on remote servers support SSL encryption. To use SSL when retrieving email, it must be enabled on both the email client and the server. SSL is easy to enable on the client-side, often done with the click of a button in the MUA's configuration window or via an option in the MUA's configuration file. Secure IMAP and POP have known port numbers (993 and 995, respectively) that the MUA uses to authenticate and download messages. 15.5.1.2. Securing Email Client Communications Offering SSL encryption to IMAP and POP users on the email server is a simple matter. First, create an SSL certificate. This can be done in two ways: by applying to a Certificate Authority ( CA ) for an SSL certificate or by creating a self-signed certificate. Warning Self-signed certificates should be used for testing purposes only. Any server used in a production environment should use an SSL certificate signed by a CA. To create a self-signed SSL certificate for IMAP or POP , change to the /etc/pki/dovecot/ directory, edit the certificate parameters in the /etc/pki/dovecot/dovecot-openssl.cnf configuration file as you prefer, and type the following commands, as root : Once finished, make sure you have the following configurations in your /etc/dovecot/conf.d/10-ssl.conf file: Issue the following command to restart the dovecot daemon: Alternatively, the stunnel command can be used as an encryption wrapper around the standard, non-secure connections to IMAP or POP services. The stunnel utility uses external OpenSSL libraries included with Red Hat Enterprise Linux to provide strong cryptography and to protect the network connections. It is recommended to apply to a CA to obtain an SSL certificate, but it is also possible to create a self-signed certificate. See Using stunnel in the Red Hat Enterprise Linux 7 Security Guide for instructions on how to install stunnel and create its basic configuration. To configure stunnel as a wrapper for IMAPS and POP3S , add the following lines to the /etc/stunnel/stunnel.conf configuration file: The Security Guide also explains how to start and stop stunnel . Once you start it, it is possible to use an IMAP or a POP email client and connect to the email server using SSL encryption. 15.6. Configuring Mail Server with Antispam and Antivirus Once your email delivery works, incoming emails may contain unsolicited messages also known as spam. 
These messages can also contain harmful viruses and malware, posing a security risk and potential production loss on your systems. To avoid these risks, you can filter the incoming messages and check them against viruses by using an antispam and antivirus solution. 15.6.1. Configuring Spam Filtering for Mail Transport Agent or Mail Delivery Agent You can filter spam in a Mail Transport Agent (MTA), Mail Delivery Agent (MDA), or Mail User Agent (MUA). This chapter describes spam filtering in MTAs and MDAs. 15.6.1.1. Configuring Spam Filtering in a Mail Transport Agent Red Hat Enterprise Linux 7 offers two primary MTAs: Postfix and Sendmail. For details on how to install and configure an MTA, see Section 15.3, "Mail Transport Agents" . Stopping spam on the MTA side is possible with Sendmail, which has several anti-spam features: header checks , relaying denial , access database and sender information checks . For more information, see Section 15.3.2.5, "Stopping Spam" . Moreover, both Postfix and Sendmail can work with third-party mail filters (milters) to filter spam and viruses in the mail-processing chain. In the case of Postfix, support for milters is included directly in the postfix package. In the case of Sendmail, you need to install the sendmail-milter package to be able to use milters. 15.6.1.2. Configuring Spam Filtering in a Mail Delivery Agent Red Hat Enterprise Linux includes two primary MDAs, Procmail and the mail utility. See Section 15.2.2, "Mail Delivery Agent" for more information. To stop spam in an MDA, users of Procmail can install the third-party software SpamAssassin, available in the spamassassin package. SpamAssassin is a spam detection system that uses a variety of methods to identify spam in incoming mail. For further information on SpamAssassin installation, configuration, and deployment, see Section 15.4.2.6, "Spam Filters" or the How can I configure Spamassassin to filter all the incoming mail on my server? Red Hat Knowledgebase article. For additional information on SpamAssassin, see the SpamAssassin project website . Warning Note that SpamAssassin is third-party software, and Red Hat does not support its use. The spamassassin package is available only through the Extra Packages for Enterprise Linux (EPEL) repository. To learn more about using the EPEL repository, see Section 15.6.3, "Using the EPEL Repository to install Antispam and Antivirus Software" . To learn more about how Red Hat handles third-party software and what level of support for it Red Hat provides, see How does Red Hat Global Support Services handle third-party software, drivers, and/or uncertified hardware/hypervisors or guest operating systems? Red Hat Knowledgebase article. 15.6.2. Configuring Antivirus Protection To protect your system against viruses, you can install ClamAV, an open source antivirus engine for detecting trojans, viruses, malware, and other malicious software. For additional information about ClamAV, see the ClamAV project website . Warning Note that ClamAV is third-party software, and Red Hat does not support its use. The clamav , clamav-data , clamav-server and clamav-update packages are only available in the Extra Packages for Enterprise Linux (EPEL) repository. To learn more about using the EPEL repository, see Section 15.6.3, "Using the EPEL Repository to install Antispam and Antivirus Software" .
To learn more about how Red Hat handles the third party software and what level of support for it Red Hat provides, see How does Red Hat Global Support Services handle third-party software, drivers, and/or uncertified hardware/hypervisors or guest operating systems? Red Hat Knowledgebase article. Once you have enabled the EPEL repository, install ClamAV by running the following command as the root user: 15.6.3. Using the EPEL Repository to install Antispam and Antivirus Software EPEL is a Fedora Special Interest Group that creates, maintains, and manages a high quality set of additional packages for Red Hat Enterprise Linux. For more information, see the Fedora EPEL website . To use the EPEL repository, download the latest version of the epel-release package for Red Hat Enterprise Linux 7 . You can also run the following command as the root user: When using the EPEL repository for the first time, you need to authenticate with a public GPG key. For more information, see Fedora Package Signing Keys . 15.7. Additional Resources The following is a list of additional documentation about email applications. 15.7.1. Installed Documentation Information on configuring Sendmail is included with the sendmail and sendmail-cf packages. /usr/share/sendmail-cf/README - Contains information on the m4 macro processor, file locations for Sendmail, supported mailers, how to access enhanced features, and more. In addition, the sendmail and aliases man pages contain helpful information covering various Sendmail options and the proper configuration of the Sendmail /etc/mail/aliases file. /usr/share/doc/postfix- version-number / - Contains a large amount of information on how to configure Postfix. Replace version-number with the version number of Postfix. /usr/share/doc/fetchmail- version-number / - Contains a full list of Fetchmail features in the FEATURES file and an introductory FAQ document. Replace version-number with the version number of Fetchmail. /usr/share/doc/procmail- version-number / - Contains a README file that provides an overview of Procmail, a FEATURES file that explores every program feature, and an FAQ file with answers to many common configuration questions. Replace version-number with the version number of Procmail. When learning how Procmail works and creating new recipes, the following Procmail man pages are invaluable: procmail - Provides an overview of how Procmail works and the steps involved with filtering email. procmailrc - Explains the rc file format used to construct recipes. procmailex - Gives a number of useful, real-world examples of Procmail recipes. procmailsc - Explains the weighted scoring technique used by Procmail to match a particular recipe to a message. /usr/share/doc/spamassassin- version-number / - Contains a large amount of information pertaining to SpamAssassin. Replace version-number with the version number of the spamassassin package. 15.7.2. Online Documentation How to configure postfix with TLS? - A Red Hat Knowledgebase article that describes configuring postfix to use TLS. How to configure a Sendmail Smart Host - A Red Hat Knowledgebase solution that describes configuring a sendmail Smart Host. http://www.sendmail.org/ - Offers a thorough technical breakdown of Sendmail features, documentation and configuration examples. http://www.sendmail.com/ - Contains news, interviews and articles concerning Sendmail, including an expanded view of the many options available. 
http://www.postfix.org/ - The Postfix project home page contains a wealth of information about Postfix. The mailing list is a particularly good place to look for information. http://www.fetchmail.info/fetchmail-FAQ.html - A thorough FAQ about Fetchmail. http://www.spamassassin.org/ - The official site of the SpamAssassin project. 15.7.3. Related Books Sendmail Milters: A Guide for Fighting Spam by Bryan Costales and Marcia Flynt; Addison-Wesley - A good Sendmail guide that can help you customize your mail filters. Sendmail by Bryan Costales with Eric Allman et al.; O'Reilly & Associates - A good Sendmail reference written with the assistance of the original creator of Delivermail and Sendmail. Removing the Spam: Email Processing and Filtering by Geoff Mulligan; Addison-Wesley Publishing Company - A volume that looks at various methods used by email administrators using established tools, such as Sendmail and Procmail, to manage spam problems. Internet Email Protocols: A Developer's Guide by Kevin Johnson; Addison-Wesley Publishing Company - Provides a very thorough review of major email protocols and the security they provide. Managing IMAP by Dianna Mullet and Kevin Mullet; O'Reilly & Associates - Details the steps required to configure an IMAP server.
[ "~]# yum install dovecot", "protocols = imap pop3 lmtp", "~]# systemctl restart dovecot", "~]# systemctl enable dovecot Created symlink from /etc/systemd/system/multi-user.target.wants/dovecot.service to /usr/lib/systemd/system/dovecot.service.", "ssl_protocols = !SSLv2 !SSLv3", "ssl=required", "~]# systemctl restart dovecot", "~]# alternatives --config mta", "~]# systemctl enable service", "~]# systemctl disable service", "~]# systemctl restart postfix", "alias_maps = hash:/etc/aliases, ldap:/etc/postfix/ldap-aliases.cf", "server_host = ldap.example.com search_base = dc= example , dc= com", "~]# yum install sendmail", "~]# yum install sendmail-cf", "systemctl restart sendmail", "systemctl restart sendmail", "~]# systemctl restart sendmail", "~]# systemctl restart sendmail", "FEATURE(always_add_domain)dnl FEATURE(masquerade_entire_domain)dnl FEATURE(masquerade_envelope)dnl FEATURE(allmasquerade)dnl MASQUERADE_DOMAIN(`example.com.')dnl MASQUERADE_AS(`example.com')dnl", "systemctl restart sendmail", "~]# systemctl restart sendmail", "badspammer.com ERROR:550 \"Go away and do not spam us anymore\" tux.badspammer.com OK 10.0 RELAY", "systemctl restart sendmail", "LDAPROUTE_DOMAIN(' yourdomain.com ')dnl FEATURE('ldap_routing')dnl", "~]# yum install fetchmail", "set postmaster \"user1\" set bouncemail poll pop.domain.com proto pop3 user 'user1' there with password 'secret' is user1 here poll mail.domain2.com user 'user5' there with password 'secret2' is user1 here user 'user7' there with password 'secret3' is user1 here", "env-variable =\" value \"", "MAILDIR=USDHOME/Msgs INCLUDERC=USDMAILDIR/lists.rc INCLUDERC=USDMAILDIR/spam.rc", ":0 flags : lockfile-name * condition_1_special-condition-character condition_1_regular_expression * condition_2_special-condition-character condition-2_regular_expression * condition_N_special-condition-character condition-N_regular_expression special-action-character action-to-perform", ":0: new-mail.spool", ":0 * ^From: [email protected] /dev/null", ":0: * ^(From|Cc|To).*tux-lug tuxlug", "~]# yum install spamassassin", "INCLUDERC=/etc/mail/spamassassin/spamassassin-default.rc", "*****SPAM*****", ":0 Hw * ^X-Spam-Status: Yes spam", "~]# systemctl start spamassassin", "systemctl enable spamassassin.service", "INCLUDERC=/etc/mail/spamassassin/spamassassin-spamc.rc", "dovecot]# rm -f certs/dovecot.pem private/dovecot.pem dovecot]# /usr/libexec/dovecot/mkcert.sh", "ssl_cert = </etc/pki/dovecot/certs/dovecot.pem ssl_key = </etc/pki/dovecot/private/dovecot.pem", "~]# systemctl restart dovecot", "[pop3s] accept = 995 connect = 110 [imaps] accept = 993 connect = 143", "~]# yum install clamav clamav-data clamav-server clamav-update", "~]# yum install https://dl.fedoraproject.org/pub/epel/epel-release-latest-7.noarch.rpmzu" ]
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/system_administrators_guide/ch-Mail_Servers
Preface
Preface Making open source more inclusive Red Hat is committed to replacing problematic language in our code, documentation, and web properties. We are beginning with these four terms: master, slave, blacklist, and whitelist. Because of the enormity of this endeavor, these changes will be implemented gradually over several upcoming releases. For more details, see our CTO Chris Wright's message .
null
https://docs.redhat.com/en/documentation/red_hat_build_of_apache_camel/4.0/html/tooling_guide/pr01
Red Hat build of OpenTelemetry
Red Hat build of OpenTelemetry OpenShift Container Platform 4.17 Configuring and using the Red Hat build of OpenTelemetry in OpenShift Container Platform Red Hat OpenShift Documentation Team
[ "apiVersion: monitoring.coreos.com/v1 kind: ServiceMonitor metadata: name: opentelemetry-operator-controller-manager-metrics-service namespace: openshift-opentelemetry-operator spec: endpoints: - bearerTokenFile: /var/run/secrets/kubernetes.io/serviceaccount/token path: /metrics port: https scheme: https tlsConfig: insecureSkipVerify: true selector: matchLabels: app.kubernetes.io/name: opentelemetry-operator control-plane: controller-manager --- apiVersion: rbac.authorization.k8s.io/v1 kind: Role metadata: name: otel-operator-prometheus namespace: openshift-opentelemetry-operator annotations: include.release.openshift.io/self-managed-high-availability: \"true\" include.release.openshift.io/single-node-developer: \"true\" rules: - apiGroups: - \"\" resources: - services - endpoints - pods verbs: - get - list - watch --- apiVersion: rbac.authorization.k8s.io/v1 kind: RoleBinding metadata: name: otel-operator-prometheus namespace: openshift-opentelemetry-operator annotations: include.release.openshift.io/self-managed-high-availability: \"true\" include.release.openshift.io/single-node-developer: \"true\" roleRef: apiGroup: rbac.authorization.k8s.io kind: Role name: otel-operator-prometheus subjects: - kind: ServiceAccount name: prometheus-k8s namespace: openshift-monitoring", "spec: mode: deployment config: | exporters: jaeger: endpoint: jaeger-production-collector-headless.tracing-system.svc:14250 ca_file: \"/var/run/secrets/kubernetes.io/serviceaccount/service-ca.crt\"", "spec: mode: deployment config: | exporters: jaeger: endpoint: jaeger-production-collector-headless.tracing-system.svc:14250 tls: ca_file: \"/var/run/secrets/kubernetes.io/serviceaccount/service-ca.crt\"", "apiVersion: opentelemetry.io/v1beta1 kind: OpenTelemetryCollector metadata: name: otel namespace: <project_of_opentelemetry_collector_instance> spec: mode: deployment config: receivers: 1 otlp: protocols: grpc: http: jaeger: protocols: grpc: {} thrift_binary: {} thrift_compact: {} thrift_http: {} zipkin: {} processors: 2 batch: {} memory_limiter: check_interval: 1s limit_percentage: 50 spike_limit_percentage: 30 exporters: 3 debug: {} service: pipelines: traces: receivers: [otlp,jaeger,zipkin] processors: [memory_limiter,batch] exporters: [debug]", "oc login --username=<your_username>", "oc apply -f - << EOF apiVersion: project.openshift.io/v1 kind: Project metadata: labels: kubernetes.io/metadata.name: openshift-opentelemetry-operator openshift.io/cluster-monitoring: \"true\" name: openshift-opentelemetry-operator EOF", "oc apply -f - << EOF apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: openshift-opentelemetry-operator namespace: openshift-opentelemetry-operator spec: upgradeStrategy: Default EOF", "oc apply -f - << EOF apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: opentelemetry-product namespace: openshift-opentelemetry-operator spec: channel: stable installPlanApproval: Automatic name: opentelemetry-product source: redhat-operators sourceNamespace: openshift-marketplace EOF", "oc get csv -n openshift-opentelemetry-operator", "oc new-project <project_of_opentelemetry_collector_instance>", "oc apply -f - << EOF apiVersion: project.openshift.io/v1 kind: Project metadata: name: <project_of_opentelemetry_collector_instance> EOF", "apiVersion: opentelemetry.io/v1beta1 kind: OpenTelemetryCollector metadata: name: otel namespace: <project_of_opentelemetry_collector_instance> spec: mode: deployment config: receivers: 1 otlp: protocols: grpc: http: jaeger: protocols: 
grpc: {} thrift_binary: {} thrift_compact: {} thrift_http: {} zipkin: {} processors: 2 batch: {} memory_limiter: check_interval: 1s limit_percentage: 50 spike_limit_percentage: 30 exporters: 3 debug: {} service: pipelines: traces: receivers: [otlp,jaeger,zipkin] processors: [memory_limiter,batch] exporters: [debug]", "oc apply -f - << EOF <OpenTelemetryCollector_custom_resource> EOF", "oc get pod -l app.kubernetes.io/managed-by=opentelemetry-operator,app.kubernetes.io/instance=<namespace>.<instance_name> -o yaml", "oc get service -l app.kubernetes.io/managed-by=opentelemetry-operator,app.kubernetes.io/instance=<namespace>.<instance_name>", "apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: generate-processors-rbac rules: - apiGroups: - rbac.authorization.k8s.io resources: - clusterrolebindings - clusterroles verbs: - create - delete - get - list - patch - update - watch --- apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: name: generate-processors-rbac roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: generate-processors-rbac subjects: - kind: ServiceAccount name: opentelemetry-operator-controller-manager namespace: openshift-opentelemetry-operator", "apiVersion: opentelemetry.io/v1beta1 kind: OpenTelemetryCollector metadata: name: cluster-collector namespace: tracing-system spec: mode: deployment observability: metrics: enableMetrics: true config: receivers: otlp: protocols: grpc: {} http: {} processors: {} exporters: otlp: endpoint: otel-collector-headless.tracing-system.svc:4317 tls: ca_file: \"/var/run/secrets/kubernetes.io/serviceaccount/service-ca.crt\" prometheus: endpoint: 0.0.0.0:8889 resource_to_telemetry_conversion: enabled: true # by default resource attributes are dropped service: 1 pipelines: traces: receivers: [otlp] processors: [] exporters: [otlp] metrics: receivers: [otlp] processors: [] exporters: [prometheus]", "receivers:", "processors:", "exporters:", "connectors:", "extensions:", "service: pipelines:", "service: pipelines: traces: receivers:", "service: pipelines: traces: processors:", "service: pipelines: traces: exporters:", "service: pipelines: metrics: receivers:", "service: pipelines: metrics: processors:", "service: pipelines: metrics: exporters:", "apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: generate-processors-rbac rules: - apiGroups: - rbac.authorization.k8s.io resources: - clusterrolebindings - clusterroles verbs: - create - delete - get - list - patch - update - watch --- apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: name: generate-processors-rbac roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: generate-processors-rbac subjects: - kind: ServiceAccount name: opentelemetry-operator-controller-manager namespace: openshift-opentelemetry-operator", "config: receivers: otlp: protocols: grpc: endpoint: 0.0.0.0:4317 1 tls: 2 ca_file: ca.pem cert_file: cert.pem key_file: key.pem client_ca_file: client.pem 3 reload_interval: 1h 4 http: endpoint: 0.0.0.0:4318 5 tls: {} 6 service: pipelines: traces: receivers: [otlp] metrics: receivers: [otlp]", "config: receivers: jaeger: protocols: grpc: endpoint: 0.0.0.0:14250 1 thrift_http: endpoint: 0.0.0.0:14268 2 thrift_compact: endpoint: 0.0.0.0:6831 3 thrift_binary: endpoint: 0.0.0.0:6832 4 tls: {} 5 service: pipelines: traces: receivers: [jaeger]", "apiVersion: v1 kind: ServiceAccount metadata: name: otel-hostfs-daemonset namespace: <namespace> --- apiVersion: 
security.openshift.io/v1 kind: SecurityContextConstraints allowHostDirVolumePlugin: true allowHostIPC: false allowHostNetwork: false allowHostPID: true allowHostPorts: false allowPrivilegeEscalation: true allowPrivilegedContainer: true allowedCapabilities: null defaultAddCapabilities: - SYS_ADMIN fsGroup: type: RunAsAny groups: [] metadata: name: otel-hostmetrics readOnlyRootFilesystem: true runAsUser: type: RunAsAny seLinuxContext: type: RunAsAny supplementalGroups: type: RunAsAny users: - system:serviceaccount:<namespace>:otel-hostfs-daemonset volumes: - configMap - emptyDir - hostPath - projected --- apiVersion: opentelemetry.io/v1beta1 kind: OpenTelemetryCollector metadata: name: otel namespace: <namespace> spec: serviceAccount: otel-hostfs-daemonset mode: daemonset volumeMounts: - mountPath: /hostfs name: host readOnly: true volumes: - hostPath: path: / name: host config: receivers: hostmetrics: collection_interval: 10s 1 initial_delay: 1s 2 root_path: / 3 scrapers: 4 cpu: {} memory: {} disk: {} service: pipelines: metrics: receivers: [hostmetrics]", "apiVersion: v1 kind: ServiceAccount metadata: name: otel-k8sobj namespace: <namespace> --- apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: otel-k8sobj namespace: <namespace> rules: - apiGroups: - \"\" resources: - events - pods verbs: - get - list - watch - apiGroups: - \"events.k8s.io\" resources: - events verbs: - watch - list --- apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: name: otel-k8sobj subjects: - kind: ServiceAccount name: otel-k8sobj namespace: <namespace> roleRef: kind: ClusterRole name: otel-k8sobj apiGroup: rbac.authorization.k8s.io --- apiVersion: opentelemetry.io/v1beta1 kind: OpenTelemetryCollector metadata: name: otel-k8s-obj namespace: <namespace> spec: serviceAccount: otel-k8sobj mode: deployment config: receivers: k8sobjects: auth_type: serviceAccount objects: - name: pods 1 mode: pull 2 interval: 30s 3 label_selector: 4 field_selector: 5 namespaces: [<namespace>,...] 
6 - name: events mode: watch exporters: debug: service: pipelines: logs: receivers: [k8sobjects] exporters: [debug]", "config: receivers: kubeletstats: collection_interval: 20s auth_type: \"serviceAccount\" endpoint: \"https://USD{env:K8S_NODE_NAME}:10250\" insecure_skip_verify: true service: pipelines: metrics: receivers: [kubeletstats] env: - name: K8S_NODE_NAME 1 valueFrom: fieldRef: fieldPath: spec.nodeName", "apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: otel-collector rules: - apiGroups: [''] resources: ['nodes/stats'] verbs: ['get', 'watch', 'list'] - apiGroups: [\"\"] resources: [\"nodes/proxy\"] 1 verbs: [\"get\"]", "config: receivers: prometheus: config: scrape_configs: 1 - job_name: 'my-app' 2 scrape_interval: 5s 3 static_configs: - targets: ['my-app.example.svc.cluster.local:8888'] 4 service: pipelines: metrics: receivers: [prometheus]", "config: otlpjsonfile: include: - \"/var/log/*.log\" 1 exclude: - \"/var/log/test.log\" 2", "config: receivers: zipkin: endpoint: 0.0.0.0:9411 1 tls: {} 2 service: pipelines: traces: receivers: [zipkin]", "config: receivers: kafka: brokers: [\"localhost:9092\"] 1 protocol_version: 2.0.0 2 topic: otlp_spans 3 auth: plain_text: 4 username: example password: example tls: 5 ca_file: ca.pem cert_file: cert.pem key_file: key.pem insecure: false 6 server_name_override: kafka.example.corp 7 service: pipelines: traces: receivers: [kafka]", "config: receivers: k8s_cluster: distribution: openshift collection_interval: 10s exporters: debug: {} service: pipelines: metrics: receivers: [k8s_cluster] exporters: [debug] logs/entity_events: receivers: [k8s_cluster] exporters: [debug]", "apiVersion: v1 kind: ServiceAccount metadata: labels: app: otelcontribcol name: otelcontribcol", "apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: otelcontribcol labels: app: otelcontribcol rules: - apiGroups: - quota.openshift.io resources: - clusterresourcequotas verbs: - get - list - watch - apiGroups: - \"\" resources: - events - namespaces - namespaces/status - nodes - nodes/spec - pods - pods/status - replicationcontrollers - replicationcontrollers/status - resourcequotas - services verbs: - get - list - watch - apiGroups: - apps resources: - daemonsets - deployments - replicasets - statefulsets verbs: - get - list - watch - apiGroups: - extensions resources: - daemonsets - deployments - replicasets verbs: - get - list - watch - apiGroups: - batch resources: - jobs - cronjobs verbs: - get - list - watch - apiGroups: - autoscaling resources: - horizontalpodautoscalers verbs: - get - list - watch", "apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: name: otelcontribcol labels: app: otelcontribcol roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: otelcontribcol subjects: - kind: ServiceAccount name: otelcontribcol namespace: default", "config: receivers: opencensus: endpoint: 0.0.0.0:9411 1 tls: 2 cors_allowed_origins: 3 - https://*.<example>.com service: pipelines: traces: receivers: [opencensus]", "config: receivers: filelog: include: [ /simple.log ] 1 operators: 2 - type: regex_parser regex: '^(?P<time>\\d{4}-\\d{2}-\\d{2} \\d{2}:\\d{2}:\\d{2}) (?P<sev>[A-Z]*) (?P<msg>.*)USD' timestamp: parse_from: attributes.time layout: '%Y-%m-%d %H:%M:%S' severity: parse_from: attributes.sev", "apiVersion: v1 kind: Namespace metadata: name: otel-journald labels: security.openshift.io/scc.podSecurityLabelSync: \"false\" pod-security.kubernetes.io/enforce: \"privileged\" 
pod-security.kubernetes.io/audit: \"privileged\" pod-security.kubernetes.io/warn: \"privileged\" --- apiVersion: v1 kind: ServiceAccount metadata: name: privileged-sa namespace: otel-journald --- apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: name: otel-journald-binding roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: system:openshift:scc:privileged subjects: - kind: ServiceAccount name: privileged-sa namespace: otel-journald --- apiVersion: opentelemetry.io/v1beta1 kind: OpenTelemetryCollector metadata: name: otel-journald-logs namespace: otel-journald spec: mode: daemonset serviceAccount: privileged-sa securityContext: allowPrivilegeEscalation: false capabilities: drop: - CHOWN - DAC_OVERRIDE - FOWNER - FSETID - KILL - NET_BIND_SERVICE - SETGID - SETPCAP - SETUID readOnlyRootFilesystem: true seLinuxOptions: type: spc_t seccompProfile: type: RuntimeDefault config: receivers: journald: files: /var/log/journal/*/* priority: info 1 units: 2 - kubelet - crio - init.scope - dnsmasq all: true 3 retry_on_failure: enabled: true 4 initial_interval: 1s 5 max_interval: 30s 6 max_elapsed_time: 5m 7 processors: exporters: debug: {} service: pipelines: logs: receivers: [journald] exporters: [debug] volumeMounts: - name: journal-logs mountPath: /var/log/journal/ readOnly: true volumes: - name: journal-logs hostPath: path: /var/log/journal tolerations: - key: node-role.kubernetes.io/master operator: Exists effect: NoSchedule", "apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: otel-collector labels: app: otel-collector rules: - apiGroups: - \"\" resources: - events - namespaces - namespaces/status - nodes - nodes/spec - pods - pods/status - replicationcontrollers - replicationcontrollers/status - resourcequotas - services verbs: - get - list - watch - apiGroups: - apps resources: - daemonsets - deployments - replicasets - statefulsets verbs: - get - list - watch - apiGroups: - extensions resources: - daemonsets - deployments - replicasets verbs: - get - list - watch - apiGroups: - batch resources: - jobs - cronjobs verbs: - get - list - watch - apiGroups: - autoscaling resources: - horizontalpodautoscalers verbs: - get - list - watch", "serviceAccount: otel-collector 1 config: receivers: k8s_events: namespaces: [project1, project2] 2 service: pipelines: logs: receivers: [k8s_events]", "config: processors: batch: timeout: 5s send_batch_max_size: 10000 service: pipelines: traces: processors: [batch] metrics: processors: [batch]", "config: processors: memory_limiter: check_interval: 1s limit_mib: 4000 spike_limit_mib: 800 service: pipelines: traces: processors: [batch] metrics: processors: [batch]", "kind: ClusterRole metadata: name: otel-collector rules: - apiGroups: [\"config.openshift.io\"] resources: [\"infrastructures\", \"infrastructures/status\"] verbs: [\"get\", \"watch\", \"list\"]", "config: processors: resourcedetection: detectors: [openshift] override: true service: pipelines: traces: processors: [resourcedetection] metrics: processors: [resourcedetection]", "config: processors: resourcedetection/env: detectors: [env] 1 timeout: 2s override: false", "config: processors: attributes/example: actions: - key: db.table action: delete - key: redacted_span value: true action: upsert - key: copy_key from_attribute: key_original action: update - key: account_id value: 2245 action: insert - key: account_password action: delete - key: account_email action: hash - key: http.status_code action: convert converted_type: int", "config: 
processors: attributes: - key: cloud.availability_zone value: \"zone-1\" action: upsert - key: k8s.cluster.name from_attribute: k8s-cluster action: insert - key: redundant-attribute action: delete", "config: processors: span: name: from_attributes: [<key1>, <key2>, ...] 1 separator: <value> 2", "config: processors: span/to_attributes: name: to_attributes: rules: - ^\\/api\\/v1\\/document\\/(?P<documentId>.*)\\/updateUSD 1", "config: processors: span/set_status: status: code: Error description: \"<error_description>\"", "kind: ClusterRole metadata: name: otel-collector rules: - apiGroups: [''] resources: ['pods', 'namespaces'] verbs: ['get', 'watch', 'list']", "config: processors: k8sattributes: filter: node_from_env_var: KUBE_NODE_NAME", "config: processors: filter/ottl: error_mode: ignore 1 traces: span: - 'attributes[\"container.name\"] == \"app_container_1\"' 2 - 'resource.attributes[\"host.name\"] == \"localhost\"' 3", "config: processors: routing: from_attribute: X-Tenant 1 default_exporters: 2 - jaeger table: 3 - value: acme exporters: [jaeger/acme] exporters: jaeger: endpoint: localhost:14250 jaeger/acme: endpoint: localhost:24250", "config: processors: cumulativetodelta: include: 1 match_type: strict 2 metrics: 3 - <metric_1_name> - <metric_2_name> exclude: 4 match_type: regexp metrics: - \"<regular_expression_for_metric_names>\"", "config: processors: groupbyattrs: keys: 1 - <key1> 2 - <key2>", "config: processors: transform: error_mode: ignore 1 <trace|metric|log>_statements: 2 - context: <string> 3 conditions: 4 - <string> - <string> statements: 5 - <string> - <string> - <string> - context: <string> statements: - <string> - <string> - <string>", "config: transform: error_mode: ignore trace_statements: 1 - context: resource statements: - keep_keys(attributes, [\"service.name\", \"service.namespace\", \"cloud.region\", \"process.command_line\"]) 2 - replace_pattern(attributes[\"process.command_line\"], \"password\\\\=[^\\\\s]*(\\\\s?)\", \"password=***\") 3 - limit(attributes, 100, []) - truncate_all(attributes, 4096) - context: span 4 statements: - set(status.code, 1) where attributes[\"http.path\"] == \"/health\" - set(name, attributes[\"http.route\"]) - replace_match(attributes[\"http.target\"], \"/user/*/list/*\", \"/user/{userId}/list/{listId}\") - limit(attributes, 100, []) - truncate_all(attributes, 4096)", "config: exporters: otlp: endpoint: tempo-ingester:4317 1 tls: 2 ca_file: ca.pem cert_file: cert.pem key_file: key.pem insecure: false 3 insecure_skip_verify: false # 4 reload_interval: 1h 5 server_name_override: <name> 6 headers: 7 X-Scope-OrgID: \"dev\" service: pipelines: traces: exporters: [otlp] metrics: exporters: [otlp]", "config: exporters: otlphttp: endpoint: http://tempo-ingester:4318 1 tls: 2 headers: 3 X-Scope-OrgID: \"dev\" disable_keep_alives: false 4 service: pipelines: traces: exporters: [otlphttp] metrics: exporters: [otlphttp]", "config: exporters: debug: verbosity: detailed 1 sampling_initial: 5 2 sampling_thereafter: 200 3 use_internal_logger: true 4 service: pipelines: traces: exporters: [debug] metrics: exporters: [debug]", "config: exporters: loadbalancing: routing_key: \"service\" 1 protocol: otlp: 2 timeout: 1s resolver: 3 static: 4 hostnames: - backend-1:4317 - backend-2:4317 dns: 5 hostname: otelcol-headless.observability.svc.cluster.local k8s: 6 service: lb-svc.kube-public ports: - 15317 - 16317", "config: exporters: prometheus: endpoint: 0.0.0.0:8889 1 tls: 2 ca_file: ca.pem cert_file: cert.pem key_file: key.pem namespace: prefix 3 
const_labels: 4 label1: value1 enable_open_metrics: true 5 resource_to_telemetry_conversion: 6 enabled: true metric_expiration: 180m 7 add_metric_suffixes: false 8 service: pipelines: metrics: exporters: [prometheus]", "config: exporters: prometheusremotewrite: endpoint: \"https://my-prometheus:7900/api/v1/push\" 1 tls: 2 ca_file: ca.pem cert_file: cert.pem key_file: key.pem target_info: true 3 export_created_metric: true 4 max_batch_size_bytes: 3000000 5 service: pipelines: metrics: exporters: [prometheusremotewrite]", "config: exporters: kafka: brokers: [\"localhost:9092\"] 1 protocol_version: 2.0.0 2 topic: otlp_spans 3 auth: plain_text: 4 username: example password: example tls: 5 ca_file: ca.pem cert_file: cert.pem key_file: key.pem insecure: false 6 server_name_override: kafka.example.corp 7 service: pipelines: traces: exporters: [kafka]", "config: exporters: awscloudwatchlogs: log_group_name: \"<group_name_of_amazon_cloudwatch_logs>\" 1 log_stream_name: \"<log_stream_of_amazon_cloudwatch_logs>\" 2 region: <aws_region_of_log_stream> 3 endpoint: <service_endpoint_of_amazon_cloudwatch_logs> 4 log_retention: <supported_value_in_days> 5", "config: exporters: awsemf: log_group_name: \"<group_name_of_amazon_cloudwatch_logs>\" 1 log_stream_name: \"<log_stream_of_amazon_cloudwatch_logs>\" 2 resource_to_telemetry_conversion: 3 enabled: true region: <region> 4 endpoint: <endpoint> 5 log_retention: <supported_value_in_days> 6 namespace: <custom_namespace> 7", "config: exporters: awsxray: region: \"<region>\" 1 endpoint: <endpoint> 2 resource_arn: \"<aws_resource_arn>\" 3 role_arn: \"<iam_role>\" 4 indexed_attributes: [ \"<indexed_attr_0>\", \"<indexed_attr_1>\" ] 5 aws_log_groups: [\"<group1>\", \"<group2>\"] 6 request_timeout_seconds: 120 7", "config: | exporters: file: path: /data/metrics.json 1 rotation: 2 max_megabytes: 10 3 max_days: 3 4 max_backups: 3 5 localtime: true 6 format: proto 7 compression: zstd 8 flush_interval: 5 9", "config: receivers: otlp: protocols: grpc: endpoint: 0.0.0.0:4317 exporters: prometheus: endpoint: 0.0.0.0:8889 connectors: count: {} service: pipelines: 1 traces/in: receivers: [otlp] exporters: [count] 2 metrics/out: receivers: [count] 3 exporters: [prometheus]", "config: connectors: count: spans: 1 <custom_metric_name>: 2 description: \"<custom_metric_description>\" conditions: - 'attributes[\"env\"] == \"dev\"' - 'name == \"devevent\"'", "config: connectors: count: logs: 1 <custom_metric_name>: 2 description: \"<custom_metric_description>\" attributes: - key: env default_value: unknown 3", "config: connectors: routing: table: 1 - statement: route() where attributes[\"X-Tenant\"] == \"dev\" 2 pipelines: [traces/dev] 3 - statement: route() where attributes[\"X-Tenant\"] == \"prod\" pipelines: [traces/prod] default_pipelines: [traces/dev] 4 error_mode: ignore 5 match_once: false 6 service: pipelines: traces/in: receivers: [otlp] exporters: [routing] traces/dev: receivers: [routing] exporters: [otlp/dev] traces/prod: receivers: [routing] exporters: [otlp/prod]", "config: receivers: otlp: protocols: grpc: jaeger: protocols: grpc: processors: batch: exporters: otlp: endpoint: tempo-simplest-distributor:4317 tls: insecure: true connectors: forward: {} service: pipelines: traces/regiona: receivers: [otlp] processors: [] exporters: [forward] traces/regionb: receivers: [jaeger] processors: [] exporters: [forward] traces: receivers: [forward] processors: [batch] exporters: [otlp]", "config: connectors: spanmetrics: metrics_flush_interval: 15s 1 service: pipelines: traces: 
exporters: [spanmetrics] metrics: receivers: [spanmetrics]", "config: extensions: bearertokenauth: scheme: \"Bearer\" 1 token: \"<token>\" 2 filename: \"<token_file>\" 3 receivers: otlp: protocols: http: auth: authenticator: bearertokenauth 4 exporters: otlp: auth: authenticator: bearertokenauth 5 service: extensions: [bearertokenauth] pipelines: traces: receivers: [otlp] exporters: [otlp]", "config: extensions: oauth2client: client_id: <client_id> 1 client_secret: <client_secret> 2 endpoint_params: 3 audience: <audience> token_url: https://example.com/oauth2/default/v1/token 4 scopes: [\"api.metrics\"] 5 # tls settings for the token client tls: 6 insecure: true 7 ca_file: /var/lib/mycert.pem 8 cert_file: <cert_file> 9 key_file: <key_file> 10 timeout: 2s 11 receivers: otlp: protocols: http: {} exporters: otlp: auth: authenticator: oauth2client 12 service: extensions: [oauth2client] pipelines: traces: receivers: [otlp] exporters: [otlp]", "config: extensions: file_storage/all_settings: directory: /var/lib/otelcol/mydir 1 timeout: 1s 2 compaction: on_start: true 3 directory: /tmp/ 4 max_transaction_size: 65_536 5 fsync: false 6 exporters: otlp: sending_queue: storage: file_storage/all_settings 7 service: extensions: [file_storage/all_settings] 8 pipelines: traces: receivers: [otlp] exporters: [otlp]", "config: extensions: oidc: attribute: authorization 1 issuer_url: https://example.com/auth/realms/opentelemetry 2 issuer_ca_path: /var/run/tls/issuer.pem 3 audience: otel-collector 4 username_claim: email 5 receivers: otlp: protocols: grpc: auth: authenticator: oidc exporters: debug: {} service: extensions: [oidc] pipelines: traces: receivers: [otlp] exporters: [debug]", "config: extensions: jaegerremotesampling: source: reload_interval: 30s 1 remote: endpoint: jaeger-collector:14250 2 file: /etc/otelcol/sampling_strategies.json 3 receivers: otlp: protocols: http: {} exporters: debug: {} service: extensions: [jaegerremotesampling] pipelines: traces: receivers: [otlp] exporters: [debug]", "{ \"service_strategies\": [ { \"service\": \"foo\", \"type\": \"probabilistic\", \"param\": 0.8, \"operation_strategies\": [ { \"operation\": \"op1\", \"type\": \"probabilistic\", \"param\": 0.2 }, { \"operation\": \"op2\", \"type\": \"probabilistic\", \"param\": 0.4 } ] }, { \"service\": \"bar\", \"type\": \"ratelimiting\", \"param\": 5 } ], \"default_strategy\": { \"type\": \"probabilistic\", \"param\": 0.5, \"operation_strategies\": [ { \"operation\": \"/health\", \"type\": \"probabilistic\", \"param\": 0.0 }, { \"operation\": \"/metrics\", \"type\": \"probabilistic\", \"param\": 0.0 } ] } }", "config: extensions: pprof: endpoint: localhost:1777 1 block_profile_fraction: 0 2 mutex_profile_fraction: 0 3 save_to_file: test.pprof 4 receivers: otlp: protocols: http: {} exporters: debug: {} service: extensions: [pprof] pipelines: traces: receivers: [otlp] exporters: [debug]", "config: extensions: health_check: endpoint: \"0.0.0.0:13133\" 1 tls: 2 ca_file: \"/path/to/ca.crt\" cert_file: \"/path/to/cert.crt\" key_file: \"/path/to/key.key\" path: \"/health/status\" 3 check_collector_pipeline: 4 enabled: true 5 interval: \"5m\" 6 exporter_failure_threshold: 5 7 receivers: otlp: protocols: http: {} exporters: debug: {} service: extensions: [health_check] pipelines: traces: receivers: [otlp] exporters: [debug]", "config: extensions: zpages: endpoint: \"localhost:55679\" 1 receivers: otlp: protocols: http: {} exporters: debug: {} service: extensions: [zpages] pipelines: traces: receivers: [otlp] exporters: [debug]", 
"oc port-forward pod/USD(oc get pod -l app.kubernetes.io/name=instance-collector -o=jsonpath='{.items[0].metadata.name}') 55679", "apiVersion: opentelemetry.io/v1beta1 kind: OpenTelemetryCollector metadata: name: otel namespace: observability spec: mode: statefulset 1 targetAllocator: enabled: true 2 serviceAccount: 3 prometheusCR: enabled: true 4 scrapeInterval: 10s serviceMonitorSelector: 5 name: app1 podMonitorSelector: 6 name: app2 config: receivers: prometheus: 7 config: scrape_configs: [] processors: exporters: debug: {} service: pipelines: metrics: receivers: [prometheus] processors: [] exporters: [debug]", "apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: otel-targetallocator rules: - apiGroups: [\"\"] resources: - services - pods - namespaces verbs: [\"get\", \"list\", \"watch\"] - apiGroups: [\"monitoring.coreos.com\"] resources: - servicemonitors - podmonitors - scrapeconfigs - probes verbs: [\"get\", \"list\", \"watch\"] - apiGroups: [\"discovery.k8s.io\"] resources: - endpointslices verbs: [\"get\", \"list\", \"watch\"] --- apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: name: otel-targetallocator roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: otel-targetallocator subjects: - kind: ServiceAccount name: otel-targetallocator 1 namespace: observability 2", "apiVersion: opentelemetry.io/v1alpha1 kind: Instrumentation metadata: name: java-instrumentation spec: env: - name: OTEL_EXPORTER_OTLP_TIMEOUT value: \"20\" exporter: endpoint: http://production-collector.observability.svc.cluster.local:4317 propagators: - w3c sampler: type: parentbased_traceidratio argument: \"0.25\" java: env: - name: OTEL_JAVAAGENT_DEBUG value: \"true\"", "apiVersion: opentelemetry.io/v1alpha1 kind: Instrumentation spec exporter: endpoint: https://production-collector.observability.svc.cluster.local:4317 1 tls: configMapName: ca-bundle 2 ca_file: service-ca.crt 3", "apiVersion: opentelemetry.io/v1alpha1 kind: Instrumentation spec exporter: endpoint: https://production-collector.observability.svc.cluster.local:4317 1 tls: secretName: serving-certs 2 ca_file: service-ca.crt 3 cert_file: tls.crt 4 key_file: tls.key 5", "apiVersion: v1 kind: ConfigMap metadata: name: otelcol-cabundle namespace: tutorial-application annotations: service.beta.openshift.io/inject-cabundle: \"true\" --- apiVersion: opentelemetry.io/v1alpha1 kind: Instrumentation metadata: name: my-instrumentation spec: exporter: endpoint: https://simplest-collector.tracing-system.svc.cluster.local:4317 tls: configMapName: otelcol-cabundle ca: service-ca.crt", "instrumentation.opentelemetry.io/inject-apache-httpd: \"true\"", "instrumentation.opentelemetry.io/inject-dotnet: \"true\"", "instrumentation.opentelemetry.io/inject-go: \"true\"", "apiVersion: security.openshift.io/v1 kind: SecurityContextConstraints metadata: name: otel-go-instrumentation-scc allowHostDirVolumePlugin: true allowPrivilegeEscalation: true allowPrivilegedContainer: true allowedCapabilities: - \"SYS_PTRACE\" fsGroup: type: RunAsAny runAsUser: type: RunAsAny seLinuxContext: type: RunAsAny seccompProfiles: - '*' supplementalGroups: type: RunAsAny", "oc adm policy add-scc-to-user otel-go-instrumentation-scc -z <service_account>", "instrumentation.opentelemetry.io/inject-java: \"true\"", "instrumentation.opentelemetry.io/inject-nodejs: \"true\" instrumentation.opentelemetry.io/otel-go-auto-target-exe: \"/path/to/container/executable\"", "instrumentation.opentelemetry.io/inject-python: \"true\"", 
"instrumentation.opentelemetry.io/container-names: \"<container_1>,<container_2>\"", "instrumentation.opentelemetry.io/<application_language>-container-names: \"<container_1>,<container_2>\" 1", "apiVersion: project.openshift.io/v1 kind: Project metadata: name: observability", "apiVersion: v1 kind: ServiceAccount metadata: name: otel-collector-sidecar namespace: observability", "apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: otel-collector rules: - apiGroups: [\"\", \"config.openshift.io\"] resources: [\"pods\", \"namespaces\", \"infrastructures\", \"infrastructures/status\"] verbs: [\"get\", \"watch\", \"list\"] --- apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: name: otel-collector subjects: - kind: ServiceAccount name: otel-collector-sidecar namespace: observability roleRef: kind: ClusterRole name: otel-collector apiGroup: rbac.authorization.k8s.io", "apiVersion: opentelemetry.io/v1beta1 kind: OpenTelemetryCollector metadata: name: otel namespace: observability spec: serviceAccount: otel-collector-sidecar mode: sidecar config: serviceAccount: otel-collector-sidecar receivers: otlp: protocols: grpc: {} http: {} processors: batch: {} memory_limiter: check_interval: 1s limit_percentage: 50 spike_limit_percentage: 30 resourcedetection: detectors: [openshift] timeout: 2s exporters: otlp: endpoint: \"tempo-<example>-gateway:8090\" 1 tls: insecure: true service: pipelines: traces: receivers: [otlp] processors: [memory_limiter, resourcedetection, batch] exporters: [otlp]", "apiVersion: project.openshift.io/v1 kind: Project metadata: name: observability", "apiVersion: v1 kind: ServiceAccount metadata: name: otel-collector-deployment namespace: observability", "apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: otel-collector rules: - apiGroups: [\"\", \"config.openshift.io\"] resources: [\"pods\", \"namespaces\", \"infrastructures\", \"infrastructures/status\"] verbs: [\"get\", \"watch\", \"list\"] --- apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: name: otel-collector subjects: - kind: ServiceAccount name: otel-collector-deployment namespace: observability roleRef: kind: ClusterRole name: otel-collector apiGroup: rbac.authorization.k8s.io", "apiVersion: opentelemetry.io/v1beta1 kind: OpenTelemetryCollector metadata: name: otel namespace: observability spec: mode: deployment serviceAccount: otel-collector-deployment config: receivers: jaeger: protocols: grpc: {} thrift_binary: {} thrift_compact: {} thrift_http: {} opencensus: otlp: protocols: grpc: {} http: {} zipkin: {} processors: batch: {} k8sattributes: {} memory_limiter: check_interval: 1s limit_percentage: 50 spike_limit_percentage: 30 resourcedetection: detectors: [openshift] exporters: otlp: endpoint: \"tempo-<example>-distributor:4317\" 1 tls: insecure: true service: pipelines: traces: receivers: [jaeger, opencensus, otlp, zipkin] processors: [memory_limiter, k8sattributes, resourcedetection, batch] exporters: [otlp]", "apiVersion: opentelemetry.io/v1beta1 kind: OpenTelemetryCollector spec: mode: deployment observability: metrics: enableMetrics: true 1 config: exporters: prometheus: endpoint: 0.0.0.0:8889 resource_to_telemetry_conversion: enabled: true # by default resource attributes are dropped service: telemetry: metrics: address: \":8888\" pipelines: metrics: exporters: [prometheus]", "apiVersion: monitoring.coreos.com/v1 kind: PodMonitor metadata: name: otel-collector spec: selector: matchLabels: app.kubernetes.io/name: 
<cr_name>-collector 1 podMetricsEndpoints: - port: metrics 2 - port: promexporter 3 relabelings: - action: labeldrop regex: pod - action: labeldrop regex: container - action: labeldrop regex: endpoint metricRelabelings: - action: labeldrop regex: instance - action: labeldrop regex: job", "apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: name: otel-collector roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: cluster-monitoring-view 1 subjects: - kind: ServiceAccount name: otel-collector namespace: observability --- kind: ConfigMap apiVersion: v1 metadata: name: cabundle namespce: observability annotations: service.beta.openshift.io/inject-cabundle: \"true\" 2 --- apiVersion: opentelemetry.io/v1beta1 kind: OpenTelemetryCollector metadata: name: otel namespace: observability spec: volumeMounts: - name: cabundle-volume mountPath: /etc/pki/ca-trust/source/service-ca readOnly: true volumes: - name: cabundle-volume configMap: name: cabundle mode: deployment config: receivers: prometheus: 3 config: scrape_configs: - job_name: 'federate' scrape_interval: 15s scheme: https tls_config: ca_file: /etc/pki/ca-trust/source/service-ca/service-ca.crt bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token honor_labels: false params: 'match[]': - '{__name__=\"<metric_name>\"}' 4 metrics_path: '/federate' static_configs: - targets: - \"prometheus-k8s.openshift-monitoring.svc.cluster.local:9091\" exporters: debug: 5 verbosity: detailed service: pipelines: metrics: receivers: [prometheus] processors: [] exporters: [debug]", "apiVersion: v1 kind: ServiceAccount metadata: name: otel-collector-deployment", "apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: otel-collector rules: 1 2 - apiGroups: [\"\", \"config.openshift.io\"] resources: [\"pods\", \"namespaces\", \"infrastructures\", \"infrastructures/status\"] verbs: [\"get\", \"watch\", \"list\"]", "apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: name: otel-collector subjects: - kind: ServiceAccount name: otel-collector-deployment namespace: otel-collector-example roleRef: kind: ClusterRole name: otel-collector apiGroup: rbac.authorization.k8s.io", "apiVersion: opentelemetry.io/v1beta1 kind: OpenTelemetryCollector metadata: name: otel spec: mode: deployment serviceAccount: otel-collector-deployment config: receivers: jaeger: protocols: grpc: {} thrift_binary: {} thrift_compact: {} thrift_http: {} opencensus: {} otlp: protocols: grpc: {} http: {} zipkin: {} processors: batch: {} k8sattributes: {} memory_limiter: check_interval: 1s limit_percentage: 50 spike_limit_percentage: 30 resourcedetection: detectors: [openshift] exporters: otlp: endpoint: \"tempo-simplest-distributor:4317\" 1 tls: insecure: true service: pipelines: traces: receivers: [jaeger, opencensus, otlp, zipkin] 2 processors: [memory_limiter, k8sattributes, resourcedetection, batch] exporters: [otlp]", "apiVersion: batch/v1 kind: Job metadata: name: telemetrygen spec: template: spec: containers: - name: telemetrygen image: ghcr.io/open-telemetry/opentelemetry-collector-contrib/telemetrygen:latest args: - traces - --otlp-endpoint=otel-collector:4317 - --otlp-insecure - --duration=30s - --workers=1 restartPolicy: Never backoffLimit: 4", "apiVersion: v1 kind: ServiceAccount metadata: name: otel-collector-deployment namespace: openshift-logging", "apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: otel-collector-logs-writer rules: - apiGroups: [\"loki.grafana.com\"] 
resourceNames: [\"logs\"] resources: [\"application\"] verbs: [\"create\"] - apiGroups: [\"\"] resources: [\"pods\", \"namespaces\", \"nodes\"] verbs: [\"get\", \"watch\", \"list\"] - apiGroups: [\"apps\"] resources: [\"replicasets\"] verbs: [\"get\", \"list\", \"watch\"] - apiGroups: [\"extensions\"] resources: [\"replicasets\"] verbs: [\"get\", \"list\", \"watch\"]", "apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: name: otel-collector-logs-writer roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: otel-collector-logs-writer subjects: - kind: ServiceAccount name: otel-collector-deployment namespace: openshift-logging", "apiVersion: opentelemetry.io/v1beta1 kind: OpenTelemetryCollector metadata: name: otel namespace: openshift-logging spec: serviceAccount: otel-collector-deployment config: extensions: bearertokenauth: filename: \"/var/run/secrets/kubernetes.io/serviceaccount/token\" receivers: otlp: protocols: grpc: {} http: {} processors: k8sattributes: {} resource: attributes: 1 - key: kubernetes.namespace_name from_attribute: k8s.namespace.name action: upsert - key: kubernetes.pod_name from_attribute: k8s.pod.name action: upsert - key: kubernetes.container_name from_attribute: k8s.container.name action: upsert - key: log_type value: application action: upsert transform: log_statements: - context: log statements: - set(attributes[\"level\"], ConvertCase(severity_text, \"lower\")) exporters: otlphttp: endpoint: https://logging-loki-gateway-http.openshift-logging.svc.cluster.local:8080/api/logs/v1/application/otlp encoding: json tls: ca_file: \"/var/run/secrets/kubernetes.io/serviceaccount/service-ca.crt\" auth: authenticator: bearertokenauth debug: verbosity: detailed service: extensions: [bearertokenauth] 2 pipelines: logs: receivers: [otlp] processors: [k8sattributes, transform, resource] exporters: [otlphttp] 3 logs/test: receivers: [otlp] processors: [] exporters: [debug]", "apiVersion: batch/v1 kind: Job metadata: name: telemetrygen spec: template: spec: containers: - name: telemetrygen image: ghcr.io/open-telemetry/opentelemetry-collector-contrib/telemetrygen:v0.106.1 args: - logs - --otlp-endpoint=otel-collector.openshift-logging.svc.cluster.local:4317 - --otlp-insecure - --duration=180s - --workers=1 - --logs=10 - --otlp-attributes=k8s.container.name=\"telemetrygen\" restartPolicy: Never backoffLimit: 4", "apiVersion: opentelemetry.io/v1beta1 kind: OpenTelemetryCollector metadata: name: <name> spec: observability: metrics: enableMetrics: true", "apiVersion: cert-manager.io/v1 kind: Issuer metadata: name: selfsigned-issuer spec: selfSigned: {}", "apiVersion: cert-manager.io/v1 kind: Certificate metadata: name: ca spec: isCA: true commonName: ca subject: organizations: - <your_organization_name> organizationalUnits: - Widgets secretName: ca-secret privateKey: algorithm: ECDSA size: 256 issuerRef: name: selfsigned-issuer kind: Issuer group: cert-manager.io", "apiVersion: cert-manager.io/v1 kind: Issuer metadata: name: test-ca-issuer spec: ca: secretName: ca-secret", "apiVersion: cert-manager.io/v1 kind: Certificate metadata: name: server spec: secretName: server-tls isCA: false usages: - server auth - client auth dnsNames: - \"otel.observability.svc.cluster.local\" 1 issuerRef: name: ca-issuer --- apiVersion: cert-manager.io/v1 kind: Certificate metadata: name: client spec: secretName: client-tls isCA: false usages: - server auth - client auth dnsNames: - \"otel.observability.svc.cluster.local\" 2 issuerRef: name: ca-issuer", "apiVersion: 
v1 kind: ServiceAccount metadata: name: otel-collector-deployment", "apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: otel-collector rules: 1 2 - apiGroups: [\"\", \"config.openshift.io\"] resources: [\"pods\", \"namespaces\", \"infrastructures\", \"infrastructures/status\"] verbs: [\"get\", \"watch\", \"list\"]", "apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: name: otel-collector subjects: - kind: ServiceAccount name: otel-collector-deployment namespace: otel-collector-<example> roleRef: kind: ClusterRole name: otel-collector apiGroup: rbac.authorization.k8s.io", "apiVersion: opentelemetry.io/v1beta1 kind: OpenTelemetryCollector metadata: name: otel namespace: otel-collector-<example> spec: mode: daemonset serviceAccount: otel-collector-deployment config: receivers: jaeger: protocols: grpc: {} thrift_binary: {} thrift_compact: {} thrift_http: {} opencensus: otlp: protocols: grpc: {} http: {} zipkin: {} processors: batch: {} k8sattributes: {} memory_limiter: check_interval: 1s limit_percentage: 50 spike_limit_percentage: 30 resourcedetection: detectors: [openshift] exporters: otlphttp: endpoint: https://observability-cluster.com:443 1 tls: insecure: false cert_file: /certs/server.crt key_file: /certs/server.key ca_file: /certs/ca.crt service: pipelines: traces: receivers: [jaeger, opencensus, otlp, zipkin] processors: [memory_limiter, k8sattributes, resourcedetection, batch] exporters: [otlp] volumes: - name: otel-certs secret: name: otel-certs volumeMounts: - name: otel-certs mountPath: /certs", "apiVersion: opentelemetry.io/v1beta1 kind: OpenTelemetryCollector metadata: name: otlp-receiver namespace: observability spec: mode: \"deployment\" ingress: type: route route: termination: \"passthrough\" config: receivers: otlp: protocols: http: tls: 1 cert_file: /certs/server.crt key_file: /certs/server.key client_ca_file: /certs/ca.crt exporters: otlp: endpoint: \"tempo-<simplest>-distributor:4317\" 2 tls: insecure: true service: pipelines: traces: receivers: [otlp] processors: [] exporters: [otlp] volumes: - name: otel-certs secret: name: otel-certs volumeMounts: - name: otel-certs mountPath: /certs", "oc adm must-gather --image=ghcr.io/open-telemetry/opentelemetry-operator/must-gather -- /usr/bin/must-gather --operator-namespace <operator_namespace> 1", "config: service: telemetry: logs: level: debug 1", "config: service: telemetry: metrics: address: \":8888\" 1", "oc port-forward <collector_pod>", "apiVersion: opentelemetry.io/v1beta1 kind: OpenTelemetryCollector spec: mode: deployment observability: metrics: enableMetrics: true", "config: exporters: debug: verbosity: detailed service: pipelines: traces: exporters: [debug] metrics: exporters: [debug] logs: exporters: [debug]", "oc get instrumentation -n <workload_project> 1", "oc get events -n <workload_project> 1", "... Created container opentelemetry-auto-instrumentation ... 
Started container opentelemetry-auto-instrumentation", "oc logs -l app.kubernetes.io/name=opentelemetry-operator --container manager -n openshift-opentelemetry-operator --follow", "instrumentation.opentelemetry.io/inject-python=\"true\"", "oc get pods -n <workload_project> -o jsonpath='{range .items[?(@.metadata.annotations[\"instrumentation.opentelemetry.io/inject-python\"]==\"true\")]}{.metadata.name}{\"\\n\"}{end}'", "instrumentation.opentelemetry.io/inject-nodejs: \"<instrumentation_object>\"", "instrumentation.opentelemetry.io/inject-nodejs: \"<other_namespace>/<instrumentation_object>\"", "oc get instrumentation <instrumentation_name> -n <workload_project> -o jsonpath='{.spec.endpoint}'", "oc logs <application_pod> -n <workload_project>", "apiVersion: opentelemetry.io/v1beta1 kind: OpenTelemetryCollector metadata: name: otel namespace: <otel-collector-namespace> spec: mode: sidecar config: receivers: jaeger: protocols: grpc: {} thrift_binary: {} thrift_compact: {} thrift_http: {} processors: batch: {} memory_limiter: check_interval: 1s limit_percentage: 50 spike_limit_percentage: 30 resourcedetection: detectors: [openshift] timeout: 2s exporters: otlp: endpoint: \"tempo-<example>-gateway:8090\" 1 tls: insecure: true service: pipelines: traces: receivers: [jaeger] processors: [memory_limiter, resourcedetection, batch] exporters: [otlp]", "apiVersion: v1 kind: ServiceAccount metadata: name: otel-collector-sidecar", "apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: otel-collector-sidecar rules: 1 - apiGroups: [\"config.openshift.io\"] resources: [\"infrastructures\", \"infrastructures/status\"] verbs: [\"get\", \"watch\", \"list\"]", "apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: name: otel-collector-sidecar subjects: - kind: ServiceAccount name: otel-collector-deployment namespace: otel-collector-example roleRef: kind: ClusterRole name: otel-collector apiGroup: rbac.authorization.k8s.io", "apiVersion: project.openshift.io/v1 kind: Project metadata: name: observability", "apiVersion: v1 kind: ServiceAccount metadata: name: otel-collector-deployment namespace: observability", "apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: otel-collector rules: 1 2 - apiGroups: [\"\", \"config.openshift.io\"] resources: [\"pods\", \"namespaces\", \"infrastructures\", \"infrastructures/status\"] verbs: [\"get\", \"watch\", \"list\"]", "apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: name: otel-collector subjects: - kind: ServiceAccount name: otel-collector-deployment namespace: observability roleRef: kind: ClusterRole name: otel-collector apiGroup: rbac.authorization.k8s.io", "apiVersion: opentelemetry.io/v1beta1 kind: OpenTelemetryCollector metadata: name: otel namespace: observability spec: mode: deployment serviceAccount: otel-collector-deployment config: receivers: jaeger: protocols: grpc: {} thrift_binary: {} thrift_compact: {} thrift_http: {} processors: batch: {} k8sattributes: {} memory_limiter: check_interval: 1s limit_percentage: 50 spike_limit_percentage: 30 resourcedetection: detectors: [openshift] exporters: otlp: endpoint: \"tempo-example-gateway:8090\" tls: insecure: true service: pipelines: traces: receivers: [jaeger] processors: [memory_limiter, k8sattributes, resourcedetection, batch] exporters: [otlp]", "exp, err := jaeger.New(jaeger.WithCollectorEndpoint(jaeger.WithEndpoint(url))) 1", "oc login --username=<your_username>", "oc get deployments -n 
<project_of_opentelemetry_instance>", "oc delete opentelemetrycollectors <opentelemetry_instance_name> -n <project_of_opentelemetry_instance>", "oc get deployments -n <project_of_opentelemetry_instance>" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.17/html-single/red_hat_build_of_opentelemetry/index
Chapter 2. About Red Hat OpenShift GitOps
Chapter 2. About Red Hat OpenShift GitOps Red Hat OpenShift GitOps is an Operator that uses Argo CD as the declarative GitOps engine. It enables GitOps workflows across multicluster OpenShift and Kubernetes infrastructure. Using Red Hat OpenShift GitOps, administrators can consistently configure and deploy Kubernetes-based infrastructure and applications across clusters and development lifecycles. Red Hat OpenShift GitOps is based on the open source project Argo CD and provides a similar set of features to what the upstream offers, with additional automation, integration into Red Hat OpenShift Container Platform and the benefits of Red Hat's enterprise support, quality assurance and focus on enterprise security. Note Because Red Hat OpenShift GitOps releases on a different cadence from OpenShift Container Platform, the Red Hat OpenShift GitOps documentation is now available as separate documentation sets for each minor version of the product. The Red Hat OpenShift GitOps documentation is available at https://docs.openshift.com/gitops/ . Documentation for specific versions is available using the version selector dropdown, or directly by adding the version to the URL, for example, https://docs.openshift.com/gitops/1.8 . In addition, the Red Hat OpenShift GitOps documentation is also available on the Red Hat Portal at https://access.redhat.com/documentation/en-us/red_hat_openshift_gitops/ . For additional information about the Red Hat OpenShift GitOps life cycle and supported platforms, refer to the Platform Life Cycle Policy . Red Hat OpenShift GitOps ensures consistency in applications when you deploy them to different clusters in different environments, such as: development, staging, and production. Red Hat OpenShift GitOps organizes the deployment process around the configuration repositories and makes them the central element. It always has at least two repositories: Application repository with the source code Environment configuration repository that defines the desired state of the application These repositories contain a declarative description of the infrastructure you need in your specified environment. They also contain an automated process to make your environment match the described state. Red Hat OpenShift GitOps uses Argo CD to maintain cluster resources. Argo CD is an open-source declarative tool for the continuous deployment (CD) of applications. Red Hat OpenShift GitOps implements Argo CD as a controller so that it continuously monitors application definitions and configurations defined in a Git repository. Then, Argo CD compares the specified state of these configurations with their live state on the cluster. Argo CD reports any configurations that deviate from their specified state. These reports allow administrators to automatically or manually resync configurations to the defined state. Therefore, Argo CD enables you to deliver global custom resources, like the resources that are used to configure OpenShift Container Platform clusters. 2.1. Key features Red Hat OpenShift GitOps helps you automate the following tasks: Ensure that the clusters have similar states for configuration, monitoring, and storage Apply or revert configuration changes to multiple OpenShift Container Platform clusters Associate templated configuration with different environments Promote applications across clusters, from staging to production 2.2. Additional resources Extending the Kubernetes API with custom resource definitions Managing resources from custom resource definitions What is GitOps?
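To make the repository-driven model above concrete, the following is a minimal sketch of an Argo CD Application resource of the kind that Red Hat OpenShift GitOps reconciles. The repository URL, path, and destination namespace are illustrative assumptions, not values taken from this document.

apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: example-app              # illustrative name
  namespace: openshift-gitops    # namespace of the default Argo CD instance
spec:
  project: default
  source:
    repoURL: https://example.com/org/environment-config.git   # hypothetical environment configuration repository
    targetRevision: main
    path: overlays/production
  destination:
    server: https://kubernetes.default.svc
    namespace: example-app
  syncPolicy:
    automated:        # resync automatically when the live state drifts from the state declared in Git
      prune: true
      selfHeal: true

With automated sync enabled, Argo CD reverts drift on its own; omitting the syncPolicy block leaves resynchronization as a manual action, which matches the "automatically or manually resync" behavior described above.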
null
https://docs.redhat.com/en/documentation/red_hat_openshift_gitops/1.12/html/understanding_openshift_gitops/about-redhat-openshift-gitops
Machine management
Machine management OpenShift Container Platform 4.7 Adding and maintaining cluster machines Red Hat OpenShift Documentation Team
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.7/html/machine_management/index
Chapter 5. Installing operating systems
Chapter 5. Installing operating systems 5.1. Installing hyperconverged hosts The supported operating system for hyperconverged hosts is the latest version of Red Hat Virtualization 4. 5.1.1. Installing a hyperconverged host with Red Hat Virtualization 4 5.1.1.1. Downloading the Red Hat Virtualization 4 operating system Navigate to the Red Hat Customer Portal . Click Downloads to get a list of product downloads. Click Red Hat Virtualization . Click Download latest . In the Product Software tab, click the Download button beside the latest Hypervisor Image, for example, Hypervisor Image for RHV 4.4 . When the file has downloaded, verify that its SHA-256 checksum matches the one on the page. Use the downloaded image to create an installation media device. See Creating installation media in the Red Hat Enterprise Linux 8 documentation. 5.1.1.2. Installing the Red Hat Virtualization 4 operating system on hyperconverged hosts Prerequisites Be aware that this operating system is only supported for hyperconverged hosts. Do not install a Network-Bound Disk Encryption (NBDE) key server with this operating system. Be aware of additional server requirements when enabling disk encryption on hyperconverged hosts. See Disk encryption requirements for details. Procedure Start the machine and boot from the prepared installation media. From the boot menu, select Install Red Hat Virtualization 4 and press Enter . Select a language and click Continue . Accept the default Localization options. Click Installation destination . Deselect any disks you do not want to use as installation locations, for example, any disks that will be used for storage domains. Warning Disks with a check mark will be formatted and all their data will be lost. If you are reinstalling this host, ensure that disks with data that you want to retain do not show a check mark. Select the Automatic partitioning option. (Optional) If you want to use disk encryption, select Encrypt my data and specify a password. Warning Remember this password, as your machine will not boot without it. This password is used as the root passphrase for this host during Network-Bound Disk Encryption setup. Click Done . Click Network and Host Name . Toggle the Ethernet switch to ON . Select the network interface and click Configure . On the General tab, check the Connect automatically with priority checkbox. (Optional) To use IPv6 networking instead of IPv4, specify network details on the IPv6 settings tab. For static network configurations, ensure that you provide the static IPv6 address, prefix, and gateway, as well as IPv6 DNS servers and additional search domains. Important You must use either IPv4 or IPv6; mixed networks are not supported. Click Save . Click Done . (Optional) Configure Security policy. Click Begin installation . Set a root password. Warning Red Hat recommends not creating additional users on hyperconverged hosts, as this can lead to exploitation of local security vulnerabilities. Click Reboot to complete installation. Increase the size of the /var/log partition. You need at least 15 GB of free space for Red Hat Gluster Storage logging requirements. Follow the instructions in Growing a logical volume using the Web Console to increase the size of this partition. 5.2. Installing Network-Bound Disk Encryption key servers If you want to use Network-Bound Disk Encryption to encrypt the contents of your disks in Red Hat Hyperconverged Infrastructure for Virtualization, you need to install at least one key server.
The supported operating systems for Network-Bound Disk Encryption (NBDE) key servers are the latest versions of Red Hat Enterprise Linux 7 and 8. 5.2.1. Installing an NBDE key server with Red Hat Enterprise Linux 8 5.2.1.1. Downloading the Red Hat Enterprise Linux 8 operating system Navigate to the Red Hat Customer Portal . Click Downloads to get a list of product downloads. Click Red Hat Enterprise Linux 8 . In the Product Software tab, click Download beside the latest binary DVD image, for example, Red Hat Enterprise Linux 8.2 Binary DVD . When the file has downloaded, verify its SHA-256 checksum matches the one on the page. Use the image to create an installation media device. See Creating installation media in the Red Hat Enterprise Linux 8 documentation for details. 5.2.1.2. Installing the Red Hat Enterprise Linux 8 operating system on Network-Bound Disk Encryption key servers Procedure Start the machine and boot from the prepared installation media. From the boot menu, select Install Red Hat Enterprise Linux 8 and press Enter . Select a language and click Continue . Accept the default Localization and Software options. Click Installation destination . Select the disk that you want to install the operating system on. Warning Disks with a check mark will be formatted and all their data will be lost. If you are reinstalling this host, ensure that disks with data that you want to retain do not show a check mark. (Optional) If you want to use disk encryption, select Encrypt my data and specify a password. Warning Remember this password, as your machine will not boot without it. Click Done . Click Network and Host Name . Toggle the Ethernet switch to ON . Select the network interface and click Configure On the General tab, check the Connect automatically with priority checkbox. (Optional) To use IPv6 networking instead of IPv4, specify network details on the IPv6 settings tab. For static network configurations, ensure that you provide the static IPv6 address, prefix, and gateway, as well as IPv6 DNS servers and additional search domains. Important You must use either IPv4 or IPv6; mixed networks are not supported. Click Save . Click Done . (Optional) Configure Security policy. Click Begin installation . Set a root password. Click Reboot to complete installation. From the Initial Setup window, accept the licensing agreement and register your system. 5.2.2. Installing an NBDE key server with Red Hat Enterprise Linux 7 5.2.2.1. Downloading the Red Hat Enterprise Linux 7 operating system Navigate to the Red Hat Customer Portal . Click Downloads to get a list of product downloads. Click Versions 7 and below . In the Product Software tab, click Download beside the latest binary DVD image, for example, Red Hat Enterprise Linux 7.8 Binary DVD . When the file has downloaded, verify its SHA-256 checksum matches the one on the page. Use the image to create an installation media device. See Creating installation media in the Red Hat Enterprise Linux 8 documentation for details. 5.2.2.2. Installing the Red Hat Enterprise Linux 7 operating system on Network-Bound Disk Encryption key servers Prerequisites Be aware that this operating system is only supported for Network-Bound Disk Encryption (NBDE) key servers. Do not install a hyperconverged host with this operating system. Procedure Start the machine and boot from the prepared installation media. From the boot menu, select Install Red Hat Enterprise Linux 7 and press Enter . Select a language and click Continue . Click Date & Time . Select a time zone. 
Click Done . Click Keyboard . Select a keyboard layout. Click Done . Click Installation destination . Deselect any disks you do not want to use as an installation location. If you want to use disk encryption, select Encrypt my data and specify a password. Warning Remember this password, as your machine will not boot without it. Click Done . Click Network and Host Name . Click Configure... General . Check the Automatically connect to this network when it is available check box. Click Done . Optionally, configure language support, security policy, and kdump. Click Begin installation . Set a root password. Click Reboot to complete installation. From the Initial Setup window, accept the licensing agreement and register your system.
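After the operating system is installed, the key-server function itself is typically provided by the Tang service, which later configuration steps can point Clevis-bound hosts at. The following is a minimal sketch assuming the stock RHEL 8 package, unit, and port names; it is not taken from this document:

dnf install tang
systemctl enable --now tangd.socket      # tangd listens on port 80 by default via socket activation
firewall-cmd --add-port=80/tcp --permanent
firewall-cmd --reload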
[ "sha256sum image.iso", "sha256sum image.iso", "sha256sum image.iso" ]
https://docs.redhat.com/en/documentation/red_hat_hyperconverged_infrastructure_for_virtualization/1.8/html/deploying_red_hat_hyperconverged_infrastructure_for_virtualization/installing-operating-systems
Install ROSA with HCP clusters
Install ROSA with HCP clusters Red Hat OpenShift Service on AWS 4 Installing, accessing, and deleting Red Hat OpenShift Service on AWS (ROSA) clusters. Red Hat OpenShift Documentation Team
null
https://docs.redhat.com/en/documentation/red_hat_openshift_service_on_aws/4/html/install_rosa_with_hcp_clusters/index
Chapter 5. Uninstalling OpenShift Data Foundation
Chapter 5. Uninstalling OpenShift Data Foundation 5.1. Uninstalling OpenShift Data Foundation in Internal mode To uninstall OpenShift Data Foundation in Internal mode, refer to the knowledge base article on Uninstalling OpenShift Data Foundation .
null
https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.13/html/deploying_openshift_data_foundation_using_bare_metal_infrastructure/uninstalling_openshift_data_foundation
Metadata APIs
Metadata APIs OpenShift Container Platform 4.14 Reference guide for metadata APIs Red Hat OpenShift Documentation Team
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.14/html/metadata_apis/index
Chapter 122. AclRuleClusterResource schema reference
Chapter 122. AclRuleClusterResource schema reference Used in: AclRule The type property is a discriminator that distinguishes use of the AclRuleClusterResource type from AclRuleTopicResource , AclRuleGroupResource , AclRuleTransactionalIdResource . It must have the value cluster for the type AclRuleClusterResource . Property Property type Description type string Must be cluster .
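As a sketch of where this schema is used, an AclRuleClusterResource typically appears inside the simple authorization block of a KafkaUser custom resource, similar to the following; the user name, cluster label, and operations are illustrative assumptions:

apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaUser
metadata:
  name: example-user               # illustrative user name
  labels:
    strimzi.io/cluster: my-cluster # illustrative cluster name
spec:
  authorization:
    type: simple
    acls:
      - resource:
          type: cluster            # AclRuleClusterResource: the rule applies to the whole cluster
        operations:
          - Describe
          - Alter

Because the cluster resource is a singleton, the rule carries no name or patternType fields, unlike the topic, group, and transactionalId variants.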
null
https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.9/html/streams_for_apache_kafka_api_reference/type-aclruleclusterresource-reference
Chapter 1. Preparing for bare metal cluster installation
Chapter 1. Preparing for bare metal cluster installation 1.1. Prerequisites You reviewed details about the OpenShift Container Platform installation and update processes. You have read the documentation on selecting a cluster installation method and preparing it for users . 1.2. Planning a bare metal cluster for OpenShift Virtualization If you plan to use OpenShift Virtualization, it is important to be aware of several requirements before you install your bare metal cluster. If you want to use live migration features, you must have multiple worker nodes at the time of cluster installation . This is because live migration requires the cluster-level high availability (HA) flag to be set to true. The HA flag is set when a cluster is installed and cannot be changed afterwards. If there are fewer than two worker nodes defined when you install your cluster, the HA flag is set to false for the life of the cluster. Note You can install OpenShift Virtualization on a single-node cluster, but single-node OpenShift does not support high availability. Live migration requires shared storage. Storage for OpenShift Virtualization must support and use the ReadWriteMany (RWX) access mode. If you plan to use Single Root I/O Virtualization (SR-IOV), ensure that your network interface controllers (NICs) are supported by OpenShift Container Platform. Additional resources Getting started with OpenShift Virtualization Preparing your cluster for OpenShift Virtualization About Single Root I/O Virtualization (SR-IOV) hardware networks Connecting a virtual machine to an SR-IOV network 1.3. NIC partitioning for SR-IOV devices OpenShift Container Platform can be deployed on a server with a dual port network interface card (NIC). You can partition a single, high-speed dual port NIC into multiple virtual functions (VFs) and enable SR-IOV. This feature supports the use of bonds for high availability with the Link Aggregation Control Protocol (LACP). Note Only one LACP bond can be declared per physical NIC. An OpenShift Container Platform cluster can be deployed on a bond interface with 2 VFs on 2 physical functions (PFs) by using the following methods: Agent-based installer Note The minimum required version of nmstate is: 1.4.2-4 for RHEL 8 versions 2.2.7 for RHEL 9 versions Installer-provisioned infrastructure installation User-provisioned infrastructure installation Additional resources Example: Bonds and SR-IOV dual-nic node network configuration Optional: Configuring host network interfaces for dual port NIC Bonding multiple SR-IOV network interfaces to a dual port NIC interface 1.4. Choosing a method to install OpenShift Container Platform on bare metal The OpenShift Container Platform installation program offers four methods for deploying a cluster: Interactive : You can deploy a cluster with the web-based Assisted Installer . This is the recommended approach for clusters with networks connected to the internet. The Assisted Installer is the easiest way to install OpenShift Container Platform: it provides smart defaults and performs pre-flight validations before installing the cluster. It also provides a RESTful API for automation and advanced configuration scenarios. Local Agent-based : You can deploy a cluster locally with the agent-based installer for air-gapped or restricted networks. It provides many of the benefits of the Assisted Installer, but you must download and configure the agent-based installer first. Configuration is done with a command-line interface.
This approach is ideal for air-gapped or restricted networks. Automated : You can deploy a cluster on infrastructure that the installation program provisions and the cluster maintains. The installer uses each cluster host's baseboard management controller (BMC) for provisioning. You can deploy clusters in connected networks or in air-gapped or restricted networks. Full control : You can deploy a cluster on infrastructure that you prepare and maintain , which provides maximum customizability. You can deploy clusters in connected networks or in air-gapped or restricted networks. The clusters have the following characteristics: Highly available infrastructure with no single points of failure is available by default. Administrators maintain control over what updates are applied and when. See Installation process for more information about installer-provisioned and user-provisioned installation processes. 1.4.1. Installing a cluster on installer-provisioned infrastructure You can install a cluster on bare metal infrastructure that is provisioned by the OpenShift Container Platform installation program, by using the following method: Installing an installer-provisioned cluster on bare metal : You can install OpenShift Container Platform on bare metal by using installer provisioning. 1.4.2. Installing a cluster on user-provisioned infrastructure You can install a cluster on bare metal infrastructure that you provision, by using one of the following methods: Installing a user-provisioned cluster on bare metal : You can install OpenShift Container Platform on bare metal infrastructure that you provision. For a cluster that contains user-provisioned infrastructure, you must deploy all of the required machines. Installing a user-provisioned bare metal cluster with network customizations : You can install a bare metal cluster on user-provisioned infrastructure with network customizations. By customizing your network configuration, your cluster can coexist with existing IP address allocations in your environment and integrate with existing MTU and VXLAN configurations. Most of the network customizations must be applied at the installation stage. Installing a user-provisioned bare metal cluster on a restricted network : You can install a user-provisioned bare metal cluster on a restricted or disconnected network by using a mirror registry. You can also use this installation method to ensure that your clusters only use container images that satisfy your organizational controls on external content.
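The bond-over-VFs layout described in section 1.3 can be expressed as an nmstate-style host network sketch along the following lines; the interface names, VF counts, and DHCP addressing are assumptions for illustration and do not come from this document:

interfaces:
- name: eno1
  type: ethernet
  state: up
  ethernet:
    sr-iov:
      total-vfs: 2        # create two virtual functions on the first physical function
- name: eno2
  type: ethernet
  state: up
  ethernet:
    sr-iov:
      total-vfs: 2        # create two virtual functions on the second physical function
- name: bond0
  type: bond
  state: up
  link-aggregation:
    mode: 802.3ad         # LACP
    port:
    - eno1v0              # first VF of each physical function; VF naming depends on the driver
    - eno2v0
  ipv4:
    enabled: true
    dhcp: true

A configuration of this shape can be supplied through the host networking sections of the agent-based, installer-provisioned, or user-provisioned installation flows listed above, subject to the minimum nmstate versions noted in section 1.3.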
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.18/html/installing_on_bare_metal/preparing-to-install-on-bare-metal